Sample records for SEDIBUD metadata database

  1. The I.A.G. / A.I.G. SEDIBUD (Sediment Budgets in Cold Environments) Programme: Current and future activities

    NASA Astrophysics Data System (ADS)

    Beylich, Achim A.; Lamoureux, Scott; Decaulne, Armelle

    2013-04-01

    Projected climate change in cold regions is expected to alter melt season duration and intensity, along with the number of extreme rainfall events, total annual precipitation and the balance between snowfall and rainfall. Similarly, changes to the thermal balance are expected to reduce the extent of permafrost and seasonal ground frost and increase active layer depths. These effects will undoubtedly change surface environments in cold regions and alter the fluxes of sediments, nutrients and solutes, but the absence of quantitative data and coordinated geomorphic process monitoring and analysis to understand the sensitivity of the Earth surface environment is acute in cold climate environments. The International Association of Geomorphologists (I.A.G./A.I.G.) SEDIBUD (Sediment Budgets in Cold Environments) Programme was formed in 2005 to address this existing key knowledge gap. SEDIBUD currently has about 400 members worldwide and the Steering Committee of this international programme is composed of ten scientists from eight different countries: Achim A. Beylich (Chair) (Norway), Armelle Decaulne (Secretary) (France), John C. Dixon (USA), Scott F. Lamoureux (Vice-Chair) (Canada), John F. Orwin (Canada), Jan-Christoph Otto (Austria), Irina Overeem (USA), Thorsteinn Sæmundsson (Iceland), Jeff Warburton (UK) and Zbigniew Zwolinski (Poland). The central research objective of this global group of scientists is to assess and model the contemporary sedimentary fluxes in cold climates, with emphasis on both particulate and dissolved components. Initially formed as the European Science Foundation (ESF) Network SEDIFLUX (Sedimentary Source-to-Sink Fluxes in Cold Environments) (2004-2006), SEDIBUD has further expanded to a global group of researchers with field research sites located in polar and alpine regions in the northern and southern hemispheres. Research carried out at each of the close to 50 defined SEDIBUD key test sites varies by programme, logistics and available resources, but typically represents interdisciplinary collaborations of geomorphologists, hydrologists, ecologists, permafrost scientists and glaciologists. SEDIBUD has developed manuals and protocols (SEDIFLUX Manual, available online, see below) with a key set of primary surface process monitoring and research data requirements to incorporate results from these diverse projects and allow coordinated quantitative analysis across the programme. Defined SEDIBUD key test sites provide data on annual climate conditions, total discharge and particulate and dissolved fluxes (yields) as well as information on other relevant surface processes. A number of selected key test sites are providing high-resolution data on climate conditions, runoff and sedimentary fluxes (yields), which in addition to the annual data contribute to the SEDIBUD metadata database. Comparable datasets from different SEDIBUD key test sites are integrated and analysed to address key research questions as defined in the SEDIBUD objective (available online, see below).
Defined SEDIBUD key tasks for the coming years include (i) The continued generation and compilation of comparable longer-term datasets on contemporary sedimentary fluxes and sediment yields from SEDIBUD key test sites worldwide, (ii) The continued extension of the SEDIBUD metadata database with these datasets, (iii) The testing of defined SEDIBUD hypotheses (available online, see below) by using datasets continuously compiled in the SEDIBUD metadata database, (iv) The publication of a SEDIBUD book (synthesis book). Detailed information on the SEDIBUD Programme, SEDIBUD meetings, SEDIBUD publications and SEDIBUD online documents and databases is available at the SEDIBUD website under http://www.geomorph.org/wg/wgsb.html.

  2. The SEDIBUD (Sediment Budgets in Cold Environments) Programme: Current activities and future key tasks

    NASA Astrophysics Data System (ADS)

    Beylich, A. A.; Lamoureux, S. F.; Decaulne, A.

    2012-04-01

    Projected climate change in cold regions is expected to alter melt season duration and intensity, along with the number of extreme rainfall events, total annual precipitation and the balance between snowfall and rainfall. Similarly, changes to the thermal balance are expected to reduce the extent of permafrost and seasonal ground frost and increase active layer depths. These effects will undoubtedly change surface environments in cold regions and alter the fluxes of sediments, nutrients and solutes, but the absence of quantitative data and coordinated process monitoring and analysis to understand the sensitivity of the Earth surface environment is acute in cold climate environments. The International Association of Geomorphologists (I.A.G./A.I.G.) SEDIBUD (Sediment Budgets in Cold Environments) Programme was formed in 2005 to address this existing key knowledge gap. SEDIBUD currently has about 400 members worldwide and the Steering Committee of this international programme is composed of ten scientists from eight different countries: Achim A. Beylich (Chair) (Norway), Armelle Decaulne (Secretary) (France), John C. Dixon (USA), Scott F. Lamoureux (Vice-Chair) (Canada), John F. Orwin (Canada), Jan-Christoph Otto (Austria), Irina Overeem (USA), Thorsteinn Saemundsson (Iceland), Jeff Warburton (UK), Zbigniew Zwolinski (Poland). The central research objective of this global group of scientists is to assess and model the contemporary sedimentary fluxes in cold climates, with emphasis on both particulate and dissolved components. Initially formed as the European Science Foundation (ESF) Network SEDIFLUX (2004-2006), SEDIBUD has further expanded to a global group of researchers with field research sites located in polar and alpine regions in the northern and southern hemispheres. Research carried out at each of the close to 50 defined SEDIBUD key test sites varies by programme, logistics and available resources, but typically represents interdisciplinary collaborations of geomorphologists, hydrologists, ecologists, permafrost scientists and glaciologists. SEDIBUD has developed manuals and protocols (SEDIFLUX Manual, available online, see below) with a key set of primary surface process monitoring and research data requirements to incorporate results from these diverse projects and allow coordinated quantitative analysis across the programme. Defined SEDIBUD key test sites provide data on annual climate conditions, total discharge and particulate and dissolved fluxes as well as information on other relevant surface processes. A number of selected key test sites are providing high-resolution data on climate conditions, runoff and sedimentary fluxes, which in addition to the annual data contribute to the SEDIBUD metadata database, which is currently being developed. Comparable datasets from different SEDIBUD key test sites are integrated and analysed to address key research questions as defined in the SEDIBUD Objective (available online, see below). Defined SEDIBUD key tasks for the coming years include (i) The continued generation and compilation of comparable longer-term datasets on contemporary sedimentary fluxes and sediment yields from SEDIBUD key test sites worldwide, (ii) The continued extension of the SEDIBUD metadata database with these datasets, (iii) The testing of defined SEDIBUD hypotheses (available online, see below) by using the datasets continuously compiled in the SEDIBUD metadata database. Detailed information on the I.A.G./A.I.G. 
SEDIBUD Programme, SEDIBUD meetings, SEDIBUD publications and SEDIBUD online documents and databases is available at the SEDIBUD website under http://www.geomorph.org/wg/wgsb.html.

  3. The I.A.G./A.I.G. SEDIBUD (Sediment Budgets in Cold Environments) Program (2005 - 2017): Key activities and outcomes

    NASA Astrophysics Data System (ADS)

    Beylich, Achim A.

    2017-04-01

    Amplified climate change and the ecological sensitivity of high-latitude and high-altitude cold climate environments have been highlighted as a key global environmental issue. Projected climate change in largely undisturbed cold regions is expected to alter melt-season duration and intensity, along with the number of extreme rainfall events, total annual precipitation and the balance between snowfall and rainfall. Similarly, changes to the thermal balance are expected to reduce the extent of permafrost and seasonal ground frost and increase active-layer depths. These combined effects will undoubtedly change Earth surface environments in cold regions and will alter the fluxes of sediments, solutes and nutrients. However, the absence of quantitative data and coordinated analysis to understand the sensitivity of the Earth surface environment is acute in cold regions. Contemporary cold climate environments generally provide the opportunity to identify solute and sedimentary systems where anthropogenic impacts are still less important than the effects of climate change. Accordingly, it is still possible to develop a library of baseline fluvial yields and sedimentary budgets before the natural environment is completely transformed. The SEDIBUD (Sediment Budgets in Cold Environments) Program, building on the European Science Foundation (ESF) Network SEDIFLUX (Sedimentary Source-to-Sink Fluxes in Cold Environments, since 2004), was formed in 2005 as a new Program (Working Group) of the International Association of Geomorphologists (I.A.G./A.I.G.) to address this still existing key knowledge gap. SEDIBUD (2005-2017) currently has about 400 members worldwide and the Steering Committee of this international program is composed of eleven scientists from ten different countries. The central research objective of this global program is to assess and model the contemporary sedimentary fluxes in cold climates, with emphasis on both particulate and dissolved components. Research carried out at 56 defined SEDIBUD key test sites (selected catchment systems) varies by scientific program, logistics and available resources, but typically represents interdisciplinary collaborations of geomorphologists, hydrologists, ecologists, permafrost scientists and glaciologists with different levels of detail. SEDIBUD has developed a key set of primary research data requirements intended to incorporate results from these varied projects and allow quantitative analysis across the program. Defined SEDIBUD key test sites provide field data on annual climatic conditions, total discharge and particulate and dissolved fluxes and yields as well as information on other relevant denudational Earth surface processes. A number of selected key test sites are providing high-resolution data on climatic conditions, runoff and solute and sedimentary fluxes and yields, which - in addition to the annual data - contribute to the SEDIBUD metadata database. To support these coordinated efforts, the SEDIFLUX manual and a set of framework papers and book chapters have been produced to establish the integrative approach and common methods and data standards. Comparable field-datasets from different SEDIBUD key test sites are analyzed and integrated to address key research questions of the SEDIBUD program as defined in the SEDIBUD working group objective. A key SEDIBUD synthesis book was published in 2016 by the group and a synthesis key paper is currently in preparation. 
Detailed information on all SEDIBUD activities, outcomes and published products is found at http://www.geomorph.org/sedibud-working-group/.

  4. Special Issue "Sediment cascades in cold climate geosystems"

    NASA Astrophysics Data System (ADS)

    Morche, David; Krautblatter, Michael; Beylich, Achim A.

    2017-06-01

    This Editorial introduces the Special Issue on sediment cascades in cold climate geosystems that evolved from the eighth I.A.G./A.I.G. SEDIBUD (Sediment Budgets in Cold Environments; http://www.geomorph.org/sedibud-working-group/) workshop. The workshop was held from 1st to 4th September 2014 at the Environmental Research Station "Schneefernerhaus" (http://www.schneefernerhaus.de/en/home.html), located at Mt. Zugspitze, the highest peak in Germany (2962 m a.s.l.). Paper and poster presentations focused on observations, measurements and modeling of geomorphological processes in sediment cascades in cold climate geosystems. This resulting Special Issue brings together ten selected contributions from arctic and alpine environments.

  5. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    NASA Astrophysics Data System (ADS)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how and where data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel and research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of each cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is the duplication of effort and the asynchrony of metadata across multiple distribution websites, caused by manual metadata entry into individual websites by administrators. The other is the difference in data types and representations of metadata between websites. To solve those problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO comprises three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility in accommodating changes to metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web-based interface by a metadata editor in CMO as needed. Then daily differential uptake of metadata from the XML database to databases in several distribution websites is automatically processed using a converter defined by the EAI software. Currently, CMO is available for three distribution websites: "Deep Sea Floor Rock Sample Database GANSEKI", "Marine Biological Sample Database", and "JAMSTEC E-library of Deep-sea Images". CMO is planned to provide "JAMSTEC Data Site for Research Cruises" with metadata in the future.
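
    To illustrate the "daily differential uptake" step described above, here is a minimal, hypothetical Python sketch: only records modified since the last synchronisation are copied from a source store to a target store. The record fields, cruise IDs and store layout are illustrative assumptions; the actual CMO performs this step between a data management database and an XML database via EAI software.

    ```python
    # Toy sketch of daily differential uptake: copy only the cruise-metadata
    # records that changed since the last synchronisation. All fields and IDs
    # below are invented placeholders, not real CMO data.
    from datetime import datetime, timezone

    source = {   # data management database, simplified as a dict keyed by cruise ID
        "MR11-01": {"vessel": "Mirai",    "updated": datetime(2011, 9, 1, tzinfo=timezone.utc)},
        "YK11-04": {"vessel": "Yokosuka", "updated": datetime(2011, 9, 3, tzinfo=timezone.utc)},
    }
    target = {}                                  # stand-in for the XML database
    last_sync = datetime(2011, 9, 2, tzinfo=timezone.utc)

    def differential_uptake(src, dst, since):
        """Copy records modified after `since`; return how many were transferred."""
        moved = 0
        for cruise_id, record in src.items():
            if record["updated"] > since:
                dst[cruise_id] = dict(record)    # insert or overwrite the changed record
                moved += 1
        return moved

    print(differential_uptake(source, target, last_sync))   # -> 1 (only YK11-04)
    ```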

  6. Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses

    PubMed Central

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics – technology for comprehensive detection of small molecules in an organism – lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are difficulty precisely recording information about complicated analytical experiments (metadata), existence of various databases with their own metadata descriptions, and low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/. PMID:25905099
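
    As a rough illustration of the tree-structured metadata idea described above (study purpose, sample, analytical method and data analysis levels, each individually addressable by ID), here is a small Python sketch. The ID scheme and field names are assumptions made for illustration only, not the actual TogoMD specification.

    ```python
    # Illustrative nested metadata tree with an ID at every level, so each node
    # can be referenced on its own. IDs and fields are hypothetical.
    study = {
        "id": "TGE0001",                            # study-level ID (hypothetical)
        "title": "Metabolite profiling of rice leaves",
        "samples": [
            {"id": "TGE0001.S01",                   # sample level
             "organism": "Oryza sativa", "tissue": "leaf",
             "methods": [
                 {"id": "TGE0001.S01.M01",          # analytical-method level
                  "instrument": "LC-MS",
                  "analyses": [
                      {"id": "TGE0001.S01.M01.D01", # data-analysis level
                       "software": "peak alignment pipeline"},
                  ]},
             ]},
        ],
    }

    def find(node, node_id):
        """Depth-first lookup of a metadata node by its hierarchical ID."""
        if node.get("id") == node_id:
            return node
        for key in ("samples", "methods", "analyses"):
            for child in node.get(key, []):
                hit = find(child, node_id)
                if hit:
                    return hit
        return None

    print(find(study, "TGE0001.S01.M01")["instrument"])   # -> LC-MS
    ```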

  7. Metabolonote: a wiki-based database for managing hierarchical metadata of metabolome analyses.

    PubMed

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics - technology for comprehensive detection of small molecules in an organism - lags behind the other "omics" in terms of publication and dissemination of experimental data. Among the reasons for this are difficulty precisely recording information about complicated analytical experiments (metadata), existence of various databases with their own metadata descriptions, and low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called "Togo Metabolome Data" (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.

  8. The I.A.G. / A.I.G. SEDIBUD Book Project: Source-to-Sink Fluxes in Undisturbed Cold Environments

    NASA Astrophysics Data System (ADS)

    Beylich, Achim A.; Dixon, John C.; Zwolinski, Zbigniew

    2015-04-01

    The SEDIBUD book currently in preparation, "Source-to-Sink Fluxes in Undisturbed Cold Environments" (edited by Achim A. Beylich, John C. Dixon and Zbigniew Zwolinski and published by Cambridge University Press), summarizes and synthesizes the achievements of the International Association of Geomorphologists' (I.A.G./A.I.G.) Working Group SEDIBUD (Sediment Budgets in Cold Environments), which has been active since 2005 (http://www.geomorph.org/wg/wgsb.html). Amplified climate change and ecological sensitivity of largely undisturbed polar and high-altitude cold climate environments have been highlighted as key global environmental issues. The effects of projected climate change will change surface environments in cold regions and will alter the fluxes of sediments, nutrients and solutes, but the absence of quantitative data and coordinated geomorphic process monitoring and analysis to understand the sensitivity of the Earth surface environment in these largely undisturbed environments is acute. Our book addresses this existing key knowledge gap. The applied approach of integrating comparable and longer-term field datasets on contemporary solute and sedimentary fluxes from a number of different defined cold climate catchment geosystems for better understanding (i) the environmental drivers and rates of contemporary denudational surface processes and (ii) possible effects of projected climate change in cold regions is unique in the field of geomorphology. Largely undisturbed cold climate environments can provide baseline data for modeling the effects of environmental change. The book synthesizes work carried out by numerous SEDIBUD members over the last decade in numerous cold climate catchment geosystems worldwide. To reach a global coverage of different cold climate environments, the book, after an introductory part and a general part on climate change in cold environments and its implications for solute and sedimentary fluxes, is organized into defined parts dealing with Sub-Arctic and Arctic Environments, Sub-Antarctic and Antarctic Environments, and Alpine / Mountain Environments. The book includes a key synthesis chapter where comparable datasets on contemporary solute and sedimentary fluxes generated during the conducted coordinated research efforts in different cold climate catchment geosystems are integrated, with the key goals of (i) identifying the main environmental drivers and rates of contemporary solute and sedimentary fluxes, and (ii) modeling possible effects of projected climate change on solute and sedimentary fluxes in cold climate environments. The SEDIBUD book provides new key findings on environmental drivers and rates of contemporary solute and sedimentary fluxes, and on spatial variability within global cold climate environments. The book will go into production in July 2015.

  9. Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.

    2007-03-01

    In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher resolution imaging capabilities. A radiology core doing clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata between one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage system that has fault tolerance and dynamic integration for imaging-based clinical trials.

  10. Inter-University Upper Atmosphere Global Observation Network (IUGONET) Metadata Database and Its Interoperability

    NASA Astrophysics Data System (ADS)

    Yatagai, A. I.; Iyemori, T.; Ritschel, B.; Koyama, Y.; Hori, T.; Abe, S.; Tanaka, Y.; Shinbori, A.; Umemura, N.; Sato, Y.; Yagi, M.; Ueno, S.; Hashiguchi, N. O.; Kaneda, N.; Belehaki, A.; Hapgood, M. A.

    2013-12-01

    The IUGONET is a Japanese program to build a metadata database for ground-based observations of the upper atmosphere [1]. The project began in 2009 with five Japanese institutions which archive data observed by radars, magnetometers, photometers, radio telescopes and helioscopes, and so on, at various altitudes from the Earth's surface to the Sun. Systems have been developed to allow searching of the above-described metadata. We have been updating the system and adding new and updated metadata. The IUGONET development team adopted the SPASE metadata model [2] to describe the upper atmosphere data. This model is used as the common metadata format by the virtual observatories for solar-terrestrial physics. It includes metadata referring to each data file (called a 'Granule'), which enable a search for data files as well as data sets. Further details are described in [2] and [3]. Currently, three additional Japanese institutions are being incorporated in IUGONET. Furthermore, metadata of observations of the troposphere, taken at the observatories of the middle and upper atmosphere radar at Shigaraki and the meteor radar in Indonesia, have been incorporated. These additions will contribute to efficient interdisciplinary scientific research. In the beginning of 2013, the registration of the 'Observatory' and 'Instrument' metadata was completed, which makes it easy to get an overview of the metadata database. The number of registered metadata as of the end of July totalled 8.8 million, including 793 observatories and 878 instruments. It is important to promote interoperability and/or metadata exchange between the database development groups. A memorandum of agreement has been signed with the European Near-Earth Space Data Infrastructure for e-Science (ESPAS) project, which has objectives similar to IUGONET's, as a framework for formal collaboration. Furthermore, observations by satellites and the International Space Station are being incorporated with a view to making/linking metadata databases. The development of effective data systems will contribute to the progress of scientific research on solar-terrestrial physics, climate and the geophysical environment. Any kind of cooperation, metadata input and feedback, especially for linkage of the databases, is welcomed. References: 1. Hayashi, H. et al., Inter-university Upper Atmosphere Global Observation Network (IUGONET), Data Sci. J., 12, WDS179-184, 2013. 2. King, T. et al., SPASE 2.0: A standard data model for space physics. Earth Sci. Inform. 3, 67-73, 2010, doi:10.1007/s12145-010-0053-4. 3. Hori, T., et al., Development of IUGONET metadata format and metadata management system. J. Space Sci. Info. Jpn., 105-111, 2012. (in Japanese)
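
    As a toy illustration of why granule-level metadata makes individual data files searchable (not just whole data sets), consider the sketch below. The field names are loosely inspired by the SPASE model but are simplified assumptions, and the URLs are placeholders.

    ```python
    # Hypothetical per-file ("Granule") metadata records; fields and URLs are placeholders.
    granules = [
        {"dataset": "shigaraki_mu_radar", "start": "2013-01-01", "stop": "2013-01-02",
         "url": "http://example.org/mu/20130101.cdf"},
        {"dataset": "shigaraki_mu_radar", "start": "2013-01-02", "stop": "2013-01-03",
         "url": "http://example.org/mu/20130102.cdf"},
    ]

    def search(granules, dataset, day):
        """Return the data files of one dataset that cover a given day (ISO date strings)."""
        return [g["url"] for g in granules
                if g["dataset"] == dataset and g["start"] <= day < g["stop"]]

    print(search(granules, "shigaraki_mu_radar", "2013-01-01"))
    ```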

  11. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    ERIC Educational Resources Information Center

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  12. Predicting structured metadata from unstructured metadata

    PubMed Central

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data or data about the data, defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation model (LDA) to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms can be most accurately predicted using the TF-IDF approach, followed by LDA, with both outperforming the majority vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. Database URL: http://www.yeastgenome.org/ PMID:28637268
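
    A minimal sketch of the two prediction approaches described above, using scikit-learn: TF-IDF features versus LDA topic features, each feeding a simple classifier that predicts one structured field (here, organism) from free-text metadata. The tiny corpus and labels are invented for illustration and are not GEO data.

    ```python
    # Predict a structured metadata field from free text with (a) TF-IDF features
    # and (b) LDA topic features; the example data below is illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    free_text = [
        "total RNA extracted from mouse liver, Affymetrix array",
        "human blood sample, gene expression profiling by array",
        "mouse brain tissue, expression profiling of cortical neurons",
        "whole blood from human donor, microarray hybridization",
    ]
    organism = ["mouse", "human", "mouse", "human"]   # structured field to predict

    # (a) TF-IDF features feeding a linear classifier
    tfidf_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    tfidf_clf.fit(free_text, organism)

    # (b) LDA topics as a lower-dimensional representation, then the same classifier
    lda_clf = make_pipeline(
        CountVectorizer(),
        LatentDirichletAllocation(n_components=2, random_state=0),
        LogisticRegression(max_iter=1000),
    )
    lda_clf.fit(free_text, organism)

    query = ["liver tissue from mouse, expression array"]
    print(tfidf_clf.predict(query))
    print(lda_clf.predict(query))
    ```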

  13. Construction of a robust, large-scale, collaborative database for raw data in computational chemistry: the Collaborative Chemistry Database Tool (CCDBT).

    PubMed

    Chen, Mingyang; Stott, Amanda C; Li, Shenggang; Dixon, David A

    2012-04-01

    A robust metadata database called the Collaborative Chemistry Database Tool (CCDBT) for massive amounts of computational chemistry raw data has been designed and implemented. It performs data synchronization and simultaneously extracts the metadata. Computational chemistry data in various formats from different computing sources, software packages, and users can be parsed into uniform metadata for storage in a MySQL database. Parsing is performed by a parsing pyramid, including parsers written for different levels of data types and sets created by the parser loader after loading parser engines and configurations. Copyright © 2011 Elsevier Inc. All rights reserved.
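
    The "parsing pyramid" is described only at a high level above, so the following Python sketch shows just the general pattern of a parser registry: format-specific parsers are registered by a loader, and every raw output file is routed to the matching parser, which emits a uniform metadata record for storage. Parser names, fields and return values are illustrative assumptions, not CCDBT's actual design.

    ```python
    # Generic parser-registry pattern: register format-specific parsers once,
    # then dispatch raw output text to the right one. Values are dummies.
    PARSERS = {}

    def register(fmt):
        def wrap(func):
            PARSERS[fmt] = func
            return func
        return wrap

    @register("code_a")
    def parse_code_a(text):
        # a real parser would extract energies, geometries, basis sets, ...
        return {"code": "code_a", "energy": -1.0}

    @register("code_b")
    def parse_code_b(text):
        return {"code": "code_b", "energy": -1.1}

    def to_metadata(fmt, raw_text):
        """Dispatch a raw output file to its parser and return uniform metadata."""
        return PARSERS[fmt](raw_text)

    print(to_metadata("code_a", "...raw output..."))
    ```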

  14. Metadata (MD)

    Treesearch

    Robert E. Keane

    2006-01-01

    The Metadata (MD) table in the FIREMON database is used to record any information about the sampling strategy or data collected using the FIREMON sampling procedures. The MD method records metadata pertaining to a group of FIREMON plots, such as all plots in a specific FIREMON project. FIREMON plots are linked to metadata using a unique metadata identifier that is...

  15. The STP (Solar-Terrestrial Physics) Semantic Web based on the RSS1.0 and the RDF

    NASA Astrophysics Data System (ADS)

    Kubo, T.; Murata, K. T.; Kimura, E.; Ishikura, S.; Shinohara, I.; Kasaba, Y.; Watari, S.; Matsuoka, D.

    2006-12-01

    In Solar-Terrestrial Physics (STP), it has been pointed out that the circulation and utilization of observation data among researchers are insufficient. To achieve interdisciplinary research, we need to overcome these circulation and utilization problems. Against this background, the authors' group has developed a world-wide database that manages metadata of satellite and ground-based observation data files. It is noted that retrieving metadata from the observation data and registering them in the database have so far been carried out by hand. Our goal is to establish the STP Semantic Web. The Semantic Web provides a common framework that allows a variety of data to be shared and reused across applications, enterprises, and communities. We also expect that secondary information related to observations, such as event information and associated news, will be shared over the network. The most fundamental issue in this establishment is who generates, manages and provides metadata in the Semantic Web. We developed an automatic metadata collection system for the observation data using RSS (RDF Site Summary) 1.0. RSS1.0 is an XML-based markup language built on the RDF (Resource Description Framework) and designed for syndicating news and the contents of news-like sites. RSS1.0 is used to describe STP metadata, such as data file names, file server addresses and observation dates. To describe STP metadata beyond the RSS1.0 vocabulary, we defined original vocabularies for STP resources using RDF Schema. The RDF describes STP technical terms along with the Dublin Core Metadata Element Set, a standard for cross-domain information resource descriptions. Researchers' information on the STP is described with FOAF, an RDF/XML vocabulary for machine-readable metadata describing people. Using RSS1.0 as a metadata distribution method, the workflow from retrieving metadata to registering it in the database is automated. This technique is applied to several database systems, such as the DARTS database system and the NICT Space Weather Report Service. DARTS is a science database managed by ISAS/JAXA in Japan. We succeeded in automatically generating and collecting metadata for the CDF (Common Data Format) data, such as Reimei satellite data, provided by DARTS. We also created an RDF service for space weather reports and real-time global MHD simulation 3D data provided by NICT. Our Semantic Web system works as follows: The RSS1.0 documents generated on the data sites (ISAS and NICT) are automatically collected by a metadata collection agent. The RDF documents are registered and the agent extracts metadata and stores them in Sesame, an open-source RDF database with support for RDF Schema inferencing and querying. The RDF database provides advanced retrieval that takes properties and relations into account. Finally, the STP Semantic Web provides automatic processing and high-level search not only for observation data but also for space weather news, physical events, technical terms and researcher information related to the STP.
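
    As a hedged sketch of the approach described above (describing one observation data file as an RSS1.0/RDF item with Dublin Core terms), the following uses Python's rdflib. The STP vocabulary namespace, file URL and literal values are placeholders, not the project's actual terms.

    ```python
    # Build a small RDF graph describing one data file, roughly in the spirit of
    # an RSS1.0 item enriched with Dublin Core and a project-specific vocabulary.
    from rdflib import Graph, Namespace, URIRef, Literal
    from rdflib.namespace import DC

    RSS = Namespace("http://purl.org/rss/1.0/")
    STP = Namespace("http://example.org/stp#")   # placeholder project vocabulary

    g = Graph()
    g.bind("rss", RSS)
    g.bind("dc", DC)
    g.bind("stp", STP)

    item = URIRef("http://example.org/data/reimei_20061201.cdf")  # placeholder URL
    g.add((item, RSS.title, Literal("Reimei satellite CDF file, 2006-12-01")))
    g.add((item, DC.date, Literal("2006-12-01")))
    g.add((item, DC.creator, Literal("ISAS/JAXA DARTS")))
    g.add((item, STP.fileServer, Literal("darts.example.org")))

    # RDF/XML output that a metadata collection agent could harvest
    print(g.serialize(format="pretty-xml"))
    ```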

  16. CERES Search and Subset Tool

    Atmospheric Science Data Center

    2016-06-24

    ... data granules using a high resolution spatial metadata database and directly accessing the archived data granules. Subset results are ...

  17. Master Metadata Repository and Metadata-Management System

    NASA Technical Reports Server (NTRS)

    Armstrong, Edward; Reed, Nate; Zhang, Wen

    2007-01-01

    A master metadata repository (MMR) software system manages the storage and searching of metadata pertaining to data from national and international satellite sources of the Global Ocean Data Assimilation Experiment (GODAE) High Resolution Sea Surface Temperature Pilot Project [GHRSST-PP]. These sources produce a total of hundreds of data files daily, each file classified as one of more than ten data products representing global sea-surface temperatures. The MMR is a relational database wherein the metadata are divided into granule-level records [denoted file records (FRs)] for individual satellite files and collection-level records [denoted data set descriptions (DSDs)] that describe metadata common to all the files from a specific data product. FRs and DSDs adhere to the NASA Directory Interchange Format (DIF). The FRs and DSDs are contained in separate subdatabases linked by a common field. The MMR is configured in MySQL database software with custom Practical Extraction and Reporting Language (PERL) programs to validate and ingest the metadata records. The database contents are converted into the Federal Geographic Data Committee (FGDC) standard format by use of the Extensible Markup Language (XML). A Web interface enables users to search for availability of data from all sources.
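
    A schematic sketch of the two linked sub-databases described above: collection-level DSD records and granule-level FR records joined on a shared data-product field. Here sqlite3 stands in for MySQL, and the table and column names, product ID and file name are assumptions made for illustration.

    ```python
    # Collection-level (DSD) and granule-level (FR) tables linked by product_id.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE dsd (product_id TEXT PRIMARY KEY, description TEXT);
        CREATE TABLE fr  (file_name  TEXT PRIMARY KEY,
                          product_id TEXT REFERENCES dsd(product_id),
                          start_time TEXT);
    """)

    con.execute("INSERT INTO dsd VALUES (?, ?)",
                ("AVHRR_SST_L2P", "Level-2P sea-surface temperature product"))
    con.execute("INSERT INTO fr VALUES (?, ?, ?)",
                ("sst_20070101_0000.nc", "AVHRR_SST_L2P", "2007-01-01T00:00:00Z"))

    # join granule records to their collection-level description
    for row in con.execute("""SELECT fr.file_name, fr.start_time, dsd.description
                              FROM fr JOIN dsd USING (product_id)"""):
        print(row)
    ```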

  18. Global land information system (GLIS) access to worldwide Landsat data

    USGS Publications Warehouse

    Smith, Timothy B.; Goodale, Katherine L.

    1993-01-01

    The Landsat Technical Working Group (LTWG) and the Landsat Ground Station Operations Working Group (LGSOWG) have encouraged Landsat receiving stations around the world to share information about their data holdings through the exchange of metadata records. Receiving stations forward their metadata records to the U.S. Geological Survey's EROS Data Center (EDC) on a quarterly basis. The EDC maintains the records for each station, coordinates changes to the database, and provides metadata to the stations as requested. The result is a comprehensive international database listing most of the world's Landsat data acquisitions. This exchange of information began in the early 1980s with the inclusion in the EDC database of scenes acquired by a receiving station in Italy. Through the years other stations have agreed to participate; currently ten of the seventeen stations actively share their metadata records. Coverage maps have been generated to depict the status of the database. The worldwide Landsat database is also available through the Global Land Information System (GLIS).

  19. A Multi-Purpose Data Dissemination Infrastructure for the Marine-Earth Observations

    NASA Astrophysics Data System (ADS)

    Hanafusa, Y.; Saito, H.; Kayo, M.; Suzuki, H.

    2015-12-01

    To open the data from a variety of observations, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has developed a multi-purpose data dissemination infrastructure. Although many observations have been made in the Earth sciences, not all of the data are fully open. We think data centers may provide researchers with a universal data dissemination service which can handle various kinds of observation data with little effort. For this purpose the JAMSTEC Data Management Office has developed the "Information Catalog Infrastructure System (Catalog System)". This is a kind of catalog management system which can create, renew and delete catalogs (= databases) and has the following features: - The Catalog System does not depend on data types or granularity of data records. - By registering a new metadata schema to the system, a new database can be created on the same system without system modification. - As web pages are defined by cascading style sheets, each database can have its own look and feel, and operability. - The Catalog System provides databases with basic search tools; search by text, selection from a category tree, and selection from a time line chart. - For domestic users it creates the Japanese and English pages at the same time and has a dictionary to control terminology and proper nouns. As of August 2015 JAMSTEC operates 7 databases on the Catalog System. We expect to transfer existing databases to this system, or create new databases on it. In comparison with a dedicated database developed for a specific dataset, the Catalog System is suitable for the dissemination of small datasets at minimum cost. Metadata held in the catalogs may be transferred to other metadata schemas for exchange with global databases or portals. Examples: JAMSTEC Data Catalog: http://www.godac.jamstec.go.jp/catalog/data_catalog/metadataList?lang=en ; JAMSTEC Document Catalog: http://www.godac.jamstec.go.jp/catalog/doc_catalog/metadataList?lang=en&tab=category ; Research Information and Data Access Site of TEAMS: http://www.i-teams.jp/catalog/rias/metadataList?lang=en&tab=list

  20. Database integration in a multimedia-modeling environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorow, Kevin E.

    2002-09-02

    Integration of data from disparate remote sources has direct applicability to modeling, which can support Brownfield assessments. To accomplish this task, a data integration framework needs to be established. A key element in this framework is the metadata that creates the relationship between the pieces of information that are important in the multimedia modeling environment and the information that is stored in the remote data source. The design philosophy is to allow modelers and database owners to collaborate by defining this metadata in such a way that allows interaction between their components. The main parts of this framework include tools to facilitate metadata definition, database extraction plan creation, automated extraction plan execution / data retrieval, and a central clearing house for metadata and modeling / database resources. Cross-platform compatibility (using Java) and standard communications protocols (http / https) allow these parts to run in a wide variety of computing environments (Local Area Networks, Internet, etc.), and, therefore, this framework provides many benefits. Because of the specific data relationships described in the metadata, the amount of data that have to be transferred is kept to a minimum (only the data that fulfill a specific request are provided as opposed to transferring the complete contents of a data source). This allows for real-time data extraction from the actual source. Also, the framework sets up collaborative responsibilities such that the different types of participants have control over the areas in which they have domain knowledge: the modelers are responsible for defining the data relevant to their models, while the database owners are responsible for mapping the contents of the database using the metadata definitions. Finally, the data extraction mechanism allows for the ability to control access to the data and what data are made available.

  1. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata.

    PubMed

    Hitz, Benjamin C; Rowe, Laurence D; Podduturi, Nikhil R; Glick, David I; Baymuradov, Ulugbek K; Malladi, Venkat S; Chan, Esther T; Davidson, Jean M; Gabdank, Idan; Narayana, Aditi K; Onate, Kathrina C; Hilton, Jason; Ho, Marcus C; Lee, Brian T; Miyasato, Stuart R; Dreszer, Timothy R; Sloan, Cricket A; Strattan, J Seth; Tanaka, Forrest Y; Hong, Eurie L; Cherry, J Michael

    2017-01-01

    The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements, initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ to store genomic data in the manner of ENCODE. The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package.
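
    The abstract mentions a robust API for querying the metadata. The sketch below shows one way such a query might look against the public ENCODE portal using Python's requests library; the endpoint, parameters and response fields are stated here as assumptions based on the portal's documented search interface and should be checked against the current API documentation before use.

    ```python
    # Hedged example of a metadata search request against the ENCODE portal;
    # query parameters and JSON keys are assumptions to be verified.
    import requests

    resp = requests.get(
        "https://www.encodeproject.org/search/",
        params={"type": "Experiment", "assay_title": "ATAC-seq",
                "format": "json", "limit": 5},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()

    # the portal returns JSON-LD; matching objects are expected under "@graph"
    for item in resp.json().get("@graph", []):
        print(item.get("accession"), item.get("assay_title"))
    ```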

  2. SnoVault and encodeD: A novel object-based storage system and applications to ENCODE metadata

    PubMed Central

    Podduturi, Nikhil R.; Glick, David I.; Baymuradov, Ulugbek K.; Malladi, Venkat S.; Chan, Esther T.; Davidson, Jean M.; Gabdank, Idan; Narayana, Aditi K.; Onate, Kathrina C.; Hilton, Jason; Ho, Marcus C.; Lee, Brian T.; Miyasato, Stuart R.; Dreszer, Timothy R.; Sloan, Cricket A.; Strattan, J. Seth; Tanaka, Forrest Y.; Hong, Eurie L.; Cherry, J. Michael

    2017-01-01

    The Encyclopedia of DNA elements (ENCODE) project is an ongoing collaborative effort to create a comprehensive catalog of functional elements, initiated shortly after the completion of the Human Genome Project. The current database exceeds 6500 experiments across more than 450 cell lines and tissues using a wide array of experimental techniques to study the chromatin structure, regulatory and transcriptional landscape of the H. sapiens and M. musculus genomes. All ENCODE experimental data, metadata, and associated computational analyses are submitted to the ENCODE Data Coordination Center (DCC) for validation, tracking, storage, unified processing, and distribution to community resources and the scientific community. As the volume of data increases, the identification and organization of experimental details becomes increasingly intricate and demands careful curation. The ENCODE DCC has created a general purpose software system, known as SnoVault, that supports metadata and file submission, a database used for metadata storage, web pages for displaying the metadata and a robust API for querying the metadata. The software is fully open-source; code and installation instructions can be found at: http://github.com/ENCODE-DCC/snovault/ (for the generic database) and http://github.com/ENCODE-DCC/encoded/ to store genomic data in the manner of ENCODE. The core database engine, SnoVault (which is completely independent of ENCODE, genomic data, or bioinformatic data) has been released as a separate Python package. PMID:28403240

  3. A Web-based open-source database for the distribution of hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Ferwerda, J. G.; Jones, S. D.; Du, Pei-Jun

    2006-10-01

    With the coming of age of field spectroscopy as a non-destructive means to collect information on the physiology of vegetation, there is a need for storage of signatures and, more importantly, their metadata. Without the proper organisation of metadata, the signatures themselves are of limited use. In order to facilitate re-distribution of data, a database for the storage & distribution of hyperspectral signatures and their metadata was designed. The database was built using open-source software, and can be used by the hyperspectral community to share their data. Data is uploaded through a simple web-based interface. The database recognizes major file formats from ASD, GER and International Spectronics. The database source code is available for download through the hyperspectral.info web domain, and we happily invite suggestions for additions & modifications to the database, to be submitted through the online forums on the same website.

  4. Pathogen metadata platform: software for accessing and analyzing pathogen strain information.

    PubMed

    Chang, Wenling E; Peterson, Matthew W; Garay, Christopher D; Korves, Tonia

    2016-09-15

    Pathogen metadata includes information about where and when a pathogen was collected and the type of environment it came from. Along with genomic nucleotide sequence data, this metadata is growing rapidly and becoming a valuable resource not only for research but for biosurveillance and public health. However, current freely available tools for analyzing this data are geared towards bioinformaticians and/or do not provide summaries and visualizations needed to readily interpret results. We designed a platform to easily access and summarize data about pathogen samples. The software includes a PostgreSQL database that captures metadata useful for disease outbreak investigations, and scripts for downloading and parsing data from NCBI BioSample and BioProject into the database. The software provides a user interface to query metadata and obtain standardized results in an exportable, tab-delimited format. To visually summarize results, the user interface provides a 2D histogram for user-selected metadata types and mapping of geolocated entries. The software is built on the LabKey data platform, an open-source data management platform, which enables developers to add functionalities. We demonstrate the use of the software in querying for a pathogen serovar and for genome sequence identifiers. This software enables users to create a local database for pathogen metadata, populate it with data from NCBI, easily query the data, and obtain visual summaries. Some of the components, such as the database, are modular and can be incorporated into other data platforms. The source code is freely available for download at https://github.com/wchangmitre/bioattribution .
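
    As a hedged sketch of the download step described above (pulling pathogen sample records from NCBI BioSample before parsing them into a local database), the following uses Biopython's Entrez module. The search term, retrieval options and result handling are illustrative assumptions; set a real contact e-mail address and check the E-utilities documentation for the exact options a real pipeline needs.

    ```python
    # Query NCBI BioSample via E-utilities and fetch raw records that a parsing
    # script could then load into a local metadata database.
    from Bio import Entrez

    Entrez.email = "you@example.org"          # required by NCBI; placeholder address

    search = Entrez.read(Entrez.esearch(db="biosample",
                                        term="Salmonella enterica[Organism]",
                                        retmax=3))
    ids = search["IdList"]
    print("BioSample UIDs:", ids)

    # fetch the full records as XML; a real pipeline would parse fields such as
    # collection date, geographic location and isolation source into table rows
    xml_text = Entrez.efetch(db="biosample", id=",".join(ids), retmode="xml").read()
    print(xml_text[:300])
    ```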

  5. The Primate Life History Database: A unique shared ecological data resource

    PubMed Central

    Strier, Karen B.; Altmann, Jeanne; Brockman, Diane K.; Bronikowski, Anne M.; Cords, Marina; Fedigan, Linda M.; Lapp, Hilmar; Liu, Xianhua; Morris, William F.; Pusey, Anne E.; Stoinski, Tara S.; Alberts, Susan C.

    2011-01-01

    Summary: The importance of data archiving, data sharing, and public access to data has received considerable attention. Awareness is growing among scientists that collaborative databases can facilitate these activities. We provide a detailed description of the collaborative life history database developed by our Working Group at the National Evolutionary Synthesis Center (NESCent) to address questions about life history patterns and the evolution of mortality and demographic variability in wild primates. Examples from each of the seven primate species included in our database illustrate the range of data incorporated and the challenges, decision-making processes, and criteria applied to standardize data across diverse field studies. In addition to the descriptive and structural metadata associated with our database, we also describe the process metadata (how the database was designed and delivered) and the technical specifications of the database. Our database provides a useful model for other researchers interested in developing similar types of databases for other organisms, while our process metadata may be helpful to other groups of researchers interested in developing databases for other types of collaborative analyses. PMID:21698066

  6. Predicting structured metadata from unstructured metadata.

    PubMed

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data or data about the data, defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation model (LDA) to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms can be most accurately predicted using the TF-IDF approach, followed by LDA, with both outperforming the majority vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. © The Author(s) 2016. Published by Oxford University Press.

  7. Survey data and metadata modelling using document-oriented NoSQL

    NASA Astrophysics Data System (ADS)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data collected from year to year undergo metadata changes, yet they need to be stored in an integrated way so that statistical data can be retrieved faster and more easily. A data warehouse (DW) can be used to address this need. However, variables change in every period, and a traditional DW cannot accommodate this: it cannot handle variable change via the Slowly Changing Dimension (SCD) mechanism. Previous research handled the change of variables in a DW and managed metadata by using a multiversion DW (MVDW), designed with a relational model. Other studies have also found that a non-relational model in a NoSQL database has faster read times than the relational model. Therefore, we propose changes to metadata management by using NoSQL. This study proposes a DW model to manage change and algorithms to retrieve data with metadata changes. Evaluation of the proposed model and algorithms shows that a database with the proposed design can properly retrieve data with metadata changes. This paper contributes to comprehensive analysis of data with metadata changes (especially survey data) in integrated storage.
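
    To make the idea concrete, here is a schematic Python sketch (plain dictionaries standing in for NoSQL documents): each survey wave embeds its own metadata, and a small mapping resolves a logical variable across waves whose metadata changed. The variable names, mapping and values are invented for illustration; the paper's actual model and algorithms are not reproduced here.

    ```python
    # Each survey wave is one document that carries its own variable definitions,
    # so waves with different metadata can coexist in the same collection.
    surveys = [
        {"year": 2016,
         "metadata": {"inc": "monthly household income"},
         "records": [{"id": 1, "inc": 3500}]},
        {"year": 2017,   # the income variable was renamed in this wave
         "metadata": {"income_m": "monthly household income"},
         "records": [{"id": 1, "income_m": 3650}]},
    ]

    # cross-wave mapping of a logical variable to its physical name in each wave
    VARIABLE_MAP = {"household_income": {2016: "inc", 2017: "income_m"}}

    def fetch(variable, surveys):
        """Return (year, value) pairs for a logical variable across metadata changes."""
        out = []
        for wave in surveys:
            physical = VARIABLE_MAP[variable].get(wave["year"])
            if physical and physical in wave["metadata"]:
                out.extend((wave["year"], rec[physical]) for rec in wave["records"])
        return out

    print(fetch("household_income", surveys))   # -> [(2016, 3500), (2017, 3650)]
    ```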

  8. Sentence-Based Metadata: An Approach and Tool for Viewing Database Designs.

    ERIC Educational Resources Information Center

    Boyle, John M.; Gunge, Jakob; Bryden, John; Librowski, Kaz; Hanna, Hsin-Yi

    2002-01-01

    Describes MARS (Museum Archive Retrieval System), a research tool which enables organizations to exchange digital images and documents by means of a common thesaurus structure, and merge the descriptive data and metadata of their collections. Highlights include theoretical basis; searching the MARS database; and examples in European museums.…

  9. NeuroTransDB: highly curated and structured transcriptomic metadata for neurodegenerative diseases.

    PubMed

    Bagewadi, Shweta; Adhikari, Subash; Dhrangadhariya, Anjani; Irin, Afroza Khanam; Ebeling, Christian; Namasivayam, Aishwarya Alex; Page, Matthew; Hofmann-Apitius, Martin; Senger, Philipp

    2015-01-01

    Neurodegenerative diseases are chronic debilitating conditions, characterized by progressive loss of neurons, that represent a significant health care burden as the global elderly population continues to grow. Over the past decade, high-throughput technologies such as the Affymetrix GeneChip microarrays have provided new perspectives into the pathomechanisms underlying neurodegeneration. Public transcriptomic data repositories, namely Gene Expression Omnibus and curated ArrayExpress, enable researchers to conduct integrative meta-analysis; increasing the power to detect differentially regulated genes in disease and explore patterns of gene dysregulation across biologically related studies. The reliability of retrospective, large-scale integrative analyses depends on an appropriate combination of related datasets, in turn requiring detailed meta-annotations capturing the experimental setup. In most cases, we observe huge variation in compliance with defined standards for submitted metadata in public databases. Much of the information needed to complete or refine meta-annotations is distributed in the associated publications. For example, tissue preparation or comorbidity information is frequently described in an article's supplementary tables. Several value-added databases have employed additional manual efforts to overcome this limitation. However, none of these databases explicate annotations that distinguish human and animal models in the neurodegeneration context. Therefore, adopting a more specific disease focus, in combination with dedicated disease ontologies, will better empower the selection of comparable studies with refined annotations to address the research question at hand. In this article, we describe the detailed development of NeuroTransDB, a manually curated database containing metadata annotations for neurodegenerative studies. The database contains more than 20 dimensions of metadata annotations within 31 mouse, 5 rat and 45 human studies, defined in collaboration with domain disease experts. We elucidate the step-by-step guidelines used to critically prioritize studies from public archives and their metadata curation and discuss the key challenges encountered. Curated metadata for Alzheimer's disease gene expression studies are available for download. Database URL: www.scai.fraunhofer.de/NeuroTransDB.html. © The Author(s) 2015. Published by Oxford University Press.

  10. NeuroTransDB: highly curated and structured transcriptomic metadata for neurodegenerative diseases

    PubMed Central

    Bagewadi, Shweta; Adhikari, Subash; Dhrangadhariya, Anjani; Irin, Afroza Khanam; Ebeling, Christian; Namasivayam, Aishwarya Alex; Page, Matthew; Hofmann-Apitius, Martin

    2015-01-01

    Neurodegenerative diseases are chronic debilitating conditions, characterized by progressive loss of neurons that represent a significant health care burden as the global elderly population continues to grow. Over the past decade, high-throughput technologies such as the Affymetrix GeneChip microarrays have provided new perspectives into the pathomechanisms underlying neurodegeneration. Public transcriptomic data repositories, namely Gene Expression Omnibus and curated ArrayExpress, enable researchers to conduct integrative meta-analysis; increasing the power to detect differentially regulated genes in disease and explore patterns of gene dysregulation across biologically related studies. The reliability of retrospective, large-scale integrative analyses depends on an appropriate combination of related datasets, in turn requiring detailed meta-annotations capturing the experimental setup. In most cases, we observe huge variation in compliance to defined standards for submitted metadata in public databases. Much of the information to complete, or refine meta-annotations are distributed in the associated publications. For example, tissue preparation or comorbidity information is frequently described in an article’s supplementary tables. Several value-added databases have employed additional manual efforts to overcome this limitation. However, none of these databases explicate annotations that distinguish human and animal models in neurodegeneration context. Therefore, adopting a more specific disease focus, in combination with dedicated disease ontologies, will better empower the selection of comparable studies with refined annotations to address the research question at hand. In this article, we describe the detailed development of NeuroTransDB, a manually curated database containing metadata annotations for neurodegenerative studies. The database contains more than 20 dimensions of metadata annotations within 31 mouse, 5 rat and 45 human studies, defined in collaboration with domain disease experts. We elucidate the step-by-step guidelines used to critically prioritize studies from public archives and their metadata curation and discuss the key challenges encountered. Curated metadata for Alzheimer’s disease gene expression studies are available for download. Database URL: www.scai.fraunhofer.de/NeuroTransDB.html PMID:26475471

  11. Data, Metadata - Who Cares?

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

    There is a traditional saying that metadata are understandable, semantic-rich, and searchable. Data, on the other hand, are big, with no accessible semantics, and just downloadable. Not only has this led to an imbalance of search support from a user perspective, but also underneath to a deep technology divide often using relational databases for metadata and bespoke archive solutions for data. Our vision is that this barrier will be overcome, and data and metadata become searchable likewise, leveraging the potential of semantic technologies in combination with scalability technologies. Ultimately, in this vision ad-hoc processing and filtering will not distinguish any longer, forming a uniformly accessible data universe. In the European EarthServer initiative, we work towards this vision by federating database-style raster query languages with metadata search and geo broker technology. We present our approach taken, how it can leverage OGC standards, the benefits envisaged, and first results.

  12. Digital Initiatives and Metadata Use in Thailand

    ERIC Educational Resources Information Center

    SuKantarat, Wichada

    2008-01-01

    Purpose: This paper aims to provide information about various digital initiatives in libraries in Thailand and especially use of Dublin Core metadata in cataloguing digitized objects in academic and government digital databases. Design/methodology/approach: The author began researching metadata use in Thailand in 2003 and 2004 while on sabbatical…

  13. Extension modules for storage, visualization and querying of genomic, genetic and breeding data in Tripal databases

    PubMed Central

    Lee, Taein; Cheng, Chun-Huai; Ficklin, Stephen; Yu, Jing; Humann, Jodi; Main, Dorrie

    2017-01-01

    Abstract Tripal is an open-source database platform primarily used for development of genomic, genetic and breeding databases. We report here on the release of the Chado Loader, Chado Data Display and Chado Search modules to extend the functionality of the core Tripal modules. These new extension modules provide additional tools for (1) data loading, (2) customized visualization and (3) advanced search functions for supported data types such as organism, marker, QTL/Mendelian Trait Loci, germplasm, map, project, phenotype, genotype and their respective metadata. The Chado Loader module provides data collection templates in Excel with defined metadata and data loaders with front end forms. The Chado Data Display module contains tools to visualize each data type and the metadata which can be used as is or customized as desired. The Chado Search module provides search and download functionality for the supported data types. Also included are the tools to visualize map and species summary. The use of materialized views in the Chado Search module enables better performance as well as flexibility of data modeling in Chado, allowing existing Tripal databases with different metadata types to utilize the module. These Tripal Extension modules are implemented in the Genome Database for Rosaceae (rosaceae.org), CottonGen (cottongen.org), Citrus Genome Database (citrusgenomedb.org), Genome Database for Vaccinium (vaccinium.org) and the Cool Season Food Legume Database (coolseasonfoodlegume.org). Database URL: https://www.citrusgenomedb.org/, https://www.coolseasonfoodlegume.org/, https://www.cottongen.org/, https://www.rosaceae.org/, https://www.vaccinium.org/

  14. Metadata tables to enable dynamic data modeling and web interface design: the SEER example.

    PubMed

    Weiner, Mark; Sherr, Micah; Cohen, Abigail

    2002-04-01

    A wealth of information addressing health status, outcomes and resource utilization is compiled and made available by various government agencies. While exploration of the data is possible using existing tools, in general, would-be users of the resources must acquire CD-ROMs or download data from the web, and upload the data into their own database. Where web interfaces exist, they are highly structured, limiting the kinds of queries that can be executed. This work develops a web-based database interface engine whose content and structure is generated through interaction with a metadata table. The result is a dynamically generated web interface that can easily accommodate changes in the underlying data model by altering the metadata table, rather than requiring changes to the interface code. This paper discusses the background and implementation of the metadata table and web-based front end and provides examples of its use with the NCI's Surveillance, Epidemiology and End-Results (SEER) database.
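
    The abstract above describes a web interface whose content and structure are driven by a metadata table. The sketch below is a simplified, hypothetical version of that pattern: a field-metadata table generates both the form and a parameterized query, so adding a column to the table (not to the code) exposes a new filter. The column names and widgets are invented and do not reflect SEER's actual schema.

    ```python
    # Sketch: a metadata table describing queryable fields drives the interface.
    FIELD_METADATA = [
        {"column": "year_dx",   "label": "Year of diagnosis", "type": "int",  "widget": "range"},
        {"column": "site_code", "label": "Primary site",      "type": "text", "widget": "select"},
    ]

    def render_form(metadata):
        """Emit a crude HTML form derived entirely from the metadata table."""
        rows = [f'<label>{f["label"]}</label> <input name="{f["column"]}">' for f in metadata]
        return "<form>" + "".join(rows) + "</form>"

    def build_query(metadata, filters):
        """Build a parameterized WHERE clause from user-supplied filters."""
        allowed = {f["column"] for f in metadata}
        clauses, params = [], []
        for column, value in filters.items():
            if column in allowed:                # only metadata-listed columns are queryable
                clauses.append(f"{column} = ?")
                params.append(value)
        where = " AND ".join(clauses) or "1=1"
        return f"SELECT * FROM cases WHERE {where}", params

    print(render_form(FIELD_METADATA))
    print(build_query(FIELD_METADATA, {"site_code": "C50.9"}))
    ```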

  15. The Global Streamflow Indices and Metadata Archive (GSIM) - Part 1: The production of a daily streamflow archive and metadata

    NASA Astrophysics Data System (ADS)

    Do, Hong Xuan; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth

    2018-04-01

    This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.

  16. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
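
    As a rough illustration of the graph data model described above (files, user-defined attributes, and file relationships as first-class objects), the sketch below uses plain dictionaries; the file names, attributes, and relationship labels are hypothetical, and Quasar's actual XPath-extended query syntax is not reproduced.

    ```python
    # Sketch: files and their relationships as nodes and edges of a small graph.
    files = {
        "run_042.dat":  {"experiment": "plasma-shot-42", "instrument": "probe-A"},
        "run_042.meta": {"format": "json"},
    }
    # Directed relationships between files (edge label -> list of (src, dst) pairs).
    relationships = {"describes": [("run_042.meta", "run_042.dat")]}

    def files_with(attribute, value):
        """Find files by user-defined attribute, analogous to a metadata query."""
        return [name for name, attrs in files.items() if attrs.get(attribute) == value]

    def related(target, label):
        """Follow a relationship edge type back to the source files."""
        return [src for src, dst in relationships.get(label, []) if dst == target]

    print(files_with("instrument", "probe-A"))   # ['run_042.dat']
    print(related("run_042.dat", "describes"))   # ['run_042.meta']
    ```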

  17. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    PubMed Central

    Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.

    2015-01-01

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402

  18. Studies of Big Data metadata segmentation between relational and non-relational databases

    NASA Astrophysics Data System (ADS)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years the concepts of Big Data became well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about current system state and for statistical and trend analysis of the processes these systems drive. Over the time the amount of the stored metadata can grow dramatically. In this article we present our studies to demonstrate how metadata storage scalability and performance can be improved by using hybrid RDBMS/NoSQL architecture.
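
    A minimal sketch of the hybrid split described above, assuming (hypothetically) that stable, frequently joined metadata live in a relational table while sparse, evolving attributes go to a document store. Here SQLite stands in for the RDBMS and an in-memory dict emulates the NoSQL side; the table and field names are invented.

    ```python
    # Sketch: core metadata in an RDBMS (SQLite here), sparse/evolving attributes in a
    # document store (emulated by an in-memory dict keyed by task id).
    import sqlite3

    rdb = sqlite3.connect(":memory:")
    rdb.execute("CREATE TABLE tasks (task_id TEXT PRIMARY KEY, status TEXT, created TEXT)")
    rdb.execute("INSERT INTO tasks VALUES ('t-001', 'done', '2015-06-01')")

    doc_store = {  # stands in for a NoSQL collection of free-form metadata documents
        "t-001": {"software_release": "19.2.3", "input_datasets": ["ds_a", "ds_b"]},
    }

    def task_metadata(task_id):
        """Combine the relational core record with its document-store extension."""
        row = rdb.execute("SELECT status, created FROM tasks WHERE task_id = ?",
                          (task_id,)).fetchone()
        core = {"status": row[0], "created": row[1]} if row else {}
        return {**core, **doc_store.get(task_id, {})}

    print(task_metadata("t-001"))
    ```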

  19. A searching and reporting system for relational databases using a graph-based metadata representation.

    PubMed

    Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling

    2005-01-01

    Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
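
    The record above constructs SQL from a graph-based representation of schema metadata. The sketch below is a simplified, hypothetical version of that idea: tables are nodes, foreign-key joins are edges, and a breadth-first search finds a join path between two tables from which a SQL skeleton is assembled. The schema is invented, not Anadys's.

    ```python
    # Sketch: derive a join path (and a SQL skeleton) from graph-shaped schema metadata.
    from collections import deque

    # Edges: (table_a, table_b, join condition) -- hypothetical schema.
    schema_edges = [
        ("compound", "assay_result", "compound.id = assay_result.compound_id"),
        ("assay_result", "assay", "assay_result.assay_id = assay.id"),
    ]

    def join_path(start, goal):
        """Breadth-first search over the schema graph; returns (tables, join conditions)."""
        graph = {}
        for a, b, cond in schema_edges:
            graph.setdefault(a, []).append((b, cond))
            graph.setdefault(b, []).append((a, cond))
        queue, seen = deque([(start, [start], [])]), {start}
        while queue:
            table, tables, conds = queue.popleft()
            if table == goal:
                return tables, conds
            for nxt, cond in graph.get(table, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, tables + [nxt], conds + [cond]))
        return None, None

    tables, conds = join_path("compound", "assay")
    print(f"SELECT * FROM {', '.join(tables)} WHERE {' AND '.join(conds)}")
    ```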

  20. Describing environmental public health data: implementing a descriptive metadata standard on the environmental public health tracking network.

    PubMed

    Patridge, Jeff; Namulanda, Gonza

    2008-01-01

    The Environmental Public Health Tracking (EPHT) Network provides an opportunity to bring together diverse environmental and health effects data by integrating local, state, and national databases of environmental hazards, environmental exposures, and health effects. To help users locate data on the EPHT Network, the network will utilize descriptive metadata that provide critical information as to the purpose, location, content, and source of these data. Since 2003, the Centers for Disease Control and Prevention's EPHT Metadata Subgroup has been working to initiate the creation and use of descriptive metadata. Efforts undertaken by the group include the adoption of a metadata standard, creation of an EPHT-specific metadata profile, development of an open-source metadata creation tool, and promotion of the creation of descriptive metadata by changing the perception of metadata in the public health culture.

  1. The MAR databases: development and implementation of databases specific for marine metagenomics

    PubMed Central

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen

    2018-01-01

    Abstract We introduce the marine databases; MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database for completely sequenced marine prokaryotic genomes, which represent a marine prokaryote reference genome database, MarDB includes all incomplete sequenced prokaryotic genomes regardless level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields including attributes for sampling, sequencing, assembly and annotation in addition to the organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets the visitors browse, filter and search in the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. PMID:29106641

  2. PS1-41: Just Add Data: Implementing an Event-Based Data Model for Clinical Trial Tracking

    PubMed Central

    Fuller, Sharon; Carrell, David; Pardee, Roy

    2012-01-01

    Background/Aims Clinical research trials often have similar fundamental tracking needs, despite being quite variable in their specific logic and activities. A model tracking database that can be quickly adapted by a variety of studies has the potential to achieve significant efficiencies in database development and maintenance. Methods Over the course of several different clinical trials, we have developed a database model that is highly adaptable to a variety of projects. Rather than hard-coding each specific event that might occur in a trial, along with its logical consequences, this model considers each event and its parameters to be a data record in its own right. Each event may have related variables (metadata) describing its prerequisites, subsequent events due, associated mailings, or events that it overrides. The metadata for each event is stored in the same record with the event name. When changes are made to the study protocol, no structural changes to the database are needed. One has only to add or edit events and their metadata. Changes in the event metadata automatically determine any related logic changes. In addition to streamlining application code, this model simplifies communication between the programmer and other team members. Database requirements can be phrased as changes to the underlying data, rather than to the application code. The project team can review a single report of events and metadata and easily see where changes might be needed. In addition to benefitting from streamlined code, the front end database application can also implement useful standard features such as automated mail merges and to do lists. Results The event-based data model has proven itself to be robust, adaptable and user-friendly in a variety of study contexts. We have chosen to implement it as a SQL Server back end and distributed Access front end. Interested readers may request a copy of the Access front end and scripts for creating the back end database. Discussion An event-based database with a consistent, robust set of features has the potential to significantly reduce development time and maintenance expense for clinical trial tracking databases.
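
    A toy version of the event-as-data idea described above, with hypothetical event names and metadata fields: each event row carries its own prerequisites and follow-ups, so a protocol change becomes a data edit rather than a code change.

    ```python
    # Sketch: events and their workflow logic stored as data, not hard-coded.
    EVENTS = {
        "consent_signed":   {"prerequisites": [],                 "next_due": ["baseline_visit"]},
        "baseline_visit":   {"prerequisites": ["consent_signed"], "next_due": ["month_3_followup"]},
        "month_3_followup": {"prerequisites": ["baseline_visit"], "next_due": []},
    }

    def allowed_events(completed):
        """Events whose prerequisites are all satisfied and which are not yet done."""
        done = set(completed)
        return [name for name, meta in EVENTS.items()
                if name not in done and set(meta["prerequisites"]) <= done]

    def todo_list(completed):
        """Follow-up events triggered by what has already happened."""
        due = {e for name in completed for e in EVENTS[name]["next_due"]}
        return sorted(due - set(completed))

    print(allowed_events(["consent_signed"]))  # ['baseline_visit']
    print(todo_list(["consent_signed"]))       # ['baseline_visit']
    ```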

  3. Automated Database Mediation Using Ontological Metadata Mappings

    PubMed Central

    Marenco, Luis; Wang, Rixin; Nadkarni, Prakash

    2009-01-01

    Objective To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. Background One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. Model Description The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. Summary A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or for curation of the contents of compositional controlled vocabularies. PMID:19567801
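
    The granularity problem described above can be pictured with a hypothetical mapping rule: a query posed in coarse terms from a global vocabulary is expanded, via the rule, into the finer-grained codes that one federated database actually stores. The anatomical terms and rows below are invented for illustration.

    ```python
    # Sketch: a mapping rule translating a coarse query term into the
    # finer-grained codes a particular federated database stores.
    MAPPING_RULES = {
        # coarse (global vocabulary) term  ->  fine-grained local codes
        "cerebral cortex": ["frontal lobe", "parietal lobe", "temporal lobe", "occipital lobe"],
    }

    local_db = [  # pretend rows from one member of the federation
        {"region": "frontal lobe", "value": 0.82},
        {"region": "cerebellum",   "value": 0.44},
    ]

    def query(term):
        """Expand a coarse term into local codes before filtering the local database."""
        expansion = set(MAPPING_RULES.get(term, [term]))
        return [row for row in local_db if row["region"] in expansion]

    print(query("cerebral cortex"))  # matches the 'frontal lobe' row only
    ```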

  4. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    ERIC Educational Resources Information Center

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  5. Managing biomedical image metadata for search and retrieval of similar images.

    PubMed

    Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris

    2011-08-01

    Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM) to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standard-based metadata files using Web service and parses and stores the metadata in a relational database allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics like disease prevalences. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.
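
    The "match observations" function described above retrieves images whose regions of interest share semantic imaging observation characteristics (IOCs). A rough sketch of that retrieval step, using invented IOC terms and a simple Jaccard similarity rather than BIMM's actual scoring:

    ```python
    # Sketch: rank images by overlap of their semantic feature (IOC) sets.
    IMAGE_IOCS = {
        "liver_001": {"hypodense", "well-circumscribed", "homogeneous"},
        "liver_002": {"hypodense", "ill-defined", "heterogeneous"},
        "liver_003": {"hypodense", "well-circumscribed", "peripheral enhancement"},
    }

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    def match_observations(query_image, top_k=2):
        """Return the images most similar to the query, by IOC overlap."""
        query = IMAGE_IOCS[query_image]
        scores = {name: jaccard(query, iocs)
                  for name, iocs in IMAGE_IOCS.items() if name != query_image}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    print(match_observations("liver_001"))  # liver_003 ranks first (2 shared IOCs)
    ```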

  6. Interoperable Solar Data and Metadata via LISIRD 3

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2015-12-01

    LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. Incorporating a semantically enabled metadata repository, LISIRD 3 users see current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, mission and PI. The database also enables creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable. The database also enables the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on demand reformatting of data and timestamps, subsetting and aggregation, and other server side functionality via a RESTful OPeNDAP compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently the number and variety of datasets offered by LISIRD has grown to encompass several dozen, with many more to come. This poster will discuss design and implementation of LISIRD 3, including tools used, capabilities enabled, and issues encountered.

  7. The Genomes On Line Database (GOLD) in 2007: status of genomic and metagenomic projects and their associated metadata.

    PubMed

    Liolios, Konstantinos; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Kyrpides, Nikos C

    2008-01-01

    The Genomes On Line Database (GOLD) is a comprehensive resource that provides information on genome and metagenome projects worldwide. Complete and ongoing projects and their associated metadata can be accessed in GOLD through pre-computed lists and a search page. As of September 2007, GOLD contains information on more than 2900 sequencing projects, out of which 639 have been completed and their sequence data deposited in the public databases. GOLD continues to expand with the goal of providing metadata information related to the projects and the organisms/environments towards the 'Minimum Information about a Genome Sequence' (MIGS) guideline. GOLD is available at http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece at http://gold.imbb.forth.gr/

  8. The Genomes On Line Database (GOLD) in 2007: status of genomic and metagenomic projects and their associated metadata

    PubMed Central

    Liolios, Konstantinos; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Kyrpides, Nikos C.

    2008-01-01

    The Genomes On Line Database (GOLD) is a comprehensive resource that provides information on genome and metagenome projects worldwide. Complete and ongoing projects and their associated metadata can be accessed in GOLD through pre-computed lists and a search page. As of September 2007, GOLD contains information on more than 2900 sequencing projects, out of which 639 have been completed and their sequence data deposited in the public databases. GOLD continues to expand with the goal of providing metadata information related to the projects and the organisms/environments towards the ‘Minimum Information about a Genome Sequence’ (MIGS) guideline. GOLD is available at http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece at http://gold.imbb.forth.gr/ PMID:17981842

  9. The Metadata Coverage Index (MCI): A standardized metric for quantifying database metadata richness.

    PubMed

    Liolios, Konstantinos; Schriml, Lynn; Hirschman, Lynette; Pagani, Ioanna; Nosrat, Bahador; Sterk, Peter; White, Owen; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; Kyrpides, Nikos C; Field, Dawn

    2012-07-30

    Variability in the extent of the descriptions of data ('metadata') held in public repositories forces users to assess the quality of records individually, which rapidly becomes impractical. The scoring of records on the richness of their description provides a simple, objective proxy measure for quality that enables filtering that supports downstream analysis. Pivotally, such descriptions should spur on improvements. Here, we introduce such a measure - the 'Metadata Coverage Index' (MCI): the percentage of available fields actually filled in a record or description. MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example; to filter, rank or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics of funded projects; or to quantify the value added by curation. Here we demonstrate the utility of MCI scores using metadata from the Genomes Online Database (GOLD), including records compliant with the 'Minimum Information about a Genome Sequence' (MIGS) standard developed by the Genomic Standards Consortium. We discuss challenges and address the further application of MCI scores; to show improvements in annotation quality over time, to inform the work of standards bodies and repository providers on the usability and popularity of their products, and to assess and credit the work of curators. Such an index provides a step towards putting metadata capture practices and in the future, standards compliance, into a quantitative and objective framework.
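
    The metric defined above is straightforward to compute: the MCI is the percentage of available fields actually filled in a record. A minimal sketch, with field names loosely modeled on genome-sample metadata but invented for illustration:

    ```python
    # Sketch: Metadata Coverage Index = filled fields / available fields * 100.
    AVAILABLE_FIELDS = ["organism", "isolation_source", "geo_loc_name", "collection_date", "depth"]

    def mci(record, fields=AVAILABLE_FIELDS):
        """Percentage of the available fields that are non-empty in this record."""
        filled = sum(1 for f in fields if record.get(f) not in (None, "", "missing"))
        return 100.0 * filled / len(fields)

    record = {"organism": "Escherichia coli", "geo_loc_name": "USA", "collection_date": ""}
    print(f"MCI = {mci(record):.0f}%")   # 2 of 5 fields filled -> 40%
    ```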

  10. Public Participation in Earth Science from the ISS

    NASA Technical Reports Server (NTRS)

    Willis, Kimberly J.; Runco, Susan K.; Stefanov, William L.

    2010-01-01

    The Gateway to Astronaut Photography of Earth (GAPE) is an online database (http://eol.jsc.nasa.gov) of terrestrial astronaut photography that enables the public to experience the astronaut's view from orbit. This database of imagery includes all NASA human-directed missions from the Mercury program of the early 1960's to the current International Space Station (ISS). To date, the total number of images taken by astronauts is 1,025,333. Of the total, 621,316 images have been "cataloged" (image geographic center points determined and descriptive metadata added). The remaining imagery provides an opportunity for the citizen-scientist to become directly involved with NASA through cataloging of astronaut photography, while simultaneously experiencing the wonder and majesty of our home planet as seen by astronauts on board the ISS every day. We are currently developing a public cataloging interface for the GAPE website. When complete, the citizen-scientist will be able to access a selected subset of astronaut imagery. Each candidate will be required to pass a training tutorial in order to receive certification as a cataloger. The cataloger can then choose from a selection of images with basic metadata that is sorted by difficulty levels. Some guidance will be provided (template/pull down menus) for generation of geographic metadata required from the cataloger for each photograph. Each cataloger will also be able to view other contributions and further edit that metadata if they so choose. After the public inputs their metadata the images will be posted to an internal screening site. Images with similar geographic metadata and centerpoint coordinates from multiple catalogers will be reviewed by NASA JSC Crew Earth Observations (CEO) staff. Once reviewed and verified, the metadata will be entered into the GAPE database with the contributors identified by their chosen usernames as having cataloged the frame.

  11. Trends in the Evolution of the Public Web, 1998-2002; The Fedora Project: An Open-source Digital Object Repository Management System; State of the Dublin Core Metadata Initiative, April 2003; Preservation Metadata; How Many People Search the ERIC Database Each Day?

    ERIC Educational Resources Information Center

    O'Neill, Edward T.; Lavoie, Brian F.; Bennett, Rick; Staples, Thornton; Wayland, Ross; Payette, Sandra; Dekkers, Makx; Weibel, Stuart; Searle, Sam; Thompson, Dave; Rudner, Lawrence M.

    2003-01-01

    Includes five articles that examine key trends in the development of the public Web: size and growth, internationalization, and metadata usage; Flexible Extensible Digital Object and Repository Architecture (Fedora) for use in digital libraries; developments in the Dublin Core Metadata Initiative (DCMI); the National Library of New Zealand Te Puna…

  12. The Genomes OnLine Database (GOLD) v.5: a metadata management system based on a four level (meta)genome project classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reddy, Tatiparthi B. K.; Thomas, Alex D.; Stamatis, Dimitri

    The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.

  13. SCRIPDB: a portal for easy access to syntheses, chemicals and reactions in patents

    PubMed Central

    Heifets, Abraham; Jurisica, Igor

    2012-01-01

    The patent literature is a rich catalog of biologically relevant chemicals; many public and commercial molecular databases contain the structures disclosed in patent claims. However, patents are an equally rich source of metadata about bioactive molecules, including mechanism of action, disease class, homologous experimental series, structural alternatives, or the synthetic pathways used to produce molecules of interest. Unfortunately, this metadata is discarded when chemical structures are deposited separately in databases. SCRIPDB is a chemical structure database designed to make this metadata accessible. SCRIPDB provides the full original patent text, reactions and relationships described within any individual patent, in addition to the molecular files common to structural databases. We discuss how such information is valuable in medical text mining, chemical image analysis, reaction extraction and in silico pharmaceutical lead optimization. SCRIPDB may be searched by exact chemical structure, substructure or molecular similarity and the results may be restricted to patents describing synthetic routes. SCRIPDB is available at http://dcv.uhnres.utoronto.ca/SCRIPDB. PMID:22067445

  14. The MAR databases: development and implementation of databases specific for marine metagenomics.

    PubMed

    Klemetsen, Terje; Raknes, Inge A; Fu, Juan; Agafonov, Alexander; Balasundaram, Sudhagar V; Tartari, Giacomo; Robertsen, Espen; Willassen, Nils P

    2018-01-04

    We introduce the marine databases; MarRef, MarDB and MarCat (https://mmp.sfb.uit.no/databases/), which are publicly available resources that promote marine research and innovation. These data resources, which have been implemented in the Marine Metagenomics Portal (MMP) (https://mmp.sfb.uit.no/), are collections of richly annotated and manually curated contextual (metadata) and sequence databases representing three tiers of accuracy. While MarRef is a database for completely sequenced marine prokaryotic genomes, which represent a marine prokaryote reference genome database, MarDB includes all incomplete sequenced prokaryotic genomes regardless level of completeness. The last database, MarCat, represents a gene (protein) catalog of uncultivable (and cultivable) marine genes and proteins derived from marine metagenomics samples. The first versions of MarRef and MarDB contain 612 and 3726 records, respectively. Each record is built up of 106 metadata fields including attributes for sampling, sequencing, assembly and annotation in addition to the organism and taxonomic information. Currently, MarCat contains 1227 records with 55 metadata fields. Ontologies and controlled vocabularies are used in the contextual databases to enhance consistency. The user-friendly web interface lets the visitors browse, filter and search in the contextual databases and perform BLAST searches against the corresponding sequence databases. All contextual and sequence databases are freely accessible and downloadable from https://s1.sfb.uit.no/public/mar/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Metadata: Standards for Retrieving WWW Documents (and Other Digitized and Non-Digitized Resources)

    NASA Astrophysics Data System (ADS)

    Rusch-Feja, Diann

    The use of metadata for indexing digitized and non-digitized resources for resource discovery in a networked environment is being increasingly implemented all over the world. Greater precision is achieved using metadata than relying on universal search engines and furthermore, meta-data can be used as filtering mechanisms for search results. An overview of various metadata sets is given, followed by a more focussed presentation of Dublin Core Metadata including examples of sub-elements and qualifiers. Especially the use of the Dublin Core Relation element provides connections between the metadata of various related electronic resources, as well as the metadata for physical, non-digitized resources. This facilitates more comprehensive search results without losing precision and brings together different genres of information which would otherwise be only searchable in separate databases. Furthermore, the advantages of Dublin Core Metadata in comparison with library cataloging and the use of universal search engines are discussed briefly, followed by a listing of types of implementation of Dublin Core Metadata.
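
    To make the role of the Dublin Core Relation element concrete, here is a small, hypothetical record expressed as a Python dictionary. The element names follow the Dublin Core element set (with an IsFormatOf relation refinement), while the values, identifiers, and archive are invented.

    ```python
    # Sketch: a Dublin Core description linking a digitized item to a related physical resource.
    dc_record = {
        "dc:title":      "Field notebook, glacier survey 1987 (digitized)",
        "dc:creator":    "Example Expedition Archive",
        "dc:type":       "Text",
        "dc:format":     "image/tiff",
        "dc:identifier": "https://example.org/archive/notebook-1987",
        # The Relation element connects this surrogate to the original, non-digitized item.
        "dc:relation":   "IsFormatOf: Notebook 1987, Box 12, Physical Archive",
        "dc:subject":    ["glaciology", "sediment transport"],
    }

    for element, value in dc_record.items():
        print(f"{element}: {value}")
    ```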

  16. MetaRNA-Seq: An Interactive Tool to Browse and Annotate Metadata from RNA-Seq Studies.

    PubMed

    Kumar, Pankaj; Halama, Anna; Hayat, Shahina; Billing, Anja M; Gupta, Manish; Yousri, Noha A; Smith, Gregory M; Suhre, Karsten

    2015-01-01

    The number of RNA-Seq studies has grown in recent years. The design of RNA-Seq studies varies from very simple (e.g., two-condition case-control) to very complicated (e.g., time series involving multiple samples at each time point with separate drug treatments). Most of these publically available RNA-Seq studies are deposited in NCBI databases, but their metadata are scattered throughout four different databases: Sequence Read Archive (SRA), Biosample, Bioprojects, and Gene Expression Omnibus (GEO). Although the NCBI web interface is able to provide all of the metadata information, it often requires significant effort to retrieve study- or project-level information by traversing through multiple hyperlinks and going to another page. Moreover, project- and study-level metadata lack manual or automatic curation by categories, such as disease type, time series, case-control, or replicate type, which are vital to comprehending any RNA-Seq study. Here we describe "MetaRNA-Seq," a new tool for interactively browsing, searching, and annotating RNA-Seq metadata with the capability of semiautomatic curation at the study level.

  17. SQLGEN: a framework for rapid client-server database application development.

    PubMed

    Nadkarni, P M; Cheung, K H

    1995-12-01

    SQLGEN is a framework for rapid client-server relational database application development. It relies on an active data dictionary on the client machine that stores metadata on one or more database servers to which the client may be connected. The dictionary generates dynamic Structured Query Language (SQL) to perform common database operations; it also stores information about the access rights of the user at log-in time, which is used to partially self-configure the behavior of the client to disable inappropriate user actions. SQLGEN uses a microcomputer database as the client to store metadata in relational form, to transiently capture server data in tables, and to allow rapid application prototyping followed by porting to client-server mode with modest effort. SQLGEN is currently used in several production biomedical databases.
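
    The active data dictionary described above can be sketched as follows: the dictionary lists the columns of a table together with the user's access rights captured at log-in, and generic code generates the SQL and blocks disallowed operations. The table, column, and rights names here are hypothetical, not SQLGEN's actual metadata layout.

    ```python
    # Sketch: a client-side data dictionary drives dynamic SQL generation and
    # disables operations the logged-in user is not entitled to perform.
    DICTIONARY = {
        "patient": {
            "columns": ["patient_id", "last_name", "birth_date"],
            "rights": {"select": True, "insert": True, "update": False},
        },
    }

    def gen_select(table, where_cols):
        meta = DICTIONARY[table]
        if not meta["rights"]["select"]:
            raise PermissionError(f"user may not SELECT from {table}")
        cols = ", ".join(meta["columns"])
        where = " AND ".join(f"{c} = ?" for c in where_cols)
        return f"SELECT {cols} FROM {table}" + (f" WHERE {where}" if where else "")

    def gen_insert(table):
        meta = DICTIONARY[table]
        if not meta["rights"]["insert"]:
            raise PermissionError(f"user may not INSERT into {table}")
        placeholders = ", ".join("?" for _ in meta["columns"])
        return f"INSERT INTO {table} ({', '.join(meta['columns'])}) VALUES ({placeholders})"

    print(gen_select("patient", ["patient_id"]))
    print(gen_insert("patient"))
    ```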

  18. Latest developments for the IAGOS database: Interoperability and metadata

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by open access policy based on the submission of research requests which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and the real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We also are implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface) which combines model outputs with in situ data for intercomparison. The optimal data transfer protocol is being investigated to insure the interoperability. To facilitate satellite and model validation, tools will be made available for co-location and comparison with IAGOS. We will enhance the JOIN application in order to properly display aircraft data as vertical profiles and along individual flight tracks and to allow for graphical comparison to model results that are accessible through interoperable web services, such as the daily products from the GMES/Copernicus atmospheric service.

  19. Managing Data, Provenance and Chaos through Standardization and Automation at the Georgia Coastal Ecosystems LTER Site

    NASA Astrophysics Data System (ADS)

    Sheldon, W.

    2013-12-01

    Managing data for a large, multidisciplinary research program such as a Long Term Ecological Research (LTER) site is a significant challenge, but also presents unique opportunities for data stewardship. LTER research is conducted within multiple organizational frameworks (i.e. a specific LTER site as well as the broader LTER network), and addresses both specific goals defined in an NSF proposal as well as broader goals of the network; therefore, every LTER data can be linked to rich contextual information to guide interpretation and comparison. The challenge is how to link the data to this wealth of contextual metadata. At the Georgia Coastal Ecosystems LTER we developed an integrated information management system (GCE-IMS) to manage, archive and distribute data, metadata and other research products as well as manage project logistics, administration and governance (figure 1). This system allows us to store all project information in one place, and provide dynamic links through web applications and services to ensure content is always up to date on the web as well as in data set metadata. The database model supports tracking changes over time in personnel roles, projects and governance decisions, allowing these databases to serve as canonical sources of project history. Storing project information in a central database has also allowed us to standardize both the formatting and content of critical project information, including personnel names, roles, keywords, place names, attribute names, units, and instrumentation, providing consistency and improving data and metadata comparability. Lookup services for these standard terms also simplify data entry in web and database interfaces. We have also coupled the GCE-IMS to our MATLAB- and Python-based data processing tools (i.e. through database connections) to automate metadata generation and packaging of tabular and GIS data products for distribution. Data processing history is automatically tracked throughout the data lifecycle, from initial import through quality control, revision and integration by our data processing system (GCE Data Toolbox for MATLAB), and included in metadata for versioned data products. This high level of automation and system integration has proven very effective in managing the chaos and scalability of our information management program.

  20. Data Publication and Interoperability for Long Tail Researchers via the Open Data Repository's (ODR) Data Publisher.

    NASA Astrophysics Data System (ADS)

    Stone, N.; Lafuente, B.; Bristow, T.; Keller, R.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.

    2016-12-01

    Working primarily with astrobiology researchers at NASA Ames, the Open Data Repository (ODR) has been conducting a software pilot to meet the varying needs of this multidisciplinary community. Astrobiology researchers often have small communities or operate individually with unique data sets that don't easily fit into existing database structures. The ODR constructed its Data Publisher software to allow researchers to create databases with common metadata structures and subsequently extend them to meet their individual needs and data requirements. The software accomplishes these tasks through a web-based interface that allows collaborative creation and revision of common metadata templates and individual extensions to these templates for custom data sets. This allows researchers to search disparate datasets based on common metadata established through the metadata tools, but still facilitates distinct analyses and data that may be stored alongside the required common metadata. The software produces web pages that can be made publicly available at the researcher's discretion so that users may search and browse the data in an effort to make interoperability and data discovery a human-friendly task while also providing semantic data for machine-based discovery. Once relevant data has been identified, researchers can utilize the built-in application programming interface (API) that exposes the data for machine-based consumption and integration with existing data analysis tools (e.g. R, MATLAB, Project Jupyter - http://jupyter.org). The current evolution of the project has created the Astrobiology Habitable Environments Database (AHED)[1] which provides an interface to databases connected through a common metadata core. In the next project phase, the goal is for small research teams and groups to be self-sufficient in publishing their research data to meet funding mandates and academic requirements as well as fostering increased data discovery and interoperability through human-readable and machine-readable interfaces. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL. [1] B. Lafuente et al. (2016) AGU, submitted.

  1. Design of Community Resource Inventories as a Component of Scalable Earth Science Infrastructure: Experience of the Earthcube CINERGI Project

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Richard, S. M.; Valentine, D. W., Jr.; Grethe, J. S.; Hsu, L.; Malik, T.; Bermudez, L. E.; Gupta, A.; Lehnert, K. A.; Whitenack, T.; Ozyurt, I. B.; Condit, C.; Calderon, R.; Musil, L.

    2014-12-01

    EarthCube is envisioned as a cyberinfrastructure that fosters new, transformational geoscience by enabling sharing, understanding and scientifically-sound and efficient re-use of formerly unconnected data resources, software, models, repositories, and computational power. Its purpose is to enable science enterprise and workforce development via an extensible and adaptable collaboration and resource integration framework. A key component of this vision is development of comprehensive inventories supporting resource discovery and re-use across geoscience domains. The goal of the EarthCube CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) project is to create a methodology and assemble a large inventory of high-quality information resources with standard metadata descriptions and traceable provenance. The inventory is compiled from metadata catalogs maintained by geoscience data facilities, as well as from user contributions. The latter mechanism relies on community resource viewers: online applications that support update and curation of metadata records. Once harvested into CINERGI, metadata records from domain catalogs and community resource viewers are loaded into a staging database implemented in MongoDB, and validated for compliance with ISO 19139 metadata schema. Several types of metadata defects detected by the validation engine are automatically corrected with help of several information extractors or flagged for manual curation. The metadata harvesting, validation and processing components generate provenance statements using W3C PROV notation, which are stored in a Neo4J database. Thus curated metadata, along with the provenance information, is re-published and accessed programmatically and via a CINERGI online application. This presentation focuses on the role of resource inventories in a scalable and adaptable information infrastructure, and on the CINERGI metadata pipeline and its implementation challenges. Key project components are described at the project's website (http://workspace.earthcube.org/cinergi), which also provides access to the initial resource inventory, the inventory metadata model, metadata entry forms and a collection of the community resource viewers.

  2. Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.

    PubMed

    Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel

    2012-01-01

    Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization. RSNA, 2012
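
    The record above stores DICOM metadata as CouchDB documents and queries them through predefined map views. The sketch below shows the general shape of such a design document and how it might be installed and queried over CouchDB's HTTP API with the requests library; the server URL, database name, view name, and DICOM tag values are placeholders, and the database is assumed to exist already.

    ```python
    # Sketch: install a map view over DICOM metadata documents and query it.
    # Assumes a local CouchDB at localhost:5984 with an existing database 'dicom_meta'.
    import requests

    BASE = "http://localhost:5984/dicom_meta"

    design_doc = {
        "views": {
            "by_modality": {
                # Map function runs server-side in JavaScript; emits one row per document.
                "map": "function(doc) { if (doc.Modality) { emit(doc.Modality, doc.SeriesInstanceUID); } }"
            }
        }
    }

    # Create the design document holding the view.
    requests.put(f"{BASE}/_design/metadata", json=design_doc)

    # Store one DICOM-metadata document (tag values are illustrative only).
    requests.put(f"{BASE}/mr-series-0001",
                 json={"Modality": "MR", "SeriesInstanceUID": "1.2.840.0001"})

    # Query all series of a given modality through the view (keys are JSON-encoded).
    resp = requests.get(f"{BASE}/_design/metadata/_view/by_modality", params={"key": '"MR"'})
    print(resp.json())
    ```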

  3. Database technology and the management of multimedia data in the Mirror project

    NASA Astrophysics Data System (ADS)

    de Vries, Arjen P.; Blanken, H. M.

    1998-10-01

    Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representation of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participating through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.

  4. Legacy2Drupal - Conversion of an existing oceanographic relational database to a semantically enabled Drupal content management system

    NASA Astrophysics Data System (ADS)

    Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.

    2009-12-01

    Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing, relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data; and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces will help prepare the BCO-DMO database for interoperability with other ecosystem databases.

  5. Public Participation in Earth Science from the Iss

    NASA Astrophysics Data System (ADS)

    Willis, K. J.; Runco, S.; Stefanov, W. L.

    2010-12-01

    The Gateway to Astronaut Photography of Earth (GAPE) is an online database (http://eol.jsc.nasa.gov) of terrestrial astronaut photography that enables the public to experience the astronaut’s view from orbit. This database of imagery includes all NASA human-directed missions from the Mercury program of the early 1960s to the current International Space Station (ISS). To date, the total number of images taken by astronauts is 1,025,333. Of the total, 621,316 images have been “cataloged” (image geographic center points determined and descriptive metadata added). The remaining imagery provides an opportunity for the citizen-scientist to become directly involved with NASA through cataloging of astronaut photography, while simultaneously experiencing the wonder and majesty of our home planet as seen by astronauts on board the ISS every day. We are currently developing a public cataloging interface for the GAPE website. When complete, the citizen-scientist will be able to access a selected subset of astronaut imagery. Each candidate will be required to pass a training tutorial in order to receive certification as a cataloger. The cataloger can then choose from a selection of images with basic metadata, sorted by difficulty level. Some guidance will be provided (template/pull-down menus) for generation of the geographic metadata required from the cataloger for each photograph. Each cataloger will also be able to view other contributions and further edit that metadata if they so choose. After the public inputs their metadata, the images will be posted to an internal screening site. Images with similar geographic metadata and centerpoint coordinates from multiple catalogers will be reviewed by NASA JSC Crew Earth Observations (CEO) staff. Once reviewed and verified, the metadata will be entered into the GAPE database with the contributors identified by their chosen usernames as having cataloged the frame.

  6. Constructing compact and effective graphs for recommender systems via node and edge aggregations

    DOE PAGES

    Lee, Sangkeun; Kahng, Minsuk; Lee, Sang-goo

    2014-12-10

    Exploiting graphs for recommender systems has great potential to flexibly incorporate heterogeneous information for producing better recommendation results. As our baseline approach, we first introduce a naive graph-based recommendation method, which operates with a heterogeneous log-metadata graph constructed from user log and content metadata databases. Although the naive graph-based recommendation method is simple, it allows us to take advantage of heterogeneous information and shows promising flexibility and recommendation accuracy. However, it often leads to extensive processing time due to the sheer size of the graphs constructed from entire user log and content metadata databases. In this paper, we propose node and edge aggregation approaches to constructing compact and effective graphs called Factor-Item bipartite graphs by aggregating nodes and edges of a log-metadata graph. Furthermore, experimental results using real world datasets indicate that our approach can significantly reduce the size of graphs exploited for recommender systems without sacrificing the recommendation quality.
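
    As a rough illustration of the aggregation idea in this abstract, the sketch below folds per-user log events and item metadata into weighted factor-item edges. The node and edge semantics here are illustrative assumptions, not the paper's exact Factor-Item construction.

```python
# Minimal sketch: aggregate a user-log / item-metadata graph into a compact
# factor-item bipartite graph. All field names and records are illustrative.
from collections import defaultdict

log = [("u1", "song_a"), ("u1", "song_b"), ("u2", "song_a")]          # (user, item) events
metadata = [("song_a", "genre:rock"), ("song_b", "genre:rock"),
            ("song_b", "artist:x")]                                    # (item, factor)

# Aggregate edges: weight(factor, item) = number of log events for the item
# that the factor describes, so individual user nodes are folded away.
play_count = defaultdict(int)
for _user, item in log:
    play_count[item] += 1

factor_item = defaultdict(float)
for item, factor in metadata:
    factor_item[(factor, item)] += play_count[item]

for (factor, item), weight in sorted(factor_item.items()):
    print(f"{factor} -- {item}: weight={weight}")
```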

  7. Metadata to Support Data Warehouse Evolution

    NASA Astrophysics Data System (ADS)

    Solodovnikova, Darja

    The focus of this chapter is the metadata necessary to support data warehouse evolution. We present a data warehouse framework that is able to track the evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss a significant part of the framework, the metadata repository, which stores information about the data warehouse, its logical and physical schemata, and their versions. We propose a physical implementation of a multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and to the database.
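
    To make the idea of a schema-version repository concrete, here is a minimal sketch assuming a simplified two-table layout; it is not the chapter's actual repository design.

```python
# Minimal sketch of a versioned schema-metadata repository in a relational DBMS.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE schema_version (
    version_id INTEGER PRIMARY KEY,
    valid_from TEXT NOT NULL
);
CREATE TABLE table_version (
    version_id  INTEGER REFERENCES schema_version(version_id),
    table_name  TEXT NOT NULL,
    column_name TEXT NOT NULL,
    column_type TEXT NOT NULL
);
""")

# Record version 1 of a fact table, then an evolved version 2 with a new column.
con.execute("INSERT INTO schema_version VALUES (1, '2024-01-01')")
con.executemany("INSERT INTO table_version VALUES (1, 'sales_fact', ?, ?)",
                [("amount", "NUMERIC"), ("store_id", "INTEGER")])
con.execute("INSERT INTO schema_version VALUES (2, '2024-06-01')")
con.executemany("INSERT INTO table_version VALUES (2, 'sales_fact', ?, ?)",
                [("amount", "NUMERIC"), ("store_id", "INTEGER"), ("channel", "TEXT")])

for row in con.execute(
        "SELECT version_id, column_name FROM table_version ORDER BY version_id"):
    print(row)
```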

  8. Playing the Metadata Game: Technologies and Strategies Used by Climate Diagnostics Center for Cataloging and Distributing Climate Data.

    NASA Astrophysics Data System (ADS)

    Schweitzer, R. H.

    2001-05-01

    The Climate Diagnostics Center maintains a collection of gridded climate data primarily for use by local researchers. Because this data is available on fast digital storage and because it has been converted to netCDF using a standard metadata convention (called COARDS), we recognize that this data collection is also useful to the community at large. At CDC we try to use technology and metadata standards to reduce our costs associated with making these data available to the public. The World Wide Web has been an excellent technology platform for meeting that goal. Specifically, we have developed Web-based user interfaces that allow users to search, plot and download subsets from the data collection. We have also been exploring use of the Pacific Marine Environmental Laboratory's Live Access Server (LAS) as an engine for this task. This would result in further savings by allowing us to concentrate on customizing the LAS where needed, rather than developing and maintaining our own system. One such customization currently under development is the use of Java Servlets and JavaServer Pages in conjunction with a metadata database to produce a hierarchical user interface to LAS. In addition to these Web-based user interfaces, all of our data are available via the Distributed Oceanographic Data System (DODS). This allows other sites using LAS and individuals using DODS-enabled clients to use our data as if it were a local file. All of these technology systems are driven by metadata. When we began to create netCDF files, we collaborated with several other agencies to develop a netCDF convention (COARDS) for metadata. At CDC we have extended that convention to incorporate additional metadata elements to make the netCDF files as self-describing as possible. Part of the local metadata is a set of controlled names for the variable, level in the atmosphere and ocean, statistic and data set for each netCDF file. To allow searching and easy reorganization of these metadata, we loaded the metadata from the netCDF files into a MySQL database. The combination of the MySQL database and the controlled names makes it possible to automate the construction of user interfaces and standard format metadata descriptions, like Federal Geographic Data Committee (FGDC) and Directory Interchange Format (DIF). These standard descriptions also include an association between our controlled names and standard keywords such as those developed by the Global Change Master Directory (GCMD). This talk will give an overview of each of these technology and metadata standards as it applies to work at the Climate Diagnostics Center. The talk will also discuss the pros and cons of each approach and discuss areas for future development.
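
    The workflow of harvesting netCDF attributes into a relational table, as described above, can be sketched as follows. This is a minimal illustration, assuming a hypothetical file name and using SQLite instead of MySQL to stay self-contained.

```python
# Minimal sketch: harvest netCDF global and per-variable attributes into a
# small SQL table so user interfaces and FGDC/DIF-style records can be
# generated from one place.
import sqlite3
from netCDF4 import Dataset   # pip install netCDF4

con = sqlite3.connect("catalog.db")
con.execute("""CREATE TABLE IF NOT EXISTS nc_metadata
               (filename TEXT, variable TEXT, attr TEXT, value TEXT)""")

def harvest(path):
    ds = Dataset(path)
    rows = []
    # Global attributes (COARDS-style conventions, title, history, ...)
    for attr in ds.ncattrs():
        rows.append((path, None, attr, str(getattr(ds, attr))))
    # Per-variable attributes (units, long_name, level, ...)
    for name, var in ds.variables.items():
        for attr in var.ncattrs():
            rows.append((path, name, attr, str(getattr(var, attr))))
    con.executemany("INSERT INTO nc_metadata VALUES (?,?,?,?)", rows)
    con.commit()

harvest("air.mon.mean.nc")   # hypothetical CDC-style file name
```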

  9. Using semantic data modeling techniques to organize an object-oriented database for extending the mass storage model

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik

    1991-01-01

    A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.

  10. Reinforcement learning interfaces for biomedical database systems.

    PubMed

    Rudowsky, I; Kulyba, O; Kunin, M; Parsons, S; Raphan, T

    2006-01-01

    Studies of neural function that are carried out in different laboratories and that address different questions use a wide range of descriptors for data storage, depending on the laboratory and the individuals that input the data. A common approach to describe non-textual data that are referenced through a relational database is to use metadata descriptors. We have recently designed such a prototype system, but to maintain efficiency and a manageable metadata table, free formatted fields were designed as table entries. The database interface application utilizes an intelligent agent to improve integrity of operation. The purpose of this study was to investigate how reinforcement learning algorithms can assist the user in interacting with the database interface application that has been developed to improve the performance of the system.

  11. Staff - Simone Montayne | Alaska Division of Geological & Geophysical

    Science.gov Websites

    Metadata: Simone compiles all of the division's metadata files. Professional activities: website and database administrator for the Association of American State Geologists.

  12. Making Information Visible, Accessible, and Understandable: Meta-Data and Registries

    DTIC Science & Technology

    2007-07-01

    the data created, the length of play time, album name, and the genre. Without resource metadata, portable digital music players would not be so...notion of a catalog card in a library. An example of metadata is the description of a music file specifying the creator, the artist that performed the song...describe struc- ture and formatting which are critical to interoperability and the management of databases. Going back to the portable music player example

  13. maxdLoad2 and maxdBrowse: standards-compliant tools for microarray experimental annotation, data management and dissemination.

    PubMed

    Hancock, David; Wilson, Michael; Velarde, Giles; Morrison, Norman; Hayes, Andrew; Hulme, Helen; Wood, A Joseph; Nashar, Karim; Kell, Douglas B; Brass, Andy

    2005-11-03

    maxdLoad2 is a relational database schema and Java application for microarray experimental annotation and storage. It is compliant with all standards for microarray meta-data capture, including the specification of what data should be recorded, extensive use of standard ontologies and support for data exchange formats. The output from maxdLoad2 is of a form acceptable for submission to the ArrayExpress microarray repository at the European Bioinformatics Institute. maxdBrowse is a PHP web-application that makes the contents of maxdLoad2 databases accessible via web-browser, command-line and web-service environments. It thus acts as both a dissemination and data-mining tool. maxdLoad2 presents an easy-to-use interface to an underlying relational database and provides a full complement of facilities for browsing, searching and editing. There is a tree-based visualization of data connectivity and the ability to explore the links between any pair of data elements, irrespective of how many intermediate links lie between them. Its principal novel features are: the flexibility of the meta-data that can be captured, the tools provided for importing data from spreadsheets and other tabular representations, the tools provided for the automatic creation of structured documents, and the ability to browse and access the data via web and web-services interfaces. Within maxdLoad2 it is very straightforward to customise the meta-data that is being captured or change the definitions of the meta-data. These meta-data definitions are stored within the database itself, allowing client software to connect properly to a modified database without having to be specially configured. The meta-data definitions (configuration file) can also be centralized, allowing changes made in response to revisions of standards or terminologies to be propagated to clients without user intervention. maxdBrowse is hosted on a web-server and presents multiple interfaces to the contents of maxd databases. maxdBrowse emulates many of the browse and search features available in the maxdLoad2 application via a web-browser. This allows users who are not familiar with maxdLoad2 to browse and export microarray data from the database for their own analysis. The same browse and search features are also available via command-line and SOAP server interfaces. This both enables scripting of data export for use embedded in data repositories and analysis environments, and allows access to the maxd databases via web-service architectures. maxdLoad2 http://www.bioinf.man.ac.uk/microarray/maxd/ and maxdBrowse http://dbk.ch.umist.ac.uk/maxdBrowse are portable and compatible with all common operating systems and major database servers. They provide a powerful, flexible package for annotation of microarray experiments and a convenient dissemination environment. They are available for download and open sourced under the Artistic License.

  14. International cooperation between Japanese IUGONET and EU ESPAS projects on development of the metadata database for upper atmospheric study

    NASA Astrophysics Data System (ADS)

    Yatagai, Akiyo; Ritschel, Bernd; Iyemori, Tomohiko; Koyama, Yukinobu; Hori, Tomoaki; Abe, Shuji; Tanaka, Yoshimasa; Shinbori, Atsuki; UeNo, Satoru; Sato, Yuka; Yagi, Manabu

    2013-04-01

    Upper atmospheric observational studies are an area in which international collaboration is crucially important. The Japanese Inter-university Upper atmosphere Global Observation NETwork project (2009-2014), IUGONET, is an inter-university program by the National Institute of Polar Research (NIPR), Tohoku University, Nagoya University, Kyoto University, and Kyushu University to build a database of metadata for ground-based observations of the upper atmosphere. In order to investigate the mechanism of long-term variations in the upper atmosphere, we need to combine various types of in-situ observations and to accelerate data exchange. The IUGONET institutions have been archiving data observed by radars, magnetometers, photometers, radio telescopes, helioscopes, etc. in various altitude layers from the Earth's surface to the Sun. IUGONET has been developing systems for searching the metadata of these observational data, and the metadata database (MDB) has been operating since 2011. It adopts the DSpace system for registering metadata, and it uses an extension of the SPASE data model for describing metadata, a format widely used in the upper atmospheric research community, including in the USA. The European Union project ESPAS (2011-2015) has the same scientific objectives as IUGONET: it aims to provide an e-science infrastructure for the retrieval of and access to space weather relevant data, information and value-added services. It integrates 22 partners in European countries. ESPAS also plans to adopt the SPASE model for defining its metadata, but its search system is different: despite the similarity of the data models, the basic system designs and techniques behind the systems and web portals of IUGONET and ESPAS differ. In order to connect the two systems/databases, we are planning to take an ontological approach. The SPASE keyword vocabulary, derived from the SPASE data model, shall be used as a standard for the description of near-earth and space data content and context. The SPASE keyword vocabulary is modeled as a Simple Knowledge Organization System (SKOS) ontology. The SPASE keyword vocabulary can also be reused in both domain-related and cross-domain projects. The implementation of the vocabulary as an ontology enables direct integration into semantic web based structures and applications, such as linked data and the new Information System and Data Center (ISDC) data management system.
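
    To illustrate what modelling a keyword vocabulary as a SKOS ontology looks like in practice, here is a minimal sketch using rdflib; the URIs and terms are illustrative assumptions, not the actual SPASE/ESPAS vocabulary.

```python
# Minimal sketch of a SPASE-style keyword vocabulary as a SKOS concept scheme.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/spase-keywords/")   # placeholder namespace
g = Graph()
g.bind("skos", SKOS)

scheme = URIRef(EX["scheme"])
g.add((scheme, RDF.type, SKOS.ConceptScheme))

magnetometer = URIRef(EX["Magnetometer"])
g.add((magnetometer, RDF.type, SKOS.Concept))
g.add((magnetometer, SKOS.prefLabel, Literal("Magnetometer", lang="en")))
g.add((magnetometer, SKOS.inScheme, scheme))

# A narrower term, so concepts can be traversed and linked across portals.
fluxgate = URIRef(EX["FluxgateMagnetometer"])
g.add((fluxgate, RDF.type, SKOS.Concept))
g.add((fluxgate, SKOS.prefLabel, Literal("Fluxgate magnetometer", lang="en")))
g.add((fluxgate, SKOS.broader, magnetometer))

print(g.serialize(format="turtle"))
```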

  15. Techniques for Efficiently Managing Large Geosciences Data Sets

    NASA Astrophysics Data System (ADS)

    Kruger, A.; Krajewski, W. F.; Bradley, A. A.; Smith, J. A.; Baeck, M. L.; Steiner, M.; Lawrence, R. E.; Ramamurthy, M. K.; Weber, J.; Delgreco, S. A.; Domaszczynski, P.; Seo, B.; Gunyon, C. A.

    2007-12-01

    We have developed techniques and software tools for efficiently managing large geosciences data sets. While the techniques were developed as part of an NSF-funded ITR project that focuses on making NEXRAD weather data and rainfall products available to hydrologists and other scientists, they are relevant to other geosciences disciplines that deal with large data sets. Metadata, relational databases, data compression, and networking are central to our methodology. Data and derived products are stored on file servers in a compressed format. URLs to, and metadata about, the data and derived products are managed in a PostgreSQL database. Virtually all access to the data and products is through this database. Geosciences data normally require a number of processing steps to transform the raw data into useful products: data quality assurance, coordinate transformations and georeferencing, applying calibration information, and many more. We have developed the concept of crawlers that manage this scientific workflow. Crawlers are unattended processes that run indefinitely and, at set intervals, query the database for their next assignment. A database table functions as a roster for the crawlers. Crawlers perform well-defined tasks that are, except for perhaps sequencing, largely independent from other crawlers. Once a crawler is done with its current assignment, it updates the database roster table and gets its next assignment by querying the database. We have developed a library that enables one to quickly add crawlers. The library provides hooks to external (i.e., C-language) compiled codes, so that developers can work and contribute independently. Processes called ingesters inject data into the system. The bulk of the data are from a real-time feed using UCAR/Unidata's IDD/LDM software. An exciting recent development is the establishment of a Unidata HYDRO feed that carries value-added metadata over the IDD/LDM. Ingesters grab the metadata and populate the PostgreSQL tables. These and other concepts we have developed have enabled us to efficiently manage a 70 TB (and growing) weather radar data set.
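
    The crawler-and-roster pattern described above can be sketched in a few lines. The table layout and task names below are illustrative assumptions, and SQLite stands in for PostgreSQL to keep the example self-contained.

```python
# Minimal sketch of an unattended "crawler" that periodically asks a roster
# table for its next assignment, processes it, and marks it done.
import sqlite3
import time

con = sqlite3.connect("workflow.db")
con.execute("""CREATE TABLE IF NOT EXISTS roster
               (task_id INTEGER PRIMARY KEY, task_type TEXT,
                input_url TEXT, status TEXT DEFAULT 'pending')""")

def run_crawler(task_type, handler, poll_seconds=60):
    """Claim pending tasks of one type, process them, and update the roster."""
    while True:
        row = con.execute(
            "SELECT task_id, input_url FROM roster "
            "WHERE task_type = ? AND status = 'pending' LIMIT 1",
            (task_type,)).fetchone()
        if row is None:
            time.sleep(poll_seconds)        # nothing to do; check again later
            continue
        task_id, input_url = row
        handler(input_url)                  # e.g. QC, georeferencing, calibration
        con.execute("UPDATE roster SET status = 'done' WHERE task_id = ?", (task_id,))
        con.commit()

# run_crawler("georeference", handler=lambda url: print("processing", url))
```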

  16. Improving data management and dissemination in web based information systems by semantic enrichment of descriptive data aspects

    NASA Astrophysics Data System (ADS)

    Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan

    2010-10-01

    The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth- and ground-based observation themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system, including the data, logic and presentation tiers. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model also uses ordered groupings of spatial, thematic and temporal reference objects to semantically tag datasets to enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. A spatial database extension is employed on top of the PostgreSQL database. This object-relational database was chosen over a purely relational database to tag spatial objects to tabular data, improving the retrieval of census and observational data at regional, provincial, and local levels. While the spatial database is not well suited to processing raster data, a "work-around" was built into WISDOM to permit efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through styled layer descriptors (SLD) and web mapping service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO 19115 standard. XML-structured information of the SLD and metadata is stored in an XML database. The data models and the data management system are robust for managing the large quantity of spatial objects, sensor observations, census and document data. The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. The graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.

  17. Pragmatic Metadata Management for Integration into Multiple Spatial Data Infrastructure Systems and Platforms

    NASA Astrophysics Data System (ADS)

    Benedict, K. K.; Scott, S.

    2013-12-01

    While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers that aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents the work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed in support of the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases are used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into multiple data infrastructures, as has been demonstrated through EDAC's testing and deployment of metadata into multiple external systems: Data.Gov, the GEOSS Registry, the DataONE network, the DSpace-based institutional repository at UNM and semantic mediation systems developed as part of the NASA ACCESS ELSeWEB project. Each of these systems requires valid metadata as a first step, but to make most effective use of the delivered metadata each also has a set of conventions that are specific to the system. This presentation will provide an overview of the underlying metadata management model and the processes and web services that have been developed to automatically generate metadata in a variety of standard formats, and will highlight some of the specific modifications made to the output metadata content to support the different conventions used by the multiple metadata integration endpoints.

  18. A Digital Broadcast Item (DBI) enabling metadata repository for digital, interactive television (digiTV) feedback channel networks

    NASA Astrophysics Data System (ADS)

    Lugmayr, Artur R.; Mailaparampil, Anurag; Tico, Florina; Kalli, Seppo; Creutzburg, Reiner

    2003-01-01

    Digital television (digiTV) is an additional multimedia environment, where metadata is one key element for the description of arbitrary content. This implies adequate structures for content description, which is provided by XML metadata schemes (e.g. MPEG-7, MPEG-21). Content and metadata management is the task of a multimedia repository, from which digiTV clients - equipped with an Internet connection - can access rich additional multimedia types over an "All-HTTP" protocol layer. Within this research work, we focus on conceptual design issues of a metadata repository for the storage of metadata, accessible from the feedback channel of a local set-top box. Our concept describes the whole heterogeneous life-cycle chain of XML metadata from the service provider to the digiTV equipment, device independent representation of content, accessing and querying the metadata repository, management of metadata related to digiTV, and interconnection of basic system components (http front-end, relational database system, and servlet container). We present our conceptual test configuration of a metadata repository that is aimed at a real-world deployment, done within the scope of the future interaction (fiTV) project at the Digital Media Institute (DMI) Tampere (www.futureinteraction.tv).

  19. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments are performed that update the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.
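
    The general idea of touching DICOM metadata without loading bulk pixel data can be illustrated with pydicom. This sketch shows the concept only, not the MSD format itself; the file name and tag choices are assumptions.

```python
# Minimal sketch: read and edit DICOM headers without loading pixel data.
from pydicom import dcmread

# stop_before_pixels=True parses only the metadata portion of the object.
ds = dcmread("study_0001.dcm", stop_before_pixels=True)   # hypothetical file

# Simple de-identification-style edits touch only header elements.
ds.PatientName = "ANONYMOUS"
ds.PatientID = "000000"

print(ds.StudyInstanceUID, ds.Modality)
# Writing the result back would normally re-attach the bulk data; in an
# MSD-style layout the pixel data lives in a separate shared object.
```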

  20. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    PubMed Central

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    Abstract. The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments are performed that update the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies’ metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata. PMID:26158117

  1. Management system for the SND experiments

    NASA Astrophysics Data System (ADS)

    Pugachev, K.; Korol, A.

    2017-09-01

    A new management system for the SND detector experiments (at the VEPP-2000 collider in Novosibirsk) has been developed. We describe here the interaction between a user and the SND databases. These databases contain experiment configuration, conditions and metadata. The new system is designed in a client-server architecture. It has several logical layers corresponding to the users' roles. A new template engine has been created. A web application is implemented using the Node.js framework. At present, the application provides: showing and editing configuration; showing experiment metadata and the experiment conditions data index; and showing the SND log (prototype).

  2. METADATA REGISTRY, ISO/IEC 11179

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pon, R K; Buttler, D J

    2008-01-03

    ISO/IEC-11179 is an international standard that documents the standardization and registration of metadata to make data understandable and shareable. This standardization and registration allows for easier locating, retrieving, and transmitting of data from disparate databases. The standard defines how metadata are conceptually modeled and how they are shared among parties, but does not define how data are physically represented as bits and bytes. The standard consists of six parts. Part 1 provides a high-level overview of the standard and defines the basic element of a metadata registry - a data element. Part 2 defines the procedures for registering classification schemes and classifying administered items in a metadata registry (MDR). Part 3 specifies the structure of an MDR. Part 4 specifies requirements and recommendations for constructing definitions for data and metadata. Part 5 defines how administered items are named and identified. Part 6 defines how administered items are registered and assigned an identifier.
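
    As a rough sketch of what a registry's central record might hold, the data class below gathers a few ISO/IEC 11179-style attributes of a data element. The attribute selection is an illustrative simplification, not a complete rendering of the Part 3 structure.

```python
# Minimal sketch of an ISO/IEC 11179-style "data element" record.
from dataclasses import dataclass, field

@dataclass
class DataElement:
    identifier: str                 # registry-assigned identifier (Parts 5/6)
    name: str                       # naming per registry conventions (Part 5)
    definition: str                 # definition rules per Part 4
    value_domain: str               # e.g. a datatype or enumerated domain
    classifications: list[str] = field(default_factory=list)  # Part 2 schemes
    registration_status: str = "Candidate"                    # Part 6 lifecycle

elem = DataElement(
    identifier="EX-0001",
    name="PersonBirthDate",
    definition="The date on which a person was born.",
    value_domain="ISO 8601 date",
    classifications=["Demographics"],
)
print(elem)
```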

  3. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    USGS Publications Warehouse

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A national highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This database is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many existing and potential customers. The stratified metadatabase design for the NDAMS program is presented in the MS Access file DBDESIGN.mdb and documented with a data dictionary in the NDAMS_DD.mdb file recorded on the CD-ROM. The data dictionary file includes complete documentation of the table names, table descriptions, and information about each of the 419 fields in the database.

  4. Now That We've Found the "Hidden Web," What Can We Do with It?

    ERIC Educational Resources Information Center

    Cole, Timothy W.; Kaczmarek, Joanne; Marty, Paul F.; Prom, Christopher J.; Sandore, Beth; Shreeves, Sarah

    The Open Archives Initiative (OAI) Protocol for Metadata Harvesting (PMH) is designed to facilitate discovery of the "hidden web" of scholarly information, such as that contained in databases, finding aids, and XML documents. OAI-PMH supports standardized exchange of metadata describing items in disparate collections, such as those…

  5. Information integration for a sky survey by data warehousing

    NASA Astrophysics Data System (ADS)

    Luo, A.; Zhang, Y.; Zhao, Y.

    The virtualization service of the data system for the sky survey LAMOST is very important for astronomers. The service needs to integrate information from data collections, catalogs, and references, and to support simple federation of a set of distributed files and associated metadata. Data warehousing has been in existence for several years and has demonstrated superiority over traditional relational database management systems by providing novel indexing schemes that support efficient on-line analytical processing (OLAP) of large databases. Relational database systems such as Oracle now support the warehouse capability, which includes extensions to the SQL language to support OLAP operations, and a number of metadata management tools have been created. The information integration of LAMOST by applying data warehousing is intended to effectively provide data and knowledge on-line.

  6. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods in which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.

  7. The Genomes OnLine Database (GOLD) v.4: status of genomic and metagenomic projects and their associated metadata

    PubMed Central

    Pagani, Ioanna; Liolios, Konstantinos; Jansson, Jakob; Chen, I-Min A.; Smirnova, Tatyana; Nosrat, Bahador; Markowitz, Victor M.; Kyrpides, Nikos C.

    2012-01-01

    The Genomes OnLine Database (GOLD, http://www.genomesonline.org/) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2011, GOLD, now on version 4.0, contains information for 11,472 sequencing projects, of which 2907 have been completed and their sequence data has been deposited in a public repository. Out of these complete projects, 1918 are finished and 989 are permanent drafts. Moreover, GOLD contains information for 340 metagenome studies associated with 1927 metagenome samples. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about any (x) Sequence specification and beyond. PMID:22135293

  8. Experiment Management System for the SND Detector

    NASA Astrophysics Data System (ADS)

    Pugachev, K.

    2017-10-01

    We present a new experiment management system for the SND detector at the VEPP-2000 collider (Novosibirsk). An important part to report on is access to the experimental databases (configuration, conditions and metadata). The system is designed in a client-server architecture. User interaction takes place through a web interface. The server side includes several logical layers: user interface templates; template variable description and initialization; and implementation details. The templates are meant to require as little IT knowledge as possible. Experiment configuration, conditions and metadata are stored in a database. To implement the server side, Node.js, a modern JavaScript framework, has been chosen. A new template engine with an interesting feature has been designed. Part of the system has been put into production. It includes templates for showing and editing the first-level trigger configuration and equipment configuration, and for showing experiment metadata and the experiment conditions data index.

  9. Integrated Array/Metadata Analytics

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

    Data comes in various forms and types, and integration usually presents a problem that is often simply ignored and solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive, or non-existent, in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part, seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.

  10. The Genomes On Line Database (GOLD) in 2009: status of genomic and metagenomic projects and their associated metadata.

    PubMed

    Liolios, Konstantinos; Chen, I-Min A; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Hugenholtz, Philip; Markowitz, Victor M; Kyrpides, Nikos C

    2010-01-01

    The Genomes On Line Database (GOLD) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2009, GOLD contains information for more than 5800 sequencing projects, of which 1100 have been completed and their sequence data deposited in a public repository. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about a (Meta)Genome Sequence (MIGS/MIMS) specification. GOLD is available at: http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece, at: http://gold.imbb.forth.gr/

  11. The Genomes OnLine Database (GOLD) v.4: status of genomic and metagenomic projects and their associated metadata.

    PubMed

    Pagani, Ioanna; Liolios, Konstantinos; Jansson, Jakob; Chen, I-Min A; Smirnova, Tatyana; Nosrat, Bahador; Markowitz, Victor M; Kyrpides, Nikos C

    2012-01-01

    The Genomes OnLine Database (GOLD, http://www.genomesonline.org/) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2011, GOLD, now on version 4.0, contains information for 11,472 sequencing projects, of which 2907 have been completed and their sequence data has been deposited in a public repository. Out of these complete projects, 1918 are finished and 989 are permanent drafts. Moreover, GOLD contains information for 340 metagenome studies associated with 1927 metagenome samples. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about any (x) Sequence specification and beyond.

  12. The Genomes On Line Database (GOLD) in 2009: status of genomic and metagenomic projects and their associated metadata

    PubMed Central

    Liolios, Konstantinos; Chen, I-Min A.; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Hugenholtz, Philip; Markowitz, Victor M.; Kyrpides, Nikos C.

    2010-01-01

    The Genomes On Line Database (GOLD) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2009, GOLD contains information for more than 5800 sequencing projects, of which 1100 have been completed and their sequence data deposited in a public repository. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about a (Meta)Genome Sequence (MIGS/MIMS) specification. GOLD is available at: http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece, at: http://gold.imbb.forth.gr/ PMID:19914934

  13. Evolution of the architecture of the ATLAS Metadata Interface (AMI)

    NASA Astrophysics Data System (ADS)

    Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions has dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI from its beginnings, when it was served by a single MySQL backend database server, to its current state: a cluster of virtual machines at the French Tier-1, an Oracle database at Lyon with complementary replication to the Oracle DB at CERN, and an AMI back-up server.

  14. Keeping Research Data from the Continental Deep Drilling Programme (KTB) Accessible and Taking First Steps Towards Digital Preservation

    NASA Astrophysics Data System (ADS)

    Klump, J. F.; Ulbricht, D.; Conze, R.

    2014-12-01

    The Continental Deep Drilling Programme (KTB) was a scientific drilling project from 1987 to 1995 near Windischeschenbach, Bavaria. The main super-deep borehole reached a depth of 9,101 meters into the Earth's continental crust. The project used the most current equipment for data capture and processing. After the end of the project, key data were disseminated through the web portal of the International Continental Scientific Drilling Program (ICDP). The scientific reports were published as printed volumes. As similar projects have also experienced, it becomes increasingly difficult to maintain a data portal over a long time. Changes in software and underlying hardware make a migration of the entire system inevitable. Around 2009 the data presented on the ICDP web portal were migrated to the Scientific Drilling Database (SDDB) and published through DataCite using Digital Object Identifiers (DOI) as persistent identifiers. The SDDB portal used a relational database with a complex data model to store data and metadata. A PHP-based Content Management System with custom modifications made it possible to navigate and browse datasets using the metadata and then download datasets. The data repository software eSciDoc allows storing self-contained packages consistent with the OAIS reference model. Each package consists of binary data files and XML metadata. Using a REST API, the packages can be stored in the eSciDoc repository and searched using the XML metadata. During the last maintenance cycle of the SDDB, the data and metadata were migrated into the eSciDoc repository. Discovery metadata were generated following the GCMD-DIF, ISO 19115 and DataCite schemas. The eSciDoc repository allows an arbitrary number of XML metadata records to be stored with each data object. In addition to descriptive metadata, each data object may contain pointers to related materials, such as IGSN metadata to link datasets to physical specimens, or identifiers of literature interpreting the data. Datasets are presented by XSLT stylesheet transformation using the stored metadata. The presentation shows several migration cycles of data and metadata, which were driven by aging software systems. Currently the datasets reside as self-contained entities in a repository system that is ready for digital preservation.
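
    The notion of a self-contained package of binary data plus XML metadata can be sketched as below. The file names, the metadata fields and the zip layout are illustrative assumptions, not the eSciDoc ingest format.

```python
# Minimal sketch of assembling an OAIS-style, self-describing package of
# binary data plus discovery metadata before depositing it in a repository.
import zipfile
import xml.etree.ElementTree as ET

def build_package(dataset_path, title, identifier, out_path):
    # Build a small discovery-metadata record.
    root = ET.Element("metadata")
    ET.SubElement(root, "identifier").text = identifier
    ET.SubElement(root, "title").text = title
    ET.SubElement(root, "relatedIGSN").text = "unknown"   # pointer to a specimen, if any
    metadata_xml = ET.tostring(root, encoding="unicode")

    # Package binary data and metadata together so the object is self-contained.
    with zipfile.ZipFile(out_path, "w") as pkg:
        pkg.write(dataset_path, arcname="data/" + dataset_path)
        pkg.writestr("metadata/discovery.xml", metadata_xml)

# build_package("ktb_logging_run42.csv", "KTB logging run 42",
#               "doi:10.0000/example", "package.zip")   # hypothetical inputs
```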

  15. Physical Samples and Persistent Identifiers: The Implementation of the International Geo Sample Number (IGSN) Registration Service in CSIRO, Australia

    NASA Astrophysics Data System (ADS)

    Devaraju, Anusuriya; Klump, Jens; Tey, Victor; Fraser, Ryan

    2016-04-01

    Physical samples such as minerals, soil, rocks, water, air and plants are important observational units for understanding the complexity of our environment and its resources. They are usually collected and curated by different entities, e.g., individual researchers, laboratories, state agencies, or museums. Persistent identifiers may facilitate access to physical samples that are scattered across various repositories. They are essential to locate samples unambiguously and to share their associated metadata and data systematically across the Web. The International Geo Sample Number (IGSN) is a persistent, globally unique label for identifying physical samples. The IGSNs of physical samples are registered by end-users (e.g., individual researchers, data centers and projects) through allocating agents. Allocating agents are the institutions acting on behalf of the implementing organization (IGSN e.V.). The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is one of the allocating agents in Australia. To implement IGSN in our organisation, we developed a RESTful service and a metadata model. The web service enables a client to register sub-namespaces and multiple samples, and to retrieve samples' metadata programmatically. The metadata model provides a framework in which different types of samples may be represented. It is generic and extensible, and therefore may be applied in the context of multi-disciplinary projects. The metadata model has been implemented as an XML schema and a PostgreSQL database. The schema is used to handle sample registration requests and to disseminate their metadata, whereas the relational database is used to preserve the metadata records. The metadata schema leverages existing controlled vocabularies to minimize the scope for error and incorporates some simplifications to reduce the complexity of the schema implementation. The solutions developed have been applied and tested in the context of two sample repositories in CSIRO, the Capricorn Distal Footprints project and the Rock Store.
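
    A client interaction with such a RESTful registration service might look roughly like the sketch below. The endpoint URL, payload fields and authentication scheme are hypothetical placeholders, not the actual CSIRO service interface.

```python
# Minimal sketch of registering a sample with an IGSN allocating agent over REST.
import requests

ENDPOINT = "https://igsn.example.org/api/v1/samples"   # placeholder URL

sample_metadata = {
    "igsn": "CSXXX0001",                 # sub-namespace + local code (illustrative)
    "name": "Drill core section 12A",
    "sampleType": "core",
    "collector": "Example Project",
    "location": {"lat": -25.0, "lon": 119.0},
}

resp = requests.post(ENDPOINT, json=sample_metadata,
                     auth=("username", "password"), timeout=30)
resp.raise_for_status()
print("registered:", resp.json())
```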

  16. Evaluating non-relational storage technology for HEP metadata and meta-data catalog

    NASA Astrophysics Data System (ADS)

    Grigorieva, M. A.; Golosova, M. V.; Gubin, M. Y.; Klimentov, A. A.; Osipova, V. V.; Ryabinkin, E. A.

    2016-10-01

    Large-scale scientific experiments produce vast volumes of data. These data are stored, processed and analyzed in a distributed computing environment. The life cycle of an experiment is managed by specialized software like Distributed Data Management and Workload Management Systems. In order to be interpreted and mined, experimental data must be accompanied by auxiliary metadata, which are recorded at each data processing step. Metadata describe scientific data and represent scientific objects or results of scientific experiments, allowing them to be shared by various applications, to be recorded in databases or published via the Web. Processing and analysis of the constantly growing volume of auxiliary metadata is a challenging task, not simpler than the management and processing of the experimental data itself. Furthermore, metadata sources are often loosely coupled and may potentially lead to end-user inconsistencies in combined information queries. To aggregate and synthesize a range of primary metadata sources, and enhance them with flexible schema-less addition of aggregated data, we are developing the Data Knowledge Base architecture serving as the intelligence behind GUIs and APIs.

  17. Raising orphans from a metadata morass: A researcher's guide to re-use of public 'omics data.

    PubMed

    Bhandary, Priyanka; Seetharam, Arun S; Arendsee, Zebulun W; Hur, Manhoi; Wurtele, Eve Syrkin

    2018-02-01

    More than 15 petabases of raw RNAseq data is now accessible through public repositories. Acquisition of other 'omics data types is expanding, though most lack a centralized archival repository. Data-reuse provides tremendous opportunity to extract new knowledge from existing experiments, and offers a unique opportunity for robust, multi-'omics analyses by merging metadata (information about experimental design, biological samples, protocols) and data from multiple experiments. We illustrate how predictive research can be accelerated by meta-analysis with a study of orphan (species-specific) genes. Computational predictions are critical to infer orphan function because their coding sequences provide very few clues. The metadata in public databases is often confusing; a test case with Zea mays mRNA seq data reveals a high proportion of missing, misleading or incomplete metadata. This metadata morass significantly diminishes the insight that can be extracted from these data. We provide tips for data submitters and users, including specific recommendations to improve metadata quality by more use of controlled vocabulary and by metadata reviews. Finally, we advocate for a unified, straightforward metadata submission and retrieval system. Copyright © 2017 Elsevier B.V. All rights reserved.
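
    The kind of metadata triage described above, flagging records that are missing fields or use uncontrolled terms, can be sketched in a few lines. The field names and controlled vocabulary below are illustrative assumptions, not the study's actual curation rules.

```python
# Minimal sketch: audit experiment metadata records for missing fields and
# terms that fall outside a small controlled vocabulary.
records = [
    {"run": "SRR000001", "tissue": "leaf",   "genotype": "B73"},
    {"run": "SRR000002", "tissue": "",       "genotype": "B73"},   # missing tissue
    {"run": "SRR000003", "tissue": "LEAVES", "genotype": None},    # uncontrolled term
]

CONTROLLED_TISSUES = {"leaf", "root", "seed", "tassel"}
REQUIRED = ("run", "tissue", "genotype")

def audit(record):
    problems = [f for f in REQUIRED if not record.get(f)]
    tissue = (record.get("tissue") or "").lower()
    if tissue and tissue not in CONTROLLED_TISSUES:
        problems.append("tissue not in controlled vocabulary")
    return problems

for rec in records:
    issues = audit(rec)
    if issues:
        print(rec["run"], "->", "; ".join(issues))
```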

  18. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    PubMed

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g. reduced documentation times or increased data quality). Prerequisites for data reuse are its quality, availability and identical meaning across sources. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible, as are semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.

  19. Legacy2Drupal: Conversion of an existing relational oceanographic database to a Drupal 7 CMS

    NASA Astrophysics Data System (ADS)

    Work, T. T.; Maffei, A. R.; Chandler, C. L.; Groman, R. C.

    2011-12-01

    Content Management Systems (CMSs) such as Drupal provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have already designed and implemented customized schemas for their metadata. The NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) has ported an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal 7 Content Management System. This is an update on an effort described as a proof-of-concept in poster IN21B-1051, presented at AGU2009. The BCO-DMO project has translated all the existing database tables, input forms, website reports, and other features present in the existing system into Drupal CMS features. The replacement features are made possible by the use of Drupal content types, CCK node-reference fields, a custom theme, and a number of other supporting modules. This presentation describes the process used to migrate content in the original BCO-DMO metadata database to Drupal 7, some problems encountered during migration, and the modules used to migrate the content successfully. Strategic use of Drupal 7 CMS features that enable three separate but complementary interfaces to provide access to oceanographic research metadata will also be covered: 1) a Drupal 7-powered user front-end; 2) REST-ful JSON web services (providing a Mapserver interface to the metadata and data); and 3) a SPARQL interface to a semantic representation of the repository metadata (feeding a new faceted search capability currently under development). The existing BCO-DMO ontology, developed in collaboration with Rensselaer Polytechnic Institute's Tetherless World Constellation, makes strategic use of pre-existing ontologies and will be used to drive semantically-enabled faceted search capabilities planned for the site. At this point, the use of semantic technologies included in the Drupal 7 core is anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 7 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO and other science data repositories for interoperability between systems that serve ecosystem research data.
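
    To illustrate how a SPARQL interface such as the one described above might be consumed, here is a minimal sketch. The endpoint URL and the class/predicate names are hypothetical; the BCO-DMO ontology defines its own terms.

```python
# Minimal sketch: query a SPARQL port for dataset titles using the standard
# SPARQL protocol over HTTP.
import requests

ENDPOINT = "https://example.org/sparql"   # placeholder SPARQL endpoint

QUERY = """
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?dataset ?title WHERE {
  ?dataset a <http://example.org/ontology#Dataset> ;
           dcterms:title ?title .
} LIMIT 10
"""

resp = requests.get(ENDPOINT, params={"query": QUERY},
                    headers={"Accept": "application/sparql-results+json"},
                    timeout=30)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["dataset"]["value"], "-", row["title"]["value"])
```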

  20. BioImaging Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Nix, Lisa Simirenko

    2006-10-25

    The BioImaging Database (BID) is a relational database developed to store the data and metadata for 3D gene expression in early Drosophila embryo development at the cellular level. The schema was written to be used with the MySQL DBMS but, with minor modifications, can be used on any SQL-compliant relational DBMS.

  1. Karst database development in Minnesota: Design and data assembly

    USGS Publications Warehouse

    Gao, Y.; Alexander, E.C.; Tipping, R.G.

    2005-01-01

    The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to provide comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces. © Springer-Verlag 2005.

  2. GEOmetadb: powerful alternative search engine for the Gene Expression Omnibus

    PubMed Central

    Zhu, Yuelin; Davis, Sean; Stephens, Robert; Meltzer, Paul S.; Chen, Yidong

    2008-01-01

    The NCBI Gene Expression Omnibus (GEO) represents the largest public repository of microarray data. However, finding data in GEO can be challenging. We have developed GEOmetadb in an attempt to make querying the GEO metadata both easier and more powerful. All GEO metadata records as well as the relationships between them are parsed and stored in a local MySQL database. A powerful, flexible web search interface with several convenient utilities provides query capabilities not available via NCBI tools. In addition, a Bioconductor package, GEOmetadb, that utilizes a SQLite export of the entire GEOmetadb database is also available, rendering the entire GEO database accessible with the full power of SQL-based queries from within R. Availability: The web interface and SQLite databases are available at http://gbnci.abcc.ncifcrf.gov/geo/. The Bioconductor package is available via the Bioconductor project. The corresponding MATLAB implementation is also available at the same website. Contact: yidong@mail.nih.gov PMID:18842599
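
    For readers less familiar with SQL-based access, the sketch below shows the kind of query the SQLite export makes possible, using Python's standard sqlite3 module. The file name and the table/column names (a gse table with gse and title columns) are assumptions for illustration; consult the GEOmetadb documentation for the actual schema.

      # Query a local copy of the GEOmetadb SQLite export for series titles.
      # Table and column names are assumed for illustration.
      import sqlite3

      conn = sqlite3.connect("GEOmetadb.sqlite")
      cur = conn.execute(
          "SELECT gse, title FROM gse WHERE title LIKE ? LIMIT 10",
          ("%breast cancer%",),
      )
      for accession, title in cur:
          print(accession, "-", title)
      conn.close()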

  3. Ridge 2000 Data Management System

    NASA Astrophysics Data System (ADS)

    Goodwillie, A. M.; Carbotte, S. M.; Arko, R. A.; Haxby, W. F.; Ryan, W. B.; Chayes, D. N.; Lehnert, K. A.; Shank, T. M.

    2005-12-01

    Hosted at Lamont by the marine geoscience Data Management group, mgDMS, the NSF-funded Ridge 2000 electronic database, http://www.marine-geo.org/ridge2000/, is a key component of the Ridge 2000 multi-disciplinary program. The database covers each of the three Ridge 2000 Integrated Study Sites: Endeavour Segment, Lau Basin, and 8-11N Segment. It promotes the sharing of information with the broader community, facilitates integration of the suite of information collected at each study site, and enables comparisons between sites. The Ridge 2000 data system provides easy web access to a relational database that is built around a catalogue of cruise metadata. Any web browser can be used to perform a versatile text-based search which returns basic cruise and submersible dive information, sample and data inventories, navigation, and other relevant metadata such as shipboard personnel and links to NSF program awards. In addition, non-proprietary data files, images, and derived products which are hosted locally or in national repositories, as well as science and technical reports, can be freely downloaded. On the Ridge 2000 database page, our Data Link allows users to search the database using a broad range of parameters including data type, cruise ID, chief scientist, and geographical location. The first Ridge 2000 field programs sailed in 2004 and, in addition to numerous data sets collected prior to the Ridge 2000 program, the database currently contains information on fifteen Ridge 2000-funded cruises and almost sixty Alvin dives. Track lines can be viewed using a recently-implemented Web Map Service button labelled Map View. The Ridge 2000 database is fully integrated with databases hosted by the mgDMS group for MARGINS and the Antarctic multibeam and seismic reflection data initiatives. Links are provided to partner databases including PetDB, SIOExplorer, and the ODP Janus system. Interoperability with existing and new partner repositories continues to be strengthened. One major effort involves the gradual unification of the metadata across these partner databases. Standardised electronic metadata forms that can be filled in at sea are available from our web site. Interactive map-based exploration and visualisation of the Ridge 2000 database is provided by GeoMapApp, a freely-available Java(tm) application being developed within the mgDMS group. GeoMapApp includes high-resolution bathymetric grids for the 8-11N EPR segment and allows customised maps and grids for any of the Ridge 2000 ISS to be created. Vent and instrument locations can be plotted and saved as images, and Alvin dive photos are also available.

  4. The Global Streamflow Indices and Metadata archive (G-SIM): A compilation of global streamflow time series indices and meta-data

    NASA Astrophysics Data System (ADS)

    Do, Hong; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth; Senerivatne, Sonia

    2017-04-01

    In-situ observations of daily streamflow with global coverage are a crucial asset for understanding large-scale freshwater resources, which are an essential component of the Earth system and a prerequisite for societal development. Here we present the Global Streamflow Indices and Metadata archive (G-SIM), a collection of indices derived from more than 20,000 daily streamflow time series across the globe. These indices are designed to support global assessments of change in wet and dry extremes, and have been compiled from 12 free-to-access online databases (seven national databases and five international collections). The G-SIM archive also includes significant metadata to help support detailed understanding of streamflow dynamics, including drainage area shapefiles and many essential catchment properties such as land cover type, soil and topographic characteristics. The automated data handling and quality control procedures of the project make G-SIM a reproducible, extendible archive that can be utilised for many purposes in large-scale hydrology. Some potential applications include the identification of observational trends in hydrological extremes, the assessment of climate change impacts on streamflow regimes, and the validation of global hydrological models.
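
    The indices themselves are straightforward to derive once a quality-controlled daily series is available. The sketch below, which is not the G-SIM production code, computes two common examples (annual maximum flow and annual 7-day minimum flow) from a synthetic daily series using pandas.

      # Illustrative computation of two streamflow indices from a daily series.
      import numpy as np
      import pandas as pd

      days = pd.date_range("2000-01-01", "2004-12-31", freq="D")
      rng = np.random.default_rng(0)
      flow = pd.Series(rng.gamma(shape=2.0, scale=5.0, size=len(days)), index=days)  # m^3/s

      annual_max = flow.groupby(flow.index.year).max()               # flood index
      min7 = flow.rolling(7).mean().groupby(flow.index.year).min()   # low-flow index

      print(pd.DataFrame({"AMAX": annual_max, "MIN7": min7}))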

  5. NetCDF4/HDF5 and Linked Data in the Real World - Enriching Geoscientific Metadata without Bloat

    NASA Astrophysics Data System (ADS)

    Ip, Alex; Car, Nicholas; Druken, Kelsey; Poudjom-Djomani, Yvette; Butcher, Stirling; Evans, Ben; Wyborn, Lesley

    2017-04-01

    NetCDF4 has become the dominant generic format for many forms of geoscientific data, leveraging (and constraining) the versatile HDF5 container format, while providing metadata conventions for interoperability. However, the encapsulation of detailed metadata within each file can lead to metadata "bloat", and difficulty in maintaining consistency where metadata is replicated to multiple locations. Complex conceptual relationships are also difficult to represent in simple key-value netCDF metadata. Linked Data provides a practical mechanism to address these issues by associating the netCDF files and their internal variables with complex metadata stored in Semantic Web vocabularies and ontologies, while complying with and complementing existing metadata conventions. One of the stated objectives of the netCDF4/HDF5 formats is that they should be self-describing: containing metadata sufficient for cataloguing and using the data. However, this objective can be regarded as only partially met where details of conventions and definitions are maintained externally to the data files. For example, one of the most widely used netCDF community standards, the Climate and Forecasting (CF) Metadata Convention, maintains standard vocabularies for a broad range of disciplines across the geosciences, but this metadata is currently neither readily discoverable nor machine-readable. We have previously implemented useful Linked Data and netCDF tooling (ncskos) that associates netCDF files, and individual variables within those files, with concepts in vocabularies formulated using the Simple Knowledge Organization System (SKOS) ontology. NetCDF files contain Uniform Resource Identifier (URI) links to terms represented as SKOS Concepts, rather than plain-text representations of those terms, so we can use simple, standardised web queries to collect and use rich metadata for the terms from any Linked Data-presented SKOS vocabulary. Geoscience Australia (GA) manages a large volume of diverse geoscientific data, much of which is being translated from proprietary formats to netCDF at NCI Australia. This data is made available through the NCI National Environmental Research Data Interoperability Platform (NERDIP) for programmatic access and interdisciplinary analysis. The netCDF files contain not only scientific data variables (e.g. gravity, magnetic or radiometric values), but also domain-specific operational values (e.g. specific instrument parameters) best described fully in formal vocabularies. Our ncskos codebase provides access to multiple stores of detailed external metadata in a standardised fashion. Geophysical datasets are generated from a "survey" event, and GA maintains corporate databases of all surveys and their associated metadata. It is impractical to replicate the full source survey metadata into each netCDF dataset so, instead, we link the netCDF files to survey metadata using public Linked Data URIs. These URIs link to Survey class objects which we model as a subclass of Activity objects as defined by the PROV Ontology, and we provide URI resolution for them via a custom Linked Data API which draws current survey metadata from GA's in-house databases. We have demonstrated that Linked Data is a practical way to associate netCDF data with detailed, external metadata. This allows us to ensure that catalogued metadata is kept consistent with metadata points-of-truth, and we can infer complex conceptual relationships not possible with netCDF key-value attributes alone.
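
    The pattern described above can be reduced to a very small example: instead of embedding full definitions, a variable carries a resolvable URI pointing at an external vocabulary term. In the sketch below the attribute name and URI are placeholders, not the actual ncskos convention.

      # Conceptual sketch: attach a vocabulary URI to a netCDF variable so that
      # richer metadata can be resolved via Linked Data rather than embedded.
      import numpy as np
      from netCDF4 import Dataset

      with Dataset("gravity_example.nc", "w") as nc:
          nc.createDimension("point", 3)
          grav = nc.createVariable("gravity_anomaly", "f4", ("point",))
          grav[:] = np.array([12.1, -3.4, 0.7], dtype="f4")
          grav.units = "mGal"
          # Placeholder attribute name and URI; the real convention may differ
          grav.skos_concept_uri = "http://pid.example.org/def/gravity-anomaly"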

  6. A Python object-oriented framework for the CMS alignment and calibration data

    NASA Astrophysics Data System (ADS)

    Dawes, Joshua H.; CMS Collaboration

    2017-10-01

    The Alignment, Calibrations and Databases group at the CMS Experiment delivers Alignment and Calibration Conditions Data to a large set of workflows which process recorded event data and produce simulated events. The current infrastructure for releasing and consuming Conditions Data was designed in the two years of the first LHC long shutdown to respond to use cases from the preceding data-taking period. During the second run of the LHC, new use cases were defined. For the consumption of Conditions Metadata, no common interface existed for the detector experts to use in Python-based custom scripts, resulting in many different querying and transaction management patterns. A new framework has been built to address such use cases: a simple object-oriented tool that detector experts can use to read and write Conditions Metadata when using Oracle and SQLite databases, and that provides a homogeneous method of querying across all services. The tool provides mechanisms for segmenting large sets of conditions while releasing them to the production database, allows for uniform error reporting to the client side from the server side and optimizes the data transfer to the server. The architecture of the new service has been developed exploiting many of the features made available by the metadata consumption framework to implement the required improvements. This paper presents the details of the design and implementation of the new metadata consumption and data upload framework, as well as analyses of the new upload service’s performance as the server-side state varies.
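
    The general pattern the paper describes, a thin object-oriented layer that hides connection, query and transaction handling behind simple methods, can be sketched as follows. This is not the CMS framework itself; the table and column names are hypothetical and the stdlib sqlite3 module stands in for the Oracle/SQLite back ends.

      # Sketch of an object-oriented wrapper around a conditions-metadata store.
      import sqlite3

      class ConditionsMetadata:
          def __init__(self, path):
              self.conn = sqlite3.connect(path)
              self.conn.row_factory = sqlite3.Row

          def tags(self, name_pattern="%"):
              """Return tag rows whose name matches the given SQL LIKE pattern."""
              return self.conn.execute(
                  "SELECT name, time_type FROM tag WHERE name LIKE ?", (name_pattern,)
              ).fetchall()

          def add_tag(self, name, time_type):
              """Insert a tag inside a transaction; rolled back automatically on error."""
              with self.conn:
                  self.conn.execute(
                      "INSERT INTO tag(name, time_type) VALUES (?, ?)", (name, time_type)
                  )

      # Usage, assuming an SQLite file containing a 'tag' table:
      # db = ConditionsMetadata("conditions.sqlite")
      # for row in db.tags("%Alignment%"):
      #     print(row["name"], row["time_type"])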

  7. Development of an oil spill information system combining remote sensing data and surveillance metadata

    NASA Astrophysics Data System (ADS)

    Tufte, Lars; Trieschmann, Olaf; Carreau, Philippe; Hunsaenger, Thomas; Clayton, Peter J. S.; Barjenbruch, Ulrich

    2004-02-01

    The detection of accidental or illegal marine oil discharges in the German territorial waters of the North Sea and Baltic Sea is of great importance for combating oil spills and protecting the marine ecosystem. Therefore the German Federal Ministry of Transport set up an airborne surveillance system consisting of two Dornier DO 228-212 aircraft equipped with a Side-Looking Airborne Radar (SLAR), an IR/UV sensor, a Microwave Radiometer (MWR) for quantification and a Laser-Fluorosensor (LFS) for classification of the oil spills. The flight parameters and the remote sensing data are stored in a database during the flight. A Pollution Observation Log completed by the operator contains information about the detected oil spill (e.g. position, length, width) and several other pieces of information about the flight (e.g. name of navigator, name of observer). The objective was to develop an oil spill information system which integrates the described data and metadata and includes visualization and spatial analysis capabilities. The metadata are essential for further statistical analysis in spatial and temporal domains of oil spill occurrences and of the surveillance itself. It should facilitate the communication and distribution of metadata between the administrative bodies and partners of the German oil spill surveillance system. A connection between a GIS and the database allows the powerful visualization and spatial analysis functionality of the GIS to be used in conjunction with the oil spill database.

  8. Footprint Database and web services for the Herschel space observatory

    NASA Astrophysics Data System (ADS)

    Verebélyi, Erika; Dobos, László; Kiss, Csaba

    2015-08-01

    Using all telemetry and observational meta-data, we created a searchable database of Herschel observation footprints. Data from the Herschel space observatory is freely available to everyone, but no uniformly processed catalog of all observations has been published yet. As a first step, we unified the data model for all three Herschel instruments in all observation modes and compiled a database of sky coverage information. As opposed to methods using a pixellation of the sphere, in our database, sky coverage is stored in exact geometric form allowing for precise area calculations. Indexing of the footprints allows for very fast search among observations based on pointing, time, sky coverage overlap and meta-data. This enables us, for example, to find moving objects easily in Herschel fields. The database is accessible via a web site and also as a set of REST web service functions, which makes it usable from programmatic clients such as Python or IDL scripts. Data is available in various formats including Virtual Observatory standards.
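
    A programmatic client of such a service would typically issue a simple HTTP request and parse the JSON response. The sketch below uses Python's requests library; the base URL and parameter names are hypothetical placeholders, not the documented interface of the footprint database.

      # Hypothetical cone-search call against a footprint search service.
      import requests

      BASE_URL = "https://example.org/herschel/footprints/search"  # placeholder

      params = {
          "ra": 83.822,      # pointing centre, degrees
          "dec": -5.391,
          "radius": 0.5,     # search radius, degrees
          "format": "json",
      }
      resp = requests.get(BASE_URL, params=params, timeout=30)
      resp.raise_for_status()
      for obs in resp.json():
          print(obs.get("obsid"), obs.get("instrument"))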

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher,D

    Concerns about the long-term viability of SFS as the metadata store for HPSS have been increasing. A concern that Transarc may discontinue support for SFS motivates us to consider alternative means to store HPSS metadata. The obvious alternative is a commercial database. Commercial databases have the necessary characteristics for storage of HPSS metadata records. They are robust and scalable and can easily accommodate the volume of data that must be stored. They provide programming interfaces, transactional semantics and a full set of maintenance and performance enhancement tools. A team was organized within the HPSS project to study and recommend an approach for the replacement of SFS. Members of the team are David Fisher, Jim Minton, Donna Mecozzi, Danny Cook, Bart Parliman and Lynn Jones. We examined several possible solutions to the problem of replacing SFS, and recommended on May 22, 2000, in a report to the HPSS Technical and Executive Committees, to change HPSS into a database application over either Oracle or DB2. We recommended either Oracle or DB2 on the basis of market share and technical suitability. Oracle and DB2 are dominant offerings in the market, and it is in the best interest of HPSS to use a major player's product. Both databases provide a suitable programming interface. Transaction management functions, support for multi-threaded clients and data manipulation languages (DML) are available. These findings were supported in meetings held with technical experts from both companies. In both cases, the evidence indicated that either database would provide the features needed to host HPSS.

  10. GCE Data Toolbox for MATLAB - a software framework for automating environmental data processing, quality control and documentation

    NASA Astrophysics Data System (ADS)

    Sheldon, W.; Chamblee, J.; Cary, R. H.

    2013-12-01

    Environmental scientists are under increasing pressure from funding agencies and journal publishers to release quality-controlled data in a timely manner, as well as to produce comprehensive metadata for submitting data to long-term archives (e.g. DataONE, Dryad and BCO-DMO). At the same time, the volume of digital data that researchers collect and manage is increasing rapidly due to advances in high frequency electronic data collection from flux towers, instrumented moorings and sensor networks. However, few pre-built software tools are available to meet these data management needs, and those tools that do exist typically focus on part of the data management lifecycle or one class of data. The GCE Data Toolbox has proven to be both a generalized and effective software solution for environmental data management in the Long Term Ecological Research Network (LTER). This open source MATLAB software library, developed by the Georgia Coastal Ecosystems LTER program, integrates metadata capture, creation and management with data processing, quality control and analysis to support the entire data lifecycle. Raw data can be imported directly from common data logger formats (e.g. SeaBird, Campbell Scientific, YSI, Hobo), as well as delimited text files, MATLAB files and relational database queries. Basic metadata are derived from the data source itself (e.g. parsed from file headers) and by value inspection, and then augmented using editable metadata templates containing boilerplate documentation, attribute descriptors, code definitions and quality control rules. Data and metadata content, quality control rules and qualifier flags are then managed together in a robust data structure that supports database functionality and ensures data validity throughout processing. A growing suite of metadata-aware editing, quality control, analysis and synthesis tools are provided with the software to support managing data using graphical forms and command-line functions, as well as developing automated workflows for unattended processing. Finalized data and structured metadata can be exported in a wide variety of text and MATLAB formats or uploaded to a relational database for long-term archiving and distribution. The GCE Data Toolbox can be used as a complete, light-weight solution for environmental data and metadata management, but it can also be used in conjunction with other cyber infrastructure to provide a more comprehensive solution. For example, newly acquired data can be retrieved from a Data Turbine or Campbell LoggerNet Database server for quality control and processing, then transformed to CUAHSI Observations Data Model format and uploaded to a HydroServer for distribution through the CUAHSI Hydrologic Information System. The GCE Data Toolbox can also be leveraged in analytical workflows developed using Kepler or other systems that support MATLAB integration or tool chaining. This software can therefore be leveraged in many ways to help researchers manage, analyze and distribute the data they collect.

  11. Dynamic publication model for neurophysiology databases.

    PubMed

    Gardner, D; Abato, M; Knuth, K H; DeBellis, R; Erde, S M

    2001-08-29

    We have implemented a pair of database projects, one serving cortical electrophysiology and the other invertebrate neurones and recordings. The design for each combines aspects of two proven schemes for information interchange. The journal article metaphor determined the type, scope, organization and quantity of data to comprise each submission. Sequence databases encouraged intuitive tools for data viewing, capture, and direct submission by authors. Neurophysiology required transcending these models with new datatypes. Time-series, histogram and bivariate datatypes, including illustration-like wrappers, were selected by their utility to the community of investigators. As interpretation of neurophysiological recordings depends on context supplied by metadata attributes, searches are via visual interfaces to sets of controlled-vocabulary metadata trees. Neurones, for example, can be specified by metadata describing functional and anatomical characteristics. Permanence is advanced by data model and data formats largely independent of contemporary technology or implementation, including Java and the XML standard. All user tools, including dynamic data viewers that serve as a virtual oscilloscope, are Java-based, free, multiplatform, and distributed by our application servers to any contemporary networked computer. Copyright is retained by submitters; viewer displays are dynamic and do not violate copyright of related journal figures. Panels of neurophysiologists view and test schemas and tools, enhancing community support.

  12. Correcting Inconsistencies and Errors in Bacterial Genome Metadata Using an Automated Curation Tool in Excel (AutoCurE).

    PubMed

    Schmedes, Sarah E; King, Jonathan L; Budowle, Bruce

    2015-01-01

    Whole-genome data are invaluable for large-scale comparative genomic studies. Current sequencing technologies have made it feasible to sequence entire bacterial genomes with relative ease and speed and at a substantially reduced cost per nucleotide, and hence per genome. More than 3,000 bacterial genomes have been sequenced and are available at the finished status. Publicly available genomes can be readily downloaded; however, there are challenges in verifying the specific supporting data contained within the download and in identifying errors and inconsistencies that may be present within the organizational data content and metadata. AutoCurE, an automated tool for bacterial genome database curation in Excel, was developed to facilitate local database curation of supporting data that accompany downloaded genomes from the National Center for Biotechnology Information. AutoCurE provides an automated approach to curate local genomic databases by flagging inconsistencies or errors by comparing the downloaded supporting data to the genome reports to verify genome name, RefSeq accession numbers, the presence of archaea, BioProject/UIDs, and sequence file descriptions. Flags are generated for nine metadata fields if there are inconsistencies between the downloaded genomes and genome reports and if erroneous or missing data are evident. AutoCurE is an easy-to-use tool for local database curation for large-scale genome data prior to downstream analyses.
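
    AutoCurE itself runs in Excel, but the kind of consistency check it performs can be illustrated with a short analogue in Python. The field names and values below are purely illustrative.

      # Conceptual analogue of flagging mismatches between downloaded supporting
      # data and the corresponding genome report. Field names are illustrative.
      def flag_inconsistencies(downloaded, report,
                               fields=("organism_name", "refseq_accession", "bioproject_id")):
          """Compare selected metadata fields and return a list of flag messages."""
          flags = []
          for name in fields:
              a, b = downloaded.get(name), report.get(name)
              if a is None or b is None:
                  flags.append(f"{name}: missing value")
              elif a != b:
                  flags.append(f"{name}: mismatch ({a!r} vs {b!r})")
          return flags

      downloaded = {"organism_name": "Example bacterium strain X",
                    "refseq_accession": "NC_000000.2",
                    "bioproject_id": "PRJNA000000"}
      report = {"organism_name": "Example bacterium strain X",
                "refseq_accession": "NC_000000.1",   # stale accession triggers a flag
                "bioproject_id": None}               # missing value triggers a flag
      print(flag_inconsistencies(downloaded, report))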

  13. Principles of metadata organization at the ENCODE data coordination center

    PubMed Central

    Hong, Eurie L.; Sloan, Cricket A.; Chan, Esther T.; Davidson, Jean M.; Malladi, Venkat S.; Strattan, J. Seth; Hitz, Benjamin C.; Gabdank, Idan; Narayanan, Aditi K.; Ho, Marcus; Lee, Brian T.; Rowe, Laurence D.; Dreszer, Timothy R.; Roe, Greg R.; Podduturi, Nikhil R.; Tanaka, Forrest; Hilton, Jason A.; Cherry, J. Michael

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center (DCC) is responsible for organizing, describing and providing access to the diverse data generated by the ENCODE project. The description of these data, known as metadata, includes the biological sample used as input, the protocols and assays performed on these samples, the data files generated from the results and the computational methods used to analyze the data. Here, we outline the principles and philosophy used to define the ENCODE metadata in order to create a metadata standard that can be applied to diverse assays and multiple genomic projects. In addition, we present how the data are validated and used by the ENCODE DCC in creating the ENCODE Portal (https://www.encodeproject.org/). Database URL: www.encodeproject.org PMID:26980513

  14. A practical implementation for a data dictionary in an environment of diverse data sets

    USGS Publications Warehouse

    Sprenger, Karla K.; Larsen, Dana M.

    1993-01-01

    The need for a data dictionary database at the U.S. Geological Survey's EROS Data Center (EDC) was reinforced with the Earth Observing System Data and Information System (EOSDIS) requirement for consistent field definitions of data sets residing at more than one archive center. The EDC requirement addresses the existence of multiple data sets with identical field definitions that use various naming conventions. The EDC is developing a data dictionary database to accomplish the following goals: to standardize field names for ease in software development; to facilitate querying and updating of the data; and to generate ad hoc reports. The structure of the EDC electronic data dictionary database supports different metadata systems as well as many different data sets. A series of reports is used to maintain consistency among data sets and various metadata systems.

  15. Development of an Integrated Biospecimen Database among the Regional Biobanks in Korea.

    PubMed

    Park, Hyun Sang; Cho, Hune; Kim, Hwa Sun

    2016-04-01

    This study developed an integrated database for 15 regional biobanks that provides large quantities of high-quality bio-data to researchers to be used for the prevention of disease, for the development of personalized medicines, and in genetics studies. We collected raw data, managed independently by 15 regional biobanks, for database modeling and analyzed and defined the metadata of the items. We also built a three-step (high, middle, and low) classification system for classifying the item concepts based on the metadata. To generate clear meanings of the items, clinical items were defined using the Systematized Nomenclature of Medicine Clinical Terms, and specimen items were defined using the Logical Observation Identifiers Names and Codes. To optimize database performance, we set up a multi-column index based on the classification system and the international standard code. As a result of subdividing 7,197,252 raw data items collected, we refined the metadata into 1,796 clinical items and 1,792 specimen items. The classification system consists of 15 high, 163 middle, and 3,588 low class items. International standard codes were linked to 69.9% of the clinical items and 71.7% of the specimen items. The database consists of 18 tables implemented in MySQL Server 5.6. As a result of the performance evaluation, the multi-column index reduced query time by as much as a factor of nine. The database developed was based on an international standard terminology system, providing an infrastructure that can integrate the 7,197,252 raw data items managed by the 15 regional biobanks. In particular, it resolved the inevitable interoperability issues in the exchange of information among the biobanks, and provided a solution to the synonym problem, which arises when the same concept is expressed in a variety of ways.
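
    The performance point about the multi-column index can be illustrated with a small, self-contained sketch. The table layout below is hypothetical and uses the stdlib sqlite3 module for brevity; the system described above runs on MySQL Server 5.6, but composite indexes are declared in essentially the same way.

      # Declaring and exercising a composite (multi-column) index.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE clinical_item (
          id            INTEGER PRIMARY KEY,
          class_high    TEXT,   -- high level of the three-step classification
          class_middle  TEXT,
          class_low     TEXT,
          standard_code TEXT,   -- e.g. a SNOMED CT or LOINC code
          value         TEXT
      );
      CREATE INDEX idx_class_code
          ON clinical_item (class_high, class_middle, class_low, standard_code);
      """)
      plan = conn.execute(
          "EXPLAIN QUERY PLAN SELECT value FROM clinical_item "
          "WHERE class_high = ? AND class_middle = ? AND class_low = ?",
          ("Laboratory", "Hematology", "Hemoglobin"),
      ).fetchall()
      print(plan)   # the query plan shows the composite index being used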

  16. TR32DB - Management of Research Data in a Collaborative, Interdisciplinary Research Project

    NASA Astrophysics Data System (ADS)

    Curdt, Constanze; Hoffmeister, Dirk; Waldhoff, Guido; Lang, Ulrich; Bareth, Georg

    2015-04-01

    The management of research data in a well-structured and documented manner is essential in the context of collaborative, interdisciplinary research environments (e.g. across various institutions). Consequently, set-up and use of a research data management (RDM) system like a data repository or project database is necessary. These systems should accompany and support scientists during the entire research life cycle (e.g. data collection, documentation, storage, archiving, sharing, publishing) and operate across disciplines in interdisciplinary research projects. Challenges and problems of RDM are well-known. Consequently, the set-up of a user-friendly, well-documented, sustainable RDM system is essential, as well as user support and further assistance. In the framework of the Transregio Collaborative Research Centre 32 'Patterns in Soil-Vegetation-Atmosphere Systems: Monitoring, Modelling, and Data Assimilation' (CRC/TR32), funded by the German Research Foundation (DFG), an RDM system was self-designed and implemented. The CRC/TR32 project database (TR32DB, www.tr32db.de) has been operating online since early 2008. The TR32DB handles all data created by the involved project participants from several institutions (e.g. Universities of Cologne, Bonn, Aachen, and the Research Centre Jülich) and research fields (e.g. soil and plant sciences, hydrology, geography, geophysics, meteorology, remote sensing). Very heterogeneous research data are considered, which result from field measurement campaigns, meteorological monitoring, remote sensing, laboratory studies and modelling approaches. Furthermore, outcomes like publications, conference contributions, PhD reports and corresponding images are also included. The TR32DB project database is set up in cooperation with the Regional Computing Centre of the University of Cologne (RRZK) and also located in this hardware environment. The TR32DB system architecture is composed of three main components: (i) a file-based data storage including backup, (ii) a database-based storage for administrative data and metadata, and (iii) a web-interface for user access. The TR32DB offers common features of RDM systems. These include data storage, entry of corresponding metadata by a user-friendly input wizard, search and download of data depending on user permission, as well as secure internal exchange of data. In addition, a Digital Object Identifier (DOI) can be allocated for specific datasets and several web mapping components are supported (e.g. Web-GIS and map search). The centrepiece of the TR32DB is the self-provided and implemented CRC/TR32 specific metadata schema. This enables the documentation of all involved, heterogeneous data with accurate, interoperable metadata. The TR32DB Metadata Schema is set up in a multi-level approach and supports several metadata standards and schemes (e.g. Dublin Core, ISO 19115, INSPIRE, DataCite). Furthermore, the schema is complemented with metadata properties focused on the CRC/TR32 background (e.g. CRC/TR32-specific keywords) and the supported data types. Mandatory, optional and automatic metadata properties are specified. Overall, the TR32DB is designed and implemented according to the needs of the CRC/TR32 (e.g. huge amount of heterogeneous data) and demands of the DFG (e.g. cooperation with a computing centre). The application of a self-designed, project-specific, interoperable metadata schema enables the accurate documentation of all CRC/TR32 data.
The implementation of the TR32DB in the hardware environment of the RRZK ensures the access to the data after the end of the CRC/TR32 funding in 2018.

  17. Passenger baggage object database (PBOD)

    NASA Astrophysics Data System (ADS)

    Gittinger, Jaxon M.; Suknot, April N.; Jimenez, Edward S.; Spaulding, Terry W.; Wenrich, Steve A.

    2018-04-01

    Detection of anomalies of interest in x-ray images is an ever-evolving problem that requires the rapid development of automatic detection algorithms. Automatic detection algorithms are developed using machine learning techniques, which would require developers to obtain the x-ray machine that was used to create the images being trained on, and compile all associated metadata for those images by hand. The Passenger Baggage Object Database (PBOD) and data acquisition application were designed and developed for acquiring and persisting 2-D and 3-D x-ray image data and associated metadata. PBOD was specifically created to capture simulated airline passenger "stream of commerce" luggage data, but could be applied to other areas of x-ray imaging to utilize machine-learning methods.

  18. The Genomic Observatories Metadatabase (GeOMe): A new repository for field and sampling event metadata associated with genetic samples.

    PubMed

    Deck, John; Gaither, Michelle R; Ewing, Rodney; Bird, Christopher E; Davies, Neil; Meyer, Christopher; Riginos, Cynthia; Toonen, Robert J; Crandall, Eric D

    2017-08-01

    The Genomic Observatories Metadatabase (GeOMe, http://www.geome-db.org/) is an open access repository for geographic and ecological metadata associated with biosamples and genetic data. Whereas public databases have served as vital repositories for nucleotide sequences, they do not accession all the metadata required for ecological or evolutionary analyses. GeOMe fills this need, providing a user-friendly, web-based interface for both data contributors and data recipients. The interface allows data contributors to create a customized yet standard-compliant spreadsheet that captures the temporal and geospatial context of each biosample. These metadata are then validated and permanently linked to archived genetic data stored in the National Center for Biotechnology Information's (NCBI's) Sequence Read Archive (SRA) via unique persistent identifiers. By linking ecologically and evolutionarily relevant metadata with publicly archived sequence data in a structured manner, GeOMe sets a gold standard for data management in biodiversity science.

  19. The Genomic Observatories Metadatabase (GeOMe): A new repository for field and sampling event metadata associated with genetic samples

    PubMed Central

    Deck, John; Gaither, Michelle R.; Ewing, Rodney; Bird, Christopher E.; Davies, Neil; Meyer, Christopher; Riginos, Cynthia; Toonen, Robert J.; Crandall, Eric D.

    2017-01-01

    The Genomic Observatories Metadatabase (GeOMe, http://www.geome-db.org/) is an open access repository for geographic and ecological metadata associated with biosamples and genetic data. Whereas public databases have served as vital repositories for nucleotide sequences, they do not accession all the metadata required for ecological or evolutionary analyses. GeOMe fills this need, providing a user-friendly, web-based interface for both data contributors and data recipients. The interface allows data contributors to create a customized yet standard-compliant spreadsheet that captures the temporal and geospatial context of each biosample. These metadata are then validated and permanently linked to archived genetic data stored in the National Center for Biotechnology Information’s (NCBI’s) Sequence Read Archive (SRA) via unique persistent identifiers. By linking ecologically and evolutionarily relevant metadata with publicly archived sequence data in a structured manner, GeOMe sets a gold standard for data management in biodiversity science. PMID:28771471

  20. Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases

    PubMed Central

    Dinu, Valentin; Nadkarni, Prakash

    2007-01-01

    Purpose To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
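
    As a concrete illustration of the trade-off described above, the sketch below implements a toy EAV layout with Python's sqlite3 module: a metadata table lists the permitted attributes, and the data table stores one row per entity-attribute-value triple. All names are illustrative.

      # Toy Entity-Attribute-Value (EAV) layout with a metadata sub-schema.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      -- Metadata sub-schema: the attributes that may appear
      CREATE TABLE attribute (
          attr_id   INTEGER PRIMARY KEY,
          name      TEXT UNIQUE,
          datatype  TEXT
      );
      -- Data sub-schema: one row per (entity, attribute, value) triple
      CREATE TABLE eav (
          entity_id INTEGER,
          attr_id   INTEGER REFERENCES attribute(attr_id),
          value     TEXT
      );
      INSERT INTO attribute VALUES (1, 'heart_rate', 'integer'), (2, 'diagnosis', 'text');
      INSERT INTO eav VALUES (42, 1, '72'), (42, 2, 'hypertension');
      """)
      # Reassemble the sparse rows for one entity
      rows = conn.execute("""
          SELECT e.entity_id, a.name, e.value
          FROM eav e JOIN attribute a USING (attr_id)
          WHERE e.entity_id = 42
      """).fetchall()
      print(rows)   # e.g. [(42, 'heart_rate', '72'), (42, 'diagnosis', 'hypertension')]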

  1. Principles of metadata organization at the ENCODE data coordination center.

    PubMed

    Hong, Eurie L; Sloan, Cricket A; Chan, Esther T; Davidson, Jean M; Malladi, Venkat S; Strattan, J Seth; Hitz, Benjamin C; Gabdank, Idan; Narayanan, Aditi K; Ho, Marcus; Lee, Brian T; Rowe, Laurence D; Dreszer, Timothy R; Roe, Greg R; Podduturi, Nikhil R; Tanaka, Forrest; Hilton, Jason A; Cherry, J Michael

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center (DCC) is responsible for organizing, describing and providing access to the diverse data generated by the ENCODE project. The description of these data, known as metadata, includes the biological sample used as input, the protocols and assays performed on these samples, the data files generated from the results and the computational methods used to analyze the data. Here, we outline the principles and philosophy used to define the ENCODE metadata in order to create a metadata standard that can be applied to diverse assays and multiple genomic projects. In addition, we present how the data are validated and used by the ENCODE DCC in creating the ENCODE Portal (https://www.encodeproject.org/). Database URL: www.encodeproject.org. © The Author(s) 2016. Published by Oxford University Press.

  2. High-performance metadata indexing and search in petascale data storage systems

    NASA Astrophysics Data System (ADS)

    Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.

    2008-07-01

    Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.

  3. A publication database for optical long baseline interferometry

    NASA Astrophysics Data System (ADS)

    Malbet, Fabien; Mella, Guillaume; Lawson, Peter; Taillifet, Esther; Lafrasse, Sylvain

    2010-07-01

    Optical long baseline interferometry is a technique that has generated almost 850 refereed papers to date. The targets span a large variety of objects from planetary systems to extragalactic studies and all branches of stellar physics. We have created a database hosted by the JMMC and connected to the Optical Long Baseline Interferometry Newsletter (OLBIN) web site using MySQL and a collection of XML or PHP scripts in order to store and classify these publications. Each entry is defined by its ADS bibcode and includes basic ADS information and metadata. The metadata are specified by tags sorted in categories: interferometric facilities, instrumentation, wavelength of operation, spectral resolution, type of measurement, target type, and paper category, for example. The whole OLBIN publication list has been processed and we present how the database is organized and can be accessed. We use this tool to generate statistical plots of interest for the community in optical long baseline interferometry.

  4. EarthRef.org: Exploring aspects of a Cyber Infrastructure in Earth Science and Education

    NASA Astrophysics Data System (ADS)

    Staudigel, H.; Koppers, A.; Tauxe, L.; Constable, C.; Helly, J.

    2004-12-01

    EarthRef.org is the common host and (co-) developer of a range of earth science databases and IT resources providing a test bed for a Cyberinfrastructure in Earth Science and Education (CIESE). EarthRef.org database efforts include in particular the Geochemical Earth Reference Model (GERM), the Magnetics Information Consortium (MagIC), the Educational Resources for Earth Science Education (ERESE) project, the Seamount Catalog, the Mid-Ocean Ridge Catalog, the Radio-Isotope Geochronology (RiG) initiative for CHRONOS, and the Microbial Observatory for Fe oxidizing microbes on Loihi Seamount (FeMO; the most recent development). These diverse databases are developed under a single database umbrella and webserver at the San Diego Supercomputing Center. All the databases have similar structures, with consistent metadata concepts, a common database layout, and automated upload wizards. Shared resources include supporting databases like an address book, a reference/publication catalog, and a common digital archive, making database development and maintenance cost-effective, while guaranteeing interoperability. The EarthRef.org CIESE provides a common umbrella for synthesis information as well as sample-based data, and it bridges the gap between science and science education in middle and high schools, validating the potential for a system-wide data infrastructure in a CIESE. EarthRef.org experiences have shown that effective communication with the respective communities is a key part of a successful CIESE, facilitating both utility and community buy-in. GERM has been particularly successful at developing a metadata scheme for geochemistry and in the development of a new electronic journal (G-cubed) that has made much progress in data publication and linkages between journals and community databases. GERM also has worked, through editors and publishers, towards interfacing databases with the publication process, to accomplish a more scholarly and database-friendly data publication environment, and to interface with the respective science communities. MagIC has held several workshops that have resulted in an integrated data archival environment using metadata that are interchangeable with the geochemical metadata. MagIC archives a wide array of paleo and rock magnetic directional, intensity and magnetic property data as well as integrating computational tools. ERESE brought together librarians, teachers, and scientists to create an educational environment that supports inquiry-driven education and the use of science data. Experiences in EarthRef.org demonstrate the feasibility of an effective, community-wide CIESE for data publication, archival and modeling, as well as outreach to the educational community.

  5. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    NASA Astrophysics Data System (ADS)

    Shapoval, I.; Clemencic, M.; Cattaneo, M.

    2014-06-01

    The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and database states to architecture specifications and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, which is the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. It means that relationships between a CondDB state and LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in the LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne - a generic metadata relationships tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, the infrastructure needed for running it, security issues, first experience of usage in the LHCb production and the potential of the system to be applied to a wider set of LHCb tasks.
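
    The sketch below shows, in outline, how relationships between metadata entities can be recorded and queried in a Neo4j graph database using the official Python driver. The connection details, node labels, relationship type and property values are generic placeholders, not the actual Ariadne schema.

      # Recording and querying a relationship between two metadata entities.
      from neo4j import GraphDatabase

      driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

      with driver.session() as session:
          # Record that an application version was validated against a conditions tag
          session.run(
              "MERGE (a:Application {name: $app}) "
              "MERGE (c:CondDBTag {name: $tag}) "
              "MERGE (a)-[:VALIDATED_AGAINST]->(c)",
              app="reco-app-v1", tag="cond-tag-A",
          )
          # Ask which conditions tags a given application version depends on
          result = session.run(
              "MATCH (a:Application {name: $app})-[:VALIDATED_AGAINST]->(c) "
              "RETURN c.name AS tag",
              app="reco-app-v1",
          )
          for record in result:
              print(record["tag"])

      driver.close()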

  6. Evolution of the use of relational and NoSQL databases in the ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Barberis, D.

    2016-09-01

    The ATLAS experiment used for many years a large database infrastructure based on Oracle to store several different types of non-event data: time-dependent detector configuration and conditions data, calibrations and alignments, configurations of Grid sites, catalogues for data management tools, job records for distributed workload management tools, run and event metadata. The rapid development of "NoSQL" databases (structured storage services) in the last five years allowed an extended and complementary usage of traditional relational databases and new structured storage tools in order to improve the performance of existing applications and to extend their functionalities using the possibilities offered by the modern storage systems. The trend is towards using the best tool for each kind of data, separating for example the intrinsically relational metadata from payload storage, and records that are frequently updated and benefit from transactions from archived information. Access to all components has to be orchestrated by specialised services that run on front-end machines and shield the user from the complexity of data storage infrastructure. This paper describes this technology evolution in the ATLAS database infrastructure and presents a few examples of large database applications that benefit from it.

  7. NIST Gas Hydrate Research Database and Web Dissemination Channel.

    PubMed

    Kroenlein, K; Muzny, C D; Kazakov, A; Diky, V V; Chirico, R D; Frenkel, M; Sloan, E D

    2010-01-01

    To facilitate advances in application of technologies pertaining to gas hydrates, a freely available data resource containing experimentally derived information about those materials was developed. This work was performed by the Thermodynamic Research Center (TRC) paralleling a highly successful database of thermodynamic and transport properties of pure molecular compounds and their mixtures. Population of the gas-hydrates database required development of guided data capture (GDC) software designed to convert experimental data and metadata into a well organized electronic format, as well as a relational database schema to accommodate all types of numerical data and metadata within the scope of the project. To guarantee utility for the broad gas hydrate research community, TRC worked closely with the Committee on Data for Science and Technology (CODATA) task group for Data on Natural Gas Hydrates, an international data sharing effort, in developing a gas hydrate markup language (GHML). The fruits of these efforts are disseminated through the NIST Standard Reference Data Program [1] as the Clathrate Hydrate Physical Property Database (SRD #156). A web-based interface for this database, as well as scientific results from the Mallik 2002 Gas Hydrate Production Research Well Program [2], is deployed at http://gashydrates.nist.gov.

  8. Reflecting on the challenges of building a rich interconnected metadata database to describe the experiments of phase six of the coupled climate model intercomparison project (CMIP6) for the Earth System Documentation Project (ES-DOC) and anticipating the opportunities that tooling and services based on rich metadata can provide.

    NASA Astrophysics Data System (ADS)

    Pascoe, C. L.

    2017-12-01

    The Coupled Model Intercomparison Project (CMIP) has coordinated climate model experiments involving multiple international modelling teams since 1995. This has led to a better understanding of past, present, and future climate. The 2017 sixth phase of the CMIP process (CMIP6) consists of a suite of common experiments, and 21 separate CMIP-Endorsed Model Intercomparison Projects (MIPs), making a total of 244 separate experiments. Precise descriptions of the suite of CMIP6 experiments have been captured in a Common Information Model (CIM) database by the Earth System Documentation Project (ES-DOC). The database contains descriptions of forcings, model configuration requirements, ensemble information and citation links, as well as text descriptions and information about the rationale for each experiment. The database was built from statements about the experiments found in the academic literature, the MIP submissions to the World Climate Research Programme (WCRP), WCRP summary tables and correspondence with the principal investigators for each MIP. The database was collated using spreadsheets which are archived in the ES-DOC Github repository and then rendered on the ES-DOC website. A diagrammatic view of the workflow of building the database of experiment metadata for CMIP6 is shown in the attached figure. The CIM provides the formalism to collect detailed information from diverse sources in a standard way across all the CMIP6 MIPs. The ES-DOC documentation acts as a unified reference for CMIP6 information to be used both by data producers and consumers. This is especially important given the federated nature of the CMIP6 project. Because the CIM allows forcing constraints and other experiment attributes to be referred to by more than one experiment, we can streamline the process of collecting information from modelling groups about how they set up their models for each experiment. End users of the climate model archive will be able to ask questions enabled by the interconnectedness of the metadata such as "Which MIPs make use of experiment A?" and "Which experiments use forcing constraint B?".

  9. Metadata improvements driving new tools and services at a NASA data center

    NASA Astrophysics Data System (ADS)

    Moroni, D. F.; Hausman, J.; Foti, G.; Armstrong, E. M.

    2011-12-01

    The NASA Physical Oceanography DAAC (PO.DAAC) is responsible for distributing and maintaining satellite-derived oceanographic data from a number of NASA and non-NASA missions for the physical disciplines of ocean winds, sea surface temperature, ocean topography and gravity. Currently its holdings consist of over 600 datasets with a data archive in excess of 200 Terabytes. The PO.DAAC has recently embarked on a metadata quality and completeness project to migrate, update and improve metadata records for over 300 public datasets. An interactive database management tool has been developed to allow data scientists to enter, update and maintain metadata records. This tool communicates directly with PO.DAAC's Data Management and Archiving System (DMAS), which serves as the new archival and distribution backbone as well as a permanent repository of dataset and granule-level metadata. Although we will briefly discuss the tool, more important ramifications are the ability to now expose, propagate and leverage the metadata in a number of ways. First, the metadata are exposed through a faceted and free-text search interface directly from Drupal-based PO.DAAC web pages, allowing for quick browsing and data discovery, especially by "drilling" through the various facet levels that organize datasets by time/space resolution, processing level, sensor, measurement type, etc. Furthermore, the metadata can now be exposed through web services to produce metadata records in a number of different formats such as FGDC and ISO 19115, or potentially propagated to visualization and subsetting tools, and other discovery interfaces. The fundamental concept is that the metadata form the essential bridge between the user and the tool or discovery mechanism for a broad range of ocean earth science data records.

  10. JAMSTEC DARWIN Database Assimilates GANSEKI and COEDO

    NASA Astrophysics Data System (ADS)

    Tomiyama, T.; Toyoda, Y.; Horikawa, H.; Sasaki, T.; Fukuda, K.; Hase, H.; Saito, H.

    2017-12-01

    Introduction: Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives data and samples obtained by JAMSTEC research vessels and submersibles. As a common asset of human society, the JAMSTEC archive is open to public users for scientific and educational purposes [1]. To publicize its data and samples online, JAMSTEC operates NUUNKUI data sites [2], a group of several databases for various data and sample types. For years, data and metadata of JAMSTEC rock samples, sediment core samples and cruise/dive observations were publicized through databases named GANSEKI, COEDO, and DARWIN, respectively. However, because they had different user interfaces and data structures, these services were somewhat confusing for unfamiliar users. The cost of maintaining multiple hardware and software systems was also problematic for sustaining services and continuous improvement. Database Integration: In 2017, GANSEKI, COEDO and DARWIN were integrated into DARWIN+ [3]. The update also included the implementation of a map-search function as a substitute for the closed portal site. Major functions of the previous systems were incorporated into the new system; users can perform complex searches by thumbnail browsing, map area, keyword filtering, and metadata constraints. As for data handling, the new system is more flexible, allowing the entry of a variety of additional data types. Data Management: After the major DARWIN update, the JAMSTEC data & sample team has been dealing with minor issues in individual sample data/metadata, which sometimes need manual modification to be transferred to the new system. Some new data sets, such as onboard sample photos and surface close-up photos of rock samples, are becoming available online. Geochemical data for sediment core samples are expected to be added in the near future. References: [1] http://www.jamstec.go.jp/e/database/data_policy.html [2] http://www.godac.jamstec.go.jp/jmedia/portal/e/ [3] http://www.godac.jamstec.go.jp/darwin/e/

  11. Web Based Data Access to the World Data Center for Climate

    NASA Astrophysics Data System (ADS)

    Toussaint, F.; Lautenschlager, M.

    2006-12-01

    The World Data Center for Climate (WDC-Climate, www.wdc-climate.de) is hosted by the Model & Data Group (M&D) of the Max Planck Institute for Meteorology. The M&D department is financed by the German government and uses the computers and mass storage facilities of the German Climate Computing Centre (Deutsches Klimarechenzentrum, DKRZ). The WDC-Climate provides web access to 200 Terabytes of climate data; the total mass storage archive contains nearly 4 Petabytes. Although the majority of the datasets concern model output data, some satellite and observational data are accessible as well. The underlying relational database is distributed on five servers. The CERA relational data model is used to integrate catalogue data and mass data. The flexibility of the model allows very different types of data and metadata to be stored and accessed. The CERA metadata catalogue provides easy access to the content of the CERA database as well as to other data in the web. Visit ceramodel.wdc-climate.de for additional information on the CERA data model. The majority of the users access data via the CERA metadata catalogue, which is open without registration. However, prior to retrieving data, users are required to check in and apply for a userid and password. The CERA metadata catalogue is servlet based, so it is accessible worldwide through any web browser at cera.wdc-climate.de. In addition to data and metadata access by the web catalogue, WDC-Climate offers a number of other forms of web based data access. All metadata are available via http request as xml files in various metadata formats (ISO, DC, etc., see wini.wdc-climate.de), which allows for easy data interchange with other catalogues. Model data can be retrieved in GRIB, ASCII, NetCDF, and binary (IEEE) format. WDC-Climate serves as data centre for various projects. Since xml files are accessible by http, the integration of data into applications of different projects is very easy. Projects supported by WDC-Climate are e.g. CEOP, IPCC, and CARIBIC. A script tool for data download (jblob) is offered on the web page to make retrieval of huge data quantities more convenient.

  12. CruiseViewer: SIOExplorer Graphical Interface to Metadata and Archives.

    NASA Astrophysics Data System (ADS)

    Sutton, D. W.; Helly, J. J.; Miller, S. P.; Chase, A.; Clark, D.

    2002-12-01

    We are introducing "CruiseViewer" as a prototype graphical interface for the SIOExplorer digital library project, part of the overall NSF National Science Digital Library (NSDL) effort. When complete, CruiseViewer will provide access to nearly 800 cruises, as well as 100 years of documents and images from the archives of the Scripps Institution of Oceanography (SIO). The project emphasizes data object accessibility, a rich metadata format, efficient uploading methods and interoperability with other digital libraries. The primary function of CruiseViewer is to provide a human interface to the metadata database and to storage systems filled with archival data. The system schema is based on the concept of an "arbitrary digital object" (ADO): arbitrary in that if the object can be stored on a computer system, then SIOExplorer can manage it. Common examples are a multibeam swath bathymetry file, a .pdf cruise report, or a tar file containing all the processing scripts used on a cruise. We require a metadata file for every ADO in an ASCII "metadata interchange format" (MIF), which has proven to be highly useful for operability and extensibility. Bulk ADO storage is managed using the Storage Resource Broker (SRB), data-handling middleware developed at the San Diego Supercomputer Center that centralizes management of and access to distributed storage devices. MIF metadata are harvested from several sources and housed in a relational (Oracle) database. For CruiseViewer, CGI scripts resident on an Apache server are the primary communication and service request handling tools. Along with the CruiseViewer Java application, users can query, access and download objects via a separate method that operates through standard web browsers, http://sioexplorer.ucsd.edu. Both provide the functionality to query and view object metadata and to select and download ADOs. For the CruiseViewer application, Java 2D is used to add a geo-referencing feature that allows users to select basemap images and have vector shapes representing query results mapped over the basemap in the image panel. The two methods together address a wide range of user access needs and will allow for widespread use of SIOExplorer.

  13. GEOMetaCuration: a web-based application for accurate manual curation of Gene Expression Omnibus metadata

    PubMed Central

    Li, Zhao; Li, Jin; Yu, Peng

    2018-01-01

    Abstract Metadata curation has become increasingly important for biological discovery and biomedical research because a large amount of heterogeneous biological data is currently freely available. To facilitate efficient metadata curation, we developed an easy-to-use web-based curation application, GEOMetaCuration, for curating the metadata of Gene Expression Omnibus datasets. It can eliminate mechanical operations that consume precious curation time and can help coordinate curation efforts among multiple curators. It improves the curation process by introducing various features that are critical to metadata curation, such as a back-end curation management system and a curator-friendly front-end. The application is based on a commonly used web development framework of Python/Django and is open-sourced under the GNU General Public License V3. GEOMetaCuration is expected to benefit the biocuration community and to contribute to computational generation of biological insights using large-scale biological data. An example use case can be found at the demo website: http://geometacuration.yubiolab.org. Database URL: https://bitbucket.com/yubiolab/GEOMetaCuration PMID:29688376
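
    Since GEOMetaCuration is built on Python/Django, its curation records presumably map onto Django ORM models. The sketch below shows how such records might look; the model and field names are illustrative assumptions, not the project's actual schema (see the repository URL above for the real code).

```python
# models.py -- a minimal sketch of how curation records might be modelled
# in Django.  Field names are illustrative assumptions, not the actual
# GEOMetaCuration schema.
from django.db import models


class Series(models.Model):
    gse_accession = models.CharField(max_length=20, unique=True)  # e.g. "GSE12345"
    title = models.TextField(blank=True)


class CurationEntry(models.Model):
    series = models.ForeignKey(Series, on_delete=models.CASCADE)
    curator = models.CharField(max_length=100)
    field_name = models.CharField(max_length=100)   # metadata field being curated
    curated_value = models.TextField(blank=True)
    updated_at = models.DateTimeField(auto_now=True)

    class Meta:
        unique_together = ("series", "curator", "field_name")
```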

  14. Operational Support for Instrument Stability through ODI-PPA Metadata Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.; Kotulla, R.; Harbeck, D.; Liu, W.

    2015-09-01

    Over long time scales, quality assurance metrics taken from calibration and calibrated data products can aid observatory operations in quantifying the performance and stability of the instrument, and identify potential areas of concern or guide troubleshooting and engineering efforts. Such methods traditionally require manual SQL entries, assuming the requisite metadata has even been ingested into a database. With the ODI-PPA system, QA metadata has been harvested and indexed for all data products produced over the life of the instrument. In this paper we will describe how, utilizing the industry standard Highcharts Javascript charting package with a customized AngularJS-driven user interface, we have made the process of visualizing the long-term behavior of these QA metadata simple and easily replicated. Operators can easily craft a custom query using the powerful and flexible ODI-PPA search interface and visualize the associated metadata in a variety of ways. These customized visualizations can be bookmarked, shared, or embedded externally, and will be dynamically updated as new data products enter the system, enabling operators to monitor the long-term health of their instrument with ease.

  15. Automatic meta-data collection of STP observation data

    NASA Astrophysics Data System (ADS)

    Ishikura, S.; Kimura, E.; Murata, K.; Kubo, T.; Shinohara, I.

    2006-12-01

    For geoscience and STP (Solar-Terrestrial Physics) studies, various observations have been made by satellites and ground-based observatories. These data are saved and managed at many organizations, but there is no common procedure or rule for providing and/or sharing the data files. Researchers have had difficulty searching and analyzing such different types of data distributed over the Internet. To support such cross-over analyses of observation data, we have developed the STARS (Solar-Terrestrial data Analysis and Reference System). The STARS consists of a client application (STARS-app), a meta-database (STARS-DB), a portal Web service (STARS-WS) and a download agent Web service (STARS DLAgent-WS). The STARS-DB includes directory information, access permissions, protocol information to retrieve data files, hierarchy information of mission/team/data and user information. Users of the STARS are able to download observation data files without knowing the locations of the files by using the STARS-DB. We have implemented the Portal-WS to retrieve meta-data from the meta-database. One reason we use the Web service is to overcome a variety of firewall restrictions, which have become stricter in recent years. It is now difficult for the STARS client application to access the STARS-DB directly by sending SQL queries to obtain meta-data. Using the Web service, we succeeded in placing the STARS-DB behind the Portal-WS and avoided exposing it on the Internet. The STARS accesses the Portal-WS by sending a SOAP (Simple Object Access Protocol) request over HTTP. Meta-data are received as a SOAP response. The STARS DLAgent-WS provides clients with data files downloaded from data sites. The data files are provided with a variety of protocols (e.g., FTP, HTTP, FTPS and SFTP). These protocols are individually selected at each site. The clients send a SOAP request with download request messages and receive observation data files as a SOAP response with a DIME attachment. By introducing the DLAgent-WS, we overcame the problem that the data management policies of each data site are independent. Another important issue to be overcome is how to collect the meta-data of observation data files. So far, STARS-DB managers have added new records to the meta-database and updated them manually. We have had a lot of trouble maintaining the meta-database because observation data are generated every day and the quantity of data files increases rapidly. To address this, we have attempted to automate the collection of the meta-data. In this research, we adopted RSS 1.0 (RDF Site Summary) as a format to exchange meta-data in the STP field. RSS is an RDF vocabulary that provides a multipurpose extensible meta-data description and is suitable for syndication of meta-data. Most of the data in the present study are described in the CDF (Common Data Format), which is a self-describing data format. We have converted meta-information extracted from the CDF data files into RSS files. The program to generate the RSS files is executed on each data site server once a day, and the RSS files provide information about new data files. The RSS files are collected by an RSS collection server once a day, and the meta-data are stored in the STARS-DB.
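
    To illustrate the RSS 1.0 exchange format described above, the following sketch wraps metadata for one (hypothetical) data file in an RDF/RSS item using only the Python standard library. The element choices are simplified assumptions, not the exact schema used by STARS.

```python
# Sketch: wrap metadata for one newly generated data file in an RSS 1.0
# (RDF Site Summary) item, roughly as a data-site server might do once a
# day.  Element and attribute choices are illustrative assumptions.
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RSS_NS = "http://purl.org/rss/1.0/"
ET.register_namespace("rdf", RDF_NS)
ET.register_namespace("", RSS_NS)

file_meta = {  # metadata extracted from a CDF file (hypothetical values)
    "url": "https://data.example.org/satellite_a/20061201.cdf",
    "title": "Satellite A magnetometer data 2006-12-01",
    "description": "start=2006-12-01T00:00:00Z end=2006-12-01T23:59:59Z",
}

rdf = ET.Element(f"{{{RDF_NS}}}RDF")
item = ET.SubElement(rdf, f"{{{RSS_NS}}}item", {f"{{{RDF_NS}}}about": file_meta["url"]})
ET.SubElement(item, f"{{{RSS_NS}}}title").text = file_meta["title"]
ET.SubElement(item, f"{{{RSS_NS}}}link").text = file_meta["url"]
ET.SubElement(item, f"{{{RSS_NS}}}description").text = file_meta["description"]

print(ET.tostring(rdf, encoding="unicode"))
```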

  16. Managing Heterogeneous Information Systems through Discovery and Retrieval of Generic Concepts.

    ERIC Educational Resources Information Center

    Srinivasan, Uma; Ngu, Anne H. H.; Gedeon, Tom

    2000-01-01

    Introduces a conceptual integration approach to heterogeneous databases or information systems that exploits the similarity in metalevel information and performs metadata mining on database objects to discover a set of concepts that serve as a domain abstraction and provide a conceptual layer above existing legacy systems. Presents results of…

  17. Automated Bulk Uploading of Images and Metadata to Flickr

    ERIC Educational Resources Information Center

    Michel, Jason Paul; Tzoc, Elias

    2010-01-01

    The Digital Initiatives department at Miami University, like most digital initiatives and special collections departments, has a large number of rich digital image collections, stored primarily in a third-party database. Typically, these databases are not findable to the average Web user. From a desire to expose these collections to the wider Web…

  18. Toolsets for Airborne Data (TAD): Customized Data Merging Function

    NASA Astrophysics Data System (ADS)

    Benson, A.; Peeters, M. C.; Perez, J.; Parker, L.; Chen, G.

    2013-12-01

    NASA has conducted airborne tropospheric chemistry studies for about three decades. These field campaigns have generated a great wealth of observations, including a wide range of trace gases and aerosol properties. The ASDC Toolset for Airborne Data (TAD) is being designed to meet the user community's needs for manipulating aircraft data for scientific research on climate change and air-quality-relevant issues. Prior to the actual toolset development, a comprehensive metadata database was created to compensate for the absence of standardization of the ICARTT data format in which the data are stored. This database tracks the Principal Investigator-provided metadata and links the measurement variables to a common naming system that was developed as a part of this project. This database is used by the data merging module. Most aircraft data reported during a single flight are not on a consistent time base and are difficult to intercompare. This module provides the user with the ability to merge original data measurements from multiple data providers into a specified time interval or common time base. The database development, common naming scheme and data merge module development will be presented.
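
    The merging step described above can be illustrated with a small pandas sketch that averages two measurements, reported on different time bases, onto a common 60-second grid. The variable names and frequencies are illustrative; the actual TAD merge logic handles ICARTT specifics and many more variables.

```python
# Sketch: place measurements from two providers, reported on different
# time bases, onto a common 60-second grid and merge them.  Variable
# names are illustrative; the real TAD merge logic is more involved.
import numpy as np
import pandas as pd

# Provider 1: 1 Hz ozone data; Provider 2: 10-second aerosol data.
t0 = pd.Timestamp("2013-08-01 14:00:00")
o3 = pd.DataFrame(
    {"O3_ppbv": np.random.uniform(20, 80, 600)},
    index=pd.date_range(t0, periods=600, freq="1s"),
)
aerosol = pd.DataFrame(
    {"scattering_Mm-1": np.random.uniform(5, 50, 60)},
    index=pd.date_range(t0, periods=60, freq="10s"),
)

# Average each variable onto the common 60-second interval, then join.
merged = (
    o3.resample("60s").mean()
    .join(aerosol.resample("60s").mean(), how="outer")
)
print(merged.head())
```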

  19. A high-precision rule-based extraction system for expanding geospatial metadata in GenBank records

    PubMed Central

    Weissenbacher, Davy; Rivera, Robert; Beard, Rachel; Firago, Mari; Wallstrom, Garrick; Scotch, Matthew; Gonzalez, Graciela

    2016-01-01

    Objective The metadata reflecting the location of the infected host (LOIH) of virus sequences in GenBank often lacks specificity. This work seeks to enhance this metadata by extracting more specific geographic information from related full-text articles and mapping them to their latitude/longitudes using knowledge derived from external geographical databases. Materials and Methods We developed a rule-based information extraction framework for linking GenBank records to the latitude/longitudes of the LOIH. Our system first extracts existing geospatial metadata from GenBank records and attempts to improve it by seeking additional, relevant geographic information from text and tables in related full-text PubMed Central articles. The final extracted locations of the records, based on data assimilated from these sources, are then disambiguated and mapped to their respective geo-coordinates. We evaluated our approach on a manually annotated dataset comprising 5728 GenBank records for the influenza A virus. Results We found the precision, recall, and f-measure of our system for linking GenBank records to the latitude/longitudes of their LOIH to be 0.832, 0.967, and 0.894, respectively. Discussion Our system had a high level of accuracy for linking GenBank records to the geo-coordinates of the LOIH. However, it can be further improved by expanding our database of geospatial data, incorporating spell correction, and enhancing the rules used for extraction. Conclusion Our system performs reasonably well for linking GenBank records for the influenza A virus to the geo-coordinates of their LOIH based on record metadata and information extracted from related full-text articles. PMID:26911818
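
    A toy version of the rule-based idea, not the authors' actual rules or gazetteer, might look like the following: match simple "town, region" patterns in article text and resolve them against a small lookup table of coordinates.

```python
# Sketch: a tiny rule-based pass in the spirit of the paper -- look for
# "<town>, <admin region>" patterns in article text and resolve them
# against a small gazetteer.  The pattern and gazetteer entries are toy
# assumptions, not the system's actual rules or data.
import re

GAZETTEER = {  # place name -> (latitude, longitude); illustrative values
    ("shantou", "guangdong"): (23.35, 116.68),
    ("hanoi", "vietnam"): (21.03, 105.85),
}

PATTERN = re.compile(r"\b([A-Z][a-z]+),\s+([A-Z][a-z]+)\b")

def extract_locations(text):
    """Return (match, lat, lon) tuples for gazetteer hits in the text."""
    hits = []
    for city, region in PATTERN.findall(text):
        key = (city.lower(), region.lower())
        if key in GAZETTEER:
            hits.append((f"{city}, {region}", *GAZETTEER[key]))
    return hits

print(extract_locations("Samples were collected near Shantou, Guangdong in 2013."))
```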

  20. A high-precision rule-based extraction system for expanding geospatial metadata in GenBank records.

    PubMed

    Tahsin, Tasnia; Weissenbacher, Davy; Rivera, Robert; Beard, Rachel; Firago, Mari; Wallstrom, Garrick; Scotch, Matthew; Gonzalez, Graciela

    2016-09-01

    The metadata reflecting the location of the infected host (LOIH) of virus sequences in GenBank often lacks specificity. This work seeks to enhance this metadata by extracting more specific geographic information from related full-text articles and mapping them to their latitude/longitudes using knowledge derived from external geographical databases. We developed a rule-based information extraction framework for linking GenBank records to the latitude/longitudes of the LOIH. Our system first extracts existing geospatial metadata from GenBank records and attempts to improve it by seeking additional, relevant geographic information from text and tables in related full-text PubMed Central articles. The final extracted locations of the records, based on data assimilated from these sources, are then disambiguated and mapped to their respective geo-coordinates. We evaluated our approach on a manually annotated dataset comprising 5728 GenBank records for the influenza A virus. We found the precision, recall, and f-measure of our system for linking GenBank records to the latitude/longitudes of their LOIH to be 0.832, 0.967, and 0.894, respectively. Our system had a high level of accuracy for linking GenBank records to the geo-coordinates of the LOIH. However, it can be further improved by expanding our database of geospatial data, incorporating spell correction, and enhancing the rules used for extraction. Our system performs reasonably well for linking GenBank records for the influenza A virus to the geo-coordinates of their LOIH based on record metadata and information extracted from related full-text articles. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Sea Level Station Metadata for Tsunami Detection, Warning and Research

    NASA Astrophysics Data System (ADS)

    Stroker, K. J.; Marra, J.; Kari, U. S.; Weinstein, S. A.; Kong, L.

    2007-12-01

    The devastating earthquake and tsunami of December 26, 2004 have greatly increased recognition of the need for water level data both from the coasts and the deep ocean. In 2006, the National Oceanic and Atmospheric Administration (NOAA) completed a Tsunami Data Management Report describing the management of data required to minimize the impact of tsunamis in the United States. One of the major gaps defined in this report is access to global coastal water level data. NOAA's National Geophysical Data Center (NGDC) and National Climatic Data Center (NCDC) are working cooperatively to bridge this gap. NOAA relies on a network of global data, acquired and processed in real time to support tsunami detection and warning, as well as high-quality global databases of archived data to support research and advanced scientific modeling. In 2005, parties interested in enhancing the access and use of sea level station data united under the NOAA NCDC's Integrated Data and Environmental Applications (IDEA) Center's Pacific Region Integrated Data Enterprise (PRIDE) program to develop a distributed metadata system describing sea level stations (Kari et al., 2006; Marra et al., in press). This effort started with pilot activities in a regional framework and is targeted at tsunami detection and warning systems being developed by various agencies. It includes development of the components of a prototype sea level station metadata web service and accompanying Google Earth-based client application, which use an XML-based schema to expose, at a minimum, information in the NOAA National Weather Service (NWS) Pacific Tsunami Warning Center (PTWC) station database needed to use the PTWC's Tide Tool application. As identified in the Tsunami Data Management Report, the need also exists for long-term retention of the sea level station data. NOAA envisions that the retrospective water level data and metadata will also be available through web services, using an XML-based schema. Five high-priority metadata requirements identified at a water level workshop held at the XXIV IUGG Meeting in Perugia will be addressed: consistent, validated, and well-defined numbers (e.g., amplitude); exact location of sea level stations; a complete record of sea level data stored in the archive; identification of high-priority sea level stations; and consistent definitions. NOAA's National Geophysical Data Center (NGDC) and the co-located World Data Center for Solid Earth Geophysics (including tsunamis) would hold the archive of the sea level station data and distribute the standard metadata. Currently, NGDC is also archiving and distributing the DART buoy deep-ocean water level data and metadata in standards-based formats. Kari, Uday S., John J. Marra, and Stuart A. Weinstein (2006). A Tsunami-Focused Data Sharing Framework for Integration of Databases that Describe Water Level Station Specifications. AGU Fall Meeting 2006, San Francisco, California. Marra, John J., Uday S. Kari, and Stuart A. Weinstein (in press). A Tsunami Detection and Warning-Focused Sea Level Station Metadata Web Service. IUGG XXIV, July 2-13, 2007, Perugia, Italy.

  2. The TOAR database on observations of surface ozone (and more)

    NASA Astrophysics Data System (ADS)

    Schultz, M. G.; Schröder, S.; Cooper, O. R.; Galbally, I. E.; Petropavlovskikh, I. V.; von Schneidemesser, E.; Tanimoto, H.; Elshorbany, Y. F.; Naja, M. K.; Seguel, R. J.

    2017-12-01

    In support of the first Tropospheric Ozone Assessment Report (TOAR), a relational database of global surface ozone observations has been developed and populated with hourly measurement data and enhanced metadata. A comprehensive suite of ozone data products, including standard statistics, health and vegetation impact metrics, and trend information, is made available through a common data portal and a web interface. These data form the basis of the TOAR analyses focusing on human health, vegetation, and climate-relevant ozone issues. Cooperation among many data centers and individual researchers worldwide made it possible to build the world's largest collection of in-situ hourly surface ozone data, covering the period from 1970 to 2015. By combining the data from almost 10,000 measurement sites around the world with global metadata information, new analyses of surface ozone have become possible, such as the first globally consistent characterisations of measurement sites as either urban or rural/remote. Exploitation of these global metadata allows new insights into the global distribution and the seasonal and long-term changes of tropospheric ozone, and enables TOAR to perform the first globally consistent analysis of present-day ozone concentrations and recent ozone changes with relevance to health, agriculture, and climate. This presentation will provide a summary of the TOAR surface observations database, including recent additions of ozone precursor and meteorological data. We will demonstrate how the database can be accessed and the data can be used, and we will discuss its limitations and the potential for closing some of the remaining data gaps.
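
    As an illustration of the kind of health-relevant product derived from hourly surface ozone data, the sketch below computes the daily maximum 8-hour running mean (MDA8), a commonly used exposure metric, with pandas. It is not TOAR code; the synthetic series and the minimum-coverage choice are assumptions.

```python
# Sketch: one commonly used health-related ozone metric, the daily
# maximum 8-hour running mean (MDA8), computed from an hourly series
# with pandas.  This illustrates the kind of product derived from the
# database, not the TOAR processing code itself.
import numpy as np
import pandas as pd

hours = pd.date_range("2015-07-01", periods=14 * 24, freq="h")
ozone_ppb = pd.Series(
    40 + 20 * np.sin(np.arange(len(hours)) * 2 * np.pi / 24)
    + np.random.randn(len(hours)) * 3,
    index=hours,
)

# 8-hour running mean (require at least 6 valid hours), then daily maximum.
running_8h = ozone_ppb.rolling(window=8, min_periods=6).mean()
mda8 = running_8h.resample("D").max()
print(mda8.round(1).head())
```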

  3. A future Outlook: Web based Simulation of Hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Islam, A. S.; Piasecki, M.

    2003-12-01

    Despite recent advances in presenting simulation results as 3D graphs or animated contours, the modeling user community still faces some shortcomings when trying to move around and analyze data. Typical problems include the lack of common platforms with a standard vocabulary to exchange simulation results from different numerical models, insufficient descriptions of data (metadata), the lack of robust search and retrieval tools for data, and difficulties in reusing simulation domain knowledge. This research demonstrates how to create a shared simulation domain on the WWW and run a number of models through multi-user interfaces. Firstly, meta-datasets have been developed to describe hydrodynamic model data based on the geographic metadata standard (ISO 19115), which has been extended to satisfy the needs of the hydrodynamic modeling community. The Extensible Markup Language (XML) is used to publish this metadata via the Resource Description Framework (RDF). A specific domain ontology for Web Based Simulation (WBS) has been developed to explicitly define the vocabulary for the knowledge-based simulation system. Subsequently, this knowledge-based system is converted into an object model using the Meta Object Facility (MOF). The knowledge-based system acts as a meta-model for the object-oriented system, which aids in reusing the domain knowledge. Specific simulation software has been developed based on the object-oriented model. Finally, all model data are stored in an object-relational database. Database back-ends help store, retrieve and query information efficiently. This research uses open source software and technology such as Java Servlet and JSP, the Apache web server, the Tomcat Servlet Engine, PostgreSQL databases, the Protégé ontology editor, RDQL and RQL for querying RDF at the semantic level, and the Jena Java API for RDF. Also, we use international standards such as the ISO 19115 metadata standard, and specifications such as XML, RDF, OWL, XMI, and UML. The final web-based simulation product is deployed as Web Archive (WAR) files, which are platform- and OS-independent and can be used on Windows, UNIX, or Linux. Keywords: Apache, ISO 19115, Java Servlet, Jena, JSP, Metadata, MOF, Linux, Ontology, OWL, PostgreSQL, Protégé, RDF, RDQL, RQL, Tomcat, UML, UNIX, Windows, WAR, XML
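
    The paper publishes ISO 19115-based metadata as RDF using the Jena Java API; purely for illustration, the sketch below does the equivalent in Python with rdflib, using a simplified, hypothetical namespace rather than the full ISO 19115 vocabulary.

```python
# Sketch: publishing a few ISO 19115-style descriptive fields as RDF.
# The paper used the Jena Java API; rdflib (Python) is used here purely
# for illustration, and the property names are simplified assumptions
# rather than the full ISO 19115 schema.
from rdflib import Graph, Literal, Namespace, URIRef

ISO = Namespace("http://example.org/iso19115#")   # hypothetical namespace
dataset = URIRef("http://example.org/models/estuary-run-42")

g = Graph()
g.bind("iso", ISO)
g.add((dataset, ISO.title, Literal("2D hydrodynamic simulation, Delaware estuary")))
g.add((dataset, ISO.abstract, Literal("Depth-averaged tidal flow, 7-day run")))
g.add((dataset, ISO.westBoundLongitude, Literal(-75.6)))
g.add((dataset, ISO.eastBoundLongitude, Literal(-74.9)))

print(g.serialize(format="xml"))
```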

  4. Primary Multimedia Objects and 'Educational Metadata' A Fundamental Dilemma for Developers of Multimedia Archives; Evaluation of Digital Library Impact and User Communities by Analysis of Usage Patterns; The KYVL Kentuckiana Digital Library Project: Background and Current Status; DPDx Collection.

    ERIC Educational Resources Information Center

    Shabajee, Paul; Bollen, Johan; Luce, Rick; Weig, Eric

    2002-01-01

    Includes four articles that discuss multimedia educational database systems and the use of metadata, including repurposing; the evaluation of digital library use that analyzes the retrieval habits of users; the Kentucky Virtual Library (KYVL) and digital collection project; and the collection of the Division of Parasitic Diseases, Centers for…

  5. CellML metadata standards, associated tools and repositories

    PubMed Central

    Beard, Daniel A.; Britten, Randall; Cooling, Mike T.; Garny, Alan; Halstead, Matt D.B.; Hunter, Peter J.; Lawson, James; Lloyd, Catherine M.; Marsh, Justin; Miller, Andrew; Nickerson, David P.; Nielsen, Poul M.F.; Nomura, Taishin; Subramanium, Shankar; Wimalaratne, Sarala M.; Yu, Tommy

    2009-01-01

    The development of standards for encoding mathematical models is an important component of model building and model sharing among scientists interested in understanding multi-scale physiological processes. CellML provides such a standard, particularly for models based on biophysical mechanisms, and a substantial number of models are now available in the CellML Model Repository. However, there is an urgent need to extend the current CellML metadata standard to provide biological and biophysical annotation of the models in order to facilitate model sharing, automated model reduction and connection to biological databases. This paper gives a broad overview of a number of new developments on CellML metadata and provides links to further methodological details available from the CellML website. PMID:19380315

  6. The Footprint Database and Web Services of the Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba

    2016-10-01

    Data from the Herschel Space Observatory is freely available to the public but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports search for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site http://herschel.vo.elte.hu and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. The web service allows downloading footprint data in various formats including Virtual Observatory standards.
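
    The footprint service is advertised as a set of REST functions usable from Python; the sketch below shows only the generic request/response pattern. The endpoint path and parameter names are hypothetical, so the service documentation at herschel.vo.elte.hu should be consulted for the real interface.

```python
# Sketch: calling a footprint search function over HTTP and reading the
# result.  The function name and parameters below are hypothetical --
# only the general REST/JSON access pattern is illustrated here.
import requests

BASE = "http://herschel.vo.elte.hu/ws"   # base URL of the service (see paper)
params = {
    "ra": 83.822,      # degrees, J2000  (hypothetical parameter names)
    "dec": -5.391,
    "radius": 0.2,     # degrees
    "format": "json",
}

resp = requests.get(f"{BASE}/search", params=params, timeout=30)
resp.raise_for_status()
for obs in resp.json():
    print(obs.get("obsid"), obs.get("instrument"))
```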

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kogalovskii, M.R.

    This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDBs) are databases that contain data used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether present database management systems (DBMSs) satisfy SDB requirements. Some current research directions in SDB systems are considered.

  8. GLIMS Glacier Database: Status and Challenges

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Racoviteanu, A.; Khalsa, S. S.; Armstrong, R.

    2008-12-01

    GLIMS (Global Land Ice Measurements from Space) is an international initiative to map the world's glaciers and to build a GIS database that is usable via the World Wide Web. The GLIMS programme includes 70 institutions and 25 Regional Centers (RCs), which analyze satellite imagery to map glaciers in their regions of expertise. The analysis results are collected at the National Snow and Ice Data Center (NSIDC) and ingested into the GLIMS Glacier Database. The database contains approximately 80 000 glacier outlines, half the estimated total on Earth. In addition, the database contains metadata on approximately 200 000 ASTER images acquired over glacierized terrain. Glacier data and the ASTER metadata can be viewed and searched via interactive maps at http://glims.org/. As glacier mapping with GLIMS has progressed, various hurdles have arisen that have required solutions. For example, the GLIMS community has formulated definitions for how to delineate glaciers with different complicated morphologies and how to deal with debris cover. Experiments have been carried out to assess the consistency of the database, and protocols have been defined for the RCs to follow in their mapping. Hurdles still remain. In June 2008, a workshop was convened in Boulder, Colorado to address issues such as mapping debris-covered glaciers, mapping ice divides, and performing change analysis using two different glacier inventories. This contribution summarizes the status of the GLIMS Glacier Database and the steps taken to ensure high data quality.

  9. Stability assessment of structures under earthquake hazard through GRID technology

    NASA Astrophysics Data System (ADS)

    Prieto Castrillo, F.; Boton Fernandez, M.

    2009-04-01

    This work presents a GRID framework to estimate the vulnerability of structures under earthquake hazard. The tool has been designed to cover the needs of a typical earthquake engineering stability analysis: preparation of input data (pre-processing), response computation, and stability analysis (post-processing). In order to validate the application over GRID, a simplified model of a structure under artificially generated earthquake records has been implemented. To achieve this goal, the proposed scheme exploits the GRID technology and its main advantages (parallel intensive computing, huge storage capacity and collaborative analysis among institutions) through intensive interaction among the GRID elements (Computing Element, Storage Element, LHC File Catalogue, federated database, etc.). The dynamical model is described by a set of ordinary differential equations (ODEs) and by a set of parameters. Both elements, along with the integration engine, are encapsulated into Java classes. With this high-level design, subsequent improvements/changes of the model can be addressed with little effort. In the procedure, an earthquake record database is prepared and stored (pre-processing) in the GRID Storage Element (SE). The metadata of these records is also stored in the GRID federated database. This metadata contains both relevant information about the earthquake (as is usual in a seismic repository) and the Logical File Name (LFN) of the record for its later retrieval. Then, from the available set of accelerograms in the SE, the user can specify a range of earthquake parameters to carry out a dynamic analysis. This way, a GRID job is created for each selected accelerogram in the database. At the GRID Computing Element (CE), displacements are then obtained by numerical integration of the ODEs over time. The resulting response for that configuration is stored in the GRID Storage Element (SE) and the maximum structure displacement is computed. Then, the corresponding metadata containing the response LFN, earthquake magnitude and maximum structure displacement is also stored. Finally, the displacements are post-processed through a statistically based algorithm from the available metadata to obtain the probability of collapse of the structure for different earthquake magnitudes. From this study, it is possible to build a vulnerability report for the structure type and seismic data. The proposed methodology can be combined with the ongoing initiatives to build a European earthquake record database. In this context, Grid enables collaborative analysis of shared seismic data and results among different institutions.
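
    The per-accelerogram computation, numerical integration of the structural ODEs followed by extraction of the maximum displacement, can be illustrated with a single-degree-of-freedom oscillator in SciPy. The framework itself encapsulates its models in Java classes on the Grid; the structural parameters and the synthetic record below are assumptions.

```python
# Sketch: the per-accelerogram computation described above, reduced to a
# single-degree-of-freedom oscillator integrated with SciPy.  The
# structural parameters and synthetic accelerogram are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi * 1.5   # natural frequency (rad/s) for a 1.5 Hz structure
zeta = 0.05               # damping ratio

t = np.linspace(0.0, 20.0, 2001)
a_g = 0.3 * 9.81 * np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.15 * t)  # synthetic record

def rhs(time, y):
    """y = [displacement, velocity]; ground acceleration enters as forcing."""
    u, v = y
    ag = np.interp(time, t, a_g)
    return [v, -2 * zeta * omega * v - omega**2 * u - ag]

sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=0.01)
max_disp = np.max(np.abs(sol.y[0]))
print(f"maximum displacement: {max_disp:.4f} m")
```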

  10. Integrating Databases with Maps: The Delivery of Cultural Data through TimeMap.

    ERIC Educational Resources Information Center

    Johnson, Ian

    TimeMap is a unique integration of database management, metadata and interactive maps, designed to contextualise and deliver cultural data through maps. TimeMap extends conventional maps with the time dimension, creating and animating maps "on-the-fly"; delivers them as a kiosk application or embedded in Web pages; links flexibly to…

  11. The Challenges in Metadata Management: 20+ Years of ESO Data

    NASA Astrophysics Data System (ADS)

    Vera, I.; Da Rocha, C.; Dobrzycki, A.; Micol, A.; Vuong, M.

    2015-09-01

    The European Southern Observatory Science Archive Facility has been in operation for more than 20 years. It contains data produced by ESO telescopes as well as the metadata needed for characterizing and distributing those data. This metadata is used to build the different archive services provided by the Archive. Over these years, services have been added, modified or even decommissioned, creating a cocktail of new, evolved and legacy data systems. The challenge for the Archive is to harmonize the differences of those data systems to provide the community with a homogeneous experience when using ESO data. In this paper, we present ESO's experience in three particularly challenging areas. The first discussion is dedicated to the problem of metadata quality over time, the second discusses how to integrate obsolete data models into the current services, and finally we present the challenges of ever-growing databases. We describe our experience dealing with those issues and the solutions adopted to mitigate them.

  12. Towards a semantic PACS: Using Semantic Web technology to represent imaging data.

    PubMed

    Van Soest, Johan; Lustberg, Tim; Grittner, Detlef; Marshall, M Scott; Persoon, Lucas; Nijsten, Bas; Feltens, Peter; Dekker, Andre

    2014-01-01

    The DICOM standard is ubiquitous within medicine. However, improved DICOM semantics would significantly enhance search operations. Furthermore, the databases of current PACS systems are not flexible enough for the demands of image analysis research. In this paper, we investigated whether we can use Semantic Web technology to store and represent metadata of DICOM image files, as well as link additional computational results to image metadata. To this end, we developed a proof of concept containing two applications: one to store commonly used DICOM metadata in an RDF repository, and one to calculate imaging biomarkers from DICOM images and store the biomarker values in an RDF repository. This enabled us to search for all patients with a gross tumor volume calculated to be larger than 50 cc. We have shown that we can successfully store the DICOM metadata in an RDF repository and are refining our proof of concept with regard to volume naming, value representation, and the applications themselves.
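
    The ">50 cc gross tumor volume" query mentioned above can be expressed as SPARQL. The sketch below runs such a query over a small in-memory RDF graph with rdflib; the predicate names are made-up placeholders, not the ontology used in the proof of concept.

```python
# Sketch: the ">50 cc gross tumor volume" query expressed as SPARQL over
# an in-memory RDF store.  Predicate names are made-up placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/pacs#")
g = Graph()
g.add((EX.patient1, EX.hasGTVcc, Literal(62.4, datatype=XSD.double)))
g.add((EX.patient2, EX.hasGTVcc, Literal(31.0, datatype=XSD.double)))

query = """
PREFIX ex: <http://example.org/pacs#>
SELECT ?patient ?gtv
WHERE {
    ?patient ex:hasGTVcc ?gtv .
    FILTER (?gtv > 50)
}
"""
for patient, gtv in g.query(query):
    print(patient, float(gtv))
```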

  13. ClinData Express – A Metadata Driven Clinical Research Data Management System for Secondary Use of Clinical Data

    PubMed Central

    Li, Zuofeng; Wen, Jingran; Zhang, Xiaoyan; Wu, Chunxiao; Li, Zuogao; Liu, Lei

    2012-01-01

    Aiming to ease the secondary use of clinical data in clinical research, we introduce a metadata-driven, web-based clinical data management system named ClinData Express. ClinData Express is made up of two parts: 1) m-designer, a standalone tool for metadata definition; and 2) a web-based data warehouse system for data management. With ClinData Express, researchers only need to define the metadata and data model in m-designer. The web interface for data collection and the corresponding database for data storage are then generated automatically. The standards used in the system and the data export module ensure data reuse. The system has been tested on seven disease data collections in Chinese and one form from dbGaP. The flexibility of the system gives it great potential for use in clinical research. The system is available at http://code.google.com/p/clindataexpress. PMID:23304327

  14. Application Analysis and Decision with Dynamic Analysis

    DTIC Science & Technology

    2014-12-01

    pushes the application file and the JSON file containing the metadata from the database. When the 2 files are in place, the consumer thread starts...human analysts and stores it in a database. It would then use some of these data to generate a risk score for the application. However, static analysis...and store them in the primary A2D database for future analysis. Subject terms: Android, dynamic analysis.

  15. New concepts for building vocabulary for cell image ontologies.

    PubMed

    Plant, Anne L; Elliott, John T; Bhat, Talapady N

    2011-12-21

    There are significant challenges associated with the building of ontologies for cell biology experiments including the large numbers of terms and their synonyms. These challenges make it difficult to simultaneously query data from multiple experiments or ontologies. If vocabulary terms were consistently used and reused across and within ontologies, queries would be possible through shared terms. One approach to achieving this is to strictly control the terms used in ontologies in the form of a pre-defined schema, but this approach limits the individual researcher's ability to create new terms when needed to describe new experiments. Here, we propose the use of a limited number of highly reusable common root terms, and rules for an experimentalist to locally expand terms by adding more specific terms under more general root terms to form specific new vocabulary hierarchies that can be used to build ontologies. We illustrate the application of the method to build vocabularies and a prototype database for cell images that uses a visual data-tree of terms to facilitate sophisticated queries based on experimental parameters. We demonstrate how the terminology might be extended by adding new vocabulary terms into the hierarchy of terms in an evolving process. In this approach, image data and metadata are handled separately, so we also describe a robust file-naming scheme to unambiguously identify image and other files associated with each metadata value. The prototype database http://sbd.nist.gov/ consists of more than 2000 images of cells and benchmark materials, and 163 metadata terms that describe experimental details, including many details about cell culture and handling. Image files of interest can be retrieved, and their data can be compared, by choosing one or more relevant metadata values as search terms. Metadata values for any dataset can be compared with corresponding values of another dataset through logical operations. Organizing metadata for cell imaging experiments under a framework of rules that include highly reused root terms will facilitate the addition of new terms into a vocabulary hierarchy and encourage the reuse of terms. These vocabulary hierarchies can be converted into XML schema or RDF graphs for displaying and querying, but this is not necessary for using them to annotate cell images. Vocabulary data trees from multiple experiments or laboratories can be aligned at the root terms to facilitate query development. This approach of developing vocabularies is compatible with the major advances in database technology and could be used for building the Semantic Web.
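
    A minimal sketch of the proposed approach, with toy terms and metadata values, is given below: a few reusable root terms are extended locally with more specific child terms, and images annotated with those terms can then be queried and traced back to their roots.

```python
# Sketch: a vocabulary hierarchy built by extending a few highly reusable
# root terms with lab-specific child terms, plus a query over per-image
# metadata expressed with those terms.  Terms and metadata values are
# illustrative assumptions.
ROOT_TERMS = {"sample", "instrument", "protocol"}

# child term -> parent term (an experimentalist adds rows locally)
vocabulary = {
    "cell line": "sample",
    "A10 smooth muscle cell": "cell line",
    "microscope": "instrument",
    "phase contrast": "microscope",
}

def path_to_root(term):
    """Return the chain of terms from `term` up to its root term."""
    chain = [term]
    while chain[-1] not in ROOT_TERMS:
        chain.append(vocabulary[chain[-1]])
    return chain

images = {
    "img_0001.tif": {"cell line": "A10 smooth muscle cell", "microscope": "phase contrast"},
    "img_0002.tif": {"cell line": "A10 smooth muscle cell", "microscope": "fluorescence"},
}

# Query: all images of A10 cells taken with phase contrast.
hits = [name for name, meta in images.items()
        if meta.get("cell line") == "A10 smooth muscle cell"
        and meta.get("microscope") == "phase contrast"]
print(hits)
print(path_to_root("phase contrast"))
```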

  16. New concepts for building vocabulary for cell image ontologies

    PubMed Central

    2011-01-01

    Background There are significant challenges associated with the building of ontologies for cell biology experiments including the large numbers of terms and their synonyms. These challenges make it difficult to simultaneously query data from multiple experiments or ontologies. If vocabulary terms were consistently used and reused across and within ontologies, queries would be possible through shared terms. One approach to achieving this is to strictly control the terms used in ontologies in the form of a pre-defined schema, but this approach limits the individual researcher's ability to create new terms when needed to describe new experiments. Results Here, we propose the use of a limited number of highly reusable common root terms, and rules for an experimentalist to locally expand terms by adding more specific terms under more general root terms to form specific new vocabulary hierarchies that can be used to build ontologies. We illustrate the application of the method to build vocabularies and a prototype database for cell images that uses a visual data-tree of terms to facilitate sophisticated queries based on experimental parameters. We demonstrate how the terminology might be extended by adding new vocabulary terms into the hierarchy of terms in an evolving process. In this approach, image data and metadata are handled separately, so we also describe a robust file-naming scheme to unambiguously identify image and other files associated with each metadata value. The prototype database http://sbd.nist.gov/ consists of more than 2000 images of cells and benchmark materials, and 163 metadata terms that describe experimental details, including many details about cell culture and handling. Image files of interest can be retrieved, and their data can be compared, by choosing one or more relevant metadata values as search terms. Metadata values for any dataset can be compared with corresponding values of another dataset through logical operations. Conclusions Organizing metadata for cell imaging experiments under a framework of rules that include highly reused root terms will facilitate the addition of new terms into a vocabulary hierarchy and encourage the reuse of terms. These vocabulary hierarchies can be converted into XML schema or RDF graphs for displaying and querying, but this is not necessary for using them to annotate cell images. Vocabulary data trees from multiple experiments or laboratories can be aligned at the root terms to facilitate query development. This approach of developing vocabularies is compatible with the major advances in database technology and could be used for building the Semantic Web. PMID:22188658

  17. Standards-based metadata procedures for retrieving data for display or mining utilizing persistent (data-DOI) identifiers.

    PubMed

    Harvey, Matthew J; Mason, Nicholas J; McLean, Andrew; Rzepa, Henry S

    2015-01-01

    We describe three different procedures based on metadata standards for enabling automated retrieval of scientific data from digital repositories utilising the persistent identifier of the dataset with optional specification of the attributes of the data document such as filename or media type. The procedures are demonstrated using the JSmol molecular visualizer as a component of a web page and Avogadro as a stand-alone modelling program. We compare our methods for automated retrieval of data from a standards-compliant data repository with those currently in operation for a selection of existing molecular databases and repositories. Our methods illustrate the importance of adopting a standards-based approach of using metadata declarations to increase access to and discoverability of repository-based data.
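
    One widely used retrieval pattern of this kind is HTTP content negotiation on the dataset DOI, asking the resolver for machine-readable metadata rather than the landing page. The sketch below assumes the DOI (a placeholder) is registered with an agency such as DataCite that supports the requested media type; it is an illustration, not one of the paper's three procedures.

```python
# Sketch: HTTP content negotiation on a dataset DOI to retrieve
# machine-readable metadata instead of the landing page.  The DOI is a
# placeholder and is assumed to be registered with an agency (e.g.
# DataCite) that supports this media type.
import requests

doi = "10.1234/example-dataset"   # placeholder DOI
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.datacite.datacite+json"},
    timeout=30,
)
resp.raise_for_status()
record = resp.json()
print(record.get("titles"), record.get("url"))
```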

  18. Design and implementation of the NPOI database and website

    NASA Astrophysics Data System (ADS)

    Newman, K.; Jorgensen, A. M.; Landavazo, M.; Sun, B.; Hutter, D. J.; Armstrong, J. T.; Mozurkewich, David; Elias, N.; van Belle, G. T.; Schmitt, H. R.; Baines, E. K.

    2014-07-01

    The Navy Precision Optical Interferometer (NPOI) has been recording astronomical observations for nearly two decades, with hundreds of thousands of individual observations recorded to date, for a total data volume of many terabytes. To make maximum use of the NPOI data it is necessary to organize them in an easily searchable manner and be able to extract essential diagnostic information from the data to allow users to quickly gauge data quality and suitability for a specific science investigation. This sets the motivation for creating a comprehensive database of observation metadata as well as, at least, reduced data products. The NPOI database is implemented in MySQL using standard database tools and interfaces. The use of standard database tools allows us to focus on top-level database and interface implementation and take advantage of standard features such as backup, remote access, mirroring, and complex queries which would otherwise be time-consuming to implement. A website was created in order to give scientists a user-friendly interface for searching the database. It allows the user to select various metadata to search for and also allows them to decide how and what results are displayed. This streamlines the searches, making it easier and quicker for scientists to find the information they are looking for. The website supports multiple browsers and devices. In this paper we present the design of the NPOI database and website, and give examples of its use.

  19. MOPED 2.5—An Integrated Multi-Omics Resource: Multi-Omics Profiling Expression Database Now Includes Transcriptomics Data

    PubMed Central

    Montague, Elizabeth; Stanberry, Larissa; Higdon, Roger; Janko, Imre; Lee, Elaine; Anderson, Nathaniel; Choiniere, John; Stewart, Elizabeth; Yandl, Gregory; Broomall, William; Kolker, Natali

    2014-01-01

    Abstract Multi-omics data-driven scientific discovery crucially rests on high-throughput technologies and data sharing. Currently, data are scattered across single omics repositories, stored in varying raw and processed formats, and are often accompanied by limited or no metadata. The Multi-Omics Profiling Expression Database (MOPED, http://moped.proteinspire.org) version 2.5 is a freely accessible multi-omics expression database. Continual improvement and expansion of MOPED is driven by feedback from the Life Sciences Community. In order to meet the emergent need for an integrated multi-omics data resource, MOPED 2.5 now includes gene relative expression data in addition to protein absolute and relative expression data from over 250 large-scale experiments. To facilitate accurate integration of experiments and increase reproducibility, MOPED provides extensive metadata through the Data-Enabled Life Sciences Alliance (DELSA Global, http://delsaglobal.org) metadata checklist. MOPED 2.5 has greatly increased the number of proteomics absolute and relative expression records to over 500,000, in addition to adding more than four million transcriptomics relative expression records. MOPED has an intuitive user interface with tabs for querying different types of omics expression data and new tools for data visualization. Summary information including expression data, pathway mappings, and direct connection between proteins and genes can be viewed on Protein and Gene Details pages. These connections in MOPED provide a context for multi-omics expression data exploration. Researchers are encouraged to submit omics data which will be consistently processed into expression summaries. MOPED as a multi-omics data resource is a pivotal public database, interdisciplinary knowledge resource, and platform for multi-omics understanding. PMID:24910945

  20. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1997

    1997-01-01

    Presents abstracts of SIG Sessions. Highlights include digital collections; information retrieval methods; public interest/fair use; classification and indexing; electronic publication; funding; globalization; information technology projects; interface design; networking in developing countries; metadata; multilingual databases; networked…

  1. ENVIRONMENTAL INFORMATION MANAGEMENT SYSTEM (EIMS)

    EPA Science Inventory

    The Environmental Information Management System (EIMS) organizes descriptive information (metadata) for data sets, databases, documents, models, projects, and spatial data. The EIMS design provides a repository for scientific documentation that can be easily accessed with standar...

  2. Extending the Reach of IGSN Beyond Earth: Implementing IGSN Registration to Link Nasa's Apollo Lunar Samples and Their Data

    NASA Technical Reports Server (NTRS)

    Todd, Nancy S.

    2016-01-01

    The rock and soil samples returned from the Apollo missions from 1969-72 have supported 46 years of research leading to advances in our understanding of the formation and evolution of the inner Solar System. NASA has been engaged in several initiatives that aim to restore, digitize, and make available to the public existing published and unpublished research data for the Apollo samples. One of these initiatives is a collaboration with IEDA (Interdisciplinary Earth Data Alliance) to develop MoonDB, a lunar geochemical database modeled after PetDB (Petrological Database of the Ocean Floor). In support of this initiative, NASA has adopted the use of IGSN (International Geo Sample Number) to generate persistent, unique identifiers for lunar samples that scientists can use when publishing research data. To facilitate the IGSN registration of the original 2,200 samples and over 120,000 subdivided samples, NASA has developed an application that retrieves sample metadata from the Lunar Curation Database and uses the SESAR API to automate the generation of IGSNs and registration of samples into SESAR (System for Earth Sample Registration). This presentation will describe the work done by NASA to map existing sample metadata to the IGSN metadata and integrate the IGSN registration process into the sample curation workflow, the lessons learned from this effort, and how this work can be extended in the future to help deal with the registration of large numbers of samples.

  3. Improvements to the Ontology-based Metadata Portal for Unified Semantics (OlyMPUS)

    NASA Astrophysics Data System (ADS)

    Linsinbigler, M. A.; Gleason, J. L.; Huffer, E.

    2016-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support Earth Science data consumers and data providers, enabling the latter to register data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS complements the ODISEES' data discovery system with an intelligent tool to enable data producers to auto-generate semantically enhanced metadata and upload it to the metadata repository that drives ODISEES. Like ODISEES, the OlyMPUS metadata provisioning tool leverages robust semantics, a NoSQL database and query engine, an automated reasoning engine that performs first- and second-order deductive inferencing, and uses a controlled vocabulary to support data interoperability and automated analytics. The ODISEES data discovery portal leverages this metadata to provide a seamless data discovery and access experience for data consumers who are interested in comparing and contrasting the multiple Earth science data products available across NASA data centers. Olympus will support scientists' services and tools for performing complex analyses and identifying correlations and non-obvious relationships across all types of Earth System phenomena using the full spectrum of NASA Earth Science data available. By providing an intelligent discovery portal that supplies users - both human users and machines - with detailed information about data products, their contents and their structure, ODISEES will reduce the level of effort required to identify and prepare large volumes of data for analysis. This poster will explain how OlyMPUS leverages deductive reasoning and other technologies to create an integrated environment for generating and exploiting semantically rich metadata.

  4. Chapter 35: Describing Data and Data Collections in the VO

    NASA Astrophysics Data System (ADS)

    Kent, B. R.; Hanisch, R. J.; Williams, R. D.

    The list of numbers: 19.22, 17.23, 18.11, 16.98, and 15.11, is of little intrinsic interest without information about the context in which they appear. For instance, are these daily closing stock prices for your favorite investment, or are they hourly photometric measurements of an increasingly bright quasar? The information needed to define this context is called metadata. Metadata are data about data. Astronomers are familiar with metadata through the headers of FITS files and the names and units associated with columns in a table or database. In the VO, metadata describe the contents of tables, images, and spectra, as well as aggregate collections of data (archives, surveys) and computational services. Moreover, VO metadata are constructed according to rules that avoid ambiguity and make it clear whether, in the example above, the stock prices are in dollars or euros, or the photometry is Johnson V or Sloan g. Organization of data is important in any scientific discipline. Equally crucial are the descriptions of that data: the organization publishing the data, its creator or the person making it available, what instruments were used, units assigned to measurement, calibration status, and data quality assessment. The Virtual Observatory metadata scheme not only applies to datasets, but to resources as well, including data archive facilities, searchable web forms, and online analysis and display tools. Since the scientific output flowing from large datasets depends greatly on how well the data are described, it is important for users to understand the basics of the metadata scheme in order to locate the data that they want and use it correctly. Metadata are the key to data discovery and data and service interoperability in the Virtual Observatory.
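
    For readers who know metadata mainly through FITS headers and table column units, the short astropy sketch below shows where those metadata live in a file; the file name and the keywords printed are placeholders.

```python
# Sketch: the familiar flavour of metadata mentioned above -- FITS header
# keywords and table column units -- read with astropy.  The file name is
# a placeholder.
from astropy.io import fits

with fits.open("example_spectrum.fits") as hdul:   # placeholder file
    header = hdul[0].header
    # A few typical descriptive keywords, if present.
    for key in ("TELESCOP", "INSTRUME", "OBJECT", "DATE-OBS", "BUNIT"):
        print(key, "=", header.get(key))
    # For a binary table extension, column names and units are metadata too.
    if len(hdul) > 1 and hasattr(hdul[1], "columns"):
        for col in hdul[1].columns:
            print(col.name, col.unit)
```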

  5. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions

    PubMed Central

    Karr, Jonathan R.; Phillips, Nolan C.; Covert, Markus W.

    2014-01-01

    Mechanistic ‘whole-cell’ models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org Source code repository URL: http://github.com/CovertLab/WholeCellSimDB PMID:25231498
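
    The hybrid relational/HDF idea can be sketched compactly: searchable setup metadata go into a small relational store, bulky time-series results into HDF5, and a query joins the two. The table and dataset names below are illustrative assumptions, not WholeCellSimDB's actual schema.

```python
# Sketch of the hybrid idea: simulation setup metadata in a small
# relational store (here SQLite) and bulky results in HDF5 (h5py).
# Table and dataset names are illustrative, not WholeCellSimDB's schema.
import sqlite3

import h5py
import numpy as np

# Relational side: searchable metadata about each simulation.
db = sqlite3.connect("simulations.sqlite")
db.execute("""CREATE TABLE IF NOT EXISTS simulation
              (id INTEGER PRIMARY KEY, batch TEXT, length_s REAL, hdf5_path TEXT)""")
db.execute("INSERT INTO simulation (batch, length_s, hdf5_path) VALUES (?, ?, ?)",
           ("wildtype-2014-01", 30000.0, "sim_0001.h5"))
db.commit()

# Hierarchical side: time-series results for that simulation.
with h5py.File("sim_0001.h5", "w") as f:
    f.create_dataset("states/Mass/cell", data=np.random.rand(30000).astype("f4"))

# A query can now select simulations by metadata, then slice results from HDF5.
row = db.execute("SELECT hdf5_path FROM simulation WHERE length_s >= 20000").fetchone()
with h5py.File(row[0], "r") as f:
    print(f["states/Mass/cell"][:5])
```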

  6. A Grid Metadata Service for Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; Negro, Alessandro; Aloisio, Giovanni

    2010-05-01

    Critical challenges for climate modeling researchers are strongly connected with the increasingly complex simulation models and the huge quantities of produced datasets. Future trends in climate modeling will only increase computational and storage requirements. For this reason, the ability to transparently access both computational and data resources for large-scale complex climate simulations must be considered a key requirement for Earth Science and Environmental distributed systems. From the data management perspective, (i) the quantity of data will continuously increase, (ii) data will become more and more distributed and widespread, (iii) data sharing/federation will represent a key challenge among different sites distributed worldwide, and (iv) the potential community of users (large and heterogeneous) will be interested in discovering experimental results, searching metadata, browsing collections of files, comparing different results, displaying output, etc. A key element for carrying out data search and discovery and for managing and accessing huge and distributed amounts of data is the metadata handling framework. What we propose for the management of distributed datasets is the GRelC service (a data grid solution focusing on metadata management). Unlike classical approaches, the proposed data-grid solution is able to address scalability, transparency, security, efficiency and interoperability. The GRelC service we propose is able to provide access to metadata stored in different and widespread data sources (relational databases running on top of MySQL, Oracle, DB2, etc., leveraging SQL as query language, as well as XML databases - XIndice, eXist, and libxml2-based documents, adopting either XPath or XQuery), providing a strong data virtualization layer in a grid environment. Such a technological solution for distributed metadata management (i) leverages well-known, widely adopted standards (W3C, OASIS, etc.); (ii) supports role-based management (based on VOMS), which increases flexibility and scalability; (iii) provides full support for the Grid Security Infrastructure (authorization, mutual authentication, data integrity, data confidentiality and delegation); (iv) is compatible with existing grid middleware such as gLite and Globus; and finally (v) is currently adopted at the Euro-Mediterranean Centre for Climate Change (CMCC, Italy) to manage the entire CMCC data production activity, as well as in the international Climate-G testbed.

  7. Mashup of Geo and Space Science Data Provided via Relational Databases in the Semantic Web

    NASA Astrophysics Data System (ADS)

    Ritschel, B.; Seelus, C.; Neher, G.; Iyemori, T.; Koyama, Y.; Yatagai, A. I.; Murayama, Y.; King, T. A.; Hughes, J. S.; Fung, S. F.; Galkin, I. A.; Hapgood, M. A.; Belehaki, A.

    2014-12-01

    The use of RDBMSs for the storage and management of geo and space science data and/or metadata is very common. Although the information stored in tables is based on a data model and is therefore well organized and structured, a direct mashup with RDF-based data stored in triple stores is not possible. One solution to the problem is to transform the whole content into RDF structures and store it in triple stores. Another interesting approach is to use a dedicated system/service, such as D2RQ, to access relational database content as virtual, read-only RDF graphs. The Semantic Web-based proof-of-concept GFZ ISDC uses the Virtuoso triple store for general context information/metadata about geo and space science satellite and ground station data. Information about projects, platforms, instruments, persons, product types, etc. is available, but no detailed metadata about the data granules themselves. Such important information, e.g. the start or end time or the detailed spatial coverage of a single measurement, is stored only in RDBMS tables of the ISDC catalog system. In order to provide seamless access to all available information about the granules/data products, a mashup of the different data resources (triple store and RDBMS) is necessary. This paper describes the use of D2RQ for a Semantic Web/SPARQL-based mashup of the relational databases used for the ISDC data server, but also for access to IUGONET and/or ESPAS and further geo and space science data resources. Abbreviations: RDBMS, Relational Database Management System; RDF, Resource Description Framework; SPARQL, SPARQL Protocol And RDF Query Language; D2RQ, Accessing Relational Databases as Virtual RDF Graphs; GFZ ISDC, German Research Centre for Geosciences Information System and Data Center; IUGONET, Inter-university Upper Atmosphere Global Observation Network (Japanese project); ESPAS, Near-Earth space data infrastructure for e-science (European Union funded project).
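
    As an illustration of the D2RQ-style mashup described above, the sketch below issues a SPARQL query against a relational catalog exposed as a virtual RDF graph, using the SPARQLWrapper library. The endpoint URL and the isdc: predicates are placeholders; D2RQ itself defines the actual mapping from tables to RDF.

    ```python
    # Query a D2RQ (or similar) SPARQL endpoint that exposes catalog tables as RDF.
    # The endpoint URL and the isdc:* predicates are illustrative assumptions.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://isdc.example.org/d2rq/sparql")  # hypothetical
    sparql.setQuery("""
        PREFIX dcterms: <http://purl.org/dc/terms/>
        PREFIX isdc:    <http://example.org/isdc#>
        SELECT ?granule ?start ?end WHERE {
            ?granule dcterms:title ?title ;
                     isdc:startTime ?start ;
                     isdc:endTime   ?end .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["granule"]["value"], row["start"]["value"], row["end"]["value"])
    ```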

  8. An Open-source Toolbox for Analysing and Processing PhysioNet Databases in MATLAB and Octave.

    PubMed

    Silva, Ikaro; Moody, George B

    The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox provides access to over 4 TB of biomedical signals including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.
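
    For readers working in Python rather than MATLAB/Octave, the companion `wfdb` Python package offers analogous access to PhysioNet records. The sketch below is not the toolbox described in the record; the record name and database directory are the standard MIT-BIH Arrhythmia Database examples.

    ```python
    # Read a PhysioNet record and its beat annotations with the wfdb Python
    # package (a sibling of the MATLAB/Octave toolbox described above).
    import wfdb

    record = wfdb.rdrecord("100", pn_dir="mitdb", sampto=3600)        # ~10 s of ECG
    annotation = wfdb.rdann("100", "atr", pn_dir="mitdb", sampto=3600)

    print(record.sig_name, record.fs)   # channel names and sampling frequency
    print(annotation.symbol[:10])       # first few beat annotation symbols
    ```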

  9. Design and deployment of a large brain-image database for clinical and nonclinical research

    NASA Astrophysics Data System (ADS)

    Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.

    2004-04-01

    An efficient database is an essential component for organizing diverse image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various areas of medical research. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing, as well as database querying and management, with security and data anonymization concerns well taken care of. The database has a multi-tier client-server architecture comprising a Relational Database Management System, a security layer, an application layer and a user interface. An image source adapter has been developed to handle most popular image formats. The database has a web-browser-based user interface and is easy to use. We used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validating algorithms on large populations of cases. Medical images for processing can be identified and organized based on information in the image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. A prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.

  10. A Flexible Online Metadata Editing and Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilar, Raul; Pan, Jerry Yun; Gries, Corinna

    2010-01-01

    A metadata editing and management system is being developed employing state-of-the-art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customization, and the possibility to add more functionality at a later stage. The system consists of a desktop design tool, or schema walker, used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema, provides the designer with options to combine input fields into online forms, and gives the fields user-friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: first, data entry forms based on one schema may be customized at design time; and second, data entry applications may be generated for any valid XML schema without relying on custom information in the schema. The customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets, which may be further customized, and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema; however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button, independently of validity. Currently the system uses the open source XML database eXist for storage and management, which comes with third-party online and desktop management tools. However, access to metadata files in the application introduced here is managed in a custom online module, using a MySQL backend accessed by a simple Java Server Faces front end. A flexible system with three grouping options (organization, group and single editing access) is provided. Three levels were chosen to distribute administrative responsibilities and handle the common situation of an information manager entering the bulk of the metadata but leaving specifics to the actual data provider.

  11. GeneLab Analysis Working Group Kick-Off Meeting

    NASA Technical Reports Server (NTRS)

    Costes, Sylvain V.

    2018-01-01

    Agenda: goals for the GeneLab AWG and the GL vision; review of the GeneLab AWG charter; timeline and milestones for 2018; logistics (monthly meetings, workshop, internship, ASGSR); introduction of team leads and the goals of each group; introduction of all members; Q&A. Topics include a three-tier client strategy to democratize data; physiological changes, pathway enrichment, differential expression, normalization, processing metadata, and reproducibility; and data federation/integration with heterogeneous external bioinformatics databases. The GLDS currently serves over 100 omics investigations to the biomedical community via open access. In order to expand the scope of metadata record searches via the GLDS, we designed a metadata warehouse that collects and updates metadata records from external systems housing similar data. To demonstrate the capabilities of federated search and retrieval of these data, we imported metadata records from three open-access data systems into the GLDS metadata warehouse: NCBI's Gene Expression Omnibus (GEO), EBI's PRoteomics IDEntifications (PRIDE) repository, and the Metagenomics Analysis server (MG-RAST). Each of these systems defines metadata for omics data sets differently. One solution to bridge such differences is to employ a common object model (COM) to which each system's representation of metadata can be mapped. Warehoused metadata records are then transformed during ETL into this single, common representation. Queries generated via the GLDS are then executed against the warehouse, and matching records are shown in the COM representation (Fig. 1). While this approach is relatively straightforward to implement, the volume of data in the omics domain presents challenges in dealing with latency and currency of records. Furthermore, there has so far been no coordinated, federated search and retrieval of these kinds of data across other open-access systems that would let users conduct biological meta-investigations using data from a variety of sources. Such meta-investigations are key to corroborating findings from many kinds of assays and translating them into systems biology knowledge and, eventually, therapeutics.
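
    A minimal sketch of the common-object-model idea described above: each external repository's record is mapped into one shared representation during ETL. The class fields and the source-specific dictionary keys are illustrative assumptions, not the actual GLDS, GEO, or PRIDE schemas.

    ```python
    # Sketch of mapping heterogeneous repository metadata to a common object model.
    from dataclasses import dataclass

    @dataclass
    class CommonStudyRecord:
        accession: str
        title: str
        organism: str
        assay_type: str
        source_repository: str

    def from_geo(rec: dict) -> CommonStudyRecord:
        # The GEO-style keys used here are assumptions, not the real GEO schema.
        return CommonStudyRecord(
            accession=rec["geo_accession"],
            title=rec["title"],
            organism=rec["organism"],
            assay_type="transcription profiling",
            source_repository="GEO",
        )

    def from_pride(rec: dict) -> CommonStudyRecord:
        # Likewise, the PRIDE-style keys are illustrative only.
        return CommonStudyRecord(
            accession=rec["projectAccession"],
            title=rec["projectTitle"],
            organism=rec.get("species", "unknown"),
            assay_type="proteomics",
            source_repository="PRIDE",
        )
    ```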

  12. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java Spring framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database, including encrypted user login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms, and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.
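
    The sketch below shows the kind of HTTP query a client might send to a Mercury-style Apache Solr index. The Solr /select API and its q/fq/rows/wt parameters are standard, but the host, core name, and field names here are placeholders.

    ```python
    # Query a Solr index over HTTP; host, core, and field names are assumptions.
    import requests

    SOLR = "http://localhost:8983/solr/mercury/select"
    params = {
        "q": "keyword:permafrost",      # full-text/keyword query
        "fq": "project:NGEE-Arctic",    # filter query on an assumed field
        "rows": 10,
        "wt": "json",
    }
    docs = requests.get(SOLR, params=params, timeout=30).json()["response"]["docs"]
    for d in docs:
        print(d.get("title"), d.get("date_range"))
    ```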

  13. Towards structured sharing of raw and derived neuroimaging data across existing resources

    PubMed Central

    Keator, D.B.; Helmer, K.; Steffener, J.; Turner, J.A.; Van Erp, T.G.M.; Gadde, S.; Ashish, N.; Burns, G.A.; Nichols, B.N.

    2013-01-01

    Data sharing efforts increasingly contribute to the acceleration of scientific discovery. Neuroimaging data is accumulating in distributed domain-specific databases, and there is currently neither an integrated access mechanism nor an accepted format for the critically important meta-data necessary for making use of the combined, available neuroimaging data. In this manuscript, we present work from the Derived Data Working Group, an open-access group sponsored by the Biomedical Informatics Research Network (BIRN) and the International Neuroinformatics Coordinating Facility (INCF), focused on practical tools for distributed access to neuroimaging data. The working group develops models and tools facilitating the structured interchange of neuroimaging meta-data and is making progress towards a unified set of tools for such data and meta-data exchange. We report on the key components required for integrated access to raw and derived neuroimaging data as well as associated meta-data and provenance across neuroimaging resources. The components include (1) a structured terminology that provides semantic context to data, (2) a formal data model for neuroimaging with robust tracking of data provenance, (3) a web service-based application programming interface (API) that provides a consistent mechanism to access and query the data model, and (4) a provenance library that can be used for the extraction of provenance data by image analysts and imaging software developers. We believe that the framework and set of tools outlined in this manuscript have great potential for solving many of the issues the neuroimaging community faces when sharing raw and derived neuroimaging data across the various existing database systems for the purpose of accelerating scientific discovery. PMID:23727024

  14. A document centric metadata registration tool constructing earth environmental data infrastructure

    NASA Astrophysics Data System (ADS)

    Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.

    2009-12-01

    DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task with the same name defined in the GEOSS Ten-Year Implementation Plan. The main mission of DIAS is to construct data infrastructure that can effectively integrate earth environmental data such as observation data, numerical model outputs, and socio-economic data provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity and agriculture. Some of DIAS's data products are available at http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes such as the geographic coverage or the creation date. The metadata standards covering these common attributes are published by the geographic information technical committee (TC211) of ISO (the International Organization for Standardization) as ISO 19115:2003 and ISO 19139:2007. Accordingly, DIAS metadata are based on the ISO/TC211 metadata standards. From the viewpoint of data users, metadata are useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners and nonprofessionals. On the other hand, from the viewpoint of data providers, two problems were pointed out in discussions. One is that data providers want to minimize the additional tasks and time spent creating metadata. The other is that data providers want to manage and publish documents that explain their data sets more comprehensively. To solve these problems, we have been developing a document-centric metadata registration tool. The key features of our tool are that the generated documents are available instantly and that there is no extra cost for data providers to generate metadata. The tool is developed as a Web application, so data providers need no additional software beyond a web browser. The interface of the tool provides the section titles of the documents and, by filling out the content of each section, the documents for the data sets are automatically published in PDF and HTML format. Furthermore, a metadata XML file compliant with ISO 19115 and ISO 19139 is created at the same time. The generated metadata are managed in the metadata database of the DIAS project, and will be used in various ISO 19139-compliant metadata management tools, such as GeoNetwork.
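
    A highly simplified sketch of emitting ISO 19139-flavoured XML from section content supplied by a data provider. Only a couple of elements are shown and the nesting is abbreviated; a real DIAS record would nest these under the full MD_DataIdentification/CI_Citation structure and be validated against the schema.

    ```python
    # Emit a minimal, simplified ISO 19139-style metadata fragment.
    import xml.etree.ElementTree as ET

    GMD = "http://www.isotc211.org/2005/gmd"
    GCO = "http://www.isotc211.org/2005/gco"
    ET.register_namespace("gmd", GMD)
    ET.register_namespace("gco", GCO)

    md = ET.Element(f"{{{GMD}}}MD_Metadata")
    ident = ET.SubElement(md, f"{{{GMD}}}identificationInfo")
    title = ET.SubElement(ident, f"{{{GMD}}}title")
    ET.SubElement(title, f"{{{GCO}}}CharacterString").text = "Example DIAS data set"
    abstract = ET.SubElement(ident, f"{{{GMD}}}abstract")
    ET.SubElement(abstract, f"{{{GCO}}}CharacterString").text = "Filled from the document sections."

    print(ET.tostring(md, encoding="unicode"))
    ```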

  15. Information System through ANIS at CeSAM

    NASA Astrophysics Data System (ADS)

    Moreau, C.; Agneray, F.; Gimenez, S.

    2015-09-01

    ANIS (AstroNomical Information System) is a generic web tool developed at CeSAM to facilitate and standardize the implementation of astronomical data of various kinds through private and/or public dedicated Information Systems. The architecture of ANIS is composed of a database server which contains the project data; a web user interface template which provides high-level services (search, extraction and display of imaging and spectroscopic data using a combination of criteria, an object list, an SQL query module or a cone-search interface); a framework composed of several packages; and a metadata database managed by a web administration entity. The process for implementing a new ANIS instance at CeSAM is easy and fast: the scientific project submits its data or secure access to the data, the CeSAM team installs the new instance (web interface template and metadata database), and the project administrator configures the instance with the web ANIS-administration entity. Currently, CeSAM offers through ANIS web access to VO-compliant Information Systems for different projects (HeDaM, HST-COSMOS, CFHTLS-ZPhots, ExoDAT,...).

  16. XRootD popularity on hadoop clusters

    NASA Astrophysics Data System (ADS)

    Meoni, Marco; Boccali, Tommaso; Magini, Nicolò; Menichetti, Luca; Giordano, Domenico; CMS Collaboration

    2017-10-01

    Performance data and metadata of the computing operations at the CMS experiment are collected through a distributed monitoring infrastructure, currently relying on a traditional Oracle database system. This paper shows how to harness Big Data architectures in order to improve the throughput and the efficiency of such monitoring. A large set of operational data - user activities, job submissions, resources, file transfers, site efficiencies, software releases, network traffic, machine logs - is being injected into a readily available Hadoop cluster via several data streamers. The collected metadata are further organized to support fast arbitrary queries; this offers the ability to test several MapReduce-based frameworks and measure the system speed-up compared to the original database infrastructure. By leveraging a quality Hadoop data store and enabling an analytics framework on top, it is possible to design a mining platform to predict dataset popularity and discover patterns and correlations.
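
    A sketch of the kind of popularity aggregation such a platform might run over access records landed in Hadoop, using PySpark. The input path and the field names (dataset, user, bytes_read) are assumptions, not the CMS monitoring schema.

    ```python
    # Aggregate per-dataset popularity from JSON access records stored in HDFS.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("xrootd-popularity").getOrCreate()

    accesses = spark.read.json("hdfs:///cms/monitoring/xrootd/2017/*.json")  # assumed path
    popularity = (accesses
                  .groupBy("dataset")
                  .agg(F.countDistinct("user").alias("n_users"),
                       F.sum("bytes_read").alias("bytes_read"))
                  .orderBy(F.desc("n_users")))
    popularity.show(20, truncate=False)
    ```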

  17. Metadata from data: identifying holidays from anesthesia data.

    PubMed

    Starnes, Joseph R; Wanderer, Jonathan P; Ehrenfeld, Jesse M

    2015-05-01

    The increasingly large databases available to researchers necessitate high-quality metadata that is not always available. We describe a method for generating this metadata independently. Cluster analysis and expectation-maximization were used to separate days into holidays/weekends and regular workdays using anesthesia data from Vanderbilt University Medical Center from 2004 to 2014. This classification was then used to describe differences between the two sets of days over time. We evaluated 3802 days and correctly categorized 3797 based on anesthesia case time (representing an error rate of 0.13%). Use of other metrics for categorization, such as billed anesthesia hours and number of anesthesia cases per day, led to similar results. Analysis of the two categories showed that surgical volume increased more quickly with time for non-holidays than holidays (p < 0.001). We were able to successfully generate metadata from data by distinguishing holidays based on anesthesia data. This data can then be used for economic analysis and scheduling purposes. It is possible that the method can be expanded to similar bimodal and multimodal variables.
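
    In the spirit of the clustering/expectation-maximization approach described, the sketch below separates days into two classes from a single daily workload metric using a two-component Gaussian mixture. The input file and column names are assumptions.

    ```python
    # Separate holiday-like days from regular workdays by mixture modeling.
    import pandas as pd
    from sklearn.mixture import GaussianMixture

    # One row per calendar day with total anesthesia case minutes (assumed file/column).
    daily_df = pd.read_csv("daily_anesthesia_minutes.csv")
    X = daily_df[["case_minutes"]].to_numpy()

    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    labels = gmm.predict(X)

    # Call the lower-mean component the holiday/weekend class.
    holiday_label = gmm.means_.argmin()
    daily_df["is_holiday_like"] = labels == holiday_label
    print(daily_df["is_holiday_like"].value_counts())
    ```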

  18. Overview of FEED, the feeding experiments end-user database.

    PubMed

    Wall, Christine E; Vinyard, Christopher J; Williams, Susan H; Gapeyev, Vladimir; Liu, Xianhua; Lapp, Hilmar; German, Rebecca Z

    2011-08-01

    The Feeding Experiments End-user Database (FEED) is a research tool developed by the Mammalian Feeding Working Group at the National Evolutionary Synthesis Center that permits synthetic, evolutionary analyses of the physiology of mammalian feeding. The tasks of the Working Group are to compile physiologic data sets into a uniform digital format stored at a central source, develop a standardized terminology for describing and organizing the data, and carry out a set of novel analyses using FEED. FEED contains raw physiologic data linked to extensive metadata. It serves as an archive for a large number of existing data sets and a repository for future data sets. The metadata are stored as text and images that describe experimental protocols, research subjects, and anatomical information. The metadata incorporate controlled vocabularies to allow consistent use of the terms used to describe and organize the physiologic data. The planned analyses address long-standing questions concerning the phylogenetic distribution of phenotypes involving muscle anatomy and feeding physiology among mammals, the presence and nature of motor pattern conservation in the mammalian feeding muscles, and the extent to which suckling constrains the evolution of feeding behavior in adult mammals. We expect FEED to be a growing digital archive that will facilitate new research into understanding the evolution of feeding anatomy.

  19. Community cyberinfrastructure for Advanced Microbial Ecology Research and Analysis: the CAMERA resource

    PubMed Central

    Sun, Shulei; Chen, Jing; Li, Weizhong; Altintas, Ilkay; Lin, Abel; Peltier, Steve; Stocks, Karen; Allen, Eric E.; Ellisman, Mark; Grethe, Jeffrey; Wooley, John

    2011-01-01

    The Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA, http://camera.calit2.net/) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing and sharing data about microbial biology through an advanced web-based analysis portal. CAMERA collects and links metadata relevant to environmental metagenome data sets with annotation in a semantically-aware environment allowing users to write expressive semantic queries against the database. To meet the needs of the research community, users are able to query metadata categories such as habitat, sample type, time, location and other environmental physicochemical parameters. CAMERA is compliant with the standards promulgated by the Genomic Standards Consortium (GSC), and sustains a role within the GSC in extending standards for content and format of the metagenomic data and metadata and its submission to the CAMERA repository. To ensure wide, ready access to data and annotation, CAMERA also provides data submission tools to allow researchers to share and forward data to other metagenomics sites and community data archives such as GenBank. It has multiple interfaces for easy submission of large or complex data sets, and supports pre-registration of samples for sequencing. CAMERA integrates a growing list of tools and viewers for querying, analyzing, annotating and comparing metagenome and genome data. PMID:21045053

  20. Community cyberinfrastructure for Advanced Microbial Ecology Research and Analysis: the CAMERA resource.

    PubMed

    Sun, Shulei; Chen, Jing; Li, Weizhong; Altintas, Ilkay; Lin, Abel; Peltier, Steve; Stocks, Karen; Allen, Eric E; Ellisman, Mark; Grethe, Jeffrey; Wooley, John

    2011-01-01

    The Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA, http://camera.calit2.net/) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing and sharing data about microbial biology through an advanced web-based analysis portal. CAMERA collects and links metadata relevant to environmental metagenome data sets with annotation in a semantically-aware environment allowing users to write expressive semantic queries against the database. To meet the needs of the research community, users are able to query metadata categories such as habitat, sample type, time, location and other environmental physicochemical parameters. CAMERA is compliant with the standards promulgated by the Genomic Standards Consortium (GSC), and sustains a role within the GSC in extending standards for content and format of the metagenomic data and metadata and its submission to the CAMERA repository. To ensure wide, ready access to data and annotation, CAMERA also provides data submission tools to allow researchers to share and forward data to other metagenomics sites and community data archives such as GenBank. It has multiple interfaces for easy submission of large or complex data sets, and supports pre-registration of samples for sequencing. CAMERA integrates a growing list of tools and viewers for querying, analyzing, annotating and comparing metagenome and genome data.

  1. Using a linked data approach to aid development of a metadata portal to support Marine Strategy Framework Directive (MSFD) implementation

    NASA Astrophysics Data System (ADS)

    Wood, Chris

    2016-04-01

    Under the Marine Strategy Framework Directive (MSFD), EU Member States are mandated to achieve or maintain 'Good Environmental Status' (GES) in their marine areas by 2020, through a series of Programmes of Measures (PoMs). The Celtic Seas Partnership (CSP), an EU LIFE+ project, aims to support policy makers, special-interest groups, users of the marine environment, and other interested stakeholders on MSFD implementation in the Celtic Seas geographical area. As part of this support, a metadata portal has been built to provide a signposting service to datasets that are relevant to MSFD within the Celtic Seas. To ensure that the metadata has the widest possible reach, a linked data approach was employed to construct the database. Although the metadata are stored in a traditional RDBMS, the metadata are exposed as linked data via the D2RQ platform, allowing virtual RDF graphs to be generated. SPARQL queries can be executed against the endpoint, allowing any user to work with the metadata. D2RQ's mapping language, based on Turtle, was used to map a wide range of relevant ontologies to the metadata (e.g. The Provenance Ontology (prov-o), Ocean Data Ontology (odo), Dublin Core Elements and Terms (dc & dcterms), Friend of a Friend (foaf), and Geospatial ontologies (geo)), allowing users to browse the metadata, either via SPARQL queries or by using D2RQ's HTML interface. The metadata were further enhanced by mapping relevant parameters to the NERC Vocabulary Server, itself built on a SPARQL endpoint. Additionally, a custom web front-end was built to enable users to browse the metadata and express queries through an intuitive graphical user interface that requires no prior knowledge of SPARQL. As well as providing means to browse the data via MSFD-related parameters (Descriptor, Criteria, and Indicator), the metadata records include the dataset's country of origin, the list of organisations involved in the management of the data, and links to any relevant INSPIRE-compliant services relating to the dataset. The web front-end therefore enables users to effectively filter, sort, or search the metadata. As the MSFD timeline requires Member States to review their progress on achieving or maintaining GES every six years, the timely development of this metadata portal will not only aid interested stakeholders in understanding how Member States are meeting their targets, but also show how linked data can be used effectively to support policy makers and associated legislative bodies.
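
    A sketch of the kind of SPARQL request a client could send to such a portal's endpoint to filter datasets by an MSFD descriptor, using the standard SPARQL HTTP protocol. The endpoint URL and the msfd: predicate are placeholders for whatever the portal actually exposes.

    ```python
    # POST a SPARQL query to a D2RQ-backed endpoint (URL and msfd: terms assumed).
    import requests

    ENDPOINT = "http://celticseas.example.org/sparql"
    query = """
    PREFIX dcterms: <http://purl.org/dc/terms/>
    PREFIX msfd:    <http://example.org/msfd#>
    SELECT ?dataset ?title WHERE {
        ?dataset dcterms:title ?title ;
                 msfd:descriptor "D5" .
    } LIMIT 25
    """
    resp = requests.post(ENDPOINT, data={"query": query},
                         headers={"Accept": "application/sparql-results+json"},
                         timeout=30)
    for b in resp.json()["results"]["bindings"]:
        print(b["dataset"]["value"], "-", b["title"]["value"])
    ```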

  2. Last Deglacial Sea Level: A Curated Database of Indicators of Past Sea Levels from Biological and Geomorphological Archives

    NASA Astrophysics Data System (ADS)

    Hibbert, F. D.; Williams, F. H.; Fallon, S.; Rohling, E. J.

    2017-12-01

    The last deglacial was an interval of rapid climate and sea-level change, including the collapse of large continental ice sheets. This database collates carefully assessed sea-level data from peer-reviewed sources for the interval 0 to 25 thousand years ago (ka), from the last glacial maximum to the present interglacial conditions. In addition to facilitating site-specific reconstructions of past sea levels, the database provides a suite of data beyond the range of modern/instrumental variability that may help hone future sea-level projections. The database is global in scope, internally consistent, and contains U-series and radiocarbon dated indicators from both biological and geomorphological archives. We focus on far-field data (i.e., away from the sites of the former continental ice sheets), but some key intermediate-field data (i.e., from the Caribbean) are also included. All primary fields (i.e., sample location, elevation, age and context) possess quantified uncertainties, which - in conjunction with available metadata - allow the reconstructed sea levels to be interpreted within both their uncertainties and geological context. Consistent treatment of each of the individual records in the database, and incorporation of fully expressed uncertainties, allows datasets to be easily compared. The compilation contains 145 studies from 40 locations (>2,000 data points) and includes all raw information and metadata.

  3. Perspective: Interactive material property databases through aggregation of literature data

    NASA Astrophysics Data System (ADS)

    Seshadri, Ram; Sparks, Taylor D.

    2016-05-01

    Searchable, interactive databases of material properties, particularly those relating to functional materials (magnetics, thermoelectrics, photovoltaics, etc.), are curiously missing from discussions of machine-learning and other data-driven methods for advancing new materials discovery. Here we discuss the manual aggregation of experimental data from the published literature for the creation of interactive databases that allow the original experimental data, as well as additional metadata, to be visualized in an interactive manner. The databases described involve materials for thermoelectric energy conversion and for the electrodes of Li-ion batteries. The data can be subject to machine learning, accelerating the discovery of new materials.

  4. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    DOEpatents

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
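
    A toy sketch of the translation-library idea described in the record: generated accessor methods that call a transformation to derive an attribute's value when it is missing. The class, attribute names, and derivation rule are invented for illustration.

    ```python
    # Illustrative "get"/"set" accessors that derive a missing attribute on demand.
    class GeneRecord:
        def __init__(self, start=None, end=None, length=None):
            self._start, self._end, self._length = start, end, length

        def get_length(self):
            # Derive the value via a transformation if it was not loaded.
            if self._length is None and None not in (self._start, self._end):
                self._length = self._end - self._start
            return self._length

        def set_length(self, value):
            self._length = value

    rec = GeneRecord(start=1200, end=1850)
    print(rec.get_length())  # -> 650, derived on demand
    ```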

  5. Development of a Big Data Application Architecture for Navy Manpower, Personnel, Training, and Education

    DTIC Science & Technology

    2016-03-01

    Search excerpts from the report. Abbreviation list: IT, information technology; JBOD, just a bunch of disks; JDBC, Java database connectivity; JPME, Joint Professional Military Education; JSO, Joint Service Officer; JVM, Java virtual machine; MPP, massively parallel processing; MPTE, Manpower, Personnel, Training, and Education; NAVMAC, Navy... Body text: ...external database, whether it is MySQL, Oracle, DB2, or SQL Server (Teller, 2015). Connectors optimize the data transfer by obtaining metadata...

  6. Review of Spatial-Database System Usability: Recommendations for the ADDNS Project

    DTIC Science & Technology

    2007-12-01

    Search excerpts from the report: ...basic GIS background information, with a closer look at spatial databases. A GIS is also a computer-based system designed to capture, manage... foundation for deploying enterprise-wide spatial information systems. According to Oracle® [18], it enables accurate delivery of location-based services... (Toronto TR 2007-141). Cited reference: Lanter, D.P. (1991). Design of a lineage-based meta-data base for GIS. Cartography and Geographic Information Systems, 18.

  7. HomeBank: An Online Repository of Daylong Child-Centered Audio Recordings

    PubMed Central

    VanDam, Mark; Warlaumont, Anne S.; Bergelson, Elika; Cristia, Alejandrina; Soderstrom, Melanie; De Palma, Paul; MacWhinney, Brian

    2017-01-01

    HomeBank is introduced here. It is a public, permanent, extensible, online database of daylong audio recorded in naturalistic environments. HomeBank serves two primary purposes. First, it is a repository for raw audio and associated files: one database requires special permissions, and another redacted database allows unrestricted public access. Associated files include metadata such as participant demographics and clinical diagnostics, automated annotations, and human-generated transcriptions and annotations. Many recordings use the child-perspective LENA recorders (LENA Research Foundation, Boulder, Colorado, United States), but various recordings and metadata can be accommodated. The HomeBank database can have both vetted and unvetted recordings, with different levels of accessibility. Additionally, HomeBank is an open repository for processing and analysis tools for HomeBank or similar data sets. HomeBank is flexible for users and contributors, making primary data available to researchers, especially those in child development, linguistics, and audio engineering. HomeBank facilitates researchers’ access to large-scale data and tools, linking the acoustic, auditory, and linguistic characteristics of children’s environments with a variety of variables including socioeconomic status, family characteristics, language trajectories, and disorders. Automated processing applied to daylong home audio recordings is now becoming widely used in early intervention initiatives, helping parents to provide richer speech input to at-risk children. PMID:27111272

  8. User’s Guide and Metadata for the PICES Nonindigenous Species Information System

    EPA Science Inventory

    The database, the "PICES Nonindigenous Species Information System", was constructed to synthesize the global distributions, environmental tolerances, and natural history attributes of the nonindigenous species in the North Pacific and Hawaii. The User's Guide provides th...

  9. PMAG: Relational Database Definition

    NASA Astrophysics Data System (ADS)

    Keizer, P.; Koppers, A.; Tauxe, L.; Constable, C.; Genevey, A.; Staudigel, H.; Helly, J.

    2002-12-01

    The Scripps Center for Physical and Chemical Earth References (PACER) was established to help create databases for reference data and make them available to the Earth science community. As part of these efforts PACER supports GERM, REM and PMAG and maintains multiple online databases under the http://earthref.org umbrella website. This website has been built on top of a relational database that allows for the archiving and electronic access to a great variety of data types and formats, permitting data queries using a wide range of metadata. These online databases are designed in Oracle 8.1.5 and they are maintained at the San Diego Supercomputer Center. They are directly available via http://earthref.org/databases/. A prototype of the PMAG relational database is now operational within the existing EarthRef.org framework under http://earthref.org/databases/PMAG/. As will be shown in our presentation, the PMAG design focuses on the general workflow that results in the determination of typical paleomagnetic analyses. This ensures that individual data points can be traced between the actual analysis and the specimen, sample, site, locality and expedition it belongs to. These relations guarantee traceability of the data by distinguishing between original and derived data, where the actual (raw) measurements are performed on the specimen level, and data on the sample level and higher are then derived products in the database. These relations may also serve to recalculate site means when new data becomes available for that locality. The PMAG data records are extensively described in terms of metadata. These metadata are used when scientists search through this online database in order to view and download the data they need. At a minimum, they include method descriptions for field sampling, laboratory techniques and statistical analyses. They also include selection criteria used during the interpretation of the data and, most importantly, critical information about the site location (latitude, longitude, elevation), geography (continent, country, region), geological setting (lithospheric plate or block, tectonic setting), geological age (age range, timescale name, stratigraphic position) and materials (rock type, classification, alteration state). Each data point and method description is also related to its peer-reviewed reference [citation ID] as archived in the EarthRef Reference Database (ERR). This guarantees direct traceability all the way to its original source, where the user can find the bibliography of each PMAG reference along with every abstract, data table, technical note and/or appendix that are available in digital form and that can be downloaded as PDF/JPEG images and Microsoft Excel/Word data files. This may help scientists and teachers in performing their research since they have easy access to all the scientific data. It also allows for checking potential errors during the digitization process. Please visit the PMAG website at http://earthref.org/PMAG/ for more information.
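
    A minimal sketch of the specimen-to-expedition traceability described above, expressed as a relational schema with foreign keys (here in SQLite). The table and column names are illustrative, not the actual PMAG/EarthRef schema.

    ```python
    # Relational hierarchy: expedition -> locality -> site -> sample -> specimen -> measurement.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE expedition (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE locality   (id INTEGER PRIMARY KEY,
                             expedition_id INTEGER REFERENCES expedition(id),
                             name TEXT, lat REAL, lon REAL);
    CREATE TABLE site       (id INTEGER PRIMARY KEY, locality_id INTEGER REFERENCES locality(id), name TEXT);
    CREATE TABLE sample     (id INTEGER PRIMARY KEY, site_id INTEGER REFERENCES site(id), name TEXT);
    CREATE TABLE specimen   (id INTEGER PRIMARY KEY, sample_id INTEGER REFERENCES sample(id), name TEXT);
    CREATE TABLE measurement(id INTEGER PRIMARY KEY, specimen_id INTEGER REFERENCES specimen(id),
                             method TEXT, value REAL, citation_id TEXT);
    """)
    # A raw measurement stays linked all the way back to its expedition,
    # so site means can be recomputed when new specimen data arrive.
    ```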

  10. Spectral signature verification using statistical analysis and text mining

    NASA Astrophysics Data System (ADS)

    DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.

    2016-05-01

    In the spectral science community, numerous spectral signatures are stored in databases representative of many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database [1] (SigDB) are presented. The numerical data comprising a sample material's spectrum are validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum are qualitatively analyzed using lexical-analysis text mining. This technique analyzes the syntax of the meta-data to identify local patterns and trends, indicative of the test spectrum's quality. Text mining has been successfully implemented for security [2] (text encryption/decryption), biomedical [3], and marketing [4] applications. The text mining lexical-analysis algorithm is trained on the meta-data patterns of a subset of high- and low-quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need for an expert user. This method has been compared to other validation methods accepted by the spectral science community, and has provided promising results when a baseline spectral signature is present for comparison. The proposed spectral validation method is described from a practical application and analytical perspective.
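
    The spectral angle mapper comparison used to rank a test spectrum against the mean of an ideal population set reduces to the angle between two vectors; a small numpy sketch with placeholder spectra is shown below.

    ```python
    # Spectral angle mapper (SAM): angle between a test spectrum and a reference.
    import numpy as np

    def spectral_angle(test: np.ndarray, reference: np.ndarray) -> float:
        """Angle (radians) between two spectra; smaller means more similar."""
        cos_theta = np.dot(test, reference) / (np.linalg.norm(test) * np.linalg.norm(reference))
        return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    population = np.random.rand(50, 2151)      # placeholder for the ideal population set
    mean_spectrum = population.mean(axis=0)
    test_spectrum = np.random.rand(2151)       # placeholder test signature
    print(spectral_angle(test_spectrum, mean_spectrum))
    ```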

  11. Modernized Techniques for Dealing with Quality Data and Derived Products

    NASA Astrophysics Data System (ADS)

    Neiswender, C.; Miller, S. P.; Clark, D.

    2008-12-01

    "I just want a picture of the ocean floor in this area" is expressed all too often by researchers, educators, and students in the marine geosciences. As more sophisticated systems are developed to handle data collection and processing, the demand for quality data, and standardized products continues to grow. Data management is an invisible bridge between science and researchers/educators. The SIOExplorer digital library presents more than 50 years of ocean-going research. Prior to publication, all data is checked for quality using standardized criterion developed for each data stream. Despite the evolution of data formats and processing systems, SIOExplorer continues to present derived products in well- established formats. Standardized products are published for each cruise, and include a cruise report, MGD77 merged data, multi-beam flipbook, and underway profiles. Creation of these products is made possible by processing scripts, which continue to change with ever-evolving data formats. We continue to explore the potential of database-enabled creation of standardized products, such as the metadata-rich MGD77 header file. Database-enabled, automated processing produces standards-compliant metadata for each data and derived product. Metadata facilitates discovery and interpretation of published products. This descriptive information is stored both in an ASCII file, and a searchable digital library database. SIOExplorer's underlying technology allows focused search and retrieval of data and products. For example, users can initiate a search of only multi-beam data, which includes data-specific parameters. This customization is made possible with a synthesis of database, XML, and PHP technology. The combination of standardized products and digital library technology puts quality data and derived products in the hands of scientists. Interoperable systems enable distribution these published resources using technology such as web services. By developing modernized strategies to deal with data, Scripps Institution of Oceanography is able to produce and distribute well-formed, and quality-tested derived products, which aid research, understanding, and education.

  12. Tropospheric Ozone Assessment Report: Database and Metrics Data of Global Surface Ozone Observations

    DOE PAGES

    Schultz, Martin G.; Schroder, Sabine; Lyapina, Olga; ...

    2017-11-27

    In support of the first Tropospheric Ozone Assessment Report (TOAR), a relational database of global surface ozone observations has been developed and populated with hourly measurement data and enhanced metadata. A comprehensive suite of ozone data products, including standard statistics, health and vegetation impact metrics, and trend information, is made available through a common data portal and a web interface. These data form the basis of the TOAR analyses focusing on human health, vegetation, and climate relevant ozone issues, which are part of this special feature. Cooperation among many data centers and individual researchers worldwide made it possible to build the world's largest collection of in-situ hourly surface ozone data covering the period from 1970 to 2015. By combining the data from almost 10,000 measurement sites around the world with global metadata information, new analyses of surface ozone have become possible, such as the first globally consistent characterisations of measurement sites as either urban or rural/remote. Exploitation of these global metadata allows for new insights into the global distribution, and seasonal and long-term changes, of tropospheric ozone, and they enable TOAR to perform the first globally consistent analysis of present-day ozone concentrations and recent ozone changes with relevance to health, agriculture, and climate. Considerable effort was made to harmonize and synthesize data formats and metadata information from various networks and individual data submissions. Extensive quality control was applied to identify questionable and erroneous data, including changes in apparent instrument offsets or calibrations. Such data were excluded from TOAR data products. Limitations of a posteriori data quality assurance are discussed. As a result of the work presented here, global coverage of surface ozone data for scientific analysis has been significantly extended. Yet, large gaps remain in the surface observation network, both in terms of regions without monitoring and in terms of regions that have monitoring programs but no public access to the data archive. Therefore, future improvements to the database will require not only improved data harmonization, but also expanded data sharing and increased monitoring in data-sparse regions.
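
    As an example of the kind of ozone metric computed from such hourly records, the sketch below derives a simplified maximum daily 8-hour average (MDA8) with pandas. The input file and column names are assumptions, and TOAR's own products apply stricter data-capture rules than this simplified version.

    ```python
    # Simplified MDA8 (maximum daily 8-hour average) from an hourly ozone series.
    import pandas as pd

    hourly = (pd.read_csv("station_hourly_ozone.csv", parse_dates=["time"])  # assumed file/columns
                .set_index("time")["o3_ppb"])

    rolling_8h = hourly.rolling(window=8, min_periods=6).mean()  # 8-hour running means
    mda8 = rolling_8h.resample("D").max()                        # daily maximum of those means
    print(mda8.describe())
    ```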

  13. A Semantically Enabled Metadata Repository for Solar Irradiance Data Products

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Cox, M.; Lindholm, D. M.; Nadiadi, I.; Traver, T.

    2014-12-01

    The Laboratory for Atmospheric and Space Physics, LASP, has been conducting research in atmospheric and space science for over 60 years, and providing the associated data products to the public. LASP has a long history, in particular, of making space-based measurements of the solar irradiance, which serves as crucial input to several areas of scientific research, including solar-terrestrial interactions, atmospheric science, and climate. LISIRD, the LASP Interactive Solar Irradiance Data Center, serves these datasets to the public, including solar spectral irradiance (SSI) and total solar irradiance (TSI) data. The LASP extended metadata repository, LEMR, is a database of information about the datasets served by LASP, such as parameters, uncertainties, temporal and spectral ranges, current version, alerts, etc. It serves as the definitive, single source of truth for that information. The database is populated with information garnered via web forms and automated processes. Dataset owners keep the information current and verified for datasets under their purview. This information can be pulled dynamically for many purposes. Web sites such as LISIRD can include this information in web page content as it is rendered, ensuring users get current, accurate information. It can also be pulled to create metadata records in various metadata formats, such as SPASE (for heliophysics) and ISO 19115. Once these records are made available to the appropriate registries, our data will be discoverable by users coming in via those organizations. The database is implemented as an RDF triplestore, a collection of subject-predicate-object statements (triples) whose elements are identifiable by URIs. This capability, coupled with SPARQL-over-HTTP read access, enables semantic queries over the repository contents. To create the repository we leveraged VIVO, an open source semantic web application, to manage and create new ontologies and populate repository content. A variety of ontologies were used in creating the triplestore, including ontologies that came with VIVO such as FOAF. Also, the W3C DCAT ontology was integrated and extended to describe properties of our data products that we needed to capture, such as spectral range. The presentation will describe the architecture, ontology issues, and tools used to create LEMR and plans for its evolution.
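
    A small rdflib sketch of the pattern described: dataset properties stored as RDF triples (here with DCAT/DCTERMS terms plus an invented spectral-range extension) and read back with a SPARQL query. The namespace, dataset URI, and extension property are illustrative, not the actual LEMR vocabulary.

    ```python
    # Store DCAT-style dataset metadata as triples and query them with SPARQL.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCAT, DCTERMS

    LASP = Namespace("http://lasp.colorado.edu/lemr/")  # illustrative namespace

    g = Graph()
    ds = URIRef(LASP["dataset/sorce_tsi"])
    g.add((ds, DCTERMS.title, Literal("SORCE Total Solar Irradiance")))
    g.add((ds, DCAT.theme, Literal("total solar irradiance")))
    g.add((ds, LASP.spectralRangeNm, Literal("total (spectrally integrated)")))  # invented extension

    for row in g.query("""
        SELECT ?title WHERE { ?d <http://purl.org/dc/terms/title> ?title }
    """):
        print(row.title)
    ```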

  14. Tropospheric Ozone Assessment Report: Database and Metrics Data of Global Surface Ozone Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, Martin G.; Schroder, Sabine; Lyapina, Olga

    In support of the first Tropospheric Ozone Assessment Report (TOAR), a relational database of global surface ozone observations has been developed and populated with hourly measurement data and enhanced metadata. A comprehensive suite of ozone data products, including standard statistics, health and vegetation impact metrics, and trend information, is made available through a common data portal and a web interface. These data form the basis of the TOAR analyses focusing on human health, vegetation, and climate relevant ozone issues, which are part of this special feature. Cooperation among many data centers and individual researchers worldwide made it possible to build the world's largest collection of in-situ hourly surface ozone data covering the period from 1970 to 2015. By combining the data from almost 10,000 measurement sites around the world with global metadata information, new analyses of surface ozone have become possible, such as the first globally consistent characterisations of measurement sites as either urban or rural/remote. Exploitation of these global metadata allows for new insights into the global distribution, and seasonal and long-term changes, of tropospheric ozone, and they enable TOAR to perform the first globally consistent analysis of present-day ozone concentrations and recent ozone changes with relevance to health, agriculture, and climate. Considerable effort was made to harmonize and synthesize data formats and metadata information from various networks and individual data submissions. Extensive quality control was applied to identify questionable and erroneous data, including changes in apparent instrument offsets or calibrations. Such data were excluded from TOAR data products. Limitations of a posteriori data quality assurance are discussed. As a result of the work presented here, global coverage of surface ozone data for scientific analysis has been significantly extended. Yet, large gaps remain in the surface observation network, both in terms of regions without monitoring and in terms of regions that have monitoring programs but no public access to the data archive. Therefore, future improvements to the database will require not only improved data harmonization, but also expanded data sharing and increased monitoring in data-sparse regions.

  15. UAV field demonstration of social media enabled tactical data link

    NASA Astrophysics Data System (ADS)

    Olson, Christopher C.; Xu, Da; Martin, Sean R.; Castelli, Jonathan C.; Newman, Andrew J.

    2015-05-01

    This paper addresses the problem of enabling Command and Control (C2) and data exfiltration functions for missions using small, unmanned, airborne surveillance and reconnaissance platforms. The authors demonstrated the feasibility of using existing commercial wireless networks as the data transmission infrastructure to support Unmanned Aerial Vehicle (UAV) autonomy functions such as transmission of commands, imagery, metadata, and multi-vehicle coordination messages. The authors developed and integrated a C2 Android application for ground users with a common smart phone, a C2 and data exfiltration Android application deployed on-board the UAVs, and a web server with database to disseminate the collected data to distributed users using standard web browsers. The authors performed a mission-relevant field test and demonstration in which operators commanded a UAV from an Android device to search and loiter; and remote users viewed imagery, video, and metadata via web server to identify and track a vehicle on the ground. Social media served as the tactical data link for all command messages, images, videos, and metadata during the field demonstration. Imagery, video, and metadata were transmitted from the UAV to the web server via multiple Twitter, Flickr, Facebook, YouTube, and similar media accounts. The web server reassembled images and video with corresponding metadata for distributed users. The UAV autopilot communicated with the on-board Android device via on-board Bluetooth network.

  16. Discovering Physical Samples Through Identifiers, Metadata, and Brokering

    NASA Astrophysics Data System (ADS)

    Arctur, D. K.; Hills, D. J.; Jenkyns, R.

    2015-12-01

    Physical samples, particularly in the geosciences, are key to understanding the Earth system, its history, and its evolution. Our record of the Earth as captured by physical samples is difficult to explain and mine for understanding, due to incomplete, disconnected, and evolving metadata content. This is further complicated by differing ways of classifying, cataloguing, publishing, and searching the metadata, especially when specimens do not fit neatly into a single domain—for example, fossils cross disciplinary boundaries (mineral and biological). Sometimes even the fundamental classification systems evolve, such as the geological time scale, triggering daunting processes to update existing specimen databases. Increasingly, we need to consider ways of leveraging permanent, unique identifiers, as well as advancements in metadata publishing that link digital records with physical samples in a robust, adaptive way. An NSF EarthCube Research Coordination Network (RCN) called the Internet of Samples (iSamples) is now working to bridge the metadata schemas for biological and geological domains. We are leveraging the International Geo Sample Number (IGSN) that provides a versatile system of registering physical samples, and working to harmonize this with the DataCite schema for Digital Object Identifiers (DOI). A brokering approach for linking disparate catalogues and classification systems could help scale discovery and access to the many large collections now being managed (sometimes millions of specimens per collection). This presentation is about our community building efforts, research directions, and insights to date.

  17. Development of a Searchable Metabolite Database and Simulator of Xenobiotic Metabolism

    EPA Science Inventory

    A computational tool (MetaPath) has been developed for storage and analysis of metabolic pathways and associated metadata. The system is capable of sophisticated text and chemical structure/substructure searching as well as rapid comparison of metabolites formed across chemicals,...

  18. A Guided Tour of Saada

    NASA Astrophysics Data System (ADS)

    Michel, L.; Motch, C.; Nguyen Ngoc, H.; Pineau, F. X.

    2009-09-01

    Saada (http://amwdb.u-strasbg.fr/saada) is a tool for helping astronomers build local archives without writing any code (Michel et al. 2004). Databases created by Saada can host collections of heterogeneous data files. These data collections can also be published in the VO. An overview of the main Saada features is presented in this demo: creation of a basic database, creation of relationships, data searches using SaadaQL, metadata tagging, and use of VO services.

  19. Application description and policy model in collaborative environment for sharing of information on epidemiological and clinical research data sets.

    PubMed

    de Carvalho, Elias César Araujo; Batilana, Adelia Portero; Simkins, Julie; Martins, Henrique; Shah, Jatin; Rajgor, Dimple; Shah, Anand; Rockart, Scott; Pietrobon, Ricardo

    2010-02-19

    Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets focusing on epidemiological clinical research in a collaborative environment and (2) create a policy model placing this collaborative environment into the current scientific social context. The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and the funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. Based on our empirical observations and the resulting model, the social network environment surrounding the application can assist epidemiologists and clinical researchers in contributing and searching for metadata in a collaborative environment, thus potentially facilitating collaboration efforts among research communities distributed around the globe.

  20. The National Extreme Events Data and Research Center (NEED)

    NASA Astrophysics Data System (ADS)

    Gulledge, J.; Kaiser, D. P.; Wilbanks, T. J.; Boden, T.; Devarakonda, R.

    2014-12-01

    The Climate Change Science Institute at Oak Ridge National Laboratory (ORNL) is establishing the National Extreme Events Data and Research Center (NEED), with the goal of transforming how the United States studies and prepares for extreme weather events in the context of a changing climate. NEED will encourage the myriad, distributed extreme events research communities to move toward the adoption of common practices and will develop a new database compiling global historical data on weather- and climate-related extreme events (e.g., heat waves, droughts, and hurricanes) and related information about impacts, costs, recovery, and available research. Currently, extreme event information is not easy to access and is largely incompatible and inconsistent across web sites. NEED's database development will take into account differences in time frames, spatial scales, treatments of uncertainty, and other parameters and variables, and will leverage informatics tools developed at ORNL (i.e., the Metadata Editor [1] and Mercury [2]) to generate standardized, robust documentation for each database along with a web-searchable catalog. In addition, NEED will facilitate convergence on commonly accepted definitions and standards for extreme events data and will enable integrated analyses of coupled threats, such as hurricanes/sea-level rise/flooding and droughts/wildfires. Our goal and vision is that NEED will become the premier integrated resource for the general study of extreme events. References: [1] Devarakonda, Ranjeet, et al. "OME: Tool for generating and managing metadata to handle BigData." Big Data (Big Data), 2014 IEEE International Conference on. IEEE, 2014. [2] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94.

  1. The VIMS Data Explorer: A tool for locating and visualizing hyperspectral data

    NASA Astrophysics Data System (ADS)

    Pasek, V. D.; Lytle, D. M.; Brown, R. H.

    2016-12-01

    Since the Cassini spacecraft successfully entered Saturn's orbit in the summer of 2004, the visible and infrared mapping spectrometer (VIMS) instrument onboard has returned over 300,000 hyperspectral data cubes. The VIMS Science Investigation is a multidisciplinary effort that uses these hyperspectral data to study a variety of scientific problems, including surface characterizations of the icy satellites and atmospheric analyses of Titan and Saturn. Such investigations may need to identify thousands of exemplary data cubes for analysis and can span many years in scope. Here we describe the VIMS Data Explorer (VDE) application, currently employed by the VIMS Investigation to search for and visualize data. The VDE application facilitates real-time inspection of the entire VIMS hyperspectral dataset, the construction of in situ maps, and markers to save and recall work. The application relies on two databases to provide comprehensive search capabilities. The first database contains metadata for every cube. These metadata searches are used to identify records based on parameters such as target, observation name, or date taken; however, they fall short for some investigations because the cube metadata contain no target geometry information. Through the introduction of a post-calibration pixel database, the VDE tool enables users to greatly expand their searching capabilities. Users can select favorable cubes for further processing into 2-D and 3-D interactive maps, aiding in the data interpretation and selection process. The VDE application enables efficient search, visualization, and access to VIMS hyperspectral data. It is simple to use, requiring nothing more than a browser for access. Hyperspectral bands can be individually selected or combined to create real-time color images, a technique commonly employed by hyperspectral researchers to highlight compositional differences.
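    The two-database search design lends itself to a simple illustration. The following Python sketch runs the kind of cube-metadata query described above against a local SQLite stand-in; the table layout, column names, and values are hypothetical, not the actual VDE schema.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE cube_metadata (
                            cube_id TEXT, target TEXT,
                            observation_name TEXT, start_time TEXT)""")
        conn.execute("INSERT INTO cube_metadata VALUES "
                     "('CM_1790000000_1', 'TITAN', 'VIMS_123TI_EXAMPLE', '2014-07-20')")

        # Identify candidate cubes by target and date; geometry-based selection
        # would instead go through the post-calibration pixel database.
        rows = conn.execute(
            "SELECT cube_id, observation_name FROM cube_metadata "
            "WHERE target = ? AND start_time >= ?",
            ("TITAN", "2014-01-01")).fetchall()
        print(rows)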

  2. Taxonomic annotation of public fungal ITS sequences from the built environment – a report from an April 10–11, 2017 workshop (Aberdeen, UK)

    PubMed Central

    Nilsson, R. Henrik; Taylor, Andy F. S.; Adams, Rachel I.; Baschien, Christiane; Johan Bengtsson-Palme; Cangren, Patrik; Coleine, Claudia; Heide-Marie Daniel; Glassman, Sydney I.; Hirooka, Yuuri; Irinyi, Laszlo; Reda Iršėnaitė; Pedro M. Martin-Sanchez; Meyer, Wieland; Seung-Yoon Oh; Jose Paulo Sampaio; Seifert, Keith A.; Sklenář, Frantisek; Dirk Stubbe; Suh, Sung-Oui; Summerbell, Richard; Svantesson, Sten; Martin Unterseher; Cobus M. Visagie; Weiss, Michael; Woudenberg, Joyce HC; Christian Wurzbacher; den Wyngaert, Silke Van; Yilmaz, Neriman; Andrey Yurkov; Kõljalg, Urmas; Abarenkov, Kessy

    2018-01-01

    Abstract Recent DNA-based studies have shown that the built environment is surprisingly rich in fungi. These indoor fungi – whether transient visitors or more persistent residents – may hold clues to the rising levels of human allergies and other medical and building-related health problems observed globally. The taxonomic identity of these fungi is crucial in such pursuits. Molecular identification of the built mycobiome is no trivial undertaking, however, given the large number of unidentified, misidentified, and technically compromised fungal sequences in public sequence databases. In addition, the sequence metadata required to make informed taxonomic decisions – such as country and host/substrate of collection – are often lacking even from reference and ex-type sequences. Here we report on a taxonomic annotation workshop (April 10–11, 2017) organized at the James Hutton Institute/University of Aberdeen (UK) to facilitate reproducible studies of the built mycobiome. The 32 participants went through public fungal ITS barcode sequences related to the built mycobiome for taxonomic and nomenclatural correctness, technical quality, and metadata availability. A total of 19,508 changes – including 4,783 name changes, 14,121 metadata annotations, and the removal of 99 technically compromised sequences – were implemented in the UNITE database for molecular identification of fungi (https://unite.ut.ee/) and shared with a range of other databases and downstream resources. Among the genera that saw the largest number of changes were Penicillium, Talaromyces, Cladosporium, Acremonium, and Alternaria, all of them of significant importance in both culture-based and culture-independent surveys of the built environment. PMID:29559822

  3. Observatory Bibliographies as Research Tools

    NASA Astrophysics Data System (ADS)

    Rots, Arnold H.; Winkelman, S. L.

    2013-01-01

    Traditionally, observatory bibliographies were maintained to provide insight into how successful an observatory is, as measured by its prominence in the (refereed) literature. When we set up the bibliographic database for the Chandra X-ray Observatory (http://cxc.harvard.edu/cgi-gen/cda/bibliography) as part of the Chandra Data Archive (http://cxc.harvard.edu/cda/), very early in the mission, our objective was to make it primarily a useful tool for our user community. To achieve this we are: (1) casting a very wide net in collecting Chandra-related publications; (2) including for each literature reference in the database a wealth of metadata that is useful for the users; and (3) providing specific links between the articles and the datasets in the archive that they use. As a result our users are able to browse the literature and the data archive simultaneously. As an added bonus, the rich metadata content and data links have also allowed us to assemble more meaningful statistics about the scientific efficacy of the observatory. In all this we collaborate closely with the Astrophysics Data System (ADS). Among the plans for future enhancement are the inclusion of press releases and the Chandra image gallery, linking with ADS semantic searching tools, full-text metadata mining, and linking with other observatories' bibliographies. This work is supported by NASA contract NAS8-03060 (CXC) and depends critically on the services provided by the ADS.

  4. Online Metadata Directories: A way of preserving, sharing and discovering scientific information

    NASA Technical Reports Server (NTRS)

    Meaux, M.

    2005-01-01

    The Global Change Master Directory (GCMD) assists the scientific community in the discovery of and linkage to Earth Science data and provides data holders a means to advertise their data to the community through its portals, i.e. online customized subset metadata directories. These directories are effectively serving communities like the Joint Committee on Antarctic Data Management (JCADM), the Global Observing System Information Center (GOSIC), and the Global Ocean Ecosystems Dynamic Program (GLOBEC) by increasing the visibility of their data holdings. The purpose of the Gulf of Maine Ocean Data Partnership (GoMODP) is to "promote and coordinate the sharing, linking, electronic dissemination, and use of data on the Gulf of Maine region". The participants have decided that a "coordinated effort is needed to enable users throughout the Gulf of Maine region and beyond to discover and put to use the vast and growing quantities of data in their respective databases". GoMODP members have invited the GCMD to discuss potential collaborations associated with this effort. The presentation will focus on the use of the GCMD's metadata directory as a powerful tool for data discovery and sharing. An overview of the directory and its metadata authoring tools will be given.

  5. A Generic Metadata Editor Supporting System Using Drupal CMS

    NASA Astrophysics Data System (ADS)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Metadata Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain, such as the Biological Profile of the FGDC CSDGM, or for geopolitical regions, such as the European Profile or the North American Profile in the ISO standards. It is therefore desirable to have a software foundation to support metadata creation and editing for multiple standards and profiles, without reinventing the wheel. We have developed a software module as a generic, flexible software system to do just that: to facilitate the support for multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal inter-dependencies to other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as an XML blob. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form, for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on a metadata record before and after submission. It is trivial to store the metadata record as an actual XML file or in a storage/archive system. We are working on adding many features to help editor users, such as auto-completion, pre-populating of forms, partial saving, as well as automatic schema validation. In this presentation we will demonstrate a few sample editors, including an FGDC editor and a bare-bones editor for ISO 19115/19139. We will also demonstrate the use of templates during the definition phase, with the support of export and import functions. Form pre-population and input validation will also be covered. These modules are available as open-source software from the Islandora software foundation, as a component of a larger Drupal-based data archive system. They can be easily installed as a stand-alone system or plugged into other existing metadata platforms.
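    The two-step workflow (a stored form definition that is later rendered) can be sketched outside of Drupal as well. The Python fragment below is a rough analogue only: it parses a hypothetical form-definition XML blob into a field list that a rendering layer could consume; the element and attribute names are invented, and the actual modules work with PHP arrays and the Drupal Form API.

        import xml.etree.ElementTree as ET

        # Hypothetical form-definition blob as an administrator might persist it.
        form_xml = """<form name="fgdc_basic">
          <field name="title"    type="textfield" required="true"/>
          <field name="abstract" type="textarea"  required="true"/>
          <field name="west_bc"  type="textfield" required="false"/>
        </form>"""

        fields = [{"name": f.get("name"),
                   "type": f.get("type"),
                   "required": f.get("required") == "true"}
                  for f in ET.fromstring(form_xml).findall("field")]
        print(fields)  # this list would be handed to the form-rendering layer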

  6. DbMap: improving database interoperability issues in medical software using a simple, Java-Xml based solution.

    PubMed Central

    Karadimas, H.; Hemery, F.; Roland, P.; Lepage, E.

    2000-01-01

    In medical software development, the use of databases plays a central role. However, most of the databases have heterogeneous encoding and data models. Dealing with these variations directly in the application code is error-prone and reduces the potential reuse of the produced software. Several approaches to overcome these limitations have been proposed in the medical database literature, and these will be presented. We present a simple solution, based on a Java library and a central metadata description file in XML. This development approach presents several benefits in software design and development cycles, the main one being simplicity of maintenance. PMID:11079915
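    The central-metadata idea can be illustrated with a short sketch: one XML file describes how a logical attribute maps onto each physical database, and application code resolves only logical names. The XML layout and names below are hypothetical, and Python is used purely for illustration; the original DbMap implementation is a Java library.

        import xml.etree.ElementTree as ET

        mapping_xml = """<metadata>
          <attribute logical="patient_sex">
            <source db="legacy_his" table="PAT"      column="SEX_CODE"/>
            <source db="lab_system" table="patients" column="gender"/>
          </attribute>
        </metadata>"""

        root = ET.fromstring(mapping_xml)

        def physical_column(logical_name, db):
            """Resolve a logical attribute to the table/column used by one database."""
            for attr in root.findall("attribute"):
                if attr.get("logical") == logical_name:
                    for src in attr.findall("source"):
                        if src.get("db") == db:
                            return src.get("table"), src.get("column")
            raise KeyError((logical_name, db))

        print(physical_column("patient_sex", "lab_system"))  # ('patients', 'gender')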

  7. Combining Crowd and Expert Labels using Decision Theoretic Active Learning

    DTIC Science & Technology

    2015-10-11

    meta-data such as titles, author information and keywords. Motivating Application: Biomedical Systematic Reviews. Evidence-based medicine (EBM) aims to...individuals trained in evidence-based medicine; usually MDs) reading the entire set of citations retrieved via database search to identify the small

  8. Managing troubled data: Coastal data partnerships smooth data integration

    USGS Publications Warehouse

    Hale, S.S.; Hale, Miglarese A.; Bradley, M.P.; Belton, T.J.; Cooper, L.D.; Frame, M.T.; Friel, C.A.; Harwell, L.M.; King, R.E.; Michener, W.K.; Nicolson, D.T.; Peterjohn, B.G.

    2003-01-01

    Understanding the ecology, condition, and changes of coastal areas requires data from many sources. Broad-scale and long-term ecological questions, such as global climate change, biodiversity, and cumulative impacts of human activities, must be addressed with databases that integrate data from several different research and monitoring programs. Various barriers, including widely differing data formats, codes, directories, systems, and metadata used by individual programs, make such integration troublesome. Coastal data partnerships, by helping overcome technical, social, and organizational barriers, can lead to a better understanding of environmental issues, and may enable better management decisions. Characteristics of successful data partnerships include a common need for shared data, strong collaborative leadership, committed partners willing to invest in the partnership, and clear agreements on data standards and data policy. Emerging data and metadata standards that become widely accepted are crucial. New information technology is making it easier to exchange and integrate data. Data partnerships allow us to create broader databases than would be possible for any one organization to create by itself.

  9. Managing troubled data: coastal data partnerships smooth data integration.

    PubMed

    Hale, Stephen S; Miglarese, Anne Hale; Bradley, M Patricia; Belton, Thomas J; Cooper, Larry D; Frame, Michael T; Friel, Christopher A; Harwell, Linda M; King, Robert E; Michener, William K; Nicolson, David T; Peterjohn, Bruce G

    2003-01-01

    Understanding the ecology, condition, and changes of coastal areas requires data from many sources. Broad-scale and long-term ecological questions, such as global climate change, biodiversity, and cumulative impacts of human activities, must be addressed with databases that integrate data from several different research and monitoring programs. Various barriers, including widely differing data formats, codes, directories, systems, and metadata used by individual programs, make such integration troublesome. Coastal data partnerships, by helping overcome technical, social, and organizational barriers, can lead to a better understanding of environmental issues, and may enable better management decisions. Characteristics of successful data partnerships include a common need for shared data, strong collaborative leadership, committed partners willing to invest in the partnership, and clear agreements on data standards and data policy. Emerging data and metadata standards that become widely accepted are crucial. New information technology is making it easier to exchange and integrate data. Data partnerships allow us to create broader databases than would be possible for any one organization to create by itself.

  10. Sally Ride EarthKAM - Automated Image Geo-Referencing Using Google Earth Web Plug-In

    NASA Technical Reports Server (NTRS)

    Andres, Paul M.; Lazar, Dennis K.; Thames, Robert Q.

    2013-01-01

    Sally Ride EarthKAM is an educational program funded by NASA that aims to provide the public the ability to picture Earth from the perspective of the International Space Station (ISS). A computer-controlled camera is mounted on the ISS in a nadir-pointing window; however, timing limitations in the system cause inaccurate positional metadata. Manually correcting images within an orbit allows the positional metadata to be improved using mathematical regressions. The manual correction process is time-consuming and thus infeasible for a large number of images. The standard Google Earth program allows for the importing of KML (Keyhole Markup Language) files that were created previously. These KML file-based overlays could then be manually manipulated as image overlays, saved, and then uploaded to the project server where they are parsed and the metadata in the database is updated. The new interface eliminates the need to save, download, open, re-save, and upload the KML files. Everything is processed on the Web, and all manipulations go directly into the database. Administrators also have control to discard any single correction that was made and to validate a correction. This program streamlines a process that previously required several critical steps and was probably too complex for the average user to complete successfully. The new process is theoretically simple enough for members of the public to make use of and contribute to the success of the Sally Ride EarthKAM project. Using the Google Earth Web plug-in, EarthKAM images, and associated metadata, this software allows users to interactively manipulate an EarthKAM image overlay, and update and improve the associated metadata. The Web interface uses the Google Earth JavaScript API along with PHP-PostgreSQL to present the user with the same interface capabilities without leaving the Web. The simpler graphical user interface will allow the public to participate directly and meaningfully with EarthKAM. The use of similar techniques is being investigated to place ground-based observations in a Google Mars environment, allowing the MSL (Mars Science Laboratory) Science Team a means to visualize the rover and its environment.
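    The overlay records being manipulated are ordinary KML GroundOverlays. The Python sketch below generates one such overlay with corrected bounding coordinates; the name, URL, and coordinate values are illustrative, and in the actual EarthKAM workflow the corrections are written back to the project's PostgreSQL metadata store.

        # Illustrative KML GroundOverlay with corrected positional bounds.
        KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <GroundOverlay>
            <name>{name}</name>
            <Icon><href>{image_url}</href></Icon>
            <LatLonBox>
              <north>{north}</north><south>{south}</south>
              <east>{east}</east><west>{west}</west>
            </LatLonBox>
          </GroundOverlay>
        </kml>"""

        corrected = dict(name="EarthKAM_frame_0001",
                         image_url="https://example.org/frame0001.jpg",
                         north=35.42, south=34.98, east=-117.10, west=-117.70)
        print(KML_TEMPLATE.format(**corrected))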

  11. CDGP, the data center for deep geothermal data from Alsace

    NASA Astrophysics Data System (ADS)

    Schaming, Marc; Grunberg, Marc; Jahn, Markus; Schmittbuhl, Jean; Cuenot, Nicolas; Genter, Albert; Dalmais, Eléonore

    2016-04-01

    CDGP (Centre de données de géothermie profonde, deep geothermal data center, http://cdgp.u-strasbg.fr) was set up by the LabEx G-EAU-THERMIE PROFONDE to archive the high-quality data collected at the Upper Rhine Graben geothermal sites and to distribute them to the scientific community for R&D activities, taking IPR (Intellectual Property Rights) into account. Collected datasets cover the whole life of geothermal projects, from exploration to drilling, stimulation, circulation and production. They originate from the Soultz-sous-Forêts pilot plant but also include more recent projects like the ECOGI project at Rittershoffen, Alsace, France. They are historically separated into two rather independent categories: geophysical datasets mostly related to the industrial management of the geothermal reservoir, and seismological data from the seismic monitoring during both stimulations and circulations. Up to now, the geophysical datasets come mainly from the Soultz-sous-Forêts project and were stored on office shelves and old digital media. Some inventories have been done recently, and a first step of the integration of these reservoir data into a PostgreSQL/PostGIS database (ISO 19107 compatible) has been performed. The database links depths, temperatures, pressures and flows, for periods (times) and locations (geometries). Other geophysical data are still stored in structured directories as a data bank and need to be included in the database. Seismological datasets are of two kinds: seismological waveforms and seismicity bulletins; the former are stored in a standardized way both in format (miniSEED) and in file and directory structure (SDS), following the international standard of the seismological community (FDSN), and the latter in a database following the open standard QuakeML. CDGP uses a cataloging application (GeoNetwork) to manage the metadata resources. It provides metadata editing and search functions as well as a web map viewer. The metadata editor supports the ISO 19115/119/110 standards used for spatial resources. A step forward will be to add specific metadata records as defined by the Open Geospatial Consortium to provide geophysical / geologic / reservoir information: Observations and Measurements (O&M) to describe the acquisition of information from a primary source, and SensorML to describe the sensors. Seismological metadata, which describe the full instrument response, use the dataless SEED standard. Access to data will be handled in an additional step using the geOrchestra spatial data infrastructure (SDI). Direct access will be granted after registration and validation using a single sign-on authentication system. Access to the data will also be granted via the EPOS-IP Anthropogenic Hazards project. Access to episodes (time-correlated collections of geophysical, technological and other relevant geo-data over a geothermal area) and the application of analyses (time- and technology-dependent probabilistic seismic hazard analysis, multi-hazard and multi-risk assessment) are services accessible via a portal and will require AAAI (Authentication, Authorization, Accounting and Identification).
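    A much-simplified sketch of the reservoir-data model described above (measurements linked to a time and a location) is shown below. The table and column names are hypothetical, the values are illustrative, and plain latitude/longitude columns stand in for the PostGIS geometries of the real CDGP database so that the example runs with the Python standard library.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE reservoir_measurement (
                          well TEXT, t TEXT,            -- measurement time (ISO 8601)
                          depth_m REAL, temp_c REAL,
                          pressure_bar REAL, flow_ls REAL,
                          lat REAL, lon REAL)""")
        db.execute("INSERT INTO reservoir_measurement VALUES "
                   "('GPK-2', '1997-07-01T12:00:00Z', 3880.0, 160.0, 285.0, 21.0, 48.93, 7.87)")
        print(db.execute("SELECT well, depth_m, temp_c FROM reservoir_measurement "
                         "WHERE temp_c > 150").fetchall())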

  12. Region 7 Laboratory Information Management System

    EPA Pesticide Factsheets

    This is metadata documentation for the Region 7 Laboratory Information Management System (R7LIMS), which maintains records for the Regional Laboratory. Any laboratory analytical work performed is stored in this system, which replaces LIMS-Lite and, before that, LAST. The EPA and its contractors may use this database. The Office of Policy & Management (PLMG) Division at EPA Region 7 is the primary managing entity; contractors can access this database, but it is not accessible to the public.

  13. Enhanced annotations and features for comparing thousands of Pseudomonas genomes in the Pseudomonas genome database.

    PubMed

    Winsor, Geoffrey L; Griffiths, Emma J; Lo, Raymond; Dhillon, Bhavjinder K; Shay, Julie A; Brinkman, Fiona S L

    2016-01-04

    The Pseudomonas Genome Database (http://www.pseudomonas.com) is well known for the application of community-based annotation approaches for producing a high-quality Pseudomonas aeruginosa PAO1 genome annotation, and facilitating whole-genome comparative analyses with other Pseudomonas strains. To aid analysis of potentially thousands of complete and draft genome assemblies, this database and analysis platform was upgraded to integrate curated genome annotations and isolate metadata with enhanced tools for larger scale comparative analysis and visualization. Manually curated gene annotations are supplemented with improved computational analyses that help identify putative drug targets and vaccine candidates or assist with evolutionary studies by identifying orthologs, pathogen-associated genes and genomic islands. The database schema has been updated to integrate isolate metadata that will facilitate more powerful analysis of genomes across datasets in the future. We continue to place an emphasis on providing high-quality updates to gene annotations through regular review of the scientific literature and using community-based approaches including a major new Pseudomonas community initiative for the assignment of high-quality gene ontology terms to genes. As we further expand from thousands of genomes, we plan to provide enhancements that will aid data visualization and analysis arising from whole-genome comparative studies including more pan-genome and population-based approaches. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Establishment of Kawasaki disease database based on metadata standard.

    PubMed

    Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk

    2016-07-01

    Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients' clinical and genomic data is one of the major issues. For this purpose, the Kawasaki Disease Database (KDD) was developed based on the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 clinical data records and genomic samples from 1283 patients at 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients' blood, omics data and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Two genome-wide association study datasets totaling 482 samples and whole-exome sequencing data for 12 samples were also collected. In addition, KDD includes rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr. © The Author(s) 2016. Published by Oxford University Press.

  15. NASA Image eXchange (NIX)

    NASA Technical Reports Server (NTRS)

    vonOfenheim, William H. C.; Heimerl, N. Lynn; Binkley, Robert L.; Curry, Marty A.; Slater, Richard T.; Nolan, Gerald J.; Griswold, T. Britt; Kovach, Robert D.; Corbin, Barney H.; Hewitt, Raymond W.

    1998-01-01

    This paper discusses the technical aspects of and the project background for the NASA Image eXchange (NIX). NIX, which provides a single entry point to search selected image databases at the NASA Centers, is a meta-search engine (i.e., a search engine that communicates with other search engines). It uses these distributed digital image databases to access photographs, animations, and their associated descriptive information (meta-data). NIX is available for use at the following URL: http://nix.nasa.gov/. NIX, which was sponsored by NASA's Scientific and Technical Information (STI) Program, currently serves images from seven NASA Centers. Plans are under way to link image databases from three additional NASA Centers. Images and their associated meta-data, which are accessible by NIX, reside at the originating Centers, and NIX utilizes a virtual central site that communicates with each of these sites. Incorporated into the virtual central site are several protocols to support searches from a diverse collection of database engines. The searches are performed in parallel to ensure optimization of response times. To augment the search capability, browse functionality with pre-defined categories has been built into NIX, thereby ensuring dissemination of 'best-of-breed' imagery. As a final recourse, NIX offers access to a help desk via an on-line form to help locate images and information either within the scope of NIX or from available external sources.
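    The parallel meta-search pattern described above can be sketched in a few lines of Python. The endpoint URLs and the JSON response format below are placeholders, not the actual NIX interfaces; the point is only that each centre is queried concurrently and the hit lists are merged, with an unreachable centre simply dropped from the merged result.

        import concurrent.futures
        import json
        import urllib.parse
        import urllib.request

        CENTER_ENDPOINTS = [
            "https://images.example-center-a.nasa.gov/search",   # placeholder
            "https://images.example-center-b.nasa.gov/search",   # placeholder
        ]

        def query_center(endpoint, term):
            url = endpoint + "?q=" + urllib.parse.quote(term)
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.load(resp).get("hits", [])

        def meta_search(term):
            hits = []
            with concurrent.futures.ThreadPoolExecutor() as pool:
                futures = [pool.submit(query_center, ep, term) for ep in CENTER_ENDPOINTS]
                for fut in concurrent.futures.as_completed(futures):
                    try:
                        hits.extend(fut.result())
                    except OSError:
                        pass  # a centre that is down should not block the merged result
            return hits

        # print(meta_search("apollo 11"))  # would contact the placeholder centres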

  16. The GTN-P Data Management System: A central database for permafrost monitoring parameters of the Global Terrestrial Network for Permafrost (GTN-P) and beyond

    NASA Astrophysics Data System (ADS)

    Lanckman, Jean-Pierre; Elger, Kirsten; Karlsson, Ævar Karl; Johannsson, Halldór; Lantuit, Hugues

    2013-04-01

    Permafrost is a direct indicator of climate change and has been identified as an Essential Climate Variable (ECV) by the global observing community. The monitoring of permafrost temperatures, active-layer thicknesses and other parameters has been performed for several decades already, but it was brought together within the Global Terrestrial Network for Permafrost (GTN-P) only in the 1990s, including the development of measurement protocols to provide standardized data. GTN-P is the primary international observing network for permafrost, sponsored by the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS), and managed by the International Permafrost Association (IPA). All GTN-P data are subject to an "open data policy" with free data access via the World Wide Web. The existing data, however, are far from homogeneous: they are not yet optimized for databases, there is no framework for data reporting or archival, and data documentation is incomplete. As a result, and despite the utmost relevance of permafrost in the Earth's climate system, the data have not been used by as many researchers as intended by the initiators of the programs. While the monitoring of many other ECVs has been tackled by organized international networks (e.g. FLUXNET), there is still no central database for all permafrost-related parameters. The European Union project PAGE21 created opportunities to develop this central database for permafrost monitoring parameters of GTN-P during the duration of the project and beyond. The database aims to be the one location where researchers can find data, metadata, and information on all relevant parameters for a specific site. Each component of the Data Management System (DMS), including parameters, data levels and metadata formats, was developed in cooperation with GTN-P and the IPA. The general framework of the GTN-P DMS is based on an object-oriented model (OOM), open for as many parameters as possible, and implemented in a spatial database. To ensure interoperability and enable potential inter-database searches, field names follow international metadata standards and are based on a controlled vocabulary registry. Tools are being developed to provide data processing, analysis capability, and quality control. Our system aims to be a reference model, improvable and reusable. It allows a maximum of top-down and bottom-up data flow, giving scientists one globally searchable data and metadata repository, the public full access to scientific data, and policy makers a powerful cartographic and statistical tool. To engage the international community in GTN-P, it was essential to develop an online interface for data upload; the aim was to make it easy to use and to allow data input with a minimum of technical and personal effort. In addition, substantial effort will be required to query, visualize and retrieve information across many platforms and types of measurements. Ultimately, it is not each layer in itself that matters, but rather the relationships that these information layers maintain with each other.
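    As a small illustration of the controlled-vocabulary idea mentioned above, the sketch below validates an observation record against a fixed parameter list before it would be accepted into a database; the field names, vocabulary entries, and site identifier are hypothetical rather than the actual GTN-P DMS schema.

        from dataclasses import dataclass

        PARAMETER_VOCABULARY = {"ground_temperature", "active_layer_thickness"}

        @dataclass
        class PermafrostObservation:
            site_id: str
            parameter: str      # must come from the controlled vocabulary
            depth_m: float
            value: float
            timestamp: str      # ISO 8601

            def __post_init__(self):
                if self.parameter not in PARAMETER_VOCABULARY:
                    raise ValueError(f"unknown parameter: {self.parameter!r}")

        obs = PermafrostObservation("SITE-001", "ground_temperature", 1.0, -2.7,
                                    "2012-08-15T00:00:00Z")
        print(obs)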

  17. Enhanced DIII-D Data Management Through a Relational Database

    NASA Astrophysics Data System (ADS)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Meta-data about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. Documentation on the database may be accessed through programming languages such as C, Java, and IDL, or through ODBC compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
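    The cross-shot data mining described above amounts to a single SQL statement over a summary table. The sketch below uses a hypothetical per-discharge summary table in SQLite; in practice the query would go to the DIII-D relational database through SQL or an ODBC-compliant tool, and the column names here are invented.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE shot_summary (shot INTEGER, shot_date TEXT, "
                   "ip_ma REAL, betan REAL)")
        db.executemany("INSERT INTO shot_summary VALUES (?, ?, ?, ?)",
                       [(98765, "2000-05-01", 1.2, 2.1),
                        (98766, "2000-05-01", 1.5, 2.8),
                        (98801, "2000-05-03", 1.4, 3.0)])

        # One query spans many shots -- the point of the relational design.
        rows = db.execute("SELECT shot, betan FROM shot_summary "
                          "WHERE betan > 2.5 ORDER BY betan DESC").fetchall()
        print(rows)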

  18. Acoustic Metadata Management and Transparent Access to Networked Oceanographic Data Sets

    DTIC Science & Technology

    2013-09-30

    connectivity (ODBC) compliant data source for which drivers are available (e.g. MySQL, Oracle database, Postgres) can now be imported. Implementation...the possibility of speeding data transmission through compression (implemented) or the potential to use alternative data formats such as Java script

  19. The GEOSS Clearinghouse based on the GeoNetwork opensource

    NASA Astrophysics Data System (ADS)

    Liu, K.; Yang, C.; Wu, H.; Huang, Q.

    2010-12-01

    The Global Earth Observation System of Systems (GEOSS) is established to support the study of the Earth system in a global community. It provides services for social management, quick response, academic research, and education. The purpose of GEOSS is to achieve comprehensive, coordinated and sustained observations of the Earth system, improve monitoring of the state of the Earth, increase understanding of Earth processes, and enhance prediction of the behavior of the Earth system. In 2009, GEO called for a competition for an official GEOSS Clearinghouse to be selected as a means of consolidating catalogs of Earth observations. The Joint Center for Intelligent Spatial Computing at George Mason University worked with the USGS to submit a solution based on the open-source platform GeoNetwork. In the spring of 2010, the solution was selected as the product for the GEOSS Clearinghouse. The GEOSS Clearinghouse is a common search facility for the intergovernmental Group on Earth Observations (GEO). By providing a list of harvesting functions in its business logic, the GEOSS Clearinghouse can collect metadata from distributed catalogs including other GeoNetwork native nodes, WebDAV/sitemap/WAF, Catalogue Service for the Web (CSW) 2.0, the GEOSS Component and Service Registry (http://geossregistries.info/), OGC Web Services (WCS, WFS, WMS and WPS), the OAI Protocol for Metadata Harvesting 2.0, ArcSDE Server and the local file system. Metadata in the GEOSS Clearinghouse are managed in a database (MySQL, PostgreSQL, Oracle, or MckoiDB) and an index of the metadata is maintained through the Lucene engine. Thus, EO data, services, and related resources can be discovered and accessed. It supports a variety of geospatial standards including CSW and SRU for search, FGDC and ISO metadata, and WMS-related OGC standards for data access and visualization, as linked from the metadata.
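    Harvesting from a CSW source, one of the catalogue types listed above, can be approximated with a plain key-value GetRecords request. The endpoint below is a placeholder and parameter details vary between servers, so this is only a sketch of the protocol interaction rather than the clearinghouse's own harvester.

        import urllib.parse
        import urllib.request

        CSW_ENDPOINT = "https://catalog.example.org/geonetwork/srv/eng/csw"  # placeholder

        params = {
            "service": "CSW",
            "version": "2.0.2",
            "request": "GetRecords",
            "typeNames": "csw:Record",
            "resultType": "results",
            "elementSetName": "brief",
            "maxRecords": "10",
        }
        url = CSW_ENDPOINT + "?" + urllib.parse.urlencode(params)
        # with urllib.request.urlopen(url, timeout=30) as resp:
        #     print(resp.read()[:500])   # Dublin Core records returned as XML
        print(url)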

  20. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and UrbIS

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Hook, L.; Killeffer, T.; Krassovski, M.; Boden, T.; King, A. W.; Wullschleger, S. D.

    2016-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This discussion describes tools being used in two different projects at Oak Ridge National Laboratory (ORNL), but at different stages of the data lifecycle. The Metadata Entry and Data Search Tool is being used for the documentation, archival, and data discovery stages for the Next Generation Ecosystem Experiment - Arctic (NGEE Arctic) project while the Urban Information Systems (UrbIS) Data Catalog is being used to support indexing, cataloging, and searching. The NGEE Arctic Online Metadata Entry Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The UrbIS Data Catalog is a data discovery tool supported by the Mercury cataloging framework [2] which aims to compile urban environmental data from around the world into one location, and be searchable via a user-friendly interface. Each data record conveniently displays its title, source, and date range, and features: (1) a button for a quick view of the metadata, (2) a direct link to the data and, for some data sets, (3) a button for visualizing the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for searching by area. References: [1] Devarakonda, Ranjeet, et al. "Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example." Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015. [2] Devarakonda, R., Palanisamy, G., Wilson, B. E., & Green, J. M. (2010). Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics, 3(1-2), 87-94.

  1. MetaSRA: normalized human sample-specific metadata for the Sequence Read Archive.

    PubMed

    Bernstein, Matthew N; Doan, AnHai; Dewey, Colin N

    2017-09-15

    The NCBI's Sequence Read Archive (SRA) promises great biological insight if one could analyze the data in the aggregate; however, the data remain largely underutilized, in part, due to the poor structure of the metadata associated with each sample. The rules governing submissions to the SRA do not dictate a standardized set of terms that should be used to describe the biological samples from which the sequencing data are derived. As a result, the metadata include many synonyms, spelling variants and references to outside sources of information. Furthermore, manual annotation of the data remains intractable due to the large number of samples in the archive. For these reasons, it has been difficult to perform large-scale analyses that study the relationships between biomolecular processes and phenotype across diverse diseases, tissues and cell types present in the SRA. We present MetaSRA, a database of normalized SRA human sample-specific metadata following a schema inspired by the metadata organization of the ENCODE project. This schema involves mapping samples to terms in biomedical ontologies, labeling each sample with a sample-type category, and extracting real-valued properties. We automated these tasks via a novel computational pipeline. The MetaSRA is available at metasra.biostat.wisc.edu via both a searchable web interface and bulk downloads. Software implementing our computational pipeline is available at http://github.com/deweylab/metasra-pipeline. cdewey@biostat.wisc.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
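    The shape of a normalized record following the schema described above (ontology term mappings, a sample-type category, and real-valued properties) can be sketched as a small dictionary; the accession and values below are invented and the key names are illustrative, not the exact MetaSRA field names.

        normalized_sample = {
            "sra_sample": "SRS0000000",                  # invented accession
            "ontology_terms": ["UBERON:0002107"],        # liver (Uberon anatomy term)
            "sample_type": "tissue",
            "real_valued_properties": [
                {"property": "age", "value": 56.0, "unit": "year"},
            ],
        }
        print(normalized_sample["ontology_terms"])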

  2. Dataworks for GNSS: Software for Supporting Data Sharing and Federation of Geodetic Networks

    NASA Astrophysics Data System (ADS)

    Boler, F. M.; Meertens, C. M.; Miller, M. M.; Wier, S.; Rost, M.; Matykiewicz, J.

    2015-12-01

    Continuously-operating Global Navigation Satellite System (GNSS) networks are increasingly being installed globally for a wide variety of science and societal applications. GNSS enables Earth science research in areas including tectonic plate interactions, crustal deformation in response to loading by tectonics, magmatism, water and ice, and the dynamics of water - and thereby energy transfer - in the atmosphere at regional scale. The many individual scientists and organizations that set up GNSS stations globally are often open to sharing data, but lack the resources or expertise to deploy systems and software to manage and curate data and metadata and provide user tools that would support data sharing. UNAVCO previously gained experience in facilitating data sharing through the NASA-supported development of the Geodesy Seamless Archive Centers (GSAC) open source software. GSAC provides web interfaces and simple web services for data and metadata discovery and access, supports federation of multiple data centers, and simplifies transfer of data and metadata to long-term archives. The NSF supported the dissemination of GSAC to multiple European data centers forming the European Plate Observing System. To expand upon GSAC to provide end-to-end, instrument-to-distribution capability, UNAVCO developed Dataworks for GNSS with NSF funding to the COCONet project, and deployed this software on systems that are now operating as Regional GNSS Data Centers as part of the NSF-funded TLALOCNet and COCONet projects. Dataworks consists of software modules written in Python and Java for data acquisition, management and sharing. There are modules for GNSS receiver control and data download, a database schema for metadata, tools for metadata handling, ingest software to manage file metadata, data file management scripts, GSAC, scripts for mirroring station data and metadata from partner GSACs, and extensive software and operator documentation. UNAVCO plans to provide a cloud VM image of Dataworks that would allow standing up a Dataworks-enabled GNSS data center without requiring upfront investment in server hardware. By enabling data creators to organize their data and metadata for sharing, Dataworks helps scientists expand their data curation awareness and responsibility, and enhances data access for all.

  3. Using USDA's National Cooperative Soil Survey Soil Characterization Data to detect soil change: A cautionary tale

    USDA-ARS?s Scientific Manuscript database

    Recently, the USDA-NRCS National Cooperative Soil Survey Soil Characterization Database (NSCD) was reported to provide evidence that total nitrogen (TN) stocks of agricultural soils have increased across the Mississippi basin since 1985. Unfortunately, due to omission of metadata from the NSCD, hist...

  4. Self-Assembling Texts & Courses of Study.

    ERIC Educational Resources Information Center

    Gibson, David

    This paper describes the development of an interoperable meta-database system--a system of applications using metadata--that is intended to facilitate learner-centered collaboration, access to learning resources, and the fitness of channels of information to the emerging needs of learners at both individual and group levels. Highlights include:…

  5. Fallon, Nevada FORGE Geodetic Data

    DOE Data Explorer

    Blankenship, Doug; Eneva, Mariana; Hammond, William

    2018-02-01

    Fallon FORGE InSAR and geodetic GPS deformation data. InSAR shapefiles are packaged together as .MPK (ArcMap map package, compatible with other GIS platforms), and as .CSV comma-delimited plaintext. GPS data and additional metadata are linked to the Nevada Geodetic Laboratory database at the Univ. of Nevada, Reno (UNR).

  6. Geologic Communications | Alaska Division of Geological & Geophysical Surveys

    Science.gov Websites

    Improves a database for the Division's digital and map-based geological, geophysical, and geochemical data. DGGS metadata and digital data distribution: geospatial datasets published by DGGS are designed to be compatible with a broad variety of digital mapping software and to present DGGS's geospatial data.

  7. An online database of nuclear electromagnetic moments

    NASA Astrophysics Data System (ADS)

    Mertzimekis, T. J.; Stamou, K.; Psaltis, A.

    2016-01-01

    Measurements of nuclear magnetic dipole and electric quadrupole moments are considered quite important for the understanding of nuclear structure both near and far from the valley of stability. The recent advent of radioactive beams has resulted in a plethora of new, continuously flowing experimental data on nuclear structure - including nuclear moments - which hinders information management. A new, dedicated, public and user-friendly online database (http://magneticmoments.info) has been created, comprising experimental data on nuclear electromagnetic moments. The present database supersedes existing printed compilations, also including non-evaluated series of data and relevant meta-data, while putting strong emphasis on bimonthly updates. The scope, features and extensions of the database are reported.

  8. OSCAR/Surface: Metadata for the WMO Integrated Observing System WIGOS

    NASA Astrophysics Data System (ADS)

    Klausen, Jörg; Pröscholdt, Timo; Mannes, Jürg; Cappelletti, Lucia; Grüter, Estelle; Calpini, Bertrand; Zhang, Wenjian

    2016-04-01

    The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority underpinning all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). It does this by better integrating WMO and co-sponsored observing systems, as well as partner networks. For this, an important aspect is the description of the observational capabilities by way of structured metadata. The 17th Congress of the World Meteorological Organization (Cg-17) has endorsed the semantic WIGOS metadata standard (WMDS) developed by the Task Team on WIGOS Metadata (TT-WMD). The standard comprises a set of metadata classes that are considered to be of critical importance for the interpretation of observations and the evolution of observing systems relevant to WIGOS. The WMDS serves all recognized WMO Application Areas, and its use for all internationally exchanged observational data generated by WMO Members is mandatory. The standard will be introduced in three phases between 2016 and 2020. The Observing Systems Capability Analysis and Review (OSCAR) platform operated by MeteoSwiss on behalf of WMO is the official repository of WIGOS metadata and an implementation of the WMDS. OSCAR/Surface deals with all surface-based observations from land, air and oceans, combining metadata managed by a number of complementary, more domain-specific systems (e.g., GAWSIS for the Global Atmosphere Watch, JCOMMOPS for the marine domain, the WMO Radar database). It is a modern, web-based client-server application with extended information search, filtering and mapping capabilities, including a fully developed management console to add and edit observational metadata. In addition, a powerful application programming interface (API) is being developed to allow machine-to-machine metadata exchange. The API is based on an ISO/OGC-compliant XML schema for the WMDS using the Observations and Measurements (ISO 19156) conceptual model. The purpose of the presentation is to acquaint the audience with OSCAR, the WMDS and the current XML schema, and to explore the relationship to the INSPIRE XML schema. Feedback from experts in the various disciplines of meteorology, climatology, atmospheric chemistry and hydrology on the utility of the new standard and the XML schema will be solicited and will guide WMO in further evolving the WMDS.
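    Machine-to-machine use of such an API might look roughly like the sketch below; the URL path, query style, and response fields are placeholders rather than the documented OSCAR/Surface interface, which the abstract above notes is still being developed.

        import json
        import urllib.request

        # Placeholder endpoint; the real API path and schema differ.
        API_URL = "https://oscar.example.int/surface/api/stations/{wigos_id}"

        def fetch_station(wigos_id):
            with urllib.request.urlopen(API_URL.format(wigos_id=wigos_id), timeout=30) as resp:
                return json.load(resp)

        # station = fetch_station("0-20000-0-00000")   # example-format WIGOS identifier
        # print(station["name"], station["observations"])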

  9. Extending the Reach of IGSN Beyond Earth: Implementing IGSN Registration to Link NASA's Apollo Lunar Samples and their Data

    NASA Astrophysics Data System (ADS)

    Todd, N. S.

    2016-12-01

    The rock and soil samples returned from the Apollo missions from 1969-72 have supported 46 years of research leading to advances in our understanding of the formation and evolution of the inner Solar System. NASA has been engaged in several initiatives that aim to restore, digitize, and make available to the public existing published and unpublished research data for the Apollo samples. One of these initiatives is a collaboration with IEDA (Interdisciplinary Earth Data Alliance) to develop MoonDB, a lunar geochemical database modeled after PetDB. In support of this initiative, NASA has adopted the use of IGSN (International Geo Sample Number) to generate persistent, unique identifiers for lunar samples that scientists can use when publishing research data. To facilitate the IGSN registration of the original 2,200 samples and over 120,000 subdivided samples, NASA has developed an application that retrieves sample metadata from the Lunar Curation Database and uses the SESAR API to automate the generation of IGSNs and registration of samples into SESAR (System for Earth Sample Registration). This presentation will describe the work done by NASA to map existing sample metadata to the IGSN metadata and integrate the IGSN registration process into the sample curation workflow, the lessons learned from this effort, and how this work can be extended in the future to help deal with the registration of large numbers of samples.
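    The registration step described above can be sketched as a small HTTP client that posts curated sample metadata and receives an identifier back. The endpoint, payload fields, and response format below are placeholders, not the actual SESAR API or the IGSN metadata schema.

        import json
        import urllib.request

        REGISTRATION_URL = "https://samples.example.org/register"   # placeholder

        def register_sample(sample):
            payload = json.dumps({
                "name": sample["sample_id"],
                "material": sample["material"],
                "collection_method": sample["mission"],
            }).encode()
            req = urllib.request.Request(REGISTRATION_URL, data=payload,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=30) as resp:
                return json.load(resp)["igsn"]

        # igsn = register_sample({"sample_id": "SAMPLE-001", "material": "Soil",
        #                         "mission": "Apollo 11"})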

  10. The ATLAS EventIndex: data flow and inclusion of other metadata

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on production jobs from the ATLAS production system. The ATLAS production system is also used for the collection of event information from the Grid jobs. EventIndex developments started in 2012 and in the middle of 2015 the system was commissioned and started collecting event metadata, as a part of ATLAS Distributed Computing operations.
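    The basic unit described above, an event record holding identification parameters, pointers to files containing the event, and trigger information, can be sketched as a plain data structure; the field names below are schematic and do not reproduce the actual EventIndex schema.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class EventRecord:
            run_number: int
            event_number: int
            lumi_block: int
            file_guids: List[str] = field(default_factory=list)    # files containing the event
            trigger_bits: List[str] = field(default_factory=list)  # trigger decision information

        rec = EventRecord(run_number=284500, event_number=123456789, lumi_block=42,
                          file_guids=["FILE-GUID-PLACEHOLDER"],
                          trigger_bits=["HLT_example_chain"])
        print(rec.run_number, rec.trigger_bits)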

  11. The eGenVar data management system—cataloguing and sharing sensitive data and metadata for the life sciences

    PubMed Central

    Razick, Sabry; Močnik, Rok; Thomas, Laurent F.; Ryeng, Einar; Drabløs, Finn; Sætrom, Pål

    2014-01-01

    Systematic data management and controlled data sharing aim at increasing reproducibility, reducing redundancy in work, and providing a way to efficiently locate complementing or contradicting information. One method of achieving this is collecting data in a central repository or in a location that is part of a federated system and providing interfaces to the data. However, certain data, such as data from biobanks or clinical studies, may, for legal and privacy reasons, often not be stored in public repositories. Instead, we describe a metadata cataloguing system and a software suite for reporting the presence of data from the life sciences domain. The system stores three types of metadata: file information, file provenance and data lineage, and content descriptions. Our software suite includes both graphical and command line interfaces that allow users to report and tag files with these different metadata types. Importantly, the files remain in their original locations with their existing access-control mechanisms in place, while our system provides descriptions of their contents and relationships. Our system and software suite thereby provide a common framework for cataloguing and sharing both public and private data. Database URL: http://bigr.medisin.ntnu.no/data/eGenVar/ PMID:24682735

  12. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    PubMed Central

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

    Abstract High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting the results can be a major impediment to postulating suitable hypotheses; thus an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amounts of data derived from RNAseq analysis, together with methods of interacting with the database, either through command-line data management workflows, written in Perl, with functionality that simplifies the storage and manipulation of the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results data files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361

  13. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions.

    PubMed

    Karr, Jonathan R; Phillips, Nolan C; Covert, Markus W

    2014-01-01

    Mechanistic 'whole-cell' models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. http://www.wholecellsimdb.org SOURCE CODE REPOSITORY: URL: http://github.com/CovertLab/WholeCellSimDB. © The Author(s) 2014. Published by Oxford University Press.
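
    The hybrid relational/HDF idea can be sketched in a few lines: searchable setup metadata in a relational table, bulk results in HDF5, linked by an id. The sketch below uses sqlite3 and h5py (plus numpy) as stand-ins and an assumed schema; it is not the WholeCellSimDB implementation.

      # Minimal sketch: relational metadata for search, HDF5 for bulk results.
      import sqlite3
      import h5py
      import numpy as np

      meta = sqlite3.connect("simulations.db")
      meta.execute("""CREATE TABLE IF NOT EXISTS simulation
                      (id INTEGER PRIMARY KEY, model TEXT, batch TEXT, length_s REAL)""")
      sim_id = meta.execute("INSERT INTO simulation (model, batch, length_s) VALUES (?, ?, ?)",
                            ("wholecell-v1", "batch-42", 30000.0)).lastrowid
      meta.commit()

      # Bulk time series go to HDF5, keyed by the relational id.
      with h5py.File(f"sim_{sim_id}.h5", "w") as results:
          results.create_dataset("mass", data=np.linspace(1.0, 2.0, 1000))

      # Metadata search stays cheap: find long simulations of one model.
      rows = meta.execute("SELECT id FROM simulation WHERE model = ? AND length_s > ?",
                          ("wholecell-v1", 10000.0)).fetchall()
      print(rows)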

  14. Overview of long-term field experiments in Germany - metadata visualization

    NASA Astrophysics Data System (ADS)

    Muqit Zoarder, Md Abdul; Heinrich, Uwe; Svoboda, Nikolai; Grosse, Meike; Hierold, Wilfried

    2017-04-01

    BonaRes ("soil as a sustainable resource for the bioeconomy") is conducting to collect data and metadata of agricultural long-term field experiments (LTFE) of Germany. It is funded by the German Federal Ministry of Education and Research (BMBF) under the umbrella of the National Research Strategy BioEconomy 2030. BonaRes consists of ten interdisciplinary research project consortia and the 'BonaRes - Centre for Soil Research'. BonaRes Data Centre is responsible for collecting all LTFE data and regarding metadata into an enterprise database upon higher level of security and visualization of the data and metadata through data portal. In the frame of the BonaRes project, we are compiling an overview of long-term field experiments in Germany that is based on a literature review, the results of the online survey and direct contacts with LTFE operators. Information about research topic, contact person, website, experiment setup and analyzed parameters are collected. Based on the collected LTFE data, an enterprise geodatabase is developed and a GIS-based web-information system about LTFE in Germany is also settled. Various aspects of the LTFE, like experiment type, land-use type, agricultural category and duration of experiment, are presented in thematic maps. This information system is dynamically linked to the database, which means changes in the data directly affect the presentation. An easy data searching option using LTFE name, -location or -operators and the dynamic layer selection ensure a user-friendly web application. Dispersion and visualization of the overlapping LTFE points on the overview map are also challenging and we make it automatized at very zoom level which is also a consistent part of this application. The application provides both, spatial location and meta-information of LTFEs, which is backed-up by an enterprise geodatabase, GIS server for hosting map services and Java script API for web application development.

  15. The IAGOS Information System

    NASA Astrophysics Data System (ADS)

    Boulanger, D.; Thouret, V.

    2016-12-01

    IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims at the provision of long-term, regular and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft and measure aerosols, cloud particles, greenhouse gases, ozone, water vapor and nitrogen oxides from the surface to the lower stratosphere. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core and IAGOS-CARIBIC data. The IAGOS Data Portal (http://www.iagos.fr) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). In 2016 the new IAGOS Data Portal was released. In addition to data download, the portal provides improved and new services such as download in NetCDF or NASA Ames formats and plotting tools (maps, time series, vertical profiles). New added-value products are available through the portal: back trajectories, origin of air masses, and co-location with satellite data. Web services allow users to download IAGOS metadata such as flight and airport information. Administration tools have been implemented for user management and instrument monitoring. A major improvement is the interoperability with international portals and other databases in order to improve IAGOS data discovery. In the frame of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been set up. It is composed of three data centers: the IAGOS database in Toulouse, the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de) and the CAMS (Copernicus Atmosphere Monitoring Service) data center in Jülich (http://join.iek.fz-juelich.de). The link with the CAMS data center, through the JOIN interface, allows model outputs to be combined with IAGOS data for inter-comparison. The CAMS project is a prominent user of the IGAS data network. During the next year IAGOS will improve metadata standardization and dissemination through collaborations with the AERIS data center, GAW (for which IAGOS is a contributing network) and the ENVRI+ European project. Measurement traceability and quality metadata will be made available and DOIs will be implemented.

  16. A simple method for serving Web hypermaps with dynamic database drill-down

    PubMed Central

    Boulos, Maged N Kamel; Roudsari, Abdul V; Carson, Ewart R

    2002-01-01

    Background HealthCyberMap aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe their map serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should also be possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real world health problems. PMID:12437788

  17. Bookshelf: a simple curation system for the storage of biomolecular simulation data.

    PubMed

    Vohra, Shabana; Hall, Benjamin A; Holdbrook, Daniel A; Khalid, Syma; Biggin, Philip C

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that it can be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about meta-data is difficult and thus the success of any storage system will ultimately depend on how well used by end-users the system is. In this respect we suggest that even a minimal amount of metadata if stored in a sensible fashion is useful, if only at the level of individual research groups. We discuss here, a simple database system which we call 'Bookshelf', that uses python in conjunction with a mysql database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to the common problem amongst biomolecular simulation laboratories; the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/
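
    A minimal sketch of the kind of curation table Bookshelf describes is given below. The paper pairs Python with MySQL; sqlite3 is used here only to keep the example self-contained, and the column names are assumptions rather than the Bookshelf schema.

      # Minimal sketch: log a simulation with just enough metadata to find it again.
      import sqlite3
      import datetime

      db = sqlite3.connect("bookshelf.db")
      db.execute("""CREATE TABLE IF NOT EXISTS simulation
                    (id INTEGER PRIMARY KEY, system TEXT, force_field TEXT,
                     length_ns REAL, location TEXT, deposited TEXT)""")

      def deposit(system, force_field, length_ns, location):
          """Record one simulation and where its trajectory files live."""
          db.execute("INSERT INTO simulation (system, force_field, length_ns, location, deposited) "
                     "VALUES (?, ?, ?, ?, ?)",
                     (system, force_field, length_ns, location,
                      datetime.date.today().isoformat()))
          db.commit()

      deposit("OmpA in POPC bilayer", "CHARMM36", 500.0, "/archive/md/ompa_popc_run3")
      print(db.execute("SELECT system, location FROM simulation WHERE length_ns >= 100").fetchall())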

  18. Bookshelf: a simple curation system for the storage of biomolecular simulation data

    PubMed Central

    Vohra, Shabana; Hall, Benjamin A.; Holdbrook, Daniel A.; Khalid, Syma; Biggin, Philip C.

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that it can be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about meta-data is difficult and thus the success of any storage system will ultimately depend on how well used by end-users the system is. In this respect we suggest that even a minimal amount of metadata if stored in a sensible fashion is useful, if only at the level of individual research groups. We discuss here, a simple database system which we call ‘Bookshelf’, that uses python in conjunction with a mysql database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to the common problem amongst biomolecular simulation laboratories; the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/ PMID:21169341

  19. Content-Based Management of Image Databases in the Internet Age

    ERIC Educational Resources Information Center

    Kleban, James Theodore

    2010-01-01

    The Internet Age has seen the emergence of richly annotated image data collections numbering in the billions of items. This work makes contributions in three primary areas which aid the management of this data: image representation, efficient retrieval, and annotation based on content and metadata. The contributions are as follows. First,…

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Brunhart-Lupo, Nicholas J; Gruchalla, Kenny M

    This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Brunhart-Lupo, Nicholas J; Gruchalla, Kenny M

    This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  2. GEONETCast Americas

    Science.gov Websites

    The portal provides access to GEONETCast products, including metadata, through a searchable database. A version can run on a local computer at a user site, but internet links will not function unless the user has internet access on the same machine.

  3. Genomes OnLine Database (GOLD) v.6: data updates and feature enhancements

    PubMed Central

    Mukherjee, Supratim; Stamatis, Dimitri; Bertsch, Jon; Ovchinnikova, Galina; Verezemska, Olena; Isbandi, Michelle; Thomas, Alex D.; Ali, Rida; Sharma, Kaushal; Kyrpides, Nikos C.; Reddy, T. B. K.

    2017-01-01

    The Genomes Online Database (GOLD) (https://gold.jgi.doe.gov) is a manually curated data management system that catalogs sequencing projects with associated metadata from around the world. In the current version of GOLD (v.6), all projects are organized based on a four level classification system in the form of a Study, Organism (for isolates) or Biosample (for environmental samples), Sequencing Project and Analysis Project. Currently, GOLD provides information for 26 117 Studies, 239 100 Organisms, 15 887 Biosamples, 97 212 Sequencing Projects and 78 579 Analysis Projects. These are integrated with over 312 metadata fields from which 58 are controlled vocabularies with 2067 terms. The web interface facilitates submission of a diverse range of Sequencing Projects (such as isolate genome, single-cell genome, metagenome, metatranscriptome) and complex Analysis Projects (such as genome from metagenome, or combined assembly from multiple Sequencing Projects). GOLD provides a seamless interface with the Integrated Microbial Genomes (IMG) system and supports and promotes the Genomic Standards Consortium (GSC) Minimum Information standards. This paper describes the data updates and additional features added during the last two years. PMID:27794040

  4. Minimal Information for Neural Electromagnetic Ontologies (MINEMO): A standards-compliant method for analysis and integration of event-related potentials (ERP) data

    PubMed Central

    Frishkoff, Gwen; Sydes, Jason; Mueller, Kurt; Frank, Robert; Curran, Tim; Connolly, John; Kilborn, Kerry; Molfese, Dennis; Perfetti, Charles; Malony, Allen

    2011-01-01

    We present MINEMO (Minimal Information for Neural ElectroMagnetic Ontologies), a checklist for the description of event-related potentials (ERP) studies. MINEMO extends MINI (Minimal Information for Neuroscience Investigations) to the ERP domain. Checklist terms are explicated in NEMO, a formal ontology that is designed to support ERP data sharing and integration. MINEMO is also linked to an ERP database and web application (the NEMO portal). Users upload their data and enter MINEMO information through the portal. The database then stores these entries in RDF (Resource Description Framework), along with summary metrics, i.e., spatial and temporal metadata. Together these spatial, temporal, and functional metadata provide a complete description of ERP data and the context in which these data were acquired. The RDF files then serve as inputs to ontology-based labeling and meta-analysis. Our ultimate goal is to represent ERPs using a rich semantic structure, so results can be queried at multiple levels, to stimulate novel hypotheses and to promote a high-level, integrative account of ERP results across diverse study methods and paradigms. PMID:22180824
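
    The sketch below shows how ERP summary metadata can be stored as RDF triples, in the spirit of the portal described above; the namespace and property names are placeholders rather than the actual NEMO ontology terms, and it requires the rdflib package.

      # Minimal sketch: spatial, temporal and functional metadata as RDF triples.
      from rdflib import Graph, Literal, Namespace

      EX = Namespace("http://example.org/nemo-demo#")   # hypothetical namespace
      g = Graph()

      erp = EX["pattern_P300_subj01"]
      g.add((erp, EX.peakLatencyMs, Literal(302.5)))              # temporal metadata
      g.add((erp, EX.scalpRegion, Literal("midline parietal")))   # spatial metadata
      g.add((erp, EX.paradigm, Literal("visual oddball")))        # functional context

      print(g.serialize(format="turtle"))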

  5. Petaminer: Using ROOT for efficient data storage in MySQL database

    NASA Astrophysics Data System (ADS)

    Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.

    2010-04-01

    High Energy and Nuclear Physics (HENP) experiments store Petabytes of event data and Terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine to enable the MySQL query processor to directly access experimental data stored in ROOT files. Our project is addressing the problem of efficient navigation to PetaBytes of HENP experimental data described with event-level TAG metadata, which is required by data intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events, where improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. As ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. Leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.
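
    The TAG-query use case, composing a metadata query and getting back pointers into the bulk data, can be sketched as plain SQL. In the example below sqlite3 stands in for the custom MySQL/ROOT storage engine, and the TAG attribute names are illustrative only.

      # Minimal sketch: select events by TAG-style metadata, return file pointers.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE tag (run INTEGER, event INTEGER, n_muons INTEGER, "
                 "missing_et REAL, file_guid TEXT)")
      db.executemany("INSERT INTO tag VALUES (?, ?, ?, ?, ?)", [
          (1001, 1, 2, 35.2, "guid-aa"),
          (1001, 2, 0, 110.7, "guid-aa"),
          (1002, 7, 1, 64.3, "guid-bb"),
      ])

      # "Give me the events worth a closer look" as a metadata query.
      matches = db.execute("SELECT run, event, file_guid FROM tag "
                           "WHERE n_muons >= 1 AND missing_et > 50").fetchall()
      print(matches)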

  6. On-the-fly form generation and on-line metadata configuration--a clinical data management Web infrastructure in Java.

    PubMed

    Beck, Peter; Truskaller, Thomas; Rakovac, Ivo; Cadonna, Bruno; Pieber, Thomas R

    2006-01-01

    In this paper we describe the approach to build a web-based clinical data management infrastructure on top of an entity-attribute-value (EAV) database which provides for flexible definition and extension of clinical data sets as well as efficient data handling and high performance query execution. A "mixed" EAV implementation provides a flexible and configurable data repository and at the same time utilizes the performance advantages of conventional database tables for rarely changing data structures. A dynamically configurable data dictionary contains further information for data validation. The online user interface can also be assembled dynamically. A data transfer object which encapsulates data together with all required metadata is populated by the backend and directly used to dynamically render frontend forms and handle incoming data. The "mixed" EAV model enables flexible definition and modification of clinical data sets while reducing performance drawbacks of pure EAV implementations to a minimum. The system currently is in use in an electronic patient record with focus on flexibility and a quality management application (www.healthgate.at) with high performance requirements.
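
    A minimal sketch of the "mixed" EAV idea follows: rarely changing data in a conventional table, flexible clinical items in attribute/value tables driven by a data dictionary. The table and column names are assumptions, not the paper's schema, and sqlite3 stands in for the production database.

      # Minimal sketch of a mixed EAV layout with a data dictionary.
      import sqlite3

      db = sqlite3.connect(":memory:")
      db.executescript("""
      CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT, birth_year INTEGER);          -- conventional table
      CREATE TABLE attribute (id INTEGER PRIMARY KEY, code TEXT, datatype TEXT, unit TEXT);  -- data dictionary
      CREATE TABLE observation (patient_id INTEGER, attribute_id INTEGER, value TEXT);       -- EAV part
      """)
      db.execute("INSERT INTO patient VALUES (1, 'Demo Patient', 1970)")
      db.execute("INSERT INTO attribute VALUES (1, 'hba1c', 'float', '%')")
      db.execute("INSERT INTO observation VALUES (1, 1, '6.8')")

      # Adding a new clinical item needs only a new data-dictionary row, not a schema change.
      row = db.execute("""SELECT p.name, a.code, o.value, a.unit
                          FROM observation o
                          JOIN patient p ON p.id = o.patient_id
                          JOIN attribute a ON a.id = o.attribute_id""").fetchone()
      print(row)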

  7. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    NASA Astrophysics Data System (ADS)

    Viegas, F.; Malon, D.; Cranshaw, J.; Dimitrov, G.; Nowak, M.; Nairz, A.; Goossens, L.; Gallas, E.; Gamboa, C.; Wong, A.; Vinek, E.

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production makes this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to the CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.

  8. Sequencing Data Discovery and Integration for Earth System Science with MetaSeek

    NASA Astrophysics Data System (ADS)

    Hoarfrost, A.; Brown, N.; Arnosti, C.

    2017-12-01

    Microbial communities play a central role in biogeochemical cycles. Sequencing data resources from environmental sources have grown exponentially in recent years, and represent a singular opportunity to investigate microbial interactions with Earth system processes. Carrying out such meta-analyses depends on our ability to discover and curate sequencing data into large-scale integrated datasets. However, such integration efforts are currently challenging and time-consuming, with sequencing data scattered across multiple repositories and metadata that is not easily or comprehensively searchable. MetaSeek is a sequencing data discovery tool that integrates sequencing metadata from all the major data repositories, allowing the user to search and filter on datasets in a lightweight application with an intuitive, easy-to-use web-based interface. Users can save and share curated datasets, while other users can browse these data integrations or use them as a jumping off point for their own curation. Missing and/or erroneous metadata are inferred automatically where possible, and where not possible, users are prompted to contribute to the improvement of the sequencing metadata pool by correcting and amending metadata errors. Once an integrated dataset has been curated, users can follow simple instructions to download their raw data and quickly begin their investigations. In addition to the online interface, the MetaSeek database is easily queryable via an open API, further enabling users and facilitating integrations of MetaSeek with other data curation tools. This tool lowers the barriers to curation and integration of environmental sequencing data, clearing the path forward to illuminating the ecosystem-scale interactions between biological and abiotic processes.

  9. Study on Information Management for the Conservation of Traditional Chinese Architectural Heritage - 3d Modelling and Metadata Representation

    NASA Astrophysics Data System (ADS)

    Yen, Y. N.; Weng, K. H.; Huang, H. Y.

    2013-07-01

    After over 30 years of practise and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. To document and digitize these unique heritages in their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study for digitization practise. The application of metadata representation and 3D modelling are the two key issues discussed. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp can, at this stage, only be used to model basic visual representations and is ineffective in documenting additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.

  10. Using the Proteomics Identifications Database (PRIDE).

    PubMed

    Martens, Lennart; Jones, Phil; Côté, Richard

    2008-03-01

    The Proteomics Identifications Database (PRIDE) is a public data repository designed to store, disseminate, and analyze mass spectrometry based proteomics datasets. The PRIDE database can accommodate any level of detailed metadata about the submitted results, which can be queried, explored, viewed, or downloaded via the PRIDE Web interface. The PRIDE database also provides a simple, yet powerful, access control mechanism that fully supports confidential peer-reviewing of data related to a manuscript, ensuring that these results remain invisible to the general public while allowing referees and journal editors anonymized access to the data. This unit describes in detail the functionality that PRIDE provides with regards to searching, viewing, and comparing the available data, as well as different options for submitting data to PRIDE.

  11. Developing a Data Discovery Tool for Interdisciplinary Science: Leveraging a Web-based Mapping Application and Geosemantic Searching

    NASA Astrophysics Data System (ADS)

    Albeke, S. E.; Perkins, D. G.; Ewers, S. L.; Ewers, B. E.; Holbrook, W. S.; Miller, S. N.

    2015-12-01

    The sharing of data and results is paramount for advancing scientific research. The Wyoming Center for Environmental Hydrology and Geophysics (WyCEHG) is a multidisciplinary group that is driving scientific breakthroughs to help manage water resources in the Western United States. WyCEHG is mandated by the National Science Foundation (NSF) to share their data. However, the infrastructure from which to share such diverse, complex and massive amounts of data did not exist within the University of Wyoming. We developed an innovative framework to meet the data organization, sharing, and discovery requirements of WyCEHG by integrating both open and closed source software, embedded metadata tags, semantic web technologies, and a web-mapping application. The infrastructure uses a Relational Database Management System as the foundation, providing a versatile platform to store, organize, and query myriad datasets, taking advantage of both structured and unstructured formats. Detailed metadata are fundamental to the utility of datasets. We tag data with Uniform Resource Identifiers (URI's) to specify concepts with formal descriptions (i.e. semantic ontologies), thus allowing users the ability to search metadata based on the intended context rather than conventional keyword searches. Additionally, WyCEHG data are geographically referenced. Using the ArcGIS API for Javascript, we developed a web mapping application leveraging database-linked spatial data services, providing a means to visualize and spatially query available data in an intuitive map environment. Using server-side scripting (PHP), the mapping application, in conjunction with semantic search modules, dynamically communicates with the database and file system, providing access to available datasets. Our approach provides a flexible, comprehensive infrastructure from which to store and serve WyCEHG's highly diverse research-based data. This framework has not only allowed WyCEHG to meet its data stewardship requirements, but can provide a template for others to follow.

  12. Integrating TRENCADIS components in gLite to share DICOM medical images and structured reports.

    PubMed

    Blanquer, Ignacio; Hernández, Vicente; Salavert, José; Segrelles, Damià

    2010-01-01

    The problem of sharing medical information among different centres has been tackled by many projects. Several of them target the specific problem of sharing DICOM images and structured reports (DICOM-SR), such as the TRENCADIS project. In this paper we propose sharing and organizing DICOM data and DICOM-SR metadata by taking advantage of existing deployed Grid infrastructures compliant with gLite, such as EGEE or the Spanish NGI. These infrastructures contribute a large amount of storage resources for creating knowledge databases and also provide metadata storage resources (such as AMGA) to semantically organize reports in a tree structure. First, we present the extension of the TRENCADIS architecture to use gLite components (LFC, AMGA, SE) for the sake of increased interoperability. Using the metadata from DICOM-SR, and maintaining its tree structure, enables federating different but compatible diagnostic structures and simplifies the definition of complex queries. This article describes how to do this in AMGA and shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources.

  13. MetaBar - a tool for consistent contextual data acquisition and standards compliant submission.

    PubMed

    Hankeln, Wolfgang; Buttigieg, Pier Luigi; Fink, Dennis; Kottmann, Renzo; Yilmaz, Pelin; Glöckner, Frank Oliver

    2010-06-30

    Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors like sampling location, time and depth/altitude: generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed, to support consistent contextual data acquisition. MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated to their samples. A preconfigured Microsoft Excel spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising of the DNA DataBase of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance to the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual data enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. The ample export functionalities and the INSDC submission support enable exchange of data across disciplines and safeguarding contextual data.

  14. MetaBar - a tool for consistent contextual data acquisition and standards compliant submission

    PubMed Central

    2010-01-01

    Background Environmental sequence datasets are increasing at an exponential rate; however, the vast majority of them lack appropriate descriptors like sampling location, time and depth/altitude: generally referred to as metadata or contextual data. The consistent capture and structured submission of these data is crucial for integrated data analysis and ecosystems modeling. The application MetaBar has been developed, to support consistent contextual data acquisition. Results MetaBar is a spreadsheet and web-based software tool designed to assist users in the consistent acquisition, electronic storage, and submission of contextual data associated to their samples. A preconfigured Microsoft® Excel® spreadsheet is used to initiate structured contextual data storage in the field or laboratory. Each sample is given a unique identifier and at any stage the sheets can be uploaded to the MetaBar database server. To label samples, identifiers can be printed as barcodes. An intuitive web interface provides quick access to the contextual data in the MetaBar database as well as user and project management capabilities. Export functions facilitate contextual and sequence data submission to the International Nucleotide Sequence Database Collaboration (INSDC), comprising of the DNA DataBase of Japan (DDBJ), the European Molecular Biology Laboratory database (EMBL) and GenBank. MetaBar requests and stores contextual data in compliance to the Genomic Standards Consortium specifications. The MetaBar open source code base for local installation is available under the GNU General Public License version 3 (GNU GPL3). Conclusion The MetaBar software supports the typical workflow from data acquisition and field-sampling to contextual data enriched sequence submission to an INSDC database. The integration with the megx.net marine Ecological Genomics database and portal facilitates georeferenced data integration and metadata-based comparisons of sampling sites as well as interactive data visualization. The ample export functionalities and the INSDC submission support enable exchange of data across disciplines and safeguarding contextual data. PMID:20591175

  15. The NCAR Digital Asset Services Hub (DASH): Implementing Unified Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Stott, D.; Worley, S. J.; Hou, C. Y.; Nienhouse, E.

    2017-12-01

    The National Center for Atmospheric Research (NCAR) Directorate created the Data Stewardship Engineering Team (DSET) to plan and implement an integrated single entry point for uniform digital asset discovery and access across the organization in order to improve the efficiency of access, reduce the costs, and establish the foundation for interoperability with other federated systems. This effort supports new policies included in federal funding mandates, NSF data management requirements, and journal citation recommendations. An inventory during the early planning stage identified diverse asset types across the organization that included publications, datasets, metadata, models, images, and software tools and code. The NCAR Digital Asset Services Hub (DASH) is being developed and phased in this year to improve the quality of users' experiences in finding and using these assets. DASH serves to provide engagement, training, search, and support through the following four nodes (see figure). DASH Metadata: DASH provides resources for creating and cataloguing metadata to the NCAR Dialect, a subset of ISO 19115. NMDEdit, an editor based on a European open source application, has been configured for manual entry of NCAR metadata. CKAN, an open source data portal platform, harvests these XML records (along with records output directly from databases) from a Web Accessible Folder (WAF) on GitHub for validation. DASH Search: The NCAR Dialect metadata drives cross-organization search and discovery through CKAN, which provides the display interface of search results. DASH search will establish interoperability by facilitating metadata sharing with other federated systems. DASH Consulting: The DASH Data Curation & Stewardship Coordinator assists with Data Management (DM) Plan preparation and advises on Digital Object Identifiers. The coordinator arranges training sessions on the DASH metadata tools and DM planning, and provides one-on-one assistance as requested. DASH Repository: A repository is under development for NCAR datasets currently not in existing lab-managed archives. The DASH repository will be under NCAR governance and meet Trustworthy Repositories Audit & Certification (TRAC) requirements. This poster will highlight the processes, lessons learned, and current status of the DASH effort at NCAR.

  16. An asynchronous traversal engine for graph-based rich metadata management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Carns, Philip; Ross, Robert B.

    Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format. We outline a traversal-aware query language and key optimizations (traversal-affiliate caching and execution merging) necessary for efficient performance. We further explore the effect of different graph partitioning strategies on the traversal performance for both synchronous and asynchronous traversal engines. Our experiments show that the asynchronous graph traversal engine is more efficient than its synchronous counterpart in the case of HPC rich metadata processing, where more servers are involved and larger traversals are needed. Furthermore, the asynchronous traversal engine is more adaptive to different graph partitioning strategies.
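
    To make the synchronous/asynchronous contrast concrete, the single-process toy below traverses a tiny provenance-style graph first frontier-by-frontier (a barrier after every level) and then from a shared worklist with no per-level barrier. It only illustrates the scheduling idea; the distributed engine, caching and query language described above are not modeled, and the graph is invented.

      # Toy contrast: level-synchronous traversal vs. barrier-free worklist traversal.
      from collections import deque

      graph = {"job1": ["fileA", "fileB"], "fileA": ["userX"],
               "fileB": ["userX", "job2"], "userX": [], "job2": []}

      def level_synchronous(start):
          """Process the graph frontier by frontier; every step ends at a global barrier."""
          visited, frontier, order = {start}, [start], []
          while frontier:
              next_frontier = []
              for v in frontier:                 # all of this level must finish...
                  order.append(v)
                  for nbr in graph[v]:
                      if nbr not in visited:
                          visited.add(nbr)
                          next_frontier.append(nbr)
              frontier = next_frontier           # ...before any vertex of the next level starts
          return order

      def asynchronous_style(start):
          """Pull vertices from a shared worklist as soon as they appear; no per-level barrier."""
          visited, worklist, order = {start}, deque([start]), []
          while worklist:                        # a slow vertex delays only itself
              v = worklist.popleft()
              order.append(v)
              for nbr in graph[v]:
                  if nbr not in visited:
                      visited.add(nbr)
                      worklist.append(nbr)
          return order

      print(level_synchronous("job1"))
      print(asynchronous_style("job1"))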

  17. An asynchronous traversal engine for graph-based rich metadata management

    DOE PAGES

    Dai, Dong; Carns, Philip; Ross, Robert B.; ...

    2016-06-23

    Rich metadata in high-performance computing (HPC) systems contains extended information about users, jobs, data files, and their relationships. Property graphs are a promising data model to represent heterogeneous rich metadata flexibly. Specifically, a property graph can use vertices to represent different entities and edges to record the relationships between vertices with unique annotations. The high-volume HPC use case, with millions of entities and relationships, naturally requires an out-of-core distributed property graph database, which must support live updates (to ingest production information in real time), low-latency point queries (for frequent metadata operations such as permission checking), and large-scale traversals (for provenance data mining). Among these needs, large-scale property graph traversals are particularly challenging for distributed graph storage systems. Most existing graph systems implement a "level synchronous" breadth-first search algorithm that relies on global synchronization in each traversal step. This performs well in many problem domains; but a rich metadata management system is characterized by imbalanced graphs, long traversal lengths, and concurrent workloads, each of which has the potential to introduce or exacerbate stragglers (i.e., abnormally slow steps or servers in a graph traversal) that lead to low overall throughput for synchronous traversal algorithms. Previous research indicated that the straggler problem can be mitigated by using asynchronous traversal algorithms, and many graph-processing frameworks have successfully demonstrated this approach. Such systems require the graph to be loaded into a separate batch-processing framework instead of being iteratively accessed, however. In this work, we investigate a general asynchronous graph traversal engine that can operate atop a rich metadata graph in its native format. We outline a traversal-aware query language and key optimizations (traversal-affiliate caching and execution merging) necessary for efficient performance. We further explore the effect of different graph partitioning strategies on the traversal performance for both synchronous and asynchronous traversal engines. Our experiments show that the asynchronous graph traversal engine is more efficient than its synchronous counterpart in the case of HPC rich metadata processing, where more servers are involved and larger traversals are needed. Furthermore, the asynchronous traversal engine is more adaptive to different graph partitioning strategies.

  18. DialysisNet: Application for Integrating and Management Data Sources of Hemodialysis Information by Continuity of Care Record.

    PubMed

    Ku, Ho Suk; Kim, Sungho; Kim, HyeHyeon; Chung, Hee-Joon; Park, Yu Rang; Kim, Ju Han

    2014-04-01

    Health Avatar Beans was developed for the management of chronic kidney disease and end-stage renal disease (ESRD). This article describes the DialysisNet system in Health Avatar Beans for the seamless management of ESRD based on the personal health record. For hemodialysis data modeling, we identified common data elements for hemodialysis information (CDEHI). We used the ASTM continuity of care record (CCR) and ISO/IEC 11179 to keep the CDEHI compliant with a standard model. Following the contents of the ASTM CCR, we mapped the CDEHI to those contents and created the metadata from that mapping. The metadata were transformed and parsed into the database and verified against the ASTM CCR XML schema definition (XSD). DialysisNet was created as an iPad application. The contents of the CDEHI were categorized for effective management. For the evaluation of information transfer, we used CarePlatform, which was developed for data access. The CDEHI metadata in DialysisNet were exchanged by the CarePlatform with semantic interoperability. The CDEHI was separated into a content list for individual patient data, a content list for hemodialysis center data, a consultation and transfer form, and clinical decision support data. After matching to the CCR, the CDEHI was transformed to metadata, converted to XML and validated against the ASTM CCR XSD. DialysisNet gives specific consideration to visualization, graphics, images, statistics, and the database. We created the DialysisNet application, which can integrate and manage data sources for hemodialysis information based on CCR standards.
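
    As a rough illustration of serializing hemodialysis data elements into a CCR-style XML document, the sketch below uses simplified placeholder element names; it does not follow the full ASTM CCR schema that DialysisNet validates against.

      # Minimal sketch: build a simplified, CCR-style XML record for two data elements.
      import xml.etree.ElementTree as ET

      record = ET.Element("ContinuityOfCareRecord")        # simplified, not schema-complete
      body = ET.SubElement(record, "Body")
      results = ET.SubElement(body, "Results")

      def add_result(name, value, unit):
          """Append one hemodialysis data element as a result entry."""
          result = ET.SubElement(results, "Result")
          ET.SubElement(result, "Description").text = name
          ET.SubElement(result, "Value").text = str(value)
          ET.SubElement(result, "Unit").text = unit

      add_result("Pre-dialysis weight", 72.4, "kg")
      add_result("Ultrafiltration volume", 2.1, "L")

      print(ET.tostring(record, encoding="unicode"))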

  19. BioSharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences.

    PubMed

    McQuilton, Peter; Gonzalez-Beltran, Alejandra; Rocca-Serra, Philippe; Thurston, Milo; Lister, Allyson; Maguire, Eamonn; Sansone, Susanna-Assunta

    2016-01-01

    BioSharing (http://www.biosharing.org) is a manually curated, searchable portal of three linked registries. These resources cover standards (terminologies, formats and models, and reporting guidelines), databases, and data policies in the life sciences, broadly encompassing the biological, environmental and biomedical sciences. Launched in 2011 and built by the same core team as the successful MIBBI portal, BioSharing harnesses community curation to collate and cross-reference resources across the life sciences from around the world. BioSharing makes these resources findable and accessible (the core of the FAIR principle). Every record is designed to be interlinked, providing a detailed description not only on the resource itself, but also on its relations with other life science infrastructures. Serving a variety of stakeholders, BioSharing cultivates a growing community, to which it offers diverse benefits. It is a resource for funding bodies and journal publishers to navigate the metadata landscape of the biological sciences; an educational resource for librarians and information advisors; a publicising platform for standard and database developers/curators; and a research tool for bench and computer scientists to plan their work. BioSharing is working with an increasing number of journals and other registries, for example linking standards and databases to training material and tools. Driven by an international Advisory Board, the BioSharing user-base has grown by over 40% (by unique IP address) in the last year, thanks to successful engagement with researchers, publishers, librarians, developers and other stakeholders via several routes, including a joint RDA/Force11 working group and a collaboration with the International Society for Biocuration. In this article, we describe BioSharing, with a particular focus on community-led curation. Database URL: https://www.biosharing.org. © The Author(s) 2016. Published by Oxford University Press.

  20. Web Services as Building Blocks for an Open Coastal Observing System

    NASA Astrophysics Data System (ADS)

    Breitbach, G.; Krasemann, H.

    2012-04-01

    Coastal observing systems need to integrate different observing methods, such as remote sensing, in-situ measurements and models, into a synoptic view of the state of the observed region. This integration can be based solely on web services combining data and metadata. Such an approach is pursued for COSYNA (Coastal Observing System for Northern and Arctic seas). Data from satellite and radar remote sensing and measurements from buoys, stations and FerryBoxes form the observation part of COSYNA. These data are assimilated into models to create pre-operational forecasts. The COSYNA data portal uses an OGC Web Feature Service (WFS) for data discovery. This Web Feature Service holds not only the metadata needed for finding data, but also the URLs of web services to view and download the data. To make data from different sources comparable, a common vocabulary is needed; for COSYNA, standard names from the CF conventions are stored within the metadata whenever possible. For the metadata an INSPIRE- and ISO 19115-compatible format is used. The WFS is fed from the metadata system using database views. The actual data are stored in two formats: NetCDF files for gridded data and an RDBMS for time-series-like data. The web services are mostly standards-based, the standards being mainly OGC standards. Maps are created from NetCDF files with the help of the ncWMS tool, whereas a self-developed Java servlet is used for maps of moving measurement platforms; in this case data download is offered via OGC SOS. For NetCDF files, OPeNDAP is used for data download, and the OGC CSW is used for accessing extended metadata. The concept of data management in COSYNA, which is independent of the particular services used in COSYNA, will be presented. This concept is parameter- and data-centric and might be useful for other observing systems.
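
    The kind of OGC web-service call such a portal relies on can be sketched as a WFS GetFeature request built from the standard key-value parameters. The server URL and feature type below are placeholders, not actual COSYNA endpoints.

      # Minimal sketch: build a WFS 1.1.0 GetFeature request URL from standard KVP parameters.
      from urllib.parse import urlencode

      def wfs_getfeature_url(base_url, type_name, bbox=None, max_features=100):
          """Return a GetFeature URL for one feature type, optionally limited to a bounding box."""
          params = {
              "service": "WFS",
              "version": "1.1.0",
              "request": "GetFeature",
              "typeName": type_name,
              "maxFeatures": max_features,
          }
          if bbox:  # (minx, miny, maxx, maxy)
              params["bbox"] = ",".join(str(c) for c in bbox)
          return f"{base_url}?{urlencode(params)}"

      print(wfs_getfeature_url("https://example.org/geoserver/wfs",   # placeholder endpoint
                               "cosyna:observations",                 # placeholder layer name
                               bbox=(6.0, 53.0, 9.0, 56.0)))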

  1. ODISEES: A New Paradigm in Data Access

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Little, M. M.; Kusterer, J.

    2013-12-01

    As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queriable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. The mappings are recorded in the Semantic Metadata Repository as RDF-triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool comprise a human-machine/machine-machine hybrid tool that partially automates the creation of metadata as RDF-triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user, when needed. RDF-triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF-triples, thereby generating the additional detailed metadata that is missing in existing repositories. A SPARQL Endpoint, a web-based query service and a Graphical User Interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.
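
    The parameter-level search that such a semantic repository enables can be sketched as a SPARQL query over a small RDF graph. The graph contents and ontology terms below are placeholders, not the ASDC ES Ontology, and the example requires the rdflib package.

      # Minimal sketch: find variables measuring a given phenomenon and their datasets.
      from rdflib import Graph, Literal, Namespace

      EX = Namespace("http://example.org/es-demo#")   # hypothetical namespace
      g = Graph()
      var = EX["aot_550nm_v3"]
      g.add((var, EX.measuresPhenomenon, Literal("aerosol optical thickness")))
      g.add((var, EX.instrumentTechnique, Literal("sun photometry")))
      g.add((var, EX.inDataset, Literal("DEMO_AEROSOL_L2")))

      query = """
      PREFIX ex: <http://example.org/es-demo#>
      SELECT ?variable ?dataset WHERE {
        ?variable ex:measuresPhenomenon "aerosol optical thickness" ;
                  ex:inDataset ?dataset .
      }
      """
      for row in g.query(query):
          print(row.variable, row.dataset)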

  2. Embracing the Archives: How NPR Librarians Turned Their Collection into a Workflow Tool

    ERIC Educational Resources Information Center

    Sin, Lauren; Daugert, Katie

    2013-01-01

    Several years ago, National Public Radio (NPR) librarians began developing a new content management system (CMS). It was intended to offer desktop access for all NPR-produced content, including transcripts, audio, and metadata. Fast-forward to 2011, and their shiny, new database, Artemis, was ready for debut. Their next challenge: to teach a staff…

  3. Documentation of the U.S. Geological Survey Stress and Sediment Mobility Database

    USGS Publications Warehouse

    Dalyander, P. Soupy; Butman, Bradford; Sherwood, Christopher R.; Signell, Richard P.

    2012-01-01

    The U.S. Geological Survey Sea Floor Stress and Sediment Mobility Database contains estimates of bottom stress and sediment mobility for the U.S. continental shelf. This U.S. Geological Survey database provides information that is needed to characterize sea floor ecosystems and evaluate areas for human use. The estimates contained in the database are designed to spatially and seasonally resolve the general characteristics of bottom stress over the U.S. continental shelf and to estimate sea floor mobility by comparing critical stress thresholds based on observed sediment texture data to the modeled stress. This report describes the methods used to make the bottom stress and mobility estimates, statistics used to characterize stress and mobility, data validation procedures, and the metadata for each dataset and provides information on how to access the database online.
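
    A worked toy example of the mobility criterion described above: the bed is flagged as mobile wherever the modeled bottom stress exceeds the critical stress derived from the observed sediment texture. The numbers are invented for illustration and are not taken from the database.

      # Toy mobility check: modeled stress vs. critical stress threshold.
      import numpy as np

      bottom_stress_pa = np.array([0.05, 0.12, 0.30, 0.80])    # modeled bottom stress (Pa)
      critical_stress_pa = np.array([0.10, 0.10, 0.25, 1.20])  # threshold from sediment texture (Pa)

      mobile = bottom_stress_pa > critical_stress_pa
      fraction_mobile = mobile.mean()    # simple mobility statistic over these samples
      print(mobile, fraction_mobile)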

  4. Tidying Up International Nucleotide Sequence Databases: Ecological, Geographical and Sequence Quality Annotation of ITS Sequences of Mycorrhizal Fungi

    PubMed Central

    Tedersoo, Leho; Abarenkov, Kessy; Nilsson, R. Henrik; Schüssler, Arthur; Grelet, Gwen-Aëlle; Kohout, Petr; Oja, Jane; Bonito, Gregory M.; Veldre, Vilmar; Jairus, Teele; Ryberg, Martin; Larsson, Karl-Henrik; Kõljalg, Urmas

    2011-01-01

    Sequence analysis of the ribosomal RNA operon, particularly the internal transcribed spacer (ITS) region, provides a powerful tool for identification of mycorrhizal fungi. The sequence data deposited in the International Nucleotide Sequence Databases (INSD) are, however, unfiltered for quality and are often poorly annotated with metadata. To detect chimeric and low-quality sequences and assign the ectomycorrhizal fungi to phylogenetic lineages, fungal ITS sequences were downloaded from INSD, aligned within family-level groups, and examined through phylogenetic analyses and BLAST searches. By combining the fungal sequence database UNITE and the annotation and search tool PlutoF, we also added metadata from the literature to these accessions. Altogether 35,632 sequences belonged to mycorrhizal fungi or originated from ericoid and orchid mycorrhizal roots. Of these sequences, 677 were considered chimeric and 2,174 of low read quality. Information detailing country of collection, geographical coordinates, interacting taxon and isolation source were supplemented to cover 78.0%, 33.0%, 41.7% and 96.4% of the sequences, respectively. These annotated sequences are publicly available via UNITE (http://unite.ut.ee/) for downstream biogeographic, ecological and taxonomic analyses. In European Nucleotide Archive (ENA; http://www.ebi.ac.uk/ena/), the annotated sequences have a special link-out to UNITE. We intend to expand the data annotation to additional genes and all taxonomic groups and functional guilds of fungi. PMID:21949797

  5. [Construction of chemical information database based on optical structure recognition technique].

    PubMed

    Lv, C Y; Li, M N; Zhang, L R; Liu, Z M

    2018-04-18

    The aim was to create a protocol for constructing a chemical information database from the scientific literature quickly and automatically. Scientific literature, patents and technical reports from different chemical disciplines were collected and stored in PDF format as the fundamental datasets. Chemical structures were transformed from published documents and images into machine-readable data using name-conversion technology and the optical structure recognition tool CLiDE. In the process of molecular structure information extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools in the molecule editor ChemDraw. The document management software EndNote X8 was used to acquire bibliographical references, including title, author, journal and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information from figures, tables and textual paragraphs that could be used to populate the structured chemical database. After this step, detailed manual revision and annotation were conducted to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was used to calculate physical and chemical properties and predict molecular attributes, and the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated, covering molecular structure data, molecular information, bibliographical references, predicted attributes and known bioactivities. With the canonical simplified molecular input line entry specification (SMILES) as the primary key, the metadata files were associated through common key nodes, including molecular number and PDF number, to construct an integrated chemical information database. A workable construction protocol for a chemical information database was created. A total of 174 research articles and 25 reviews published in Marine Drugs from January 2015 to June 2016 were collected as the essential data source, and an elementary marine natural product database named PKU-MNPD was built in accordance with this protocol, containing 3 262 molecules and 19 821 records. This data aggregation protocol greatly helps the construction of chemical information databases from original documents in terms of accuracy, comprehensiveness and efficiency. The structured chemical information database can facilitate access to medical intelligence and accelerate the translation of scientific research achievements.
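
    The property-calculation step keyed on canonical SMILES can be sketched with the open-source RDKit in place of the Pipeline Pilot platform named above; the molecules and stored fields are illustrative only.

      # Minimal sketch: canonical SMILES as primary key, computed properties as record fields.
      from rdkit import Chem
      from rdkit.Chem import Descriptors

      records = {}
      for smiles in ["CCO", "c1ccccc1O"]:        # illustrative molecules only
          mol = Chem.MolFromSmiles(smiles)
          key = Chem.MolToSmiles(mol)            # canonical SMILES as primary key
          records[key] = {
              "mol_weight": Descriptors.MolWt(mol),
              "logp": Descriptors.MolLogP(mol),
          }

      print(records)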

  6. Building a multi-scaled geospatial temporal ecology database from disparate data sources: fostering open science and data reuse.

    PubMed

    Soranno, Patricia A; Bissell, Edward G; Cheruvelil, Kendra S; Christel, Samuel T; Collins, Sarah M; Fergus, C Emi; Filstrup, Christopher T; Lapierre, Jean-Francois; Lottig, Noah R; Oliver, Samantha K; Scott, Caren E; Smith, Nicole J; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A; Gries, Corinna; Henry, Emily N; Skaff, Nick K; Stanley, Emily H; Stow, Craig A; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km(2)). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated database reproducible and extensible, allowing users to ask new research questions with the existing database or through the addition of new data. The largest challenge of this task was the heterogeneity of the data, formats, and metadata. Many steps of data integration need manual input from experts in diverse fields, requiring close collaboration.
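
    The integration procedures listed above can be illustrated with a small, hypothetical harmonization step (not LAGOS code): two source datasets with different column names and units are mapped into one common schema, and each value retains a provenance field identifying its source dataset.

        # Hypothetical source datasets with different conventions (names, units).
        source_a = {"name": "state_agency_a",
                    "rows": [{"lake": "Lake X", "tp_ugL": 30.0, "date": "2005-07-01"}]}
        source_b = {"name": "university_b",
                    "rows": [{"lake_name": "Lake Y", "total_P_mgL": 0.045, "sampled": "2006-08-15"}]}

        # Per-source mapping into a common schema (value stored in ug/L).
        def harmonize_a(r):
            return {"lake": r["lake"], "variable": "total_phosphorus",
                    "value_ugL": r["tp_ugL"], "date": r["date"]}

        def harmonize_b(r):
            return {"lake": r["lake_name"], "variable": "total_phosphorus",
                    "value_ugL": r["total_P_mgL"] * 1000.0, "date": r["sampled"]}

        integrated = []
        for source, mapper in [(source_a, harmonize_a), (source_b, harmonize_b)]:
            for row in source["rows"]:
                rec = mapper(row)
                rec["provenance"] = source["name"]   # document where each value came from
                integrated.append(rec)

        for rec in integrated:
            print(rec)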

  7. Building a multi-scaled geospatial temporal ecology database from disparate data sources: Fostering open science through data reuse

    USGS Publications Warehouse

    Soranno, Patricia A.; Bissell, E.G.; Cheruvelil, Kendra S.; Christel, Samuel T.; Collins, Sarah M.; Fergus, C. Emi; Filstrup, Christopher T.; Lapierre, Jean-Francois; Lotting, Noah R.; Oliver, Samantha K.; Scott, Caren E.; Smith, Nicole J.; Stopyak, Scott; Yuan, Shuai; Bremigan, Mary Tate; Downing, John A.; Gries, Corinna; Henry, Emily N.; Skaff, Nick K.; Stanley, Emily H.; Stow, Craig A.; Tan, Pang-Ning; Wagner, Tyler; Webster, Katherine E.

    2015-01-01

    Although there are considerable site-based data for individual or groups of ecosystems, these datasets are widely scattered, have different data formats and conventions, and often have limited accessibility. At the broader scale, national datasets exist for a large number of geospatial features of land, water, and air that are needed to fully understand variation among these ecosystems. However, such datasets originate from different sources and have different spatial and temporal resolutions. By taking an open-science perspective and by combining site-based ecosystem datasets and national geospatial datasets, science gains the ability to ask important research questions related to grand environmental challenges that operate at broad scales. Documentation of such complicated database integration efforts, through peer-reviewed papers, is recommended to foster reproducibility and future use of the integrated database. Here, we describe the major steps, challenges, and considerations in building an integrated database of lake ecosystems, called LAGOS (LAke multi-scaled GeOSpatial and temporal database), that was developed at the sub-continental study extent of 17 US states (1,800,000 km2). LAGOS includes two modules: LAGOSGEO, with geospatial data on every lake with surface area larger than 4 ha in the study extent (~50,000 lakes), including climate, atmospheric deposition, land use/cover, hydrology, geology, and topography measured across a range of spatial and temporal extents; and LAGOSLIMNO, with lake water quality data compiled from ~100 individual datasets for a subset of lakes in the study extent (~10,000 lakes). Procedures for the integration of datasets included: creating a flexible database design; authoring and integrating metadata; documenting data provenance; quantifying spatial measures of geographic data; quality-controlling integrated and derived data; and extensively documenting the database. Our procedures make a large, complex, and integrated database reproducible and extensible, allowing users to ask new research questions with the existing database or through the addition of new data. The largest challenge of this task was the heterogeneity of the data, formats, and metadata. Many steps of data integration need manual input from experts in diverse fields, requiring close collaboration.

  8. SIOExplorer: Modern IT Methods and Tools for Digital Library Management

    NASA Astrophysics Data System (ADS)

    Sutton, D. W.; Helly, J.; Miller, S.; Chase, A.; Clarck, D.

    2003-12-01

    With more geoscience disciplines becoming data-driven, it is increasingly important to utilize modern techniques for data, information and knowledge management. SIOExplorer is a new digital library project with 2 terabytes of oceanographic data collected over the last 50 years on 700 cruises by the Scripps Institution of Oceanography. It is built using a suite of information technology tools and methods that allow for an efficient and effective digital library management system. The library consists of a number of independent collections, each with corresponding metadata formats. The system architecture allows each collection to be built and uploaded based on a collection-dependent metadata template file (MTF). This file is used to create the hierarchical structure of the collection, create metadata tables in a relational database, and populate object metadata files and the collection as a whole. Collections comprise arbitrary digital objects stored at the San Diego Supercomputer Center (SDSC) High Performance Storage System (HPSS) and managed using the Storage Resource Broker (SRB), data-handling middleware developed at SDSC. SIOExplorer interoperates with other collections as a data provider through the Open Archives Initiative (OAI) protocol. The user services for SIOExplorer are accessed from CruiseViewer, a Java application served using Java Web Start from the SIOExplorer home page. CruiseViewer is an advanced tool for data discovery and access. It implements general keyword and interactive geospatial search methods for the collections, and georeferences search results on user-selected basemaps such as global topography or crustal age. User services include metadata viewing, opening of digital objects of selected MIME types (such as images, documents and grid files), and downloading of objects (including the brokering of proprietary hold restrictions).
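
    A sketch of the template-driven approach described above, with an entirely hypothetical template layout (the real MTF format is not reproduced here): the template declares the collection hierarchy and the per-object metadata fields, and the script creates the corresponding relational table and loads one object's metadata.

        import sqlite3

        # Hypothetical metadata template for one collection; it declares the
        # hierarchy levels and the per-object metadata fields with their types.
        template = {
            "collection": "cruise_bathymetry",
            "hierarchy": ["cruise", "instrument", "file"],
            "fields": {"cruise_id": "TEXT", "start_date": "TEXT",
                       "lat_min": "REAL", "lat_max": "REAL"},
        }

        con = sqlite3.connect(":memory:")
        cols = ", ".join(f"{name} {sqltype}" for name, sqltype in template["fields"].items())
        con.execute(f"CREATE TABLE {template['collection']} (object_id TEXT PRIMARY KEY, {cols})")

        # Populate object metadata for one digital object held in the archive (invented values).
        obj = {"object_id": "NBP9902/mb/file_001.grd", "cruise_id": "NBP9902",
               "start_date": "1999-02-11", "lat_min": -65.2, "lat_max": -64.8}
        con.execute(f"INSERT INTO {template['collection']} VALUES (?,?,?,?,?)", tuple(obj.values()))

        print(con.execute(f"SELECT * FROM {template['collection']}").fetchall())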

  9. Building a High Performance Metadata Broker using Clojure, NoSQL and Message Queues

    NASA Astrophysics Data System (ADS)

    Truslove, I.; Reed, S.

    2013-12-01

    In practice, Earth and Space Science Informatics often relies on getting more done with less: fewer hardware resources, less IT staff, fewer lines of code. As a capacity-building exercise focused on rapid development of high-performance geoinformatics software, the National Snow and Ice Data Center (NSIDC) built a prototype metadata brokering system using a new JVM language, modern database engines and virtualized or cloud computing resources. The metadata brokering system was developed with the overarching goals of (i) demonstrating a technically viable product with as little development effort as possible, (ii) using very new yet very popular tools and technologies in order to get the most value from the least legacy-encumbered code bases, and (iii) being a high-performance system by using scalable subcomponents, and implementation patterns typically used in web architectures. We implemented the system using the Clojure programming language (an interactive, dynamic, Lisp-like JVM language), Redis (a fast in-memory key-value store) as both the data store for original XML metadata content and as the provider for the message queueing service, and ElasticSearch for its search and indexing capabilities to generate search results. On evaluating the results of the prototyping process, we believe that the technical choices did in fact allow us to do more for less, due to the expressive nature of the Clojure programming language and its easy interoperability with Java libraries, and the successful reuse or re-application of high performance products or designs. This presentation will describe the architecture of the metadata brokering system, cover the tools and techniques used, and describe lessons learned, conclusions, and potential next steps.
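
    The broker pattern described above can be sketched independently of the NSIDC implementation. In-memory stand-ins replace Redis (key-value store plus message queue) and ElasticSearch (search index), since the point here is the flow: store the original XML, enqueue its identifier, and let a worker index selected fields for search.

        from collections import deque
        from xml.etree import ElementTree as ET

        store = {}       # stands in for the Redis key-value store of original XML
        queue = deque()  # stands in for the Redis-backed message queue
        index = {}       # stands in for the search engine's inverted index

        def ingest(record_id, xml_text):
            """Store the original metadata document and enqueue it for indexing."""
            store[record_id] = xml_text
            queue.append(record_id)

        def index_worker():
            """Drain the queue, extract searchable fields, and update the index."""
            while queue:
                record_id = queue.popleft()
                root = ET.fromstring(store[record_id])
                for word in (root.findtext("title") or "").lower().split():
                    index.setdefault(word, set()).add(record_id)

        def search(term):
            return index.get(term.lower(), set())

        ingest("doc-1", "<record><title>Sea Ice Extent</title></record>")
        ingest("doc-2", "<record><title>Glacier Mass Balance</title></record>")
        index_worker()
        print(search("ice"))   # -> {'doc-1'}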

  10. Design and implementation of a health data interoperability mediator.

    PubMed

    Kuo, Mu-Hsing; Kushniruk, Andre William; Borycki, Elizabeth Marie

    2010-01-01

    The objective of this study is to design and implement a common-gateway oriented mediator to solve the health data interoperability problems that exist among heterogeneous health information systems. The proposed mediator has three main components: (1) a Synonym Dictionary (SD) that stores a set of global metadata and terminologies to serve as the mapping intermediary, (2) a Semantic Mapping Engine (SME) that can be used to map metadata and instance semantics, and (3) a DB-to-XML module that translates source health data stored in a database into XML format and back. A routine admission notification data exchange scenario is used to test the efficiency and feasibility of the proposed mediator. The study results show that the proposed mediator can make health information exchange more efficient.
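
    A minimal sketch of the mediator idea, not the authors' implementation: a Synonym Dictionary maps source-specific field names onto global metadata terms, a mapping step rewrites a record accordingly, and a DB-to-XML step serializes the result for exchange. All system and field names below are illustrative.

        from xml.etree import ElementTree as ET

        # Synonym Dictionary: source-system field names -> global metadata terms (invented).
        synonym_dictionary = {
            "hospital_a": {"pt_name": "patient_name", "dob": "birth_date", "adm_dt": "admission_date"},
            "hospital_b": {"name": "patient_name", "birthDate": "birth_date", "admitted": "admission_date"},
        }

        def map_semantics(source, record):
            """Semantic mapping step: rewrite source field names to global terms."""
            mapping = synonym_dictionary[source]
            return {mapping.get(field, field): value for field, value in record.items()}

        def to_xml(record):
            """DB-to-XML step: serialize the mapped record for exchange."""
            root = ET.Element("admission_notification")
            for field, value in record.items():
                ET.SubElement(root, field).text = str(value)
            return ET.tostring(root, encoding="unicode")

        source_record = {"pt_name": "Doe, Jane", "dob": "1980-02-01", "adm_dt": "2010-01-15"}
        print(to_xml(map_semantics("hospital_a", source_record)))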

  11. Designing for Peta-Scale in the LSST Database

    NASA Astrophysics Data System (ADS)

    Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.

    2007-10-01

    The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow by 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, while horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.
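
    The partitioning and tag-table ideas can be illustrated with a small, generic sketch (not LSST code): catalog objects are horizontally partitioned across servers by a coarse spatial chunk of their sky position, while a column-narrow tag table keeps only a few frequently scanned attributes, with the object identifier pointing back to the wide table.

        NUM_SERVERS = 4

        def spatial_chunk(ra_deg, dec_deg, chunk_deg=10.0):
            """Assign a coarse sky chunk; all objects in a chunk live on the same server."""
            return (int(ra_deg // chunk_deg), int((dec_deg + 90.0) // chunk_deg))

        def server_for(ra_deg, dec_deg):
            """Horizontal partitioning: hash the chunk onto one of the servers."""
            return hash(spatial_chunk(ra_deg, dec_deg)) % NUM_SERVERS

        # Wide catalog row (many attributes) vs. narrow tag-table row (few, hot columns);
        # the attribute names and values are invented for illustration.
        wide_row = {"object_id": 42, "ra": 151.3, "dec": -2.1, "flux_g": 8.1e-29,
                    "flux_r": 9.0e-29, "shape_xx": 1.2, "shape_yy": 1.1}  # ...many more columns
        tag_row = {k: wide_row[k] for k in ("object_id", "ra", "dec", "flux_r")}

        print("stored on server", server_for(wide_row["ra"], wide_row["dec"]))
        print("tag-table row:", tag_row)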

  12. panMetaDocs, eSciDoc, and DOIDB - an infrastructure for the curation and publication of file-based datasets for 'GFZ Data Services'

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Elger, Kirsten; Bertelmann, Roland; Klump, Jens

    2016-04-01

    With the foundation of DataCite in 2009 and the technical infrastructure installed in the last six years it has become very easy to create citable dataset DOIs. Nowadays, dataset DOIs are increasingly accepted and required by journals in reference lists of manuscripts. In addition, DataCite provides usage statistics [1] of assigned DOIs and offers a public search API to make research data count. By linking related information to the data, they become more useful for future generations of scientists. For this purpose, several identifier systems, such as ISBN for books, ISSN for journals, DOI for articles or related data, Orcid for authors, and IGSN for physical samples, can be attached to DOIs using the DataCite metadata schema [2]. While these are good preconditions to publish data, free and open solutions that help with the curation of data, the publication of research data, and the assignment of DOIs in one software seem to be rare. At GFZ Potsdam we built a modular software stack that is made of several free and open software solutions and we established 'GFZ Data Services'. 'GFZ Data Services' provides storage, a metadata editor for publication and a facility to moderate minted DOIs. All software solutions are connected through web APIs, which makes it possible to reuse and integrate established software. The core component of 'GFZ Data Services' is an eSciDoc [3] middleware that is used as central storage, and has been designed along the OAIS reference model for digital preservation. Thus, data are stored in self-contained packages that are made of binary file-based data and XML-based metadata. The eSciDoc infrastructure provides access control to data and it is able to handle half-open datasets, which is useful in embargo situations when a subset of the research data are released after an adequate period. The data exchange platform panMetaDocs [4] makes use of eSciDoc's REST API to upload file-based data into eSciDoc and uses a metadata editor [5] to annotate the files with metadata. The metadata editor has a user-friendly interface with nominal lists, extensive explanations, and an interactive mapping tool to provide assistance to scientists describing the data. It is possible to deposit metadata templates to fill certain fields with default values. The metadata editor generates metadata in the schemas ISO19139, NASA GCMD DIF, and DataCite and could be extended for other schemas. panMetaDocs is able to mint dataset DOIs through DOIDB, which is our component to moderate dataset DOIs issued through 'GFZ Data Services'. DOIDB accepts metadata in the schemas ISO19139, DIF, and DataCite. In addition, DOIDB provides an OAI-PMH interface to disseminate all deposited metadata to data portals. The presentation of datasets on DOI landing pages is done through XSLT stylesheet transformation of the XML-based metadata. The landing pages have been designed to meet the needs of scientists. We are able to render the metadata to different layouts. Furthermore, additional information about datasets and publications is assembled into the webpage by querying public databases on the internet. The work presented here will focus on technical details of the software stack. [1] http://stats.datacite.org [2] http://www.dlib.org/dlib/january11/starr/01starr.html [3] http://www.escidoc.org [4] http://panmetadocs.sf.net [5] http://github.com/ulbricht
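
    As an illustration of the kind of output a metadata editor produces, the snippet below builds a heavily simplified DataCite-style XML record with xml.etree. It is a sketch, not GFZ code: the element set is abbreviated, namespaces are omitted, and the DOI is a placeholder.

        from xml.etree import ElementTree as ET

        def datacite_like_record(doi, title, creators, year, publisher):
            """Build a simplified DataCite-style record (field set heavily abbreviated)."""
            resource = ET.Element("resource")
            ET.SubElement(resource, "identifier", identifierType="DOI").text = doi
            creators_el = ET.SubElement(resource, "creators")
            for name in creators:
                ET.SubElement(ET.SubElement(creators_el, "creator"), "creatorName").text = name
            ET.SubElement(ET.SubElement(resource, "titles"), "title").text = title
            ET.SubElement(resource, "publisher").text = publisher
            ET.SubElement(resource, "publicationYear").text = str(year)
            return ET.tostring(resource, encoding="unicode")

        print(datacite_like_record("10.5880/EXAMPLE.2016.001",   # placeholder DOI
                                   "Example file-based dataset",
                                   ["Doe, Jane"], 2016, "GFZ Data Services"))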

  13. Comment on "Unique in the shopping mall: On the reidentifiability of credit card metadata".

    PubMed

    Sánchez, David; Martínez, Sergio; Domingo-Ferrer, Josep

    2016-03-18

    De Montjoye et al. (Reports, 30 January 2015, p. 536) claimed that most individuals can be reidentified from a deidentified transaction database and that anonymization mechanisms are not effective against reidentification. We demonstrate that anonymization can be performed by techniques well established in the literature. Copyright © 2016, American Association for the Advancement of Science.

  14. Digital asset management.

    PubMed

    Humphrey, Clinton D; Tollefson, Travis T; Kriet, J David

    2010-05-01

    Facial plastic surgeons are accumulating massive digital image databases with the evolution of photodocumentation and widespread adoption of digital photography. Managing and maximizing the utility of these vast data repositories, or digital asset management (DAM), is a persistent challenge. Developing a DAM workflow that incorporates a file naming algorithm and metadata assignment will increase the utility of a surgeon's digital images. Copyright 2010 Elsevier Inc. All rights reserved.
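
    A small sketch of the kind of file-naming algorithm and metadata assignment advocated above. The naming scheme and field names are hypothetical, not taken from the article: the file name encodes case identifier, date, procedure, view and sequence number, and the same values are kept as structured, searchable metadata.

        from datetime import date

        def name_image(case_id, procedure, view, taken, seq, ext="jpg"):
            """Encode key metadata in the file name: case, date, procedure, view, sequence."""
            return f"{case_id}_{taken:%Y%m%d}_{procedure}_{view}_{seq:03d}.{ext}"

        def image_metadata(case_id, procedure, view, taken, seq):
            """Keep the same values as structured metadata for searching the image library."""
            return {"case_id": case_id, "date": taken.isoformat(),
                    "procedure": procedure, "view": view, "sequence": seq}

        taken = date(2010, 5, 1)
        print(name_image("C01234", "rhinoplasty", "lateral", taken, 2))
        print(image_metadata("C01234", "rhinoplasty", "lateral", taken, 2))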

  15. IMG/VR: a database of cultured and uncultured DNA Viruses and retroviruses

    DOE PAGES

    Paez-Espino, David; Chen, I. -Min A.; Palaniappan, Krishna; ...

    2016-10-30

    Viruses represent the most abundant life forms on the planet. Recent experimental and computational improvements have led to a dramatic increase in the number of viral genome sequences identified primarily from metagenomic samples. As a result of the expanding catalog of metagenomic viral sequences, there exists a need for a comprehensive computational platform integrating all these sequences with associated metadata and analytical tools. Here we present IMG/VR (https://img.jgi.doe.gov/vr/), the largest publicly available database of 3908 isolate reference DNA viruses with 264 413 computationally identified viral contigs from > 6000 ecologically diverse metagenomic samples. Approximately half of the viral contigs are grouped into genetically distinct quasi-species clusters. Microbial hosts are predicted for 20 000 viral sequences, revealing nine microbial phyla previously unreported to be infected by viruses. Viral sequences can be queried using a variety of associated metadata, including habitat type and geographic location of the samples, or taxonomic classification according to hallmark viral genes. IMG/VR has a user-friendly interface that allows users to interrogate all integrated data and interact by comparing with external sequences, thus serving as an essential resource in the viral genomics community.

  16. fusionDB: assessing microbial diversity and environmental preferences via functional similarity networks

    PubMed Central

    Zhu, Chengsheng; Miller, Maximilian

    2018-01-01

    Microbial functional diversification is driven by environmental factors, i.e. microorganisms inhabiting the same environmental niche tend to be more functionally similar than those from different environments. In some cases, even closely phylogenetically related microbes differ more across environments than across taxa. While microbial similarities are often reported in terms of taxonomic relationships, no existing databases directly link microbial functions to the environment. We previously developed a method for comparing microbial functional similarities on the basis of proteins translated from their sequenced genomes. Here, we describe fusionDB, a novel database that uses our functional data to represent 1374 taxonomically distinct bacteria annotated with available metadata: habitat/niche, preferred temperature, and oxygen use. Each microbe is encoded as a set of functions represented by its proteome and individual microbes are connected via common functions. Users can search fusionDB via combinations of organism names and metadata. Moreover, the web interface allows mapping new microbial genomes to the functional spectrum of reference bacteria, rendering interactive similarity networks that highlight shared functionality. fusionDB provides a fast means of comparing microbes, identifying potential horizontal gene transfer events, and highlighting key environment-specific functionality. PMID:29112720
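
    The idea of microbes "connected via common functions" maps naturally onto a set-similarity network. The sketch below is generic, not the fusionDB code: pairs of organisms are scored by the Jaccard similarity of their function sets, and an edge is kept when shared functionality exceeds a threshold; the organism and function identifiers are invented.

        from itertools import combinations

        # Hypothetical function sets per organism (fusionDB derives these from proteomes).
        functions = {
            "organism_A": {"F001", "F002", "F003", "F004"},
            "organism_B": {"F002", "F003", "F004", "F005"},
            "organism_C": {"F010", "F011"},
        }

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        # Build a functional similarity network: edge if shared functionality is high enough.
        edges = [(x, y, round(jaccard(functions[x], functions[y]), 2))
                 for x, y in combinations(functions, 2)
                 if jaccard(functions[x], functions[y]) >= 0.5]
        print(edges)   # -> [('organism_A', 'organism_B', 0.6)]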

  17. IMG/VR: a database of cultured and uncultured DNA Viruses and retroviruses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paez-Espino, David; Chen, I. -Min A.; Palaniappan, Krishna

    Viruses represent the most abundant life forms on the planet. Recent experimental and computational improvements have led to a dramatic increase in the number of viral genome sequences identified primarily from metagenomic samples. As a result of the expanding catalog of metagenomic viral sequences, there exists a need for a comprehensive computational platform integrating all these sequences with associated metadata and analytical tools. Here we present IMG/VR (https://img.jgi.doe.gov/vr/), the largest publicly available database of 3908 isolate reference DNA viruses with 264 413 computationally identified viral contigs from > 6000 ecologically diverse metagenomic samples. Approximately half of the viral contigs are grouped into genetically distinct quasi-species clusters. Microbial hosts are predicted for 20 000 viral sequences, revealing nine microbial phyla previously unreported to be infected by viruses. Viral sequences can be queried using a variety of associated metadata, including habitat type and geographic location of the samples, or taxonomic classification according to hallmark viral genes. IMG/VR has a user-friendly interface that allows users to interrogate all integrated data and interact by comparing with external sequences, thus serving as an essential resource in the viral genomics community.

  18. Open Access to Geophysical Data

    NASA Astrophysics Data System (ADS)

    Sergeyeva, Nataliya A.; Zabarinskaya, Ludmila P.

    2017-04-01

    The Russian World Data Centers (WDCs) for Solar-Terrestrial Physics and Solid Earth Physics, hosted by the Geophysical Center of the Russian Academy of Sciences, are Regular Members of the ICSU World Data System. Guided by the WDS Constitution and the WDS Data Sharing Principles, the WDCs provide full and open access to data, long-term data stewardship, compliance with agreed-upon data standards and conventions, and mechanisms to facilitate and improve access to data. The Centers collect and store historical and current geophysical data on different media: digital data sets, analog records, collections of maps and descriptions. The WDCs regularly add new data to their repositories and databases and keep them up to date. The WDCs currently focus on four new projects: increasing the amount of data available online through retrospective data collection and digital preservation; creating a modern system for registering and publishing data with digital object identifier (DOI) assignment, and promoting a data citation culture; replacing file systems with databases for more convenient access to data; and participating in the WDS Metadata Catalogue and Data Portal by creating metadata for the information resources of the WDCs.

  19. The role of digital sample information within the digital geoscience infrastructure: a pragmatic approach

    NASA Astrophysics Data System (ADS)

    Howe, Michael

    2014-05-01

    Much of the digital geological information on the composition, properties and dynamics of the subsurface is based ultimately on physical samples, many of which are archived to provide a basis for the information. Online metadata catalogues of these collections have now been available for many years. Many of these are institutional and tightly focussed, with UK examples including the British Geological Survey's (BGS) palaeontological samples database, PalaeoSaurus (http://www.bgs.ac.uk/palaeosaurus/), and mineralogical and petrological sample database, Britrocks (http://www.bgs.ac.uk/data/britrocks.html). There are now a growing number of international sample metadata databases, including The Palaeobiology Database (http://paleobiodb.org/) and SESAR, the IGSN (International Geo Sample Number) database (http://www.geosamples.org/catalogsearch/). More recently the emphasis has moved beyond metadata (locality, identification, age, citations, etc.) to digital imagery, with the intention of providing the user with at least enough information to determine whether viewing the sample would be worthwhile. Recent BGS examples include high resolution (e.g. 7216 x 5412 pixel) hydrocarbon well core images (http://www.bgs.ac.uk/data/offshoreWells/wells.cfc?method=searchWells), high resolution rock thin section images (e.g. http://www.largeimages.bgs.ac.uk/iip/britrocks.html?id=290000/291739) and building stone images (http://geoscenic.bgs.ac.uk/asset-bank/action/browseItems?categoryId=1547&categoryTypeId=1). This has been developed further with high resolution stereo images. The Jisc-funded GB3D type fossils online project delivers these as red-cyan anaglyphs (http://www.3d-fossils.ac.uk/). More innovatively, the GB3D type fossils project has laser scanned several thousand type fossils, and the resulting 3d-digital models are now being delivered through the online portal. Importantly, this project also represents collaboration between the BGS, Oxford and Cambridge Universities, the National Museums of Wales, and numerous other national, local and regional museums. The lack of currently accepted international standards and infrastructures for the delivery of high resolution images and 3d-digital models has required the BGS to develop or select its own. Most high resolution images have been delivered using the JPEG 2000 format because of its quality and speed. Digital models have been made available in both .PLY and .OBJ formats because of their efficient file size and flexibility, respectively. Consideration must now be given to European and international standards and infrastructures for the delivery of high resolution images and 3d-digital models.

  20. Current Development at the Southern California Earthquake Data Center (SCEDC)

    NASA Astrophysics Data System (ADS)

    Appel, V. L.; Clayton, R. W.

    2005-12-01

    Over the past year, the SCEDC completed or is near completion of three featured projects: Station Information System (SIS) Development: The SIS will provide users with an interface into complete and accurate station metadata for all current and historic data at the SCEDC. The goal of this project is to develop a system that can interact with a single database source to enter, update and retrieve station metadata easily and efficiently. The system will provide accurate station/channel information for active stations to the SCSN real-time processing system, as well as station/channel information for stations that have parametric data at the SCEDC, i.e., for users retrieving data via STP. Additionally, the SIS will supply information required to generate dataless SEED and COSMOS V0 volumes and allow stations to be added to the system with a minimal but incomplete set of information, using predefined defaults that can be easily updated as more information becomes available. Finally, the system will facilitate statewide metadata exchange for both real-time processing and provide a common approach to CISN historic station metadata. Moment Tensor Solutions: The SCEDC is currently archiving and delivering Moment Magnitudes and Moment Tensor Solutions (MTS) produced by the SCSN in real-time and post-processing solutions for events spanning back to 1999. The automatic MTS runs on all local events with magnitudes > 3.0, and all regional events > 3.5. The distributed solution automatically creates links from all USGS Simpson Maps to a text e-mail summary solution, creates a .gif image of the solution, and updates the moment tensor database tables at the SCEDC. Searchable Scanned Waveforms Site: The Caltech Seismological Lab has made available 12,223 scanned images of pre-digital analog recordings of major earthquakes recorded in Southern California between 1962 and 1992 at http://www.data.scec.org/research/scans/. The SCEDC has developed a searchable web interface that allows users to search the available files, select multiple files for download and then retrieve a zipped file containing the results. Scanned images of paper records for M>3.5 southern California earthquakes and several significant teleseisms are available for download via the SCEDC through this search tool.
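
    Temporal validity of station metadata (the station/channel epochs a system like the SIS has to track) is commonly handled with effective-date intervals. The sketch below is a generic illustration, not the SCEDC schema; the table layout, station values and gains are invented.

        import sqlite3

        con = sqlite3.connect(":memory:")
        # Each row describes one epoch: the interval over which this metadata was valid.
        con.execute("""CREATE TABLE channel_epoch (
                         net TEXT, sta TEXT, chan TEXT,
                         lat REAL, lon REAL, gain REAL,
                         ondate TEXT, offdate TEXT)""")
        rows = [
            ("CI", "PAS", "BHZ", 34.148, -118.171, 1.0e9, "1999-01-01", "2004-06-30"),
            ("CI", "PAS", "BHZ", 34.148, -118.171, 1.2e9, "2004-07-01", "2599-12-31"),
        ]
        con.executemany("INSERT INTO channel_epoch VALUES (?,?,?,?,?,?,?,?)", rows)

        def metadata_at(net, sta, chan, when):
            """Return the epoch whose [ondate, offdate] interval contains `when`."""
            return con.execute("""SELECT gain, ondate, offdate FROM channel_epoch
                                  WHERE net=? AND sta=? AND chan=? AND ondate<=? AND offdate>=?""",
                               (net, sta, chan, when, when)).fetchone()

        print(metadata_at("CI", "PAS", "BHZ", "2005-03-15"))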

  1. Designing a Multi-Petabyte Database for LSST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becla, Jacek; Hanushevsky, Andrew; Nikolaev, Sergei

    2007-01-10

    The 3.2 giga-pixel LSST camera will produce approximately half a petabyte of archive images every month. These data need to be reduced in under a minute to produce real-time transient alerts, and then added to the cumulative catalog for further analysis. The catalog is expected to grow about three hundred terabytes per year. The data volume, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require innovative techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they perform at these data rates, data volumes, and access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, results to date from evaluating available database technologies against LSST requirements, and the proposed database architecture to meet the data challenges.

  2. The IAGOS Information System

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Thouret, Valérie; Brissebrat, Guillaume

    2017-04-01

    IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims at the provision of long-term, regular and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft and measure aerosols, cloud particles, greenhouse gases, ozone, water vapor and nitrogen oxides from the surface to the lower stratosphere. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data. The IAGOS Data Portal (http://www.iagos.org; contact damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). In 2016 the new IAGOS Data Portal was released. In addition to data download, the portal provides improved and new services, such as downloads in NetCDF or NASA Ames formats and plotting tools (maps, time series, vertical profiles, etc.). New added-value products are or will soon be available through the portal: back trajectories, origin of air masses, co-location with satellite data, etc. Web services allow users to download IAGOS metadata such as flight and airport information. Administration tools have been implemented for user management and instrument monitoring. A major improvement is the interoperability with international portals and other databases in order to improve IAGOS data discovery. In the frame of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been set up. It is composed of three data centers: the IAGOS database in Toulouse, the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de) and the CAMS (Copernicus Atmosphere Monitoring Service) data center in Jülich (http://join.iek.fz-juelich.de). The link with the CAMS data center, through the JOIN interface, allows model outputs to be combined with IAGOS data for inter-comparison. The CAMS project is a prominent user of the IGAS data network. During the year IAGOS will improve metadata standardization and dissemination through collaborations with the AERIS data center, GAW (for which IAGOS is a contributing network) and the ENVRI+ European project. Metadata about measurement traceability and quality will be made available, DOIs will be implemented and interoperability with other European infrastructures will be set up through standardized web services.

  3. Metadata for selecting or submitting generic seismic vulnerability functions via GEM's vulnerability database

    USGS Publications Warehouse

    Jaiswal, Kishor

    2013-01-01

    This memo lays out a procedure for the GEM software to offer an available vulnerability function for any acceptable set of attributes that the user specifies for a particular building category. The memo also provides general guidelines on how to submit the vulnerability or fragility functions to the GEM vulnerability repository, stipulating which attributes modelers must provide so that their vulnerability or fragility functions can be queried appropriately by the vulnerability database. An important objective is to provide users guidance on limitations and applicability by providing the associated modeling assumptions and applicability of each vulnerability or fragility function.
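
    The selection logic described in the memo, matching the attributes a user specifies for a building category against the attributes each available function was derived for, can be sketched generically. The attribute names, catalogue entries and the simple matching score below are illustrative and are not part of the GEM procedure.

        # Hypothetical catalogue of vulnerability functions, each tagged with the
        # building attributes it was derived for.
        catalogue = [
            {"id": "VF-01", "lateral_system": "RC_frame", "height": "low", "code_level": "moderate"},
            {"id": "VF-02", "lateral_system": "RC_frame", "height": "mid", "code_level": "high"},
            {"id": "VF-03", "lateral_system": "URM_wall", "height": "low", "code_level": "pre"},
        ]

        def best_match(requested):
            """Pick the function matching the most requested attributes (simple scoring)."""
            def score(entry):
                return sum(entry.get(k) == v for k, v in requested.items())
            best = max(catalogue, key=score)
            return best, score(best)

        query = {"lateral_system": "RC_frame", "height": "mid"}
        print(best_match(query))   # -> the entry for VF-02, with 2 matching attributes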

  4. NPL-PAD (National Priorities List Publication Assistance Database) for Region 7

    EPA Pesticide Factsheets

    THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the National Priorities List (NPL) Publication Assistance Database (PAD), a Lotus Notes application that holds Region 7's universe of NPL site information such as site description, threats and contaminants, cleanup approach, environmental process, community involvement, site repository, and regional contacts. This database used to be updated annually, at different times for different NPLs, but it is currently no longer being used. This work fell under objectives for EPA's 2003-2008 Strategic Plan (Goal 3) for Land Preservation & Restoration, which are to clean up and reuse contaminated land.

  5. Redesigning the DOE Data Explorer to embed dataset relationships at the point of search and to reflect landing page organization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Studwell, Sara; Robinson, Carly; Elliott, Jannean

    Scientific research is producing ever-increasing amounts of data. Organizing and reflecting relationships across data collections, datasets, publications, and other research objects are essential functionalities of the modern science environment, yet challenging to implement. Landing pages are often used for providing ‘big picture’ contextual frameworks for datasets and data collections, and many large-volume data holders are utilizing them in thoughtful, creative ways. The benefits of their organizational efforts, however, are not realized unless the user eventually sees the landing page at the end point of their search. What if that organization and ‘big picture’ context could benefit the user at the beginning of the search? That is a challenging approach, but The Department of Energy’s (DOE) Office of Scientific and Technical Information (OSTI) is redesigning the database functionality of the DOE Data Explorer (DDE) with that goal in mind. Phase I is focused on redesigning the DDE database to leverage relationships between two existing distinct populations in DDE, data Projects and individual Datasets, and then adding a third intermediate population, data Collections. Mapped, structured linkages, designed to show user relationships, will allow users to make informed search choices. These linkages will be sustainable and scalable, created automatically with the use of new metadata fields and existing authorities. Phase II will study selected DOE Data ID Service clients, analyzing how their landing pages are organized, and how that organization might be used to improve DDE search capabilities. At the heart of both phases is the realization that adding more metadata information for cross-referencing may require additional effort for data scientists. Finally, OSTI’s approach seeks to leverage existing metadata and landing page intelligence without imposing an additional burden on the data creators.

  6. A public database of macromolecular diffraction experiments.

    PubMed

    Grabowski, Marek; Langner, Karol M; Cymborowski, Marcin; Porebski, Przemyslaw J; Sroka, Piotr; Zheng, Heping; Cooper, David R; Zimmerman, Matthew D; Elsliger, Marc André; Burley, Stephen K; Minor, Wladek

    2016-11-01

    The low reproducibility of published experimental results in many scientific disciplines has recently garnered negative attention in scientific journals and the general media. Public transparency, including the availability of `raw' experimental data, will help to address growing concerns regarding scientific integrity. Macromolecular X-ray crystallography has led the way in requiring the public dissemination of atomic coordinates and a wealth of experimental data, making the field one of the most reproducible in the biological sciences. However, there remains no mandate for public disclosure of the original diffraction data. The Integrated Resource for Reproducibility in Macromolecular Crystallography (IRRMC) has been developed to archive raw data from diffraction experiments and, equally importantly, to provide related metadata. Currently, the database of our resource contains data from 2920 macromolecular diffraction experiments (5767 data sets), accounting for around 3% of all depositions in the Protein Data Bank (PDB), with their corresponding partially curated metadata. IRRMC utilizes distributed storage implemented using a federated architecture of many independent storage servers, which provides both scalability and sustainability. The resource, which is accessible via the web portal at http://www.proteindiffraction.org, can be searched using various criteria. All data are available for unrestricted access and download. The resource serves as a proof of concept and demonstrates the feasibility of archiving raw diffraction data and associated metadata from X-ray crystallographic studies of biological macromolecules. The goal is to expand this resource and include data sets that failed to yield X-ray structures in order to facilitate collaborative efforts that will improve protein structure-determination methods and to ensure the availability of `orphan' data left behind for various reasons by individual investigators and/or extinct structural genomics projects.

  7. Redesigning the DOE Data Explorer to embed dataset relationships at the point of search and to reflect landing page organization

    DOE PAGES

    Studwell, Sara; Robinson, Carly; Elliott, Jannean

    2017-04-04

    Scientific research is producing ever-increasing amounts of data. Organizing and reflecting relationships across data collections, datasets, publications, and other research objects are essential functionalities of the modern science environment, yet challenging to implement. Landing pages are often used for providing ‘big picture’ contextual frameworks for datasets and data collections, and many large-volume data holders are utilizing them in thoughtful, creative ways. The benefits of their organizational efforts, however, are not realized unless the user eventually sees the landing page at the end point of their search. What if that organization and ‘big picture’ context could benefit the user at the beginning of the search? That is a challenging approach, but The Department of Energy’s (DOE) Office of Scientific and Technical Information (OSTI) is redesigning the database functionality of the DOE Data Explorer (DDE) with that goal in mind. Phase I is focused on redesigning the DDE database to leverage relationships between two existing distinct populations in DDE, data Projects and individual Datasets, and then adding a third intermediate population, data Collections. Mapped, structured linkages, designed to show user relationships, will allow users to make informed search choices. These linkages will be sustainable and scalable, created automatically with the use of new metadata fields and existing authorities. Phase II will study selected DOE Data ID Service clients, analyzing how their landing pages are organized, and how that organization might be used to improve DDE search capabilities. At the heart of both phases is the realization that adding more metadata information for cross-referencing may require additional effort for data scientists. Finally, OSTI’s approach seeks to leverage existing metadata and landing page intelligence without imposing an additional burden on the data creators.

  8. Event selection services in ATLAS

    NASA Astrophysics Data System (ADS)

    Cranshaw, J.; Cuhadar-Donszelmann, T.; Gallas, E.; Hrivnac, J.; Kenyon, M.; McGlone, H.; Malon, D.; Mambelli, M.; Nowak, M.; Viegas, F.; Vinek, E.; Zhang, Q.

    2010-04-01

    ATLAS has developed and deployed event-level selection services based upon event metadata records ("TAGS") and supporting file and database technology. These services allow physicists to extract events that satisfy their selection predicates from any stage of data processing and use them as input to later analyses. One component of these services is a web-based Event-Level Selection Service Interface (ELSSI). ELSSI supports event selection by integrating run-level metadata, luminosity-block-level metadata (e.g., detector status and quality information), and event-by-event information (e.g., triggers passed and physics content). The list of events that survive after some selection criterion is returned in a form that can be used directly as input to local or distributed analysis; indeed, it is possible to submit a skimming job directly from the ELSSI interface using grid proxy credential delegation. ELSSI allows physicists to explore ATLAS event metadata as a means to understand, qualitatively and quantitatively, the distributional characteristics of ATLAS data. In fact, the ELSSI service provides an easy interface to see the highest missing ET events or the events with the most leptons, to count how many events passed a given set of triggers, or to find events that failed a given trigger but nonetheless look relevant to an analysis based upon the results of offline reconstruction, and more. This work provides an overview of ATLAS event-level selection services, with an emphasis upon the interactive Event-Level Selection Service Interface.
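
    Event-level selection over TAG-style records amounts to filtering rows of per-event metadata with a predicate. The sketch below is generic, not ATLAS code, and uses invented record fields (run/event numbers, trigger decisions, missing ET, lepton count) to illustrate the kind of query such a service evaluates.

        # Hypothetical per-event metadata records ("TAGs"): run/event numbers, trigger
        # decisions and a few summary physics quantities.
        tags = [
            {"run": 215456, "event": 1001, "triggers": {"EF_e24vhi_medium1"}, "missing_et": 142.0, "n_leptons": 2},
            {"run": 215456, "event": 1002, "triggers": {"EF_mu24i_tight"},    "missing_et":  18.5, "n_leptons": 1},
            {"run": 215464, "event": 2003, "triggers": {"EF_e24vhi_medium1"}, "missing_et":  75.2, "n_leptons": 3},
        ]

        def select(tags, trigger=None, min_missing_et=0.0):
            """Return (run, event) pairs satisfying the selection predicate."""
            return [(t["run"], t["event"]) for t in tags
                    if (trigger is None or trigger in t["triggers"])
                    and t["missing_et"] >= min_missing_et]

        # Example: events that passed a given electron trigger with missing ET above 50 GeV.
        print(select(tags, trigger="EF_e24vhi_medium1", min_missing_et=50.0))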

  9. New Version of SeismicHandler (SHX) based on ObsPy

    NASA Astrophysics Data System (ADS)

    Stammler, Klaus; Walther, Marcus

    2016-04-01

    The command line version of SeismicHandler (SH), a scientific analysis tool for seismic waveform data developed around 1990, has been redesigned in recent years, based on a project funded by the Deutsche Forschungsgemeinschaft (DFG). The aim was to address new data access techniques, simplified metadata handling and a modularized software design. As a result the program was rewritten in Python in its main parts, taking advantage of the simplicity of this scripting language and its variety of well developed software libraries, including ObsPy. SHX provides easy access to waveforms and metadata via arclink and FDSN webservice protocols; access to event catalogs is also implemented. With single commands, whole networks or stations within a certain area may be read in; the metadata are retrieved from the servers and stored in a local database. For data processing the large set of SH commands is available, as well as the SH scripting language. The command set of SHX is easily extendable via SH language scripts or additional Python modules. The program is open source and tested on Linux operating systems; documentation and downloads are available at https://www.seismic-handler.org/.
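
    SHX builds its data access on ObsPy; a minimal ObsPy example of the same style of FDSN access (independent of SHX itself, requiring obspy to be installed and network access to the chosen FDSN service) might look like the following. The station and time window are arbitrary.

        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        # Fetch one hour of broadband data plus station metadata from an FDSN web service.
        client = Client("IRIS")
        t0 = UTCDateTime("2015-01-01T00:00:00")
        stream = client.get_waveforms(network="IU", station="ANMO", location="00",
                                      channel="BHZ", starttime=t0, endtime=t0 + 3600)
        inventory = client.get_stations(network="IU", station="ANMO",
                                        starttime=t0, endtime=t0 + 3600, level="channel")
        print(stream)
        print(inventory)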

  10. EMERALD: Coping with the Explosion of Seismic Data

    NASA Astrophysics Data System (ADS)

    West, J. D.; Fouch, M. J.; Arrowsmith, R.

    2009-12-01

    The geosciences are currently generating an unparalleled quantity of new public broadband seismic data with the establishment of large-scale seismic arrays such as the EarthScope USArray, which are enabling new and transformative scientific discoveries of the structure and dynamics of the Earth’s interior. Much of this explosion of data is a direct result of the formation of the IRIS consortium, which has enabled an unparalleled level of open exchange of seismic instrumentation, data, and methods. The production of these massive volumes of data has generated new and serious data management challenges for the seismological community. A significant challenge is the maintenance and updating of seismic metadata, which includes information such as station location, sensor orientation, instrument response, and clock timing data. This key information changes at unknown intervals, and the changes are not generally communicated to data users who have already downloaded and processed data. Another basic challenge is the ability to handle massive seismic datasets when waveform file volumes exceed the fundamental limitations of a computer’s operating system. A third, long-standing challenge is the difficulty of exchanging seismic processing codes between researchers; each scientist typically develops his or her own unique directory structure and file naming convention, requiring that codes developed by another researcher be rewritten before they can be used. To address these challenges, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The overarching goal of the EMERALD project is to enable more efficient and effective use of seismic datasets ranging from just a few hundred to millions of waveforms with a complete database-driven system, leading to higher quality seismic datasets for scientific analysis and enabling faster, more efficient scientific research. We will present a preliminary (beta) version of EMERALD, an integrated, extensible, standalone database server system based on the open-source PostgreSQL database engine. The system is designed for fast and easy processing of seismic datasets, and provides the necessary tools to manage very large datasets and all associated metadata. EMERALD provides methods for efficient preprocessing of seismic records; large record sets can be easily and quickly searched, reviewed, revised, reprocessed, and exported. EMERALD can retrieve and store station metadata and alert the user to metadata changes. The system provides many methods for visualizing data, analyzing dataset statistics, and tracking the processing history of individual datasets. EMERALD allows development and sharing of visualization and processing methods using any of 12 programming languages. EMERALD is designed to integrate existing software tools; the system provides wrapper functionality for existing widely-used programs such as GMT, SOD, and TauP. Users can interact with EMERALD via a web browser interface, or they can directly access their data from a variety of database-enabled external tools. Data can be imported and exported from the system in a variety of file formats, or can be directly requested and downloaded from the IRIS DMC from within EMERALD.

  11. GEMINI: a computationally-efficient search engine for large gene expression datasets.

    PubMed

    DeFreitas, Timothy; Saddiki, Hachem; Flaherty, Patrick

    2016-02-24

    Low-cost DNA sequencing allows organizations to accumulate massive amounts of genomic data and use that data to answer a diverse range of research questions. Presently, users must search for relevant genomic data using a keyword, accession number or meta-data tag. However, in this search paradigm the form of the query - a text-based string - is mismatched with the form of the target - a genomic profile. To improve access to massive genomic data resources, we have developed a fast search engine, GEMINI, that uses a genomic profile as a query to search for similar genomic profiles. GEMINI implements a nearest-neighbor search algorithm using a vantage-point tree to store a database of n profiles and in certain circumstances achieves an O(log n) expected query time in the limit. We tested GEMINI on breast and ovarian cancer gene expression data from The Cancer Genome Atlas project and show that it achieves a query time that scales as the logarithm of the number of records in practice on genomic data. In a database with 10^5 samples, GEMINI identifies the nearest neighbor in 0.05 sec compared to a brute force search time of 0.6 sec. GEMINI is a fast search engine that uses a query genomic profile to search for similar profiles in a very large genomic database. It enables users to identify similar profiles independent of sample label, data origin or other meta-data information.
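
    A vantage-point tree supports exactly this kind of nearest-neighbour query. The compact implementation below, over Euclidean distance, is a generic sketch of the data structure rather than the GEMINI code; the random 5-dimensional points stand in for expression profiles.

        import math, random

        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        class VPNode:
            def __init__(self, point, radius, inside, outside):
                self.point, self.radius, self.inside, self.outside = point, radius, inside, outside

        def build(points):
            """Partition points by distance to the first point, used as the vantage point."""
            if not points:
                return None
            vp, rest = points[0], points[1:]
            if not rest:
                return VPNode(vp, 0.0, None, None)
            dists = [dist(vp, p) for p in rest]
            radius = sorted(dists)[len(dists) // 2]          # median distance splits the set
            inside = [p for p, d in zip(rest, dists) if d <= radius]
            outside = [p for p, d in zip(rest, dists) if d > radius]
            return VPNode(vp, radius, build(inside), build(outside))

        def nearest(node, query, best=None):
            """Branch-and-bound search: prune subtrees that cannot contain a closer point."""
            if node is None:
                return best
            d = dist(query, node.point)
            if best is None or d < best[0]:
                best = (d, node.point)
            near, far = (node.inside, node.outside) if d <= node.radius else (node.outside, node.inside)
            best = nearest(near, query, best)
            if abs(d - node.radius) < best[0]:               # the far side may still hold a closer point
                best = nearest(far, query, best)
            return best

        random.seed(0)
        profiles = [tuple(random.random() for _ in range(5)) for _ in range(1000)]
        tree = build(profiles)
        print(nearest(tree, profiles[42]))                   # distance 0.0 to itself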

  12. Modeling of Habitat and Foraging Behavior of Beaked Whales in the Southern California Bight

    DTIC Science & Technology

    2014-09-30

    preference. APPROACH: High-Frequency Acoustic Recording Packages (HARPs, Wiggins & Hildebrand 2007) have collected acoustic data at 17 sites...signal processing for HARP data is performed using the MATLAB (Mathworks, Natick, MA) based custom program Triton (Wiggins & Hildebrand 2007) and... HARP data are stored with the remainder of metadata (e.g. project name, instrument location, detection settings, detection effort) in the database

  13. The Geochemical Databases GEOROC and GeoReM - What's New?

    NASA Astrophysics Data System (ADS)

    Sarbas, B.; Jochum, K. P.; Nohl, U.; Weis, U.

    2017-12-01

    The geochemical databases GEOROC (http://georoc.mpch-mainz.gwdg.de) and GeoReM (http://georem.mpch-mainz.gwdg.de) are maintained by the Max Planck Institute for Chemistry in Mainz, Germany. Both online databases have become crucial tools for geoscientists from different research areas. They are regularly upgraded with new tools and new data from recent publications in a wide range of international journals. GEOROC is a collection of published analyses of volcanic rocks and mantle xenoliths. Recently, data for plutonic rocks have also been added. The analyses include major and trace element concentrations, radiogenic and non-radiogenic isotope ratios as well as analytical ages for whole rocks, glasses, minerals and inclusions. Samples come from eleven geological settings and span the whole geological age scale from Archean to Recent. Metadata include, among others, geographic location, rock class and rock type, geological age, degree of alteration, analytical method, laboratory, and reference. The GEOROC web page allows selection of samples by geological setting, geography, chemical criteria, rock or sample name, and bibliographic criteria. In addition, it provides a large number of precompiled files for individual locations, minerals and rock classes. GeoReM is a database collecting information about reference materials of geological and environmental interest, such as rock powders, synthetic and natural glasses as well as mineral, isotopic, biological, river water and seawater reference materials. It contains published data and compilation values (major and trace element concentrations and mass fractions, radiogenic and stable isotope ratios). Metadata comprise, among others, uncertainty, analytical method and laboratory. Reference materials are important for calibration, method validation, quality control and to establish metrological traceability. GeoReM offers six different search strategies: samples or materials (published values), samples (GeoReM preferred values), chemical criteria, chemical criteria based on bibliography, bibliography, as well as methods and institutions.

  14. EML, VEGA, ODM, LTER, GLEON - considerations and technologies for building a buoy information system at an LTER site

    NASA Astrophysics Data System (ADS)

    Gries, C.; Winslow, L.; Shin, P.; Hanson, P. C.; Barseghian, D.

    2010-12-01

    At the North Temperate Lakes Long Term Ecological Research (NTL LTER) site six buoys and one met station are maintained, each equipped with up to 20 sensors producing up to 45 separate data streams at a 1 or 10 minute frequency. Traditionally, this data volume has been managed in many matrix type tables, each described in the Ecological Metadata Language (EML) and accessed online by a query system based on the provided metadata. To develop a more flexible information system, several technologies are currently being experimented with. We will review, compare and evaluate these technologies and discuss constraints and advantages of network memberships and implementation of standards. A Data Turbine server is employed to stream data from data logger files into a database with the Real-time Data Viewer being used for monitoring sensor health. The Kepler work flow processor is being explored to introduce quality control routines into this data stream taking advantage of the Data Turbine actor. Kepler could replace traditional database triggers while adding visualization and advanced data access functionality for downstream modeling or other analytical applications. The data are currently streamed into the traditional matrix type tables and into an Observation Data Model (ODM) following the CUAHSI ODM 1.1 specifications. In parallel these sensor data are managed within the Global Lake Ecological Observatory Network (GLEON) where the software package Ziggy streams the data into a database of the VEGA data model. Contributing data to a network implies compliance with established standards for data delivery and data documentation. ODM or VEGA type data models are not easily described in EML, the metadata exchange standard for LTER sites, but are providing many advantages from an archival standpoint. Both GLEON and CUAHSI have developed advanced data access capabilities based on their respective data models and data exchange standards while LTER is currently in a phase of intense technology developments which will eventually provide standardized data access that includes ecological data set types currently not covered by either ODM or VEGA.

  15. The Index to Marine and Lacustrine Geological Samples (IMLGS): Linking Digital Data to Physical Samples for the Marine Community

    NASA Astrophysics Data System (ADS)

    Stroker, K. J.; Jencks, J. H.; Eakins, B.

    2016-12-01

    The Index to Marine and Lacustrine Geological Samples (IMLGS) is a community designed and maintained resource enabling researchers to locate and request seafloor and lakebed geologic samples curated by partner institutions. The Index was conceived in the dawn of the digital age by representatives from U.S. academic and government marine core repositories and the NOAA National Geophysical Data Center, now the National Centers for Environmental Information (NCEI), at a 1977 meeting convened by the National Science Foundation (NSF). The Index is based on core concepts of community oversight, common vocabularies, consistent metadata and a shared interface. The Curators Consortium, international in scope, meets biennially to share ideas and discuss best practices. NCEI serves the group by providing database access and maintenance, a list server, digitizing support and long-term archival of sample metadata, data and imagery. Over three decades, participating curators have performed the laborious task of creating and contributing metadata for over 205,000 sea floor and lake-bed cores, grabs, and dredges archived in their collections. Some partners use the Index for primary web access to their collections while others use it to increase exposure of more in-depth institutional systems. The IMLGS has a persistent URL/Digital Object Identifier (DOI), as well as DOIs assigned to partner collections for citation and to provide a persistent link to curator collections. The Index is currently a geospatially-enabled relational database, publicly accessible via Web Feature and Web Map Services, and text- and ArcGIS map-based web interfaces. To provide as much knowledge as possible about each sample, the Index includes curatorial contact information and links to related data, information and images : 1) at participating institutions, 2) in the NCEI archive, and 3) through a Linked Data interface maintained by the Rolling Deck to Repository R2R. Over 43,000 International GeoSample Numbers (IGSNs) linking to the System for Earth Sample Registration (SESAR) are included in anticipation of opportunities for interconnectivity with Integrated Earth Data Applications (IEDA) systems. The paper will discuss the database with a goal to increase the connections and links to related data at partner institutions.

  16. Sagace: A web-based search engine for biomedical databases in Japan

    PubMed Central

    2012-01-01

    Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large for researchers to grasp the features and contents of each one. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816
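
    Faceted navigation of the kind described above amounts to counting metadata values and filtering on the one the user selects. A tiny illustration follows; the records and facet names are invented, not Sagace's actual schema.

```python
# Toy faceted refinement over database-entry metadata.
from collections import Counter

entries = [
    {"db": "Expression profile DB", "organism": "mouse", "data_type": "expression"},
    {"db": "Proteome DB",           "organism": "human", "data_type": "proteomics"},
    {"db": "Disease model bank",    "organism": "mouse", "data_type": "bioresource"},
]

def facet_counts(records, facet):
    """Count how many entries fall under each value of a metadata facet."""
    return Counter(r[facet] for r in records)

print(facet_counts(entries, "organism"))                    # Counter({'mouse': 2, 'human': 1})
refined = [r for r in entries if r["organism"] == "mouse"]  # user clicks the 'mouse' facet
print([r["db"] for r in refined])
```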

  17. Preliminary surficial geologic map of the Newberry Springs 30' x 60' quadrangle, California

    USGS Publications Warehouse

    Phelps, G.A.; Bedford, D.R.; Lidke, D.J.; Miller, D.M.; Schmidt, K.M.

    2012-01-01

    The Newberry Springs 30' x 60' quadrangle is located in the central Mojave Desert of southern California. It is split approximately into northern and southern halves by I-40, with the city of Barstow at its western edge and the town of Ludlow near its eastern edge. The map area spans lat 34°30' to 35° N. and long 116° to 117° W. and covers over 1,000 km². We integrate the results of surficial geologic mapping conducted during 2002-2005 with compilations of previous surficial mapping and bedrock geologic mapping. Quaternary units are subdivided in detail on the map to distinguish variations in age, process of formation, pedogenesis, lithology, and spatial interdependency, whereas pre-Quaternary bedrock units are grouped into generalized assemblages that emphasize their attributes as hillslope-forming materials and sources of parent material for the Quaternary units. The spatial information in this publication is presented in two forms: a spatial database and a geologic map. The geologic map is a view (the display of an extracted subset of the database at a given time) of the spatial database; it highlights key aspects of the database and necessarily does not show all of the data contained therein. The database contains detailed information about Quaternary geologic unit composition, authorship, and notes regarding geologic units, faults, contacts, and local vegetation. The amount of information contained in the database is too large to show on a single map, so a restricted subset of the information was chosen to summarize the overall nature of the geology. Refer to the database for additional information. Accompanying the spatial data are the map documentation and spatial metadata. The map documentation (this document) describes the geologic setting and history of the Newberry Springs map sheet, summarizes the age and physical character of each map unit, and describes principal faults and folds. The Federal Geographic Data Committee (FGDC)-compliant metadata provides detailed information about the digital files and file structure of the spatial data.

  18. Metadata mapping and reuse in caBIG.

    PubMed

    Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis

    2009-02-05

    This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG framework or other frameworks that use metadata repositories. The Dice (di-grams) and Dynamic algorithms are compared, and both have similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding the matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG framework and potentially any framework that uses a metadata repository. This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG. This effort contributes to facilitating the development of interoperable systems within caBIG as well as other metadata frameworks. Such efforts are critical to address the need to develop systems to handle enormous amounts of diverse data that can be leveraged from new biomedical methodologies.
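
    The bigram (Dice) matching referred to above can be sketched in a few lines: names are decomposed into character bigrams and scored with the Dice coefficient 2|X∩Y|/(|X|+|Y|). The example attribute and CDE names below are illustrative, not drawn from the caBIG metadata repository.

```python
# Minimal bigram/Dice matching of UML class-attribute names to CDE long names.
def bigrams(text: str) -> set:
    t = text.lower().replace("_", " ")
    return {t[i:i + 2] for i in range(len(t) - 1)}

def dice(a: str, b: str) -> float:
    x, y = bigrams(a), bigrams(b)
    return 2 * len(x & y) / (len(x) + len(y)) if x and y else 0.0

uml_attributes = ["Patient.birthDate", "Specimen.anatomicSite"]   # illustrative names
cdes = ["Patient Birth Date", "Specimen Anatomic Site Name", "Tumor Grade"]

for attr in uml_attributes:
    best = max(cdes, key=lambda c: dice(attr, c))
    print(f"{attr} -> {best} (score {dice(attr, best):.2f})")
```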

  19. The Chandra Source Catalog: Storage and Interfaces

    NASA Astrophysics Data System (ADS)

    van Stone, David; Harbo, Peter N.; Tibbetts, Michael S.; Zografou, Panagoula; Evans, Ian N.; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is part of the Chandra Data Archive (CDA) at the Chandra X-ray Center. The catalog contains source properties and associated data objects such as images, spectra, and lightcurves. The source properties are stored in relational databases and the data objects are stored in files with their metadata stored in databases. The CDA supports different versions of the catalog: multiple fixed release versions and a live database version. There are several interfaces to the catalog: CSCview, a graphical interface for building and submitting queries and for retrieving data objects; a command-line interface for property and source searches using ADQL; and VO-compliant services discoverable through the VO registry. This poster describes the structure of the catalog and provides an overview of the interfaces.
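
    ADQL queries of the kind mentioned above are typically submitted to a VO Table Access Protocol (TAP) service. The sketch below shows the general pattern; the service URL, table name, and column names are placeholders rather than the actual CSC schema.

```python
# Sketch of an ADQL cone search submitted to a TAP synchronous endpoint.
# The endpoint, table, and columns are placeholders for illustration only.
from urllib.parse import urlencode
from urllib.request import urlopen

TAP_SYNC = "https://example.cfa.harvard.edu/tap/sync"  # placeholder endpoint
adql = """
    SELECT name, ra, dec
    FROM csc.master_source
    WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                       CIRCLE('ICRS', 246.8, -24.5, 0.1))
"""
query = urlencode({"REQUEST": "doQuery", "LANG": "ADQL",
                   "FORMAT": "votable", "QUERY": adql})
with urlopen(TAP_SYNC, data=query.encode("utf-8")) as resp:
    print(resp.read()[:400])  # beginning of the VOTable response
```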

  20. WebEAV: automatic metadata-driven generation of web interfaces to entity-attribute-value databases.

    PubMed

    Nadkarni, P M; Brandt, C M; Marenco, L

    2000-01-01

    The task of creating and maintaining a front end to a large institutional entity-attribute-value (EAV) database can be cumbersome when using traditional client-server technology. Switching to Web technology as a delivery vehicle solves some of these problems but introduces others. In particular, Web development environments tend to be primitive, and many features that client-server developers take for granted are missing. WebEAV is a generic framework for Web development that is intended to streamline the process of Web application development for databases having a significant EAV component. It also addresses some challenging user interface issues that arise when any complex system is created. The authors describe the architecture of WebEAV and provide an overview of its features with suitable examples.
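
    As background to the entity-attribute-value design that WebEAV targets, the sketch below shows a generic EAV layout and a metadata-driven pivot back to a conventional one-row-per-entity view. It is not the WebEAV code itself; table and attribute names are invented.

```python
# Generic EAV table plus a pivot query generated from the attribute metadata.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE attribute_def (attr_id INTEGER PRIMARY KEY, name TEXT, datatype TEXT);
    CREATE TABLE eav (entity_id INTEGER, attr_id INTEGER, value TEXT);
    INSERT INTO attribute_def VALUES (1, 'systolic_bp', 'int'), (2, 'diagnosis', 'text');
    INSERT INTO eav VALUES (42, 1, '128'), (42, 2, 'hypertension');
""")

# Build one CASE column per attribute definition -- the metadata drives the query,
# which is the same idea that lets a front end be generated automatically.
attrs = db.execute("SELECT attr_id, name FROM attribute_def").fetchall()
cols = ", ".join(
    f"MAX(CASE WHEN attr_id = {a} THEN value END) AS {n}" for a, n in attrs
)
row = db.execute(f"SELECT entity_id, {cols} FROM eav GROUP BY entity_id").fetchone()
print(row)  # (42, '128', 'hypertension')
```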

  1. Specialist Bibliographic Databases

    PubMed Central

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  2. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  3. XCEDE: An Extensible Schema For Biomedical Data

    PubMed Central

    Gadde, Syam; Aucoin, Nicole; Grethe, Jeffrey S.; Keator, David B.; Marcus, Daniel S.; Pieper, Steve

    2013-01-01

    The XCEDE (XML-based Clinical and Experimental Data Exchange) XML schema, developed by members of the BIRN (Biomedical Informatics Research Network), provides an extensive metadata hierarchy for storing, describing and documenting the data generated by scientific studies. Currently at version 2.0, the XCEDE schema serves as a specification for the exchange of scientific data between databases, analysis tools, and web services. It provides a structured metadata hierarchy, storing information relevant to various aspects of an experiment (project, subject, protocol, etc.). Each hierarchy level also provides for the storage of data provenance information allowing for a traceable record of processing and/or changes to the underlying data. The schema is extensible to support the needs of various data modalities and to express types of data not originally envisioned by the developers. The latest version of the XCEDE schema and manual are available from http://www.xcede.org/ PMID:21479735

  4. Methane Yield Database: Online infrastructure and bioresource for methane yield data and related metadata.

    PubMed

    Murovec, Boštjan; Kolbl, Sabina; Stres, Blaž

    2015-01-01

    The aim of this study was to develop and validate a community supported online infrastructure and bioresource for methane yield data and accompanying metadata collected from published literature. In total, 1164 entries described by 15,749 data points were assembled. Analysis of the data collection showed little congruence in the reporting of methodological approaches. The largest identifiable source of variation in reported methane yields was authorship (i.e., substrate batches within a particular substrate class), within which experimental scale (volumes of 0.02-5 l), incubation temperature (34-40 °C) and % VS of substrate also played an important role (p < 0.05, n permutations = 999). The largest fraction of variability, however, remained unaccounted for and thus unexplained (> 63%). This calls for a reconsideration of accepted approaches to reporting data in the published literature so as to better support industrial decision making. Copyright © 2015 Elsevier Ltd. All rights reserved.
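
    A permutation test with 999 permutations, as reported above, can be illustrated with a toy between-group variance statistic; the yield values and group labels below are invented and do not come from the Methane Yield Database.

```python
# Toy permutation test for a grouping effect (e.g. substrate batch) on methane yield.
import random

yields = [310, 295, 320, 410, 430, 405, 250, 260, 240]   # hypothetical ml CH4 / g VS
groups = ["A", "A", "A", "B", "B", "B", "C", "C", "C"]   # hypothetical batches

def between_group_ss(values, labels):
    """Sum of squares between group means and the grand mean."""
    grand = sum(values) / len(values)
    ss = 0.0
    for g in set(labels):
        sub = [v for v, l in zip(values, labels) if l == g]
        ss += len(sub) * (sum(sub) / len(sub) - grand) ** 2
    return ss

observed = between_group_ss(yields, groups)
n_perm, hits = 999, 0
for _ in range(n_perm):
    shuffled = random.sample(groups, k=len(groups))      # relabel at random
    if between_group_ss(yields, shuffled) >= observed:
        hits += 1
p_value = (hits + 1) / (n_perm + 1)
print(f"permutation p = {p_value:.3f}")
```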

  5. The Digital Sample: Metadata, Unique Identification, and Links to Data and Publications

    NASA Astrophysics Data System (ADS)

    Lehnert, K. A.; Vinayagamoorthy, S.; Djapic, B.; Klump, J.

    2006-12-01

    A significant part of digital data in the Geosciences refers to physical samples of Earth materials, from igneous rocks to sediment cores to water or gas samples. The application and long-term utility of these sample-based data in research is critically dependent on (a) the availability of information (metadata) about the samples such as geographical location and time of sampling, or sampling method, (b) links between the different data types available for individual samples that are dispersed in the literature and in digital data repositories, and (c) access to the samples themselves. Major problems for achieving this include incomplete documentation of samples in publications, use of ambiguous sample names, and the lack of a central catalog that allows users to find a sample's archiving location. The International Geo Sample Number IGSN, managed by the System for Earth Sample Registration SESAR, provides solutions for these problems. The IGSN is a unique persistent identifier for samples and other GeoObjects that can be obtained by submitting sample metadata to SESAR (www.geosamples.org). If data in a publication is referenced to an IGSN (rather than an ambiguous sample name), sample metadata can readily be extracted from the SESAR database, which evolves into a Global Sample Catalog that also allows users to locate the owner or curator of the sample. Use of the IGSN in digital data systems allows linkages to be built between distributed data. SESAR is contributing to the development of sample metadata standards. SESAR will integrate the IGSN in persistent, resolvable identifiers based on the handle.net service to advance direct linkages between the digital representation of samples in SESAR (sample profiles) and their related data in the literature and in web-accessible digital data repositories. Technologies outlined by Klump et al. (this session) such as the automatic creation of ontologies by text mining applications will be explored for harvesting identifiers of publications and datasets that contain information about a specific sample in order to establish comprehensive data profiles for samples.
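
    The linking idea above, replacing ambiguous local sample names with IGSNs so that measurements can be tied to resolvable sample profiles, can be sketched as follows. The IGSN values and the resolver URL pattern are illustrative placeholders, not registered identifiers.

```python
# Sketch: re-key local measurements by IGSN and attach a resolvable sample link.
local_registry = {            # local sample name -> registered IGSN (hypothetical)
    "DR-12A": "IEXYZ0001",
    "CORE-7": "IEXYZ0002",
}

measurements = [
    {"sample": "DR-12A", "SiO2_wt_pct": 49.6},
    {"sample": "CORE-7", "SiO2_wt_pct": 51.2},
]

for m in measurements:
    igsn = local_registry[m["sample"]]
    m["igsn"] = igsn
    m["sample_profile"] = f"https://example.org/igsn/{igsn}"  # placeholder resolver URL
    print(m)
```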

  6. Report on the Global Data Assembly Center (GDAC) to the 12th GHRSST Science Team Meeting

    NASA Technical Reports Server (NTRS)

    Armstrong, Edward M.; Bingham, Andrew; Vazquez, Jorge; Thompson, Charles; Huang, Thomas; Finch, Chris

    2011-01-01

    In 2010/2011 the Global Data Assembly Center (GDAC) at NASA's Physical Oceanography Distributed Active Archive Center (PO.DAAC) continued its role as the primary clearinghouse and access node for operational Group for High Resolution Sea Surface Temperature (GHRSST) datastreams, as well as its collaborative role with the NOAA Long Term Stewardship and Reanalysis Facility (LTSRF) for archiving. Here we report on our data management activities and infrastructure improvements since the last science team meeting in June 2010. These include the implementation of all GHRSST datastreams in the new PO.DAAC Data Management and Archive System (DMAS) for more reliable and timely data access. GHRSST dataset metadata are now stored in a new database that has made the maintenance and quality improvement of metadata fields more straightforward. A content management system for a revised suite of PO.DAAC web pages allows dynamic access to a subset of these metadata fields for enhanced dataset description as well as discovery through a faceted search mechanism from the perspective of the user. From the discovery and metadata standpoint the GDAC has also implemented the NASA version of the OpenSearch protocol for searching for GHRSST granules and developed a web service to generate ISO 19115-2 compliant metadata records. Furthermore, the GDAC has continued to implement a new suite of tools and services for GHRSST datastreams including a Level 2 subsetter known as Dataminer, a revised POET Level 3/4 subsetter and visualization tool, a Google Earth interface to selected daily global Level 2 and Level 4 data, and experimented with a THREDDS catalog of GHRSST data collections. Finally, we will summarize the expanding user and data statistics, and other metrics that we have collected over the last year demonstrating the broad user community and applications that the GHRSST project continues to serve via the GDAC distribution mechanisms. This report also serves by extension to summarize the activities of the GHRSST Data Assembly and Systems Technical Advisory Group (DAS-TAG).

  7. Adhesives: Test Method, Group Assignment, and Categorization Guide for High-Loading-Rate Applications Preparation and Testing of Single Lap Joints (Ver. 2.2, Unlimited)

    DTIC Science & Technology

    2016-04-01

    Authors include Gerard Chaney and Charles Pergantis, Weapons and Materials Research Directorate, ARL, Coatings, Corrosion, and Engineered Polymers Branch (CCEPB). Subject terms: single lap joint, adhesive, sample preparation, testing, database, metadata, material pedigree, ISO. The guide covers a temperature/water immersion conditioning test for lap-joint test specimens using the test tube and convection oven method. (Remainder of the record is garbled in extraction.)

  8. IMG/VR: a database of cultured and uncultured DNA Viruses and retroviruses.

    PubMed

    Paez-Espino, David; Chen, I-Min A; Palaniappan, Krishna; Ratner, Anna; Chu, Ken; Szeto, Ernest; Pillay, Manoj; Huang, Jinghua; Markowitz, Victor M; Nielsen, Torben; Huntemann, Marcel; K Reddy, T B; Pavlopoulos, Georgios A; Sullivan, Matthew B; Campbell, Barbara J; Chen, Feng; McMahon, Katherine; Hallam, Steve J; Denef, Vincent; Cavicchioli, Ricardo; Caffrey, Sean M; Streit, Wolfgang R; Webster, John; Handley, Kim M; Salekdeh, Ghasem H; Tsesmetzis, Nicolas; Setubal, Joao C; Pope, Phillip B; Liu, Wen-Tso; Rivers, Adam R; Ivanova, Natalia N; Kyrpides, Nikos C

    2017-01-04

    Viruses represent the most abundant life forms on the planet. Recent experimental and computational improvements have led to a dramatic increase in the number of viral genome sequences identified primarily from metagenomic samples. As a result of the expanding catalog of metagenomic viral sequences, there exists a need for a comprehensive computational platform integrating all these sequences with associated metadata and analytical tools. Here we present IMG/VR (https://img.jgi.doe.gov/vr/), the largest publicly available database of 3908 isolate reference DNA viruses with 264 413 computationally identified viral contigs from >6000 ecologically diverse metagenomic samples. Approximately half of the viral contigs are grouped into genetically distinct quasi-species clusters. Microbial hosts are predicted for 20 000 viral sequences, revealing nine microbial phyla previously unreported to be infected by viruses. Viral sequences can be queried using a variety of associated metadata, including habitat type and geographic location of the samples, or taxonomic classification according to hallmark viral genes. IMG/VR has a user-friendly interface that allows users to interrogate all integrated data and interact by comparing with external sequences, thus serving as an essential resource in the viral genomics community. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  9. Metabolomics Workbench: An international repository for metabolomics data and metadata, metabolite standards, protocols, tutorials and training, and analysis tools

    PubMed Central

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Azam, Kenan; Vadivelu, Ilango; Burant, Charles; Edison, Arthur; Fiehn, Oliver; Higashi, Richard; Nair, K. Sreekumaran; Sumner, Susan; Subramaniam, Shankar

    2016-01-01

    The Metabolomics Workbench, available at www.metabolomicsworkbench.org, is a public repository for metabolomics metadata and experimental data spanning various species and experimental platforms, metabolite standards, metabolite structures, protocols, tutorials, training material, and other educational resources. It provides a computational platform to integrate, analyze, track, deposit and disseminate large volumes of heterogeneous data from a wide variety of metabolomics studies including mass spectrometry (MS) and nuclear magnetic resonance spectrometry (NMR) data spanning over 20 different species covering all the major taxonomic categories including humans and other mammals, plants, insects, invertebrates and microorganisms. Additionally, a number of protocols are provided for a range of metabolite classes, sample types, and both MS and NMR-based studies, along with a metabolite structure database. The metabolites characterized in the studies available on the Metabolomics Workbench are linked to chemical structures in the metabolite structure database to facilitate comparative analysis across studies. The Metabolomics Workbench, part of the data coordinating effort of the National Institutes of Health (NIH) Common Fund's Metabolomics Program, provides data from the Common Fund's Metabolomics Resource Cores, metabolite standards, and analysis tools to the wider metabolomics community and seeks data depositions from metabolomics researchers across the world. PMID:26467476

  10. NASA's Global Change Master Directory: Discover and Access Earth Science Data Sets, Related Data Services, and Climate Diagnostics

    NASA Astrophysics Data System (ADS)

    Aleman, A.; Olsen, L. M.; Ritz, S.; Stevens, T.; Morahan, M.; Grebas, S. K.

    2011-12-01

    NASA's Global Change Master Directory provides the scientific community with the ability to discover, access, and use Earth science data, data-related services, and climate diagnostics worldwide. The GCMD offers descriptions of Earth science data sets using the Directory Interchange Format (DIF) metadata standard; Earth science related data services are described using the Service Entry Resource Format (SERF); and climate visualizations are described using the Climate Diagnostic (CD) standard. The DIF, SERF and CD standards each capture data attributes used to determine whether a data set, service, or climate visualization is relevant to a user's needs. Metadata fields include: title, summary, science keywords, service keywords, data center, data set citation, personnel, instrument, platform, quality, related URL, temporal and spatial coverage, data resolution and distribution information. In addition, nine valuable sets of controlled vocabularies have been developed to assist users in normalizing the search for data descriptions. An update to the GCMD's search functionality is planned to further capitalize on the controlled vocabularies during database queries. By implementing a dynamic keyword "tree", users will have the ability to search for data sets by combining keywords in new ways. This will allow users to conduct more relevant and efficient database searches to support the free exchange and re-use of Earth science data.

  11. Lessons Learned From 104 Years of Mobile Observatories

    NASA Astrophysics Data System (ADS)

    Miller, S. P.; Clark, P. D.; Neiswender, C.; Raymond, L.; Rioux, M.; Norton, C.; Detrick, R.; Helly, J.; Sutton, D.; Weatherford, J.

    2007-12-01

    As the oceanographic community ventures into a new era of integrated observatories, it may be helpful to look back on the era of "mobile observatories" to see what Cyberinfrastructure lessons might be learned. For example, SIO has been operating research vessels for 104 years, supporting a wide range of disciplines: marine geology and geophysics, physical oceanography, geochemistry, biology, seismology, ecology, fisheries, and acoustics. In the last 6 years progress has been made with diverse data types, formats and media, resulting in a fully-searchable online SIOExplorer Digital Library of more than 800 cruises (http://SIOExplorer.ucsd.edu). Public access to SIOExplorer is considerable, with 795,351 files (206 GB) downloaded last year. During the last 3 years the efforts have been extended to WHOI, with a "Multi-Institution Testbed for Scalable Digital Archiving" funded by the Library of Congress and NSF (IIS 0455998). The project has created a prototype digital library of data from both institutions, including cruises, Alvin submersible dives, and ROVs. In the process, the team encountered technical and cultural issues that will be facing the observatory community in the near future. Technological Lessons Learned: Shipboard data from multiple institutions are extraordinarily diverse, and provide a good training ground for observatories. Data are gathered from a wide range of authorities, laboratories, servers and media, with little documentation. Conflicting versions exist, generated by alternative processes. Domain- and institution-specific issues were addressed during initial staging. Data files were categorized and metadata harvested with automated procedures. With our second-generation approach to staging, we achieve higher levels of automation with greater use of controlled vocabularies. Database and XML- based procedures deal with the diversity of raw metadata values and map them to agreed-upon standard values, in collaboration with the Marine Metadata Interoperability (MMI) community. All objects are tagged with an expert level, thus serving an educational audience, as well as research users. After staging, publication into the digital library is completely automated. The technical challenges have been largely overcome, thanks to a scalable, federated digital library architecture from the San Diego Supercomputer Center, implemented at SIO, WHOI and other sites. The metadata design is flexible, supporting modular blocks of metadata tailored to the needs of instruments, samples, documents, derived products, cruises or dives, as appropriate. Controlled metadata vocabularies, with content and definitions negotiated by all parties, are critical. Metadata may be mapped to required external standards and formats, as needed. Cultural Lessons Learned: The cultural challenges have been more formidable than expected. They became most apparent during attempts to categorize and stage digital data objects across two institutions, each with their own naming conventions and practices, generally undocumented, and evolving across decades. Whether the questions concerned data ownership, collection techniques, data diversity or institutional practices, the solution involved a joint discussion with scientists, data managers, technicians and archivists, working together. Because metadata discussions go on endlessly, significant benefit comes from dictionaries with definitions of all community-authorized metadata values.

  12. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
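
    The abstraction described above, metadata servers talking to a shared key-value store through a small software interface, can be sketched as follows. This is a rough illustration under stated assumptions, not the disclosed implementation; the dict-backed store stands in for a real low-latency persistent back end.

```python
# Sketch: multiple metadata servers sharing one key-value metadata store
# through an abstract storage interface.
from abc import ABC, abstractmethod

class KeyValueMetadataStore(ABC):
    @abstractmethod
    def put(self, key: str, value: dict) -> None: ...
    @abstractmethod
    def get(self, key: str) -> dict: ...

class InMemoryStore(KeyValueMetadataStore):
    """Placeholder back end; a real system would use a shared persistent store."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class MetadataServer:
    def __init__(self, store: KeyValueMetadataStore):
        self.store = store                      # shared by all metadata servers

    def create_inode(self, path: str, **attrs) -> None:
        self.store.put(f"inode:{path}", attrs)  # metadata written in key-value form

shared = InMemoryStore()
mds_a, mds_b = MetadataServer(shared), MetadataServer(shared)
mds_a.create_inode("/data/run1.h5", size=0, owner="alice")
print(mds_b.store.get("inode:/data/run1.h5"))   # visible to every MDS
```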

  13. A curated database of cyanobacterial strains relevant for modern taxonomy and phylogenetic studies.

    PubMed

    Ramos, Vitor; Morais, João; Vasconcelos, Vitor M

    2017-04-25

    The dataset herein described lays the groundwork for an online database of relevant cyanobacterial strains, named CyanoType (http://lege.ciimar.up.pt/cyanotype). It is a database that includes categorized cyanobacterial strains useful for taxonomic, phylogenetic or genomic purposes, with associated information obtained by means of a literature-based curation. The dataset lists 371 strains and represents the first version of the database (CyanoType v.1). Information for each strain includes strain synonymy and/or co-identity, strain categorization, habitat, accession numbers for molecular data, taxonomy and nomenclature notes according to three different classification schemes, hierarchical automatic classification, phylogenetic placement according to a selection of relevant studies (including this), and important bibliographic references. The database will be updated periodically, namely by adding new strains meeting the criteria for inclusion and by revising and adding up-to-date metadata for strains already listed. A global 16S rDNA-based phylogeny is provided in order to assist users when choosing the appropriate strains for their studies.

  14. A curated database of cyanobacterial strains relevant for modern taxonomy and phylogenetic studies

    PubMed Central

    Ramos, Vitor; Morais, João; Vasconcelos, Vitor M.

    2017-01-01

    The dataset herein described lays the groundwork for an online database of relevant cyanobacterial strains, named CyanoType (http://lege.ciimar.up.pt/cyanotype). It is a database that includes categorized cyanobacterial strains useful for taxonomic, phylogenetic or genomic purposes, with associated information obtained by means of a literature-based curation. The dataset lists 371 strains and represents the first version of the database (CyanoType v.1). Information for each strain includes strain synonymy and/or co-identity, strain categorization, habitat, accession numbers for molecular data, taxonomy and nomenclature notes according to three different classification schemes, hierarchical automatic classification, phylogenetic placement according to a selection of relevant studies (including this), and important bibliographic references. The database will be updated periodically, namely by adding new strains meeting the criteria for inclusion and by revising and adding up-to-date metadata for strains already listed. A global 16S rDNA-based phylogeny is provided in order to assist users when choosing the appropriate strains for their studies. PMID:28440791

  15. The BioImage Database Project: organizing multidimensional biological images in an object-relational database.

    PubMed

    Carazo, J M; Stelzer, E H

    1999-01-01

    The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object relational database that structures the metadata and ameliorates the provision of a complete service. The BioImage database can be accessed through the Internet. Copyright 1999 Academic Press.

  16. MOPED enables discoveries through consistently processed proteomics data

    PubMed Central

    Higdon, Roger; Stewart, Elizabeth; Stanberry, Larissa; Haynes, Winston; Choiniere, John; Montague, Elizabeth; Anderson, Nathaniel; Yandl, Gregory; Janko, Imre; Broomall, William; Fishilevich, Simon; Lancet, Doron; Kolker, Natali; Kolker, Eugene

    2014-01-01

    The Model Organism Protein Expression Database (MOPED, http://moped.proteinspire.org) is an expanding proteomics resource to enable biological and biomedical discoveries. MOPED aggregates simple, standardized and consistently processed summaries of protein expression and metadata from proteomics (mass spectrometry) experiments from human and model organisms (mouse, worm and yeast). The latest version of MOPED adds new estimates of protein abundance and concentration, as well as relative (differential) expression data. MOPED provides a new updated query interface that allows users to explore information by organism, tissue, localization, condition, experiment, or keyword. MOPED supports the Human Proteome Project’s efforts to generate chromosome- and disease-specific proteomes by providing links from proteins to chromosome and disease information, as well as many complementary resources. MOPED supports a new omics metadata checklist in order to harmonize data integration, analysis and use. MOPED’s development is driven by its user community, which spans 90 countries and guides future development that will transform MOPED into a multi-omics resource. MOPED encourages users to submit data in a simple format. They can use the metadata checklist to generate a data publication for this submission. As a result, MOPED will provide even greater insights into complex biological processes and systems and enable deeper and more comprehensive biological and biomedical discoveries. PMID:24350770

  17. MeRy-B: a web knowledgebase for the storage, visualization, analysis and annotation of plant NMR metabolomic profiles

    PubMed Central

    2011-01-01

    Background Improvements in the techniques for metabolomics analyses and growing interest in metabolomic approaches are resulting in the generation of increasing numbers of metabolomic profiles. Platforms are required for profile management, as a function of experimental design, and for metabolite identification, to facilitate the mining of the corresponding data. Various databases have been created, including organism-specific knowledgebases and analytical technique-specific spectral databases. However, there is currently no platform meeting the requirements for both profile management and metabolite identification for nuclear magnetic resonance (NMR) experiments. Description MeRy-B, the first platform for plant 1H-NMR metabolomic profiles, is designed (i) to provide a knowledgebase of curated plant profiles and metabolites obtained by NMR, together with the corresponding experimental and analytical metadata, (ii) for queries and visualization of the data, (iii) to discriminate between profiles with spectrum visualization tools and statistical analysis, (iv) to facilitate compound identification. It contains lists of plant metabolites and unknown compounds, with information about experimental conditions, the factors studied and metabolite concentrations for several plant species, compiled from more than one thousand annotated NMR profiles for various organs or tissues. Conclusion MeRy-B manages all the data generated by NMR-based plant metabolomics experiments, from description of the biological source to identification of the metabolites and determinations of their concentrations. It is the first database allowing the display and overlay of NMR metabolomic profiles selected through queries on data or metadata. MeRy-B is available from http://www.cbib.u-bordeaux2.fr/MERYB/index.php. PMID:21668943

  18. Similarity analysis of spectra obtained via reflectance spectrometry in legal medicine.

    PubMed

    Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael

    2014-02-01

    In the present study, a series of reflectance spectra of postmortem lividity, pallor, and putrefaction-affected skin was collected for 195 investigated cases during cooling of the corpse. The reflectance spectrometric measurements were stored together with their respective metadata in a MySQL database. The latter has been managed via a scientific information repository. We propose similarity measures and a criterion of similarity that capture similar spectra recorded at corpse skin. We systematically clustered reflectance spectra from the database as well as their metadata, such as case number, age, sex, skin temperature, duration of cooling, and postmortem time, with respect to the given criterion of similarity. Altogether, more than 500 reflectance spectra have been compared pairwise. The measures used to compare a pair of reflectance curve samples include the Euclidean distance between curves and the Euclidean distance between derivatives of the functions represented by the reflectance curves at the same wavelengths in the spectral range of visible light between 380 and 750 nm. For each case, using the recorded reflectance curves and the similarity criterion, the postmortem time interval during which a characteristic change in the shape of the reflectance spectrum takes place is estimated. The latter is carried out via a software package composed of Java, Python, and MatLab scripts that query the MySQL database. We show that in legal medicine, matching and clustering of reflectance curves obtained by means of reflectance spectrometry with respect to a given criterion of similarity can be used to estimate the postmortem interval.
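
    The two distance measures named above, Euclidean distance between reflectance curves sampled at the same wavelengths and between their first derivatives, can be sketched in a few lines; the spectra below are short made-up examples, not database records.

```python
# Sketch of the two similarity measures: distance between curves and between derivatives.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def derivative(curve, step_nm=1.0):
    """Finite-difference derivative of a curve sampled at a fixed wavelength step."""
    return [(curve[i + 1] - curve[i]) / step_nm for i in range(len(curve) - 1)]

spectrum_1 = [0.21, 0.22, 0.25, 0.30, 0.36]   # reflectance at successive wavelengths
spectrum_2 = [0.20, 0.23, 0.24, 0.31, 0.35]

d_curves = euclidean(spectrum_1, spectrum_2)
d_deriv = euclidean(derivative(spectrum_1), derivative(spectrum_2))
print(f"distance between curves: {d_curves:.4f}, between derivatives: {d_deriv:.4f}")
```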

  19. Agile Datacube Analytics (not just) for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Merticariu, Vlad; Baumann, Peter

    2017-04-01

    Metadata are considered small, smart, and queryable; data, on the other hand, are known as big, clumsy, hard to analyze. Consequently, gridded data - such as images, image timeseries, and climate datacubes - are managed separately from the metadata, and with different, restricted retrieval capabilities. One reason for this silo approach is that databases, while good at tables, XML hierarchies, RDF graphs, etc., traditionally do not support multi-dimensional arrays well. This gap is being closed by Array Databases, which extend the SQL paradigm of "any query, anytime" to NoSQL arrays. They introduce semantically rich modelling combined with declarative, high-level query languages on n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. This way, they offer new vistas in flexibility, scalability, performance, and data integration. In this respect, the forthcoming ISO SQL extension MDA ("Multi-dimensional Arrays") will be a game changer in Big Data Analytics. We introduce concepts and opportunities through the example of rasdaman ("raster data manager"), which in fact has pioneered the field of Array Databases and forms the blueprint for ISO SQL/MDA and further Big Data standards, such as OGC WCPS for querying spatio-temporal Earth datacubes. With operational installations exceeding 140 TB, queries have been split across more than one thousand cloud nodes, using CPUs as well as GPUs. Installations can easily be mashed up securely, enabling large-scale location-transparent query processing in federations. Federation queries have been demonstrated live at EGU 2016 spanning Europe and Australia in the context of the intercontinental EarthServer initiative, visualized through NASA WorldWind.
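
    A datacube query of the kind described above is typically phrased in WCPS and sent to an OGC WCS endpoint that supports the processing extension. The sketch below shows the general shape of such a request; the service URL and coverage name are placeholders, and the exact query syntax accepted may vary between server versions.

```python
# Sketch of posting a WCPS query to a WCS processing endpoint (rasdaman-style).
# Endpoint and coverage name are placeholders, not a real deployment.
from urllib.parse import urlencode
from urllib.request import urlopen

ENDPOINT = "https://example.org/rasdaman/ows"   # placeholder service URL
wcps = ('for $c in (AvgTemperatureCube) '       # hypothetical coverage name
        'return encode($c[Lat(53.08), Long(8.80), ansi("2014-01":"2014-12")], "csv")')
params = urlencode({"service": "WCS", "version": "2.0.1",
                    "request": "ProcessCoverages", "query": wcps})
with urlopen(f"{ENDPOINT}?{params}") as resp:
    print(resp.read().decode("utf-8"))          # e.g. a monthly time series as CSV
```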

  20. Agile Datacube Analytics (not just) for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Baumann, P.

    2016-12-01

    Metadata are considered small, smart, and queryable; data, on the other hand, are known as big, clumsy, hard to analyze. Consequently, gridded data - such as images, image timeseries, and climate datacubes - are managed separately from the metadata, and with different, restricted retrieval capabilities. One reason for this silo approach is that databases, while good at tables, XML hierarchies, RDF graphs, etc., traditionally do not support multi-dimensional arrays well. This gap is being closed by Array Databases, which extend the SQL paradigm of "any query, anytime" to NoSQL arrays. They introduce semantically rich modelling combined with declarative, high-level query languages on n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. This way, they offer new vistas in flexibility, scalability, performance, and data integration. In this respect, the forthcoming ISO SQL extension MDA ("Multi-dimensional Arrays") will be a game changer in Big Data Analytics. We introduce concepts and opportunities through the example of rasdaman ("raster data manager"), which in fact has pioneered the field of Array Databases and forms the blueprint for ISO SQL/MDA and further Big Data standards, such as OGC WCPS for querying spatio-temporal Earth datacubes. With operational installations exceeding 140 TB, queries have been split across more than one thousand cloud nodes, using CPUs as well as GPUs. Installations can easily be mashed up securely, enabling large-scale location-transparent query processing in federations. Federation queries have been demonstrated live at EGU 2016 spanning Europe and Australia in the context of the intercontinental EarthServer initiative, visualized through NASA WorldWind.

  1. Data Management Rubric for Video Data in Organismal Biology.

    PubMed

    Brainerd, Elizabeth L; Blob, Richard W; Hedrick, Tyson L; Creamer, Andrew T; Müller, Ulrike K

    2017-07-01

    Standards-based data management facilitates data preservation, discoverability, and access for effective data reuse within research groups and across communities of researchers. Data sharing requires community consensus on standards for data management, such as storage and formats for digital data preservation, metadata (i.e., contextual data about the data) that should be recorded and stored, and data access. Video imaging is a valuable tool for measuring time-varying phenotypes in organismal biology, with particular application for research in functional morphology, comparative biomechanics, and animal behavior. The raw data are the videos, but videos alone are not sufficient for scientific analysis. Nearly endless videos of animals can be found on YouTube and elsewhere on the web, but these videos have little value for scientific analysis because essential metadata such as true frame rate, spatial calibration, genus and species, weight, age, etc. of organisms, are generally unknown. We have embarked on a project to build community consensus on video data management and metadata standards for organismal biology research. We collected input from colleagues at early stages, organized an open workshop, "Establishing Standards for Video Data Management," at the Society for Integrative and Comparative Biology meeting in January 2017, and then collected two more rounds of input on revised versions of the standards. The result we present here is a rubric consisting of nine standards for video data management, with three levels within each standard: good, better, and best practices. The nine standards are: (1) data storage; (2) video file formats; (3) metadata linkage; (4) video data and metadata access; (5) contact information and acceptable use; (6) camera settings; (7) organism(s); (8) recording conditions; and (9) subject matter/topic. The first four standards address data preservation and interoperability for sharing, whereas standards 5-9 establish minimum metadata standards for organismal biology video, and suggest additional metadata that may be useful for some studies. This rubric was developed with substantial input from researchers and students, but still should be viewed as a living document that should be further refined and updated as technology and research practices change. The audience for these standards includes researchers, journals, and granting agencies, and also the developers and curators of databases that may contribute to video data sharing efforts. We offer this project as an example of building community consensus for data management, preservation, and sharing standards, which may be useful for future efforts by the organismal biology research community. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology.

  2. Data Management Rubric for Video Data in Organismal Biology

    PubMed Central

    Brainerd, Elizabeth L.; Blob, Richard W.; Hedrick, Tyson L.; Creamer, Andrew T.; Müller, Ulrike K.

    2017-01-01

    Synopsis Standards-based data management facilitates data preservation, discoverability, and access for effective data reuse within research groups and across communities of researchers. Data sharing requires community consensus on standards for data management, such as storage and formats for digital data preservation, metadata (i.e., contextual data about the data) that should be recorded and stored, and data access. Video imaging is a valuable tool for measuring time-varying phenotypes in organismal biology, with particular application for research in functional morphology, comparative biomechanics, and animal behavior. The raw data are the videos, but videos alone are not sufficient for scientific analysis. Nearly endless videos of animals can be found on YouTube and elsewhere on the web, but these videos have little value for scientific analysis because essential metadata such as true frame rate, spatial calibration, genus and species, weight, age, etc. of organisms, are generally unknown. We have embarked on a project to build community consensus on video data management and metadata standards for organismal biology research. We collected input from colleagues at early stages, organized an open workshop, “Establishing Standards for Video Data Management,” at the Society for Integrative and Comparative Biology meeting in January 2017, and then collected two more rounds of input on revised versions of the standards. The result we present here is a rubric consisting of nine standards for video data management, with three levels within each standard: good, better, and best practices. The nine standards are: (1) data storage; (2) video file formats; (3) metadata linkage; (4) video data and metadata access; (5) contact information and acceptable use; (6) camera settings; (7) organism(s); (8) recording conditions; and (9) subject matter/topic. The first four standards address data preservation and interoperability for sharing, whereas standards 5–9 establish minimum metadata standards for organismal biology video, and suggest additional metadata that may be useful for some studies. This rubric was developed with substantial input from researchers and students, but still should be viewed as a living document that should be further refined and updated as technology and research practices change. The audience for these standards includes researchers, journals, and granting agencies, and also the developers and curators of databases that may contribute to video data sharing efforts. We offer this project as an example of building community consensus for data management, preservation, and sharing standards, which may be useful for future efforts by the organismal biology research community. PMID:28881939

  3. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests that have already been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, which also improves metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backups is much smaller than that incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. In addition, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes nonoperational.
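
    The core idea above, clients keep copies of acknowledged metadata requests and a restarted MDS rebuilds its state by replaying them, can be illustrated with a toy, single-process sketch (no networking, simplified request format):

```python
# Toy illustration of log-less metadata management: client-side backup and replay.
class MetadataServer:
    def __init__(self):
        self.namespace = {}                 # in-memory metadata only, no on-disk log

    def handle(self, request):
        op, path, attrs = request
        if op == "create":
            self.namespace[path] = attrs
        elif op == "unlink":
            self.namespace.pop(path, None)
        return "ok"

class ClientFS:
    def __init__(self, mds):
        self.mds, self.backup = mds, []

    def send(self, request):
        if self.mds.handle(request) == "ok":
            self.backup.append(request)     # back up only acknowledged requests

def recover(new_mds, clients):
    for c in clients:                       # replay every client's backup log
        for request in c.backup:
            new_mds.handle(request)

client = ClientFS(MetadataServer())
client.send(("create", "/a.txt", {"size": 0}))
fresh = MetadataServer()                    # stands in for the restarted MDS
recover(fresh, [client])
print(fresh.namespace)                      # {'/a.txt': {'size': 0}}
```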

  4. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests that have already been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, which also improves metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backups is much smaller than that incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. In addition, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes nonoperational. PMID:24892093

  5. The Index to Marine and Lacustrine Geological Samples: Improving Sample Accessibility and Enabling Current and Future Research

    NASA Astrophysics Data System (ADS)

    Moore, C.

    2011-12-01

    The Index to Marine and Lacustrine Geological Samples is a community designed and maintained resource enabling researchers to locate and request sea floor and lakebed geologic samples archived by partner institutions. Conceived in the dawn of the digital age by representatives from U.S. academic and government marine core repositories and the NOAA National Geophysical Data Center (NGDC) at a 1977 meeting convened by the National Science Foundation (NSF), the Index is based on core concepts of community oversight, common vocabularies, consistent metadata and a shared interface. Form and content of underlying vocabularies and metadata continue to evolve according to the needs of the community, as do supporting technologies and access methodologies. The Curators Consortium, now international in scope, meets at partner institutions biennially to share ideas and discuss best practices. NGDC serves the group by providing database access and maintenance, a list server, digitizing support and long-term archival of sample metadata, data and imagery. Over three decades, participating curators have performed the herculean task of creating and contributing metadata for over 195,000 sea floor and lakebed cores, grabs, and dredges archived in their collections. Some partners use the Index for primary web access to their collections while others use it to increase exposure of more in-depth institutional systems. The Index is currently a geospatially-enabled relational database, publicly accessible via Web Feature and Web Map Services, and text- and ArcGIS map-based web interfaces. To provide as much knowledge as possible about each sample, the Index includes curatorial contact information and links to related data, information and images; 1) at participating institutions, 2) in the NGDC archive, and 3) at sites such as the Rolling Deck to Repository (R2R) and the System for Earth Sample Registration (SESAR). Over 34,000 International GeoSample Numbers (IGSNs) linking to SESAR are included in anticipation of opportunities for interconnectivity with Integrated Earth Data Applications (IEDA) systems. To promote interoperability and broaden exposure via the semantic web, NGDC is publishing lithologic classification schemes and terminology used in the Index as Simple Knowledge Organization System (SKOS) vocabularies, coordinating with R2R and the Consortium for Ocean Leadership for consistency. Availability in SKOS form will also facilitate use of the vocabularies in International Standards Organization (ISO) 19115-2 compliant metadata records. NGDC provides stewardship for the Index on behalf of U.S. repositories as the NSF designated "appropriate National Data Center" for data and metadata pertaining to sea floor samples as specified in the 2011 Division of Ocean Sciences Sample and Data Policy, and on behalf of international partners via a collocated World Data Center. NGDC operates on the Open Archival Information System (OAIS) reference model. Active Partners: Antarctic Marine Geology Research Facility, Florida State University; British Ocean Sediment Core Research Facility; Geological Survey of Canada; Integrated Ocean Drilling Program; Lamont-Doherty Earth Observatory; National Lacustrine Core Repository, University of Minnesota; Oregon State University; Scripps Institution of Oceanography; University of Rhode Island; U.S. Geological Survey; Woods Hole Oceanographic Institution.

  6. EMERALD: A Flexible Framework for Managing Seismic Data

    NASA Astrophysics Data System (ADS)

    West, J. D.; Fouch, M. J.; Arrowsmith, R.

    2010-12-01

    The seismological community is challenged by the vast quantity of new broadband seismic data provided by large-scale seismic arrays such as EarthScope’s USArray. While this bonanza of new data enables transformative scientific studies of the Earth’s interior, it also illuminates limitations in the methods used to prepare and preprocess those data. At a recent seismic data processing focus group workshop, many participants expressed the need for better systems to minimize the time and tedium spent on data preparation in order to increase the efficiency of scientific research. Another challenge related to data from all large-scale transportable seismic experiments is that there currently exists no system for discovering and tracking changes in station metadata. This critical information, such as station location, sensor orientation, instrument response, and clock timing data, may change over the life of an experiment and/or be subject to post-experiment correction. Yet nearly all researchers utilize metadata acquired with the downloaded data, even though subsequent metadata updates might alter or invalidate results produced with older metadata. A third long-standing issue for the seismic community is the lack of easily exchangeable seismic processing codes. This problem stems directly from the storage of seismic data as individual time series files, and the history of each researcher developing his or her preferred data file naming convention and directory organization. Because most processing codes rely on the underlying data organization structure, such codes are not easily exchanged between investigators. To address these issues, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The goal of the EMERALD project is to provide seismic researchers with a unified, user-friendly, extensible system for managing seismic event data, thereby increasing the efficiency of scientific enquiry. EMERALD stores seismic data and metadata in a state-of-the-art open source relational database (PostgreSQL), and can, on a timed basis or on demand, download the most recent metadata, compare it with previously acquired values, and alert the user to changes. The backend relational database is capable of easily storing and managing many millions of records. The extensible, plug-in architecture of the EMERALD system allows any researcher to contribute new visualization and processing methods written in any of 12 programming languages, and a central Internet-enabled repository for such methods provides users with the opportunity to download, use, and modify new processing methods on demand. EMERALD includes data acquisition tools allowing direct importation of seismic data, and also imports data from a number of existing seismic file formats. Pre-processed clean sets of data can be exported as standard sac files with user-defined file naming and directory organization, for use with existing processing codes. The EMERALD system incorporates existing acquisition and processing tools, including SOD, TauP, GMT, and FISSURES/DHI, making much of the functionality of those tools available in a unified system with a user-friendly web browser interface. EMERALD is now in beta test. See emerald.asu.edu or contact john.d.west@asu.edu for more details.

  7. LVFS: A Scalable Petabyte/Exabyte Data Storage System

    NASA Astrophysics Data System (ADS)

    Golpayegani, N.; Halem, M.; Masuoka, E. J.; Ye, G.; Devine, N. K.

    2013-12-01

    Managing petabytes of data with hundreds of millions of files is the first step necessary towards an effective big data computing and collaboration environment in a distributed system. We describe here the MODAPS LAADS Virtual File System (LVFS), a new storage architecture which replaces the previous MODAPS operational Level 1 Land Atmosphere Archive Distribution System (LAADS) NFS based approach to storing and distributing datasets from several instruments, such as MODIS, MERIS, and VIIRS. LAADS is responsible for the distribution of over 4 petabytes of data and over 300 million files across more than 500 disks. We present here the first LVFS big data comparative performance results and new capabilities not previously possible with the LAADS system. We consider two aspects in addressing inefficiencies of massive scales of data. First, is dealing in a reliable and resilient manner with the volume and quantity of files in such a dataset, and, second, minimizing the discovery and lookup times for accessing files in such large datasets. There are several popular file systems that successfully deal with the first aspect of the problem. Their solution, in general, is through distribution, replication, and parallelism of the storage architecture. The Hadoop Distributed File System (HDFS), Parallel Virtual File System (PVFS), and Lustre are examples of such file systems that deal with petabyte data volumes. The second aspect deals with data discovery among billions of files, the largest bottleneck in reducing access time. The metadata of a file, generally represented in a directory layout, is stored in ways that are not readily scalable. This is true for HDFS, PVFS, and Lustre as well. Recent experimental file systems, such as Spyglass or Pantheon, have attempted to address this problem through redesign of the metadata directory architecture. LVFS takes a radically different architectural approach by eliminating the need for a separate directory within the file system. The LVFS system replaces the NFS disk mounting approach of LAADS and utilizes the already existing highly optimized metadata database server, which is applicable to most scientific big data intensive compute systems. Thus, LVFS ties the existing storage system with the existing metadata infrastructure system which we believe leads to a scalable exabyte virtual file system. The uniqueness of the implemented design is not limited to LAADS but can be employed with most scientific data processing systems. By utilizing the Filesystem In Userspace (FUSE), a kernel module available in many operating systems, LVFS was able to replace the NFS system while staying POSIX compliant. As a result, the LVFS system becomes scalable to exabyte sizes owing to the use of highly scalable database servers optimized for metadata storage. The flexibility of the LVFS design allows it to organize data on the fly in different ways, such as by region, date, instrument or product without the need for duplication, symbolic links, or any other replication methods. We proposed here a strategic reference architecture that addresses the inefficiencies of scientific petabyte/exabyte file system access through the dynamic integration of the observing system's large metadata file.
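
    The core LVFS idea, replacing a directory walk with a lookup against an existing metadata database, can be illustrated without FUSE. The sketch below uses SQLite as a stand-in metadata store and shows how a virtual path such as /product/date/granule can be listed and resolved from metadata alone; the schema, path layout, and file names are assumptions for illustration, not the MODAPS schema.

      # Sketch of metadata-database path resolution in the spirit of LVFS.
      # SQLite stands in for the operational metadata server; the schema is illustrative.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE granules (
          product TEXT, acq_date TEXT, file_name TEXT, storage_path TEXT)""")
      conn.executemany(
          "INSERT INTO granules VALUES (?, ?, ?, ?)",
          [
              ("MOD021KM", "2013-06-01", "MOD021KM.A2013152.0000.hdf", "/archive/disk17/f001.hdf"),
              ("MOD021KM", "2013-06-01", "MOD021KM.A2013152.0005.hdf", "/archive/disk03/f002.hdf"),
              ("NPP_VIIRS", "2013-06-01", "SVM01_npp_d20130601.h5", "/archive/disk42/f003.h5"),
          ],
      )

      def listdir(virtual_path):
          """List a virtual directory of the form /<product>/<acq_date> from metadata alone."""
          parts = [p for p in virtual_path.split("/") if p]
          if not parts:  # top level: one virtual directory per product
              rows = conn.execute("SELECT DISTINCT product FROM granules")
              return [r[0] for r in rows]
          if len(parts) == 1:  # second level: one directory per acquisition date
              rows = conn.execute(
                  "SELECT DISTINCT acq_date FROM granules WHERE product = ?", (parts[0],))
              return [r[0] for r in rows]
          rows = conn.execute(
              "SELECT file_name FROM granules WHERE product = ? AND acq_date = ?",
              (parts[0], parts[1]))
          return [r[0] for r in rows]

      def resolve(virtual_path):
          """Map a virtual file path to the real storage location recorded in the metadata."""
          product, acq_date, file_name = [p for p in virtual_path.split("/") if p]
          row = conn.execute(
              "SELECT storage_path FROM granules WHERE product=? AND acq_date=? AND file_name=?",
              (product, acq_date, file_name)).fetchone()
          return row[0] if row else None

      print(listdir("/"))
      print(listdir("/MOD021KM/2013-06-01"))
      print(resolve("/MOD021KM/2013-06-01/MOD021KM.A2013152.0000.hdf"))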

  8. A Taxonomic Search Engine: Federating taxonomic databases using web services

    PubMed Central

    Page, Roderic DM

    2005-01-01

    Background The taxonomic name of an organism is a key link between different databases that store information on that organism. However, in the absence of a single, comprehensive database of organism names, individual databases lack an easy means of checking the correctness of a name. Furthermore, the same organism may have more than one name, and the same name may apply to more than one organism. Results The Taxonomic Search Engine (TSE) is a web application written in PHP that queries multiple taxonomic databases (ITIS, Index Fungorum, IPNI, NCBI, and uBIO) and summarises the results in a consistent format. It supports "drill-down" queries to retrieve a specific record. The TSE can optionally suggest alternative spellings the user can try. It also acts as a Life Science Identifier (LSID) authority for the source taxonomic databases, providing globally unique identifiers (and associated metadata) for each name. Conclusion The Taxonomic Search Engine is available at and provides a simple demonstration of the potential of the federated approach to providing access to taxonomic names. PMID:15757517
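
    The federated pattern TSE implements, querying several name services in parallel and normalising their answers into one format, is sketched below in Python rather than PHP. The endpoint URLs and response parsers are deliberately placeholders, since each source database exposes a different API; only the overall fan-out-and-merge structure is intended to be illustrative.

      # Sketch of federated taxonomic-name lookup: query several sources concurrently
      # and normalise the results. Endpoints and parsers below are placeholders.
      from concurrent.futures import ThreadPoolExecutor

      import requests

      SOURCES = {
          # source name -> (placeholder URL template, placeholder response parser)
          "ITIS": ("https://example.org/itis/search?name={name}", lambda js: js.get("names", [])),
          "IPNI": ("https://example.org/ipni/search?q={name}", lambda js: js.get("results", [])),
          "NCBI": ("https://example.org/ncbi/taxonomy?term={name}", lambda js: js.get("hits", [])),
      }

      def query_source(source, name):
          """Query one source and normalise hits to {'source', 'name', 'id'} records."""
          url_template, parse = SOURCES[source]
          try:
              response = requests.get(url_template.format(name=name), timeout=10)
              response.raise_for_status()
              hits = parse(response.json())
          except requests.RequestException:
              hits = []  # a failing source should not break the federated search
          return [{"source": source, "name": h.get("name"), "id": h.get("id")} for h in hits]

      def federated_search(name):
          """Fan the query out to all sources in parallel and merge the answers."""
          with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
              result_lists = pool.map(lambda s: query_source(s, name), SOURCES)
          return [record for records in result_lists for record in records]

      if __name__ == "__main__":
          for record in federated_search("Apis mellifera"):
              print(record)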

  9. Do-It-Yourself: A Special Library's Approach to Creating Dynamic Web Pages Using Commercial Off-The-Shelf Applications

    NASA Technical Reports Server (NTRS)

    Steeman, Gerald; Connell, Christopher

    2000-01-01

    Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and, committing resources from managing time to populate the database and training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals, lessons learned in the Web-to-database process (including setting up Database Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding Access 97 query language vs. Standard Query Language (SQL)). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.

  10. Joint Battlespace Infosphere: Information Management Within a C2 Enterprise

    DTIC Science & Technology

    2005-06-01

    using. In version 1.2, we support both MySQL and Oracle as underlying implementations where the XML metadata schema is mapped into relational tables in...Identity Servers, Role-Based Access Control, and Policy Representation – Databases: Oracle , MySQL , TigerLogic, Berkeley XML DB 15 Instrumentation Services...converted to SQL for execution. Invocations are then forwarded to the appropriate underlying IOR core components that have the responsibility of issuing

  11. The Global War on Terrorism: Analytical Support, Tools and Metrics of Assessment. MORS Workshop

    DTIC Science & Technology

    2005-08-11

    is the matter of intelligence, as COL(P) Keller pointed out, we need to spend less time in the intelligence cycle on managing information and...models, decision aids: "named things " * Methodologies: potentially useful things "* Resources: databases, people, books? * Meta-data on tools * Develop a...experience. Only one member (Mr. Garry Greco) had served on the Joint Intelligence Task Force for Counter Terrorism. Although Gary heavily participated

  12. If we build it, will they come? Curation and use of the ESO telescope bibliography

    NASA Astrophysics Data System (ADS)

    Grothkopf, Uta; Meakins, Silvia; Bordelon, Dominic

    2015-12-01

    The ESO Telescope Bibliography (telbib) is a database of refereed papers published by the ESO users community. It links data in the ESO Science Archive with the published literature, and vice versa. Developed and maintained by the ESO library, telbib also provides insights into the organization's research output and impact as measured through bibliometric studies. Curating telbib is a multi-step process that involves extensive tagging of the database records. Based on selected use cases, this talk will explain how the rich metadata provide parameters for reports and statistics in order to investigate the performance of ESO's facilities and to understand trends and developments in the publishing behaviour of the user community.

  13. AGM: A DSL for mobile cloud computing based on directed graph

    NASA Astrophysics Data System (ADS)

    Tanković, Nikola; Grbac, Tihana Galinac

    2016-06-01

    This paper summarizes a novel approach for consuming a domain-specific language (DSL) by transforming it to a directed graph representation persisted in a graph database. Using such a specialized database enables advanced navigation through the stored model, exposing only relevant subsets of meta-data to the different services and components involved. We applied this approach in a mobile cloud computing system and used it to model several mobile applications in the retail, supply chain management and merchandising domains. These applications are distributed in a Software-as-a-Service (SaaS) fashion and used by thousands of customers in Croatia. We report on lessons learned and propose further research on this topic.
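
    As a rough illustration of the transformation the paper describes, the sketch below turns a toy application model into a directed graph with the networkx library and then navigates only the subset of nodes relevant to one component. The node names, relationship labels, and attributes are invented, and the real system persists such graphs in a graph database rather than in memory.

      # Sketch: representing a toy application model as a directed graph and exposing
      # only the sub-model relevant to one consumer. Names are illustrative.
      import networkx as nx

      model = nx.DiGraph()

      # Declarative model fragments (screens, data sources, actions) become nodes;
      # "uses" / "navigates-to" relationships become directed edges.
      model.add_node("OrderListScreen", kind="screen")
      model.add_node("OrderDetailScreen", kind="screen")
      model.add_node("OrdersService", kind="data_source")
      model.add_node("SubmitOrderAction", kind="action")
      model.add_edge("OrderListScreen", "OrderDetailScreen", relation="navigates_to")
      model.add_edge("OrderListScreen", "OrdersService", relation="uses")
      model.add_edge("OrderDetailScreen", "SubmitOrderAction", relation="triggers")

      def relevant_subgraph(graph, start):
          """Return only the part of the model reachable from one component."""
          reachable = nx.descendants(graph, start) | {start}
          return graph.subgraph(reachable)

      sub = relevant_subgraph(model, "OrderListScreen")
      for source, target, data in sub.edges(data=True):
          print(f"{source} -[{data['relation']}]-> {target}")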

  14. Building a Digital Library for Multibeam Data, Images and Documents

    NASA Astrophysics Data System (ADS)

    Miller, S. P.; Staudigel, H.; Koppers, A.; Johnson, C.; Cande, S.; Sandwell, D.; Peckman, U.; Becker, J. J.; Helly, J.; Zaslavsky, I.; Schottlaender, B. E.; Starr, S.; Montoya, G.

    2001-12-01

    The Scripps Institution of Oceanography, the UCSD Libraries and the San Diego Supercomputing Center have joined forces to establish a digital library for accessing a wide range of multibeam and marine geophysical data, to a community that ranges from the MGG researcher to K-12 outreach clients. This digital library collection will include 233 multibeam cruises with grids, plots, photographs, station data, technical reports, planning documents and publications, drawn from the holdings of the Geological Data Center and the SIO Archives. Inquiries will be made through an Ocean Exploration Console, reminiscent of a cockpit display where a multitude of data may be displayed individually or in two or three-dimensional projections. These displays will provide access to cruise data as well as global databases such as Global Topography, crustal age, and sediment thickness, thus meeting the day-to-day needs of researchers as well as educators, students, and the public. The prototype contains a few selected expeditions, and a review of the initial approach will be solicited from the user community during the poster session. The search process can be focused by a variety of constraints: geospatial (lat-lon box), temporal (e.g., since 1996), keyword (e.g., cruise, place name, PI, etc.), or expert-level (e.g., K-6 or researcher). The Storage Resource Broker (SRB) software from the SDSC manages the evolving collection as a series of distributed but related archives in various media, from shipboard data through processing and final archiving. The latest version of MB-System provides for the systematic creation of standard metadata, and for the harvesting of metadata from multibeam files. Automated scripts will be used to load the metadata catalog to enable queries with an Oracle database management system. These new efforts to bridge the gap between libraries and data archives are supported by the NSF Information Technology and National Science Digital Library (NSDL) programs, augmented by UC funds, and closely coordinated with Digital Library for Earth System Education (DLESE) activities.

  15. Automating Data Submission to a National Archive

    NASA Astrophysics Data System (ADS)

    Work, T. T.; Chandler, C. L.; Groman, R. C.; Allison, M. D.; Gegg, S. R.; Biological; Chemical Oceanography Data Management Office

    2010-12-01

    In late 2006, the U.S. National Science Foundation (NSF) funded the Biological and Chemical Oceanographic Data Management Office (BCO-DMO) at Woods Hole Oceanographic Institution (WHOI) to work closely with investigators to manage oceanographic data generated from their research projects. One of the final data management tasks is to ensure that the data are permanently archived at the U.S. National Oceanographic Data Center (NODC) or other appropriate national archiving facility. In the past, BCO-DMO submitted data to NODC as an email with attachments including a PDF file (a manually completed metadata record) and one or more data files. This method is no longer feasible given the rate at which data sets are contributed to BCO-DMO. Working with collaborators at NODC, a more streamlined and automated workflow was developed to keep up with the increased volume of data that must be archived at NODC. We will describe our new workflow; a semi-automated approach for contributing data to NODC that includes a Federal Geographic Data Committee (FGDC) compliant Extensible Markup Language (XML) metadata file accompanied by comma-delimited data files. The FGDC XML file is populated from information stored in a MySQL database. A crosswalk described by an Extensible Stylesheet Language Transformation (XSLT) is used to transform the XML formatted MySQL result set to a FGDC compliant XML metadata file. To ensure data integrity, the MD5 algorithm is used to generate a checksum and manifest of the files submitted to NODC for permanent archive. The revised system supports preparation of detailed, standards-compliant metadata that facilitate data sharing and enable accurate reuse of multidisciplinary information. The approach is generic enough to be adapted for use by other data management groups.
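
    Two pieces of that workflow are easy to sketch: applying an XSLT crosswalk to an XML export from the database, and writing an MD5 manifest of the files in a submission package. The snippet below uses lxml and hashlib; the file names are placeholders and the actual BCO-DMO stylesheet is not reproduced here.

      # Sketch of two steps in the archive-submission workflow: an XSLT crosswalk
      # to FGDC-style XML and an MD5 manifest. File names below are placeholders.
      import hashlib
      from pathlib import Path

      from lxml import etree

      def apply_crosswalk(source_xml, stylesheet_xsl, output_xml):
          """Transform a database XML export into archive-ready metadata via XSLT."""
          transform = etree.XSLT(etree.parse(stylesheet_xsl))
          result = transform(etree.parse(source_xml))
          Path(output_xml).write_bytes(etree.tostring(result, pretty_print=True,
                                                      xml_declaration=True, encoding="UTF-8"))

      def write_md5_manifest(files, manifest_path):
          """Record an MD5 checksum for every file in the submission package."""
          with open(manifest_path, "w") as manifest:
              for name in files:
                  digest = hashlib.md5()
                  with open(name, "rb") as handle:
                      for chunk in iter(lambda: handle.read(1 << 20), b""):
                          digest.update(chunk)
                  manifest.write(f"{digest.hexdigest()}  {name}\n")

      if __name__ == "__main__":
          apply_crosswalk("mysql_export.xml", "fgdc_crosswalk.xsl", "fgdc_metadata.xml")
          write_md5_manifest(["fgdc_metadata.xml", "data_table.csv"], "manifest.md5")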

  16. Metadata mapping and reuse in caBIG™

    PubMed Central

    Kunz, Isaac; Lin, Ming-Chin; Frey, Lewis

    2009-01-01

    Background This paper proposes that interoperability across biomedical databases can be improved by utilizing a repository of Common Data Elements (CDEs), UML model class-attributes and simple lexical algorithms to facilitate the building of domain models. This is examined in the context of an existing system, the National Cancer Institute (NCI)'s cancer Biomedical Informatics Grid (caBIG™). The goal is to demonstrate the deployment of open source tools that can be used to effectively map models and enable the reuse of existing information objects and CDEs in the development of new models for translational research applications. This effort is intended to help developers reuse appropriate CDEs to enable interoperability of their systems when developing within the caBIG™ framework or other frameworks that use metadata repositories. Results The Dice (di-grams) and Dynamic algorithms are compared, and both show similar performance in matching UML model class-attributes to CDE class object-property pairs. With the algorithms used, the baselines for automatically finding the matches are reasonable for the data models examined. This suggests that automatic mapping of UML models and CDEs is feasible within the caBIG™ framework and potentially any framework that uses a metadata repository. Conclusion This work opens up the possibility of using mapping algorithms to reduce the cost and time required to map local data models to a reference data model such as those used within caBIG™. It contributes to facilitating the development of interoperable systems within caBIG™ as well as other metadata frameworks. Such efforts are critical to addressing the need to develop systems that can handle the enormous amounts of diverse data that can be leveraged from new biomedical methodologies. PMID:19208192
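
    The Dice (di-gram) matching the authors evaluate can be reproduced in a few lines: split each name into character bigrams and score the overlap. The snippet below is a generic illustration of that measure applied to hypothetical class-attribute and CDE names, not the caBIG™ tooling itself, and the paper's separate "Dynamic" algorithm is not reproduced here.

      # Generic Dice similarity over character bigrams, the style of lexical matching
      # the paper evaluates for pairing UML class-attributes with CDEs.
      # The names being compared below are hypothetical examples.

      def bigrams(text):
          """Lower-case character bigrams of a name, ignoring spaces and underscores."""
          cleaned = text.lower().replace("_", "").replace(" ", "")
          return [cleaned[i:i + 2] for i in range(len(cleaned) - 1)]

      def dice_similarity(a, b):
          """Dice coefficient: 2 * |shared bigrams| / (|bigrams of a| + |bigrams of b|)."""
          grams_a, grams_b = bigrams(a), bigrams(b)
          if not grams_a or not grams_b:
              return 0.0
          shared = 0
          remaining = list(grams_b)
          for gram in grams_a:          # count shared bigrams with multiplicity
              if gram in remaining:
                  remaining.remove(gram)
                  shared += 1
          return 2.0 * shared / (len(grams_a) + len(grams_b))

      uml_attribute = "Patient.dateOfBirth"
      candidate_cdes = ["Patient Birth Date", "Patient Gender Code", "Specimen Collection Date"]
      for cde in candidate_cdes:
          print(f"{cde!r}: {dice_similarity(uml_attribute, cde):.2f}")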

  17. DAS: A Data Management System for Instrument Tests and Operations

    NASA Astrophysics Data System (ADS)

    Frailis, M.; Sartor, S.; Zacchei, A.; Lodi, M.; Cirami, R.; Pasian, F.; Trifoglio, M.; Bulgarelli, A.; Gianotti, F.; Franceschi, E.; Nicastro, L.; Conforti, V.; Zoli, A.; Smart, R.; Morbidelli, R.; Dadina, M.

    2014-05-01

    The Data Access System (DAS) is a data management software system, providing a reusable solution for the storage of data acquired both from telescopes and from auxiliary data sources during the instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing and quick-look analysis of the data acquired from scientific instruments. The DAS provides a data access layer mainly targeted at software applications: quick-look displays, pre-processing pipelines and scientific workflows. It is logically organized in three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, intended for user-defined data types; an Application Programming Interface (DAS API), which automatically adds classes and methods supporting the DDL data types and provides an object-oriented query language; and a data management component, which maps the metadata of the DDL data types into a relational Data Base Management System (DBMS) and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data-model-specific one in C, C++ and Python. The mapping of metadata into the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle and PostgreSQL.
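
    To make the DDL-to-database mapping concrete, the sketch below parses a toy XML type definition and emits a CREATE TABLE statement for its metadata attributes. The element and attribute names in the XML are invented for illustration and do not reproduce the actual DAS DDL schema.

      # Sketch: turning an XML data-type definition into a relational table for its
      # metadata attributes. The XML vocabulary here is invented, not the DAS DDL.
      import xml.etree.ElementTree as ET

      DDL_SNIPPET = """
      <dataType name="RawFrame">
        <metadata name="obs_id"     type="string"/>
        <metadata name="start_time" type="datetime"/>
        <metadata name="exposure"   type="float"/>
        <associated dataType="HousekeepingPacket"/>
      </dataType>
      """

      SQL_TYPES = {"string": "VARCHAR(255)", "datetime": "TIMESTAMP", "float": "DOUBLE PRECISION"}

      def ddl_to_create_table(ddl_xml):
          """Map each <metadata> attribute of a data type to a column of one table."""
          data_type = ET.fromstring(ddl_xml)
          table = data_type.get("name").lower()
          columns = ["id BIGINT PRIMARY KEY", "file_path TEXT"]
          for attribute in data_type.findall("metadata"):
              columns.append(f"{attribute.get('name')} {SQL_TYPES[attribute.get('type')]}")
          return f"CREATE TABLE {table} (\n  " + ",\n  ".join(columns) + "\n);"

      print(ddl_to_create_table(DDL_SNIPPET))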

  18. A general concept for consistent documentation of computational analyses

    PubMed Central

    Müller, Fabian; Nordström, Karl; Lengauer, Thomas; Schulz, Marcel H.

    2015-01-01

    The ever-growing amount of data in the field of life sciences demands standardized ways of high-throughput computational analysis. This standardization requires a thorough documentation of each step in the computational analysis to enable researchers to understand and reproduce the results. However, due to the heterogeneity in software setups and the high rate of change during tool development, reproducibility is hard to achieve. One reason is that there is no common agreement in the research community on how to document computational studies. In many cases, simple flat files or other unstructured text documents are provided by researchers as documentation, which are often missing software dependencies, versions and sufficient documentation to understand the workflow and parameter settings. As a solution we suggest a simple and modest approach for documenting and verifying computational analysis pipelines. We propose a two-part scheme that defines a computational analysis using a Process and an Analysis metadata document, which jointly describe all necessary details to reproduce the results. In this design we separate the metadata specifying the process from the metadata describing an actual analysis run, thereby reducing the effort of manual documentation to an absolute minimum. Our approach is independent of a specific software environment, results in human readable XML documents that can easily be shared with other researchers and allows an automated validation to ensure consistency of the metadata. Because our approach has been designed with little to no assumptions concerning the workflow of an analysis, we expect it to be applicable in a wide range of computational research fields. Database URL: http://deep.mpi-inf.mpg.de/DAC/cmds/pub/pyvalid.zip PMID:26055099
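
    The two-document split can be pictured with a small consistency check: a Process document declares the steps and parameters a pipeline accepts, and an Analysis document records the values actually used in one run. The XML vocabularies below are invented for illustration; the published format defines its own elements and validation rules.

      # Sketch of the two-part documentation idea: verify that an analysis run only
      # records parameters its process definition declares. Element names are
      # illustrative, not the published format.
      import xml.etree.ElementTree as ET

      PROCESS_DOC = """
      <process name="read_alignment">
        <step tool="bwa" version="0.7.17">
          <parameter name="threads"/>
          <parameter name="seed_length"/>
        </step>
      </process>
      """

      ANALYSIS_DOC = """
      <analysis process="read_alignment" date="2015-03-02">
        <setting step="bwa" name="threads" value="8"/>
        <setting step="bwa" name="seed_length" value="19"/>
      </analysis>
      """

      def check_consistency(process_xml, analysis_xml):
          """Return the analysis settings that the process definition does not declare."""
          process = ET.fromstring(process_xml)
          declared = {(step.get("tool"), p.get("name"))
                      for step in process.findall("step")
                      for p in step.findall("parameter")}
          analysis = ET.fromstring(analysis_xml)
          used = {(s.get("step"), s.get("name")) for s in analysis.findall("setting")}
          return used - declared

      undeclared = check_consistency(PROCESS_DOC, ANALYSIS_DOC)
      print("undeclared settings:", undeclared or "none")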

  19. PDS, DOIs, and the Literature

    NASA Astrophysics Data System (ADS)

    Raugh, Anne; Henneken, Edwin

    The Planetary Data System (PDS) is actively involved in designing both metadata and interfaces to make the assignment of Digital Object Identifiers (DOIs) to archival data a part of the archiving process for all data creators. These DOIs will be registered through DataCite, a non-profit organization whose members are all deeply concerned with archival research data, provenance tracking through the literature, and proper acknowledgement of the various types of efforts that contribute to the creation of an archival reference data set. Making the collection of citation metadata and its ingestion into the DataCite DOI database easy - and easy to do correctly - is in the best interests of all stakeholders: the data creators; the curators; the indexing organizations like the Astrophysics Data System (ADS); and the data users. But in order to realize the promise of DOIs, there are three key issues to address: 1) How do we incorporate the metadata collection process simply and naturally into the PDS archive creation process; 2) How do we encourage journal editors to require references to previously published data with the same rigor with which they require references to previously published research and analysis; and finally, 3) How can we change the culture of academic and research employers to recognize that the effort required to prepare a PDS archival data set is a career achievement on par with contributing to a refereed article in the professional literature. Data archives and scholarly publications are the long-term return on investment that funding agencies and the science community expect in exchange for research spending. The traceability and reproducibility ensured by the integration of DOIs and their related metadata into indexing and search services is an essential part of providing and optimizing that return.

  20. Towards a semantic web of paleoclimatology

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Eshleman, J. A.

    2012-12-01

    The paleoclimate record is information-rich, yet significant technical barriers currently exist before it can be used to automatically answer scientific questions. Here we make the case for a universal format to structure paleoclimate data. A simple example demonstrates the scientific utility of such a self-contained way of organizing coral data and meta-data in the Matlab language. This example is generalized to a universal ontology that may form the backbone of an open-source, open-access and crowd-sourced paleoclimate database. Its key attributes are: 1. Parsability: the format is self-contained (hence machine-readable), and would therefore enable a semantic web of paleoclimate information. 2. Universality: the format is platform-independent (readable on all computers and operating systems) and language-independent (readable in major programming languages). 3. Extensibility: the format requires a minimum set of fields to appropriately define a paleoclimate record, but allows the database to grow organically as more records are added, or, equally important, as more metadata are added to existing records. 4. Citability: the format enables the automatic citation of peer-reviewed articles as well as data citations whenever a data record is being used for analysis, making due recognition of scientific work an automatic part and foundational principle of paleoclimate data analysis. 5. Ergonomy: the format will be easy to use, update and manage. This structure is designed to enable semantic searches, and is expected to help accelerate discovery in all workflows where paleoclimate data are being used. Practical steps towards the implementation of such a system at the community level are then discussed. (Figure caption: Preliminary ontology describing relationships between the data and meta-data fields of the Nurhati et al. [2011] climate record. Several fields are viewed as instances of larger classes (ProxyClass, Site, Reference), which would allow computers to perform operations on all records within a specific class (e.g. if the measurement type is δ18O, if the proxy class is 'Tree Ring Width', or if the resolution is less than 3 months, etc.). All records in such a database would be bound to each other by similar links, allowing machines to automatically process any form of query involving existing information. Such a design would also allow growth, by adding records and/or additional information about each record.)
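
    A minimal, language-neutral rendering of such a self-contained record might look like the JSON-style structure below, with a small validator enforcing the minimum field set the abstract calls for. The field names, coordinates, values, and DOI are chosen purely for illustration and are not drawn from any actual record.

      # Sketch of a self-contained paleoclimate record with a minimum required field
      # set plus open-ended metadata. Field names and values are illustrative only.
      import json

      REQUIRED_FIELDS = {"site_name", "latitude", "longitude", "archive_type",
                         "proxy_class", "time", "values", "units", "references"}

      record = {
          "site_name": "Example Atoll",
          "latitude": 1.87,
          "longitude": -157.33,
          "archive_type": "coral",
          "proxy_class": "d18O",
          "time": [1998.04, 1998.12, 1998.21],      # decimal years
          "values": [-4.91, -4.87, -4.95],          # per mil
          "units": "permil VPDB",
          "references": ["doi:10.0000/placeholder"],
          "metadata": {"resolution_months": 1, "analyst": "unknown"},  # grows organically
      }

      def validate(rec):
          """Check the minimum field set; extra metadata is always allowed."""
          missing = REQUIRED_FIELDS - rec.keys()
          if missing:
              raise ValueError(f"record is missing required fields: {sorted(missing)}")
          return True

      validate(record)
      print(json.dumps(record, indent=2))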

  1. Metadata for Web Resources: How Metadata Works on the Web.

    ERIC Educational Resources Information Center

    Dillon, Martin

    This paper discusses bibliographic control of knowledge resources on the World Wide Web. The first section sets the context of the inquiry. The second section covers the following topics related to metadata: (1) definitions of metadata, including metadata as tags and as descriptors; (2) metadata on the Web, including general metadata systems,…

  2. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations.

    PubMed

    Martínez-Romero, Marcos; O'Connor, Martin J; Shankar, Ravi D; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L; Gevaert, Olivier; Graybeal, John; Musen, Mark A

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository.

  3. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations

    PubMed Central

    Martínez-Romero, Marcos; O’Connor, Martin J.; Shankar, Ravi D.; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L.; Gevaert, Olivier; Graybeal, John; Musen, Mark A.

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository. PMID:29854196

  4. DMTB: the magnetotactic bacteria database

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Lin, W.

    2012-12-01

    Magnetotactic bacteria (MTB) are of interest in biogeomagnetism, rock magnetism, microbiology, biomineralization, and advanced magnetic materials because of their ability to synthesize highly ordered intracellular nano-sized magnetic minerals, magnetite or greigite. Great strides in MTB studies have been made in the past few decades, and more than 600 articles concerning MTB have been published. These rapidly growing data are stimulating cross-disciplinary studies in such fields as biogeomagnetism. We have compiled the first online database for MTB, i.e., the Database of Magnetotactic Bacteria (DMTB, http://database.biomnsl.com). It contains useful information on 16S rRNA gene sequences, oligonucleotides, and magnetic properties of MTB, together with corresponding ecological metadata of the sampling sites. The 16S rRNA gene sequences are collected from the GenBank database, while all other data are collected from the scientific literature. Rock magnetic properties for both uncultivated and cultivated MTB species are also included. In the DMTB database, data are accessible through four main interfaces: Site Sort, Phylo Sort, Oligonucleotides, and Magnetic Properties. References in each entry serve as links to specific pages within public databases. The comprehensive online DMTB will provide a very useful data resource for researchers from various disciplines, e.g., microbiology, rock magnetism and paleomagnetism, biogeomagnetism, magnetic materials science and others.

  5. Harvesting NASA's Common Metadata Repository (CMR)

    NASA Technical Reports Server (NTRS)

    Shum, Dana; Durbin, Chris; Norton, James; Mitchell, Andrew

    2017-01-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers a robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge the CMR metadata into a partner's existing metadata repository. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.
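
    A harvest loop against the CMR search API might look like the sketch below. The base URL, the .umm_json response extension, and the page_size/page_num parameters follow the publicly documented CMR search interface as the author recalls it and should be verified against current CMR documentation; the provider identifier shown is a placeholder.

      # Sketch of paging through CMR collection metadata for later merging into a
      # partner repository. Endpoint and parameters reflect CMR's documented search
      # API as understood here; verify against the current documentation.
      import requests

      CMR_SEARCH = "https://cmr.earthdata.nasa.gov/search/collections.umm_json"

      def harvest_collections(provider, page_size=100, max_pages=5):
          """Yield collection metadata records for one provider, page by page."""
          for page_num in range(1, max_pages + 1):
              response = requests.get(
                  CMR_SEARCH,
                  params={"provider": provider, "page_size": page_size, "page_num": page_num},
                  timeout=30,
              )
              response.raise_for_status()
              items = response.json().get("items", [])
              if not items:
                  break
              yield from items

      if __name__ == "__main__":
          for record in harvest_collections("EXAMPLE_PROVIDER", page_size=10, max_pages=1):
              print(record.get("meta", {}).get("concept-id"))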

  6. Harvesting NASA's Common Metadata Repository

    NASA Astrophysics Data System (ADS)

    Shum, D.; Mitchell, A. E.; Durbin, C.; Norton, J.

    2017-12-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers a robust temporal, spatial and keyword search functionality to the general public and international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge the CMR metadata into a partner's existing metadata repository. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner has distinct metadata formats they are able to consume, the best practices will also include guidance on retrieving the metadata in the desired metadata format using CMR's Unified Metadata Model translation software.

  7. Understanding and Capturing People’s Mobile App Privacy Preferences

    DTIC Science & Technology

    2013-10-28

    The entire apps’ metadata takes up about 500MB of storage space when stored in a MySQL database and all the binary files take approximately 300GB of...functionality that can de- compile Dalvik bytecodes to Java source code faster than other de-compilers. Given the scale of the app analysis we planned on... java libraries, such as parser, sql connectors, etc Targeted Ads 137 admob, adwhirl, greystripe… Provided by mobile behavioral ads company to

  8. Shark: SQL and Analytics with Cost-Based Query Optimization on Coarse-Grained Distributed Memory

    DTIC Science & Technology

    2014-01-13

    RDBMS and contains a database (often MySQL or Derby) with a namespace for tables, table metadata and partition information. Table data is stored in an...serialization/deserialization) Java interface implementations with corresponding object inspectors. The Hive driver controls the processing of queries, coordinat...native API, RDD operations are invoked through a functional interface similar to DryadLINQ [32] in Scala, Java or Python. For example, the Scala code for

  9. Introducing the PRIDE Archive RESTful web services.

    PubMed

    Reisinger, Florian; del-Toro, Noemi; Ternent, Tobias; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2015-07-01

    The PRIDE (PRoteomics IDEntifications) database is one of the world-leading public repositories of mass spectrometry (MS)-based proteomics data and it is a founding member of the ProteomeXchange Consortium of proteomics resources. In the original PRIDE database system, users could access data programmatically by accessing the web services provided by the PRIDE BioMart interface. New REST (REpresentational State Transfer) web services have been developed to serve the most popular functionality provided by BioMart (now discontinued due to data scalability issues) and address the data access requirements of the newly developed PRIDE Archive. Using the API (Application Programming Interface) it is now possible to programmatically query for and retrieve peptide and protein identifications, project and assay metadata and the originally submitted files. Searching and filtering is also possible by metadata information, such as sample details (e.g. species and tissues), instrumentation (mass spectrometer), keywords and other provided annotations. The PRIDE Archive web services were first made available in April 2014. The API has already been adopted by a few applications and standalone tools such as PeptideShaker, PRIDE Inspector, the Unipept web application and the Python-based BioServices package. This application is free and open to all users with no login requirement and can be accessed at http://www.ebi.ac.uk/pride/ws/archive/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
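
    The programmatic access described above can be sketched with the requests library. The base URL is the one given in the abstract; the resource paths (/project/list and /project/{accession}) and query parameters are assumptions modelled on the described functionality and should be checked against the PRIDE Archive API documentation before use.

      # Sketch of querying the PRIDE Archive web services. The base URL comes from the
      # abstract; the resource paths and parameters are assumptions to verify against
      # the official API documentation.
      import requests

      BASE_URL = "http://www.ebi.ac.uk/pride/ws/archive"

      def search_projects(keyword, page_size=10):
          """Search project metadata by keyword (path and parameters assumed)."""
          response = requests.get(f"{BASE_URL}/project/list",
                                  params={"query": keyword, "show": page_size},
                                  timeout=30)
          response.raise_for_status()
          return response.json()

      def project_details(accession):
          """Retrieve metadata for a single project accession (path assumed)."""
          response = requests.get(f"{BASE_URL}/project/{accession}", timeout=30)
          response.raise_for_status()
          return response.json()

      if __name__ == "__main__":
          results = search_projects("human liver")
          for project in results.get("list", []):
              print(project.get("accession"), project.get("title"))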

  10. NASA's Global Change Master Directory: Discover and Access Earth Science Data Sets, Related Data Services, and Climate Diagnostics

    NASA Technical Reports Server (NTRS)

    Aleman, Alicia; Olsen, Lola; Ritz, Scott; Morahan, Michael; Cepero, Laurel; Stevens, Tyler

    2011-01-01

    NASA's Global Change Master Directory provides the scientific community with the ability to discover, access, and use Earth science data, data-related services, and climate diagnostics worldwide. The GCMD offers descriptions of Earth science data sets using the Directory Interchange Format (DIF) metadata standard; Earth science related data services are described using the Service Entry Resource Format (SERF); and climate visualizations are described using the Climate Diagnostic (CD) standard. The DIF, SERF and CD standards each capture data attributes used to determine whether a data set, service, or climate visualization is relevant to a user's needs. Metadata fields include: title, summary, science keywords, service keywords, data center, data set citation, personnel, instrument, platform, quality, related URL, temporal and spatial coverage, data resolution and distribution information. In addition, nine valuable sets of controlled vocabularies have been developed to assist users in normalizing the search for data descriptions. An update to the GCMD's search functionality is planned to further capitalize on the controlled vocabularies during database queries. By implementing a dynamic keyword "tree", users will have the ability to search for data sets by combining keywords in new ways. This will allow users to conduct more relevant and efficient database searches to support the free exchange and re-use of Earth science data. http://gcmd.nasa.gov/

  11. Metabolomics Workbench: An international repository for metabolomics data and metadata, metabolite standards, protocols, tutorials and training, and analysis tools.

    PubMed

    Sud, Manish; Fahy, Eoin; Cotter, Dawn; Azam, Kenan; Vadivelu, Ilango; Burant, Charles; Edison, Arthur; Fiehn, Oliver; Higashi, Richard; Nair, K Sreekumaran; Sumner, Susan; Subramaniam, Shankar

    2016-01-04

    The Metabolomics Workbench, available at www.metabolomicsworkbench.org, is a public repository for metabolomics metadata and experimental data spanning various species and experimental platforms, metabolite standards, metabolite structures, protocols, tutorials, and training material and other educational resources. It provides a computational platform to integrate, analyze, track, deposit and disseminate large volumes of heterogeneous data from a wide variety of metabolomics studies including mass spectrometry (MS) and nuclear magnetic resonance spectrometry (NMR) data spanning over 20 different species covering all the major taxonomic categories including humans and other mammals, plants, insects, invertebrates and microorganisms. Additionally, a number of protocols are provided for a range of metabolite classes, sample types, and both MS and NMR-based studies, along with a metabolite structure database. The metabolites characterized in the studies available on the Metabolomics Workbench are linked to chemical structures in the metabolite structure database to facilitate comparative analysis across studies. The Metabolomics Workbench, part of the data coordinating effort of the National Institute of Health (NIH) Common Fund's Metabolomics Program, provides data from the Common Fund's Metabolomics Resource Cores, metabolite standards, and analysis tools to the wider metabolomics community and seeks data depositions from metabolomics researchers across the world. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Event Recording Data Acquisition System and Experiment Data Management System for Neutron Experiments at MLF, J-PARC

    NASA Astrophysics Data System (ADS)

    Nakatani, T.; Inamura, Y.; Moriyama, K.; Ito, T.; Muto, S.; Otomo, T.

    Neutron scattering can be a powerful probe in the investigation of many phenomena in the materials and life sciences. The Materials and Life Science Experimental Facility (MLF) at the Japan Proton Accelerator Research Complex (J-PARC) is a leading center of experimental neutron science and boasts one of the most intense pulsed neutron sources in the world. The MLF currently has 18 experimental instruments in operation that support a wide variety of users from across a range of research fields. The instruments include optical elements, sample environment apparatus and detector systems that are controlled and monitored electronically throughout an experiment. Signals from these components and those from the neutron source are converted into a digital format by the data acquisition (DAQ) electronics and recorded as time-tagged event data in the DAQ computers using "DAQ-Middleware". Operating in event mode, the DAQ system produces extremely large data files (˜GB) under various measurement conditions. Simultaneously, the measurement meta-data indicating each measurement condition is recorded in XML format by the MLF control software framework "IROHA". These measurement event data and meta-data are collected in the MLF common storage and cataloged by the MLF Experimental Database (MLF EXP-DB) based on a commercial XML database. The system provides a web interface for users to manage and remotely analyze experimental data.

  13. Enhancing AstroInformatics and Science Discovery from Data in Journal Articles

    NASA Astrophysics Data System (ADS)

    Mazzarella, Joseph

    2011-05-01

    Traditional methods of publishing scientific data and metadata in journal articles are in need of major upgrades to reach the full potential of astronomical databases and astroinformatics techniques to facilitate semi-automated, and eventually autonomous, methods of science discovery. I will review a growing collaboration involving the NASA/IPAC Extragalactic Database (NED), the Astrophysics Data System (ADS), the Virtual Astronomical Observatory (VAO), the AAS Journals and IOP, and the Data Conservancy that is aimed toward transforming the methodology used to publish, capture and link data associated with astrophysics journal articles. We are planning a web-based workflow to assist astronomers during the publication of journal articles. The primary goals are to facilitate the application of structure and standards to (meta)data, reduce errors, remove ambiguities in the identification of astrophysical objects and regions of sky, capture and preserve the images and spectral data files used to make plots, and accelerate the ingestion of the data into relevant repositories, search engines and integration services. The outcome of this community wide effort will address a recent public policy mandate to publish scientific data in open formats to allow reproducibility of results and to facilitate new discoveries. Equally important, this work has the potential to usher in a new wave of science discovery based on seamless connectivity between data relationships that are continuously growing in size and complexity, and increasingly sophisticated data visualization and analysis applications.

  14. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies

    PubMed Central

    Cserhati, Matyas F.; Pandey, Sanjit; Beaudoin, James J.; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S.

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33 017 407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. Database URL: http://nntc-dcc.unmc.edu PMID:26228431

  15. A global approach to analysis and interpretation of metabolic data for plant natural product discovery.

    PubMed

    Hur, Manhoi; Campbell, Alexis Ann; Almeida-de-Macedo, Marcia; Li, Ling; Ransom, Nick; Jose, Adarsh; Crispin, Matt; Nikolau, Basil J; Wurtele, Eve Syrkin

    2013-04-01

    Discovering molecular components and their functionality is key to the development of hypotheses concerning the organization and regulation of metabolic networks. The iterative experimental testing of such hypotheses is the trajectory that can ultimately enable accurate computational modelling and prediction of metabolic outcomes. This information can be particularly important for understanding the biology of natural products, whose metabolism itself is often only poorly defined. Here, we describe factors that must be in place to optimize the use of metabolomics in predictive biology. A key to achieving this vision is a collection of accurate time-resolved and spatially defined metabolite abundance data and associated metadata. One formidable challenge associated with metabolite profiling is the complexity and analytical limits associated with comprehensively determining the metabolome of an organism. Further, for metabolomics data to be efficiently used by the research community, it must be curated in publicly available metabolomics databases. Such databases require clear, consistent formats, easy access to data and metadata, data download, and accessible computational tools to integrate genome system-scale datasets. Although transcriptomics and proteomics integrate the linear predictive power of the genome, the metabolome represents the nonlinear, final biochemical products of the genome, which results from the intricate system(s) that regulate genome expression. For example, the relationship of metabolomics data to the metabolic network is confounded by redundant connections between metabolites and gene-products. However, connections among metabolites are predictable through the rules of chemistry. Therefore, enhancing the ability to integrate the metabolome with anchor-points in the transcriptome and proteome will enhance the predictive power of genomics data. We detail a public database repository for metabolomics, tools and approaches for statistical analysis of metabolomics data, and methods for integrating these datasets with transcriptomic data to create hypotheses concerning specialized metabolisms that generate the diversity in natural product chemistry. We discuss the importance of close collaborations among biologists, chemists, computer scientists and statisticians throughout the development of such integrated metabolism-centric databases and software.

  16. A global approach to analysis and interpretation of metabolic data for plant natural product discovery†

    PubMed Central

    Hur, Manhoi; Campbell, Alexis Ann; Almeida-de-Macedo, Marcia; Li, Ling; Ransom, Nick; Jose, Adarsh; Crispin, Matt; Nikolau, Basil J.

    2013-01-01

    Discovering molecular components and their functionality is key to the development of hypotheses concerning the organization and regulation of metabolic networks. The iterative experimental testing of such hypotheses is the trajectory that can ultimately enable accurate computational modelling and prediction of metabolic outcomes. This information can be particularly important for understanding the biology of natural products, whose metabolism itself is often only poorly defined. Here, we describe factors that must be in place to optimize the use of metabolomics in predictive biology. A key to achieving this vision is a collection of accurate time-resolved and spatially defined metabolite abundance data and associated metadata. One formidable challenge associated with metabolite profiling is the complexity and analytical limits associated with comprehensively determining the metabolome of an organism. Further, for metabolomics data to be efficiently used by the research community, it must be curated in publicly available metabolomics databases. Such databases require clear, consistent formats, easy access to data and metadata, data download, and accessible computational tools to integrate genome system-scale datasets. Although transcriptomics and proteomics integrate the linear predictive power of the genome, the metabolome represents the nonlinear, final biochemical products of the genome, which results from the intricate system(s) that regulate genome expression. For example, the relationship of metabolomics data to the metabolic network is confounded by redundant connections between metabolites and gene-products. However, connections among metabolites are predictable through the rules of chemistry. Therefore, enhancing the ability to integrate the metabolome with anchor-points in the transcriptome and proteome will enhance the predictive power of genomics data. We detail a public database repository for metabolomics, tools and approaches for statistical analysis of metabolomics data, and methods for integrating these datasets with transcriptomic data to create hypotheses concerning specialized metabolism that generates the diversity in natural product chemistry. We discuss the importance of close collaborations among biologists, chemists, computer scientists and statisticians throughout the development of such integrated metabolism-centric databases and software. PMID:23447050

  17. Cronobacter, the emergent bacterial pathogen Enterobacter sakazakii comes of age; MLST and whole genome sequence analysis.

    PubMed

    Forsythe, Stephen J; Dickins, Benjamin; Jolley, Keith A

    2014-12-16

    Following the association of Cronobacter spp. with several publicized fatal outbreaks of meningitis and necrotising enterocolitis in neonatal intensive care units, the World Health Organization (WHO) in 2004 requested the establishment of a molecular typing scheme to enable the international control of the organism. This paper presents the application of Next Generation Sequencing (NGS) to Cronobacter, which has led to the establishment of the Cronobacter PubMLST genome and sequence definition database (http://pubmlst.org/cronobacter/) containing over 1000 isolates with metadata, along with the recognition of specific clonal lineages linked to neonatal meningitis and adult infections. Whole genome sequencing and multilocus sequence typing (MLST) support the formal recognition of the genus Cronobacter, composed of seven species, to replace the former single species Enterobacter sakazakii. Applying the 7-loci MLST scheme to 1007 strains revealed 298 definable sequence types, yet only C. sakazakii clonal complex 4 (CC4) was principally associated with neonatal meningitis. This clonal lineage has been confirmed using ribosomal-MLST (51 loci) and whole genome-MLST (1865 loci) to analyse 107 whole genomes via the Cronobacter PubMLST database. This database has enabled the retrospective analysis of historic cases and outbreaks following re-identification of those strains. The Cronobacter PubMLST database offers a central, open-access, reliable sequence-based repository for researchers. It has the capacity to create new analysis schemes 'on the fly' and to integrate metadata (source, geographic distribution, clinical presentation). It is also expandable and adaptable to changes in taxonomy, and able to support the development of reliable detection methods of use to industry and regulatory authorities. Therefore it meets the WHO (2004) request for the establishment of a typing scheme for this emergent bacterial pathogen. Whole genome sequencing has additionally revealed a range of potential virulence and environmental fitness traits which may account for the pathogenicity of C. sakazakii CC4 and its propensity for neonatal CNS infections.

  18. Curating and Preserving the Big Canopy Database System: an Active Curation Approach using SEAD

    NASA Astrophysics Data System (ADS)

    Myers, J.; Cushing, J. B.; Lynn, P.; Weiner, N.; Ovchinnikova, A.; Nadkarni, N.; McIntosh, A.

    2015-12-01

    Modern research is increasingly dependent upon highly heterogeneous data and on the associated cyberinfrastructure developed to organize, analyze, and visualize that data. However, due to the complexity and custom nature of such combined data-software systems, it can be very challenging to curate and preserve them for the long term at reasonable cost and in a way that retains their scientific value. In this presentation, we describe how this challenge was met in preserving the Big Canopy Database (CanopyDB) system using an agile approach and leveraging the Sustainable Environment - Actionable Data (SEAD) DataNet project's hosted data services. The CanopyDB system was developed over more than a decade at Evergreen State College to address the needs of forest canopy researchers. It is an early yet sophisticated exemplar of the type of system that has become common in biological research and science in general, including multiple relational databases for different experiments, a custom database generation tool used to create them, an image repository, and desktop and web tools to access, analyze, and visualize this data. SEAD provides secure project spaces with a semantic content abstraction (typed content with arbitrary RDF metadata statements and relationships to other content), combined with a standards-based curation and publication pipeline resulting in packaged research objects with Digital Object Identifiers. Using SEAD, our cross-project team was able to incrementally ingest CanopyDB components (images, datasets, software source code, documentation, executables, and virtualized services) and to iteratively define and extend the metadata and relationships needed to document them. We believe that both the process, and the richness of the resultant standards-based (OAI-ORE) preservation object, hold lessons for the development of best-practice solutions for preserving scientific data in association with the tools and services needed to derive value from it.

  19. A TEX86 surface sediment database and extended Bayesian calibration

    NASA Astrophysics Data System (ADS)

    Tierney, Jessica E.; Tingley, Martin P.

    2015-06-01

    Quantitative estimates of past temperature changes are a cornerstone of paleoclimatology. For a number of marine sediment-based proxies, the accuracy and precision of past temperature reconstructions depends on a spatial calibration of modern surface sediment measurements to overlying water temperatures. Here, we present a database of 1095 surface sediment measurements of TEX86, a temperature proxy based on the relative cyclization of marine archaeal glycerol dialkyl glycerol tetraether (GDGT) lipids. The dataset is archived in a machine-readable format with geospatial information, fractional abundances of lipids (if available), and metadata. We use this new database to update surface and subsurface temperature calibration models for TEX86 and demonstrate the applicability of the TEX86 proxy to past temperature prediction. The TEX86 database confirms that surface sediment GDGT distribution has a strong relationship to temperature, which accounts for over 70% of the variance in the data. Future efforts, made possible by the data presented here, will seek to identify variables with secondary relationships to GDGT distributions, such as archaeal community composition.

  20. Interactive Multi-Instrument Database of Solar Flares

    NASA Technical Reports Server (NTRS)

    Ranjan, Shubha S.; Spaulding, Ryan; Deardorff, Donald G.

    2018-01-01

    The fundamental motivation of the project is that the scientific output of solar research can be greatly enhanced by better exploitation of the existing solar/heliosphere space-data products jointly with ground-based observations. Our primary focus is on developing a specific innovative methodology based on recent advances in "big data" intelligent databases applied to the growing amount of high-spatial and multi-wavelength resolution, high-cadence data from NASA's missions and supporting ground-based observatories. Our flare database is not simply a manually searchable time-based catalog of events or list of web links pointing to data. It is a preprocessed metadata repository enabling fast search and automatic identification of all recorded flares sharing a specifiable set of characteristics, features, and parameters. The result is a new and unique database of solar flares and data search and classification tools for the Heliophysics community, enabling multi-instrument/multi-wavelength investigations of flare physics and supporting further development of flare-prediction methodologies.

  1. Designing a multi-petabyte database for LSST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becla, J; Hanushevsky, A

    2005-12-21

    The 3.2 giga-pixel LSST camera will produce over half a petabyte of raw images every month. This data needs to be reduced in under a minute to produce real-time transient alerts, and then cataloged and indexed to allow efficient access and simplify further analysis. The indexed catalogs alone are expected to grow at a speed of about 600 terabytes per year. The sheer volume of data, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require cutting-edge techniques to build an efficient data access system at reasonable cost. As currently envisioned, the system will rely on a database for catalogs and metadata. Several database systems are being evaluated to understand how they will scale and perform at these data volumes in anticipated LSST access patterns. This paper describes the LSST requirements, the challenges they impose, the data access philosophy, and the database architecture that is expected to be adopted in order to meet the data challenges.

  2. Simplified Metadata Curation via the Metadata Management Tool

    NASA Astrophysics Data System (ADS)

    Shum, D.; Pilone, D.

    2015-12-01

    The Metadata Management Tool (MMT) is the newest capability developed as part of NASA Earth Observing System Data and Information System's (EOSDIS) efforts to simplify metadata creation and improve metadata quality. The MMT was developed via an agile methodology, taking into account inputs from GCMD's science coordinators and other end-users. In its initial release, the MMT uses the Unified Metadata Model for Collections (UMM-C) to allow metadata providers to easily create and update collection records in the ISO-19115 format. Through a simplified UI experience, metadata curators can create and edit collections without full knowledge of the NASA Best Practices implementation of ISO-19115 format, while still generating compliant metadata. More experienced users are also able to access raw metadata to build more complex records as needed. In future releases, the MMT will build upon recent work done in the community to assess metadata quality and compliance with a variety of standards through application of metadata rubrics. The tool will provide users with clear guidance on how to change their metadata to improve its quality and compliance. Through these features, the MMT allows data providers to create and maintain compliant, high-quality metadata in a short amount of time.

  3. Enriched Video Semantic Metadata: Authorization, Integration, and Presentation.

    ERIC Educational Resources Information Center

    Mu, Xiangming; Marchionini, Gary

    2003-01-01

    Presents an enriched video metadata framework including video authorization using the Video Annotation and Summarization Tool (VAST), a video metadata authorization system that integrates both semantic and visual metadata; metadata integration; and user-level applications. Results demonstrated that the enriched metadata were seamlessly…

  4. Abandoned Uranium Mines (AUM) Site Screening Map Service, 2016, US EPA Region 9

    EPA Pesticide Factsheets

    As described in detail in the Five-Year Report, US EPA completed on-the-ground screening of 521 abandoned uranium mine areas. US EPA and the Navajo EPA are using the Comprehensive Database and Atlas to determine which mines should be cleaned up first. US EPA continues to research and identify Potentially Responsible Parties (PRPs) under Superfund to contribute to the costs of cleanup efforts. This US EPA Region 9 web service contains the following map layers: Abandoned Uranium Mines, Priority Mines, Tronox Mines, Navajo Environmental Response Trust Mines, Mines with Enforcement Actions, Superfund AUM Regions, Navajo Nation Administrative Boundaries and Chapter Houses. Mine points have a maximum scale of 1:220,000, while Mine polygons have a minimum scale of 1:220,000. Chapter houses have a minimum scale of 1:200,000. BLM Land Status has a minimum scale of 1:150,000. Full FGDC metadata records for each layer can be found by clicking the layer name at the web service endpoint and viewing the layer description. Data used to create this web service are available for download at https://edg.epa.gov/metadata/catalog/data/data.page. Security Classification: Public. Access Constraints: None. Use Constraints: None. Please check sources, scale, accuracy, currentness and other available information. Please confirm that you are using the most recent copy of both data and metadata. Acknowledgement of the EPA would be appreciated.

  5. The GOLM-database standard- a framework for time-series data management based on free software

    NASA Astrophysics Data System (ADS)

    Eichler, M.; Francke, T.; Kneis, D.; Reusser, D.

    2009-04-01

    Monitoring and modelling projects usually involve time series data originating from different sources. File formats, temporal resolution and metadata documentation rarely adhere to a common standard. As a result, much effort is spent on converting, harmonizing, merging, checking, resampling and reformatting these data. Moreover, in work groups or during the course of time, these tasks tend to be carried out redundantly and repeatedly, especially when new data becomes available. The resulting duplication of data in various formats consumes additional resources. We propose a database structure and complementary scripts for facilitating these tasks. The GOLM (General Observation and Location Management) framework allows for import and storage of time series data of different types while assisting in metadata documentation, plausibility checking and harmonization. The imported data can be visually inspected and its coverage among locations and variables may be visualized. Supplementing scripts provide options for data export for selected stations and variables and resampling of the data to the desired temporal resolution. These tools can, for example, be used for generating model input files or reports. Since GOLM fully supports network access, the system can be used efficiently by distributed working groups accessing the same data over the internet. GOLM's database structure and the complementary scripts can easily be customized to specific needs. All involved software, such as MySQL, R, PHP and OpenOffice, as well as the scripts for building and using the database, including documentation, are free for download. GOLM was developed out of the practical requirements of the OPAQUE project. It has been tested and further refined in the ERANET-CRUE and SESAM projects, all of which used GOLM to manage meteorological, hydrological and/or water quality data.
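
    The harmonization and resampling tasks described above translate naturally into a short script. The following is only a conceptual sketch with invented series at different resolutions; GOLM itself is built on MySQL, R and PHP, so this pandas example mirrors the idea rather than the actual implementation.

    ```python
    import pandas as pd

    # Two hypothetical raw series with different temporal resolutions,
    # as they might arrive from different loggers.
    precip = pd.Series(
        [0.0, 1.2, 0.4, 0.0],
        index=pd.date_range("2009-04-01 00:00", periods=4, freq="15min"),
        name="precip_mm",
    )
    temp = pd.Series(
        [4.1, 4.3, 4.0],
        index=pd.date_range("2009-04-01 00:00", periods=3, freq="30min"),
        name="temp_c",
    )

    # Harmonize to a common hourly resolution: sum precipitation, average temperature.
    hourly = pd.concat(
        [precip.resample("1h").sum(), temp.resample("1h").mean()], axis=1
    )

    # Simple plausibility (limit) check of the kind a QC step might apply.
    hourly["temp_ok"] = hourly["temp_c"].between(-60, 60)
    print(hourly)
    ```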

  6. Forensic Tools to Track and Connect Physical Samples to Related Data

    NASA Astrophysics Data System (ADS)

    Molineux, A.; Thompson, A. C.; Baumgardner, R. W.

    2016-12-01

    Identifiers, such as local sample numbers, are critical to successfully connecting physical samples and related data. However, identifiers must be globally unique. The International Geo Sample Number (IGSN) generated when registering the sample in the System for Earth Sample Registration (SESAR) provides a globally unique alphanumeric code associated with basic metadata, related samples and their current physical storage location. When registered samples are published, users can link the figured samples to the basic metadata held at SESAR. The use cases we discuss include plant specimens from a Permian core, Holocene corals and derived powders, and thin sections with SEM stubs. Much of this material is now published. The plant taxonomic study from the core is a digital pdf and samples can be directly linked from the captions to the SESAR record. The study of stable isotopes from the corals is not yet digitally available, but individual samples are accessible. Full data and media records for both studies are located in our database where higher quality images, field notes, and section diagrams may exist. Georeferences permit mapping in current and deep time plate configurations. Several aspects emerged during this study. First, ensure adequate and consistent details are registered with SESAR. Second, educate and encourage the researcher to obtain IGSNs. Third, publish the archive numbers, assigned prior to publication, alongside the IGSN. This provides access to further data through an Integrated Publishing Toolkit (IPT), aggregators, or online repository databases, thus placing the initial sample in a much richer context for future studies. Fourth, encourage software developers to customize community software to extract data from a database and use it to register samples in bulk. This would improve workflow and provide a path for registration of large legacy collections.

  7. BrainMap VBM: An environment for structural meta-analysis.

    PubMed

    Vanasse, Thomas J; Fox, P Mickle; Barron, Daniel S; Robertson, Michaela; Eickhoff, Simon B; Lancaster, Jack L; Fox, Peter T

    2018-05-02

    The BrainMap database is a community resource that curates peer-reviewed, coordinate-based human neuroimaging literature. By pairing the results of neuroimaging studies with their relevant meta-data, BrainMap facilitates coordinate-based meta-analysis (CBMA) of the neuroimaging literature en masse or at the level of experimental paradigm, clinical disease, or anatomic location. Initially dedicated to the functional, task-activation literature, BrainMap is now expanding to include voxel-based morphometry (VBM) studies in a separate sector, titled: BrainMap VBM. VBM is a whole-brain, voxel-wise method that measures significant structural differences between or within groups which are reported as standardized, peak x-y-z coordinates. Here we describe BrainMap VBM, including the meta-data structure, current data volume, and automated reverse inference functions (region-to-disease profile) of this new community resource. CBMA offers a robust methodology for retaining true-positive and excluding false-positive findings across studies in the VBM literature. As with BrainMap's functional database, BrainMap VBM may be synthesized en masse or at the level of clinical disease or anatomic location. As a use-case scenario for BrainMap VBM, we illustrate a trans-diagnostic data-mining procedure wherein we explore the underlying network structure of 2,002 experiments representing over 53,000 subjects through independent components analysis (ICA). To reduce data-redundancy effects inherent to any database, we demonstrate two data-filtering approaches that proved helpful to ICA. Finally, we apply hierarchical clustering analysis (HCA) to measure network- and disease-specificity. This procedure distinguished psychiatric from neurological diseases. We invite the neuroscientific community to further exploit BrainMap VBM with other modeling approaches. © 2018 Wiley Periodicals, Inc.
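
    Purely as an illustration of the modelling steps named above (independent components analysis followed by hierarchical clustering), the sketch below runs scikit-learn's FastICA and SciPy's hierarchical clustering on a simulated experiments-by-voxels matrix. It is not BrainMap's pipeline; the matrix, component count and cluster count are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(0)

    # Build a low-rank, non-Gaussian toy "experiments x voxels" matrix so that
    # ICA has structure to find (the real analysis used 2,002 VBM experiments).
    sources = rng.laplace(size=(200, 10))
    mixing = rng.normal(size=(10, 500))
    X = sources @ mixing + 0.1 * rng.normal(size=(200, 500))

    # Decompose into independent spatial components.
    ica = FastICA(n_components=10, random_state=0)
    loadings = ica.fit_transform(X)        # per-experiment component loadings

    # Hierarchical clustering of experiments by their loadings, loosely
    # analogous to measuring network- and disease-specificity.
    Z = linkage(loadings, method="ward")
    labels = fcluster(Z, t=4, criterion="maxclust")
    print("experiments per cluster:", np.bincount(labels)[1:])
    ```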

  8. Making geospatial data in ASF archive readily accessible

    NASA Astrophysics Data System (ADS)

    Gens, R.; Hogenson, K.; Wolf, V. G.; Drew, L.; Stern, T.; Stoner, M.; Shapran, M.

    2015-12-01

    The way geospatial data is searched, managed, processed and used has changed significantly in recent years. A data archive such as the one at the Alaska Satellite Facility (ASF), one of NASA's twelve interlinked Distributed Active Archive Centers (DAACs), used to be searched solely via user interfaces that were specifically developed for its particular archive and data sets. ASF then moved to using an application programming interface (API) that defined a set of routines, protocols, and tools for distributing the geospatial information stored in the database in real time. This provided more flexible access to the geospatial data. Yet it was up to the user to develop the tools to get more tailored access to the data they needed. We present two new approaches for serving data to users. In response to the recent Nepal earthquake we developed a data feed for distributing ESA's Sentinel data. Users can subscribe to the data feed and are provided with the relevant metadata the moment a new data set is available for download. The second approach was an Open Geospatial Consortium (OGC) web feature service (WFS). The WFS hosts the metadata along with a direct link from which the data can be downloaded. It uses the open-source GeoServer software (Youngblood and Iacovella, 2013) and provides an interface to include the geospatial information in the archive directly into the user's geographic information system (GIS) as an additional data layer. Both services are run on top of a geospatial PostGIS database, an open-source geographic extension for the PostgreSQL object-relational database (Marquez, 2015). Marquez, A., 2015. PostGIS essentials. Packt Publishing, 198 p. Youngblood, B. and Iacovella, S., 2013. GeoServer Beginner's Guide, Packt Publishing, 350 p.
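
    As a hedged sketch of how a client might consume a WFS of this kind, the snippet below issues a standard OGC WFS 2.0 GetFeature request and reads metadata properties from the returned GeoJSON. The endpoint URL, layer name, bounding box and property names are placeholders, not ASF's actual service.

    ```python
    import requests

    # Placeholder endpoint and feature type -- not the actual ASF service.
    WFS_URL = "https://example.org/geoserver/wfs"

    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": "asf:granule_metadata",        # hypothetical layer name
        "outputFormat": "application/json",
        "bbox": "26.0,84.0,29.0,88.0,EPSG:4326",    # hypothetical region of interest
        "count": 10,
    }

    resp = requests.get(WFS_URL, params=params, timeout=30)
    resp.raise_for_status()

    for feature in resp.json().get("features", []):
        props = feature["properties"]
        # A record would typically carry acquisition metadata plus a download link.
        print(props.get("granule_name"), props.get("download_url"))
    ```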

  9. PHENOPSIS DB: an Information System for Arabidopsis thaliana phenotypic data in an environmental context

    PubMed Central

    2011-01-01

    Background Renewed interest in plant × environment interactions has risen in the post-genomic era. In this context, high-throughput phenotyping platforms have been developed to create reproducible environmental scenarios in which the phenotypic responses of multiple genotypes can be analysed. These platforms benefit hugely from the development of suitable databases for storage, sharing and analysis of the large amount of data collected. In the model plant Arabidopsis thaliana, most databases available to the scientific community contain data related to genetic and molecular biology and are characterised by inadequate description of plant developmental stages and experimental metadata such as environmental conditions. Our goal was to develop a comprehensive information system for sharing of the data collected in PHENOPSIS, an automated platform for Arabidopsis thaliana phenotyping, with the scientific community. Description PHENOPSIS DB is a publicly available (URL: http://bioweb.supagro.inra.fr/phenopsis/) information system developed for storage, browsing and sharing of online data generated by the PHENOPSIS platform and offline data collected by experimenters and experimental metadata. It provides modules coupled to a Web interface for (i) the visualisation of environmental data of an experiment, (ii) the visualisation and statistical analysis of phenotypic data, and (iii) the analysis of Arabidopsis thaliana plant images. Conclusions Firstly, data stored in the PHENOPSIS DB are of interest to the Arabidopsis thaliana community, particularly in allowing phenotypic meta-analyses directly linked to environmental conditions on which publications are still scarce. Secondly, data or image analysis modules can be downloaded from the Web interface for direct usage or as the basis for modifications according to new requirements. Finally, the structure of PHENOPSIS DB provides a useful template for the development of other similar databases related to genotype × environment interactions. PMID:21554668

  10. A data skimming service for locally resident analysis data

    NASA Astrophysics Data System (ADS)

    Cranshaw, J.; Gardner, R. W.; Gieraltowski, J.; Malon, D.; Mambelli, M.; May, E.

    2008-07-01

    A Data Skimming Service (DSS) is a site-level service for rapid event filtering and selection from locally resident datasets based on metadata queries to associated 'tag' databases. In US ATLAS, we expect most if not all of the AOD-based datasets to be replicated to each of the five Tier 2 regional facilities in the US Tier 1 'cloud' coordinated by Brookhaven National Laboratory. Entire datasets will consist of on the order of several terabytes of data, and providing easy, quick access to skimmed subsets of these data will be vital to physics working groups. Typically, physicists will be interested in portions of the complete datasets, selected according to event-level attributes (number of jets, missing Et, etc) and content (specific analysis objects for subsequent processing). In this paper we describe methods used to classify data (metadata tag generation) and to store these results in a local database. Next we discuss a general framework which includes methods for accessing this information, defining skims, specifying event output content, accessing locally available storage through a variety of interfaces (SRM, dCache/dccp, gridftp), accessing remote storage elements as specified, and user job submission tools through local or grid schedulers. The advantages of the DSS are the ability to quickly 'browse' datasets and design skims, for example, pre-adjusting cuts to get to a desired skim level with minimal use of compute resources, and to encode these analysis operations in a database for re-analysis and archival purposes. Additionally the framework has provisions to operate autonomously in the event that external, central resources are not available, and to provide, as a reduced package, a minimal skimming service tailored to the needs of small Tier 3 centres or individual users.
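
    The skim-definition idea, a metadata query over event-level tags that identifies which events and files need to be read, can be sketched with a small relational example. The schema, attribute names and values below are illustrative only and do not reproduce the ATLAS TAG schema.

    ```python
    import sqlite3

    # In-memory stand-in for a site-local tag database.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        """CREATE TABLE event_tags (
               run INTEGER,
               event INTEGER,
               n_jets INTEGER,
               missing_et REAL,
               file_guid TEXT
           )"""
    )
    conn.executemany(
        "INSERT INTO event_tags VALUES (?, ?, ?, ?, ?)",
        [
            (2077, 1, 2, 35.0, "guid-a"),
            (2077, 2, 4, 82.5, "guid-a"),
            (2078, 1, 3, 120.0, "guid-b"),
        ],
    )

    # A "skim" is then just a metadata query: pick events with >= 3 jets and
    # missing Et above 50, and find which files must actually be read.
    rows = conn.execute(
        """SELECT run, event, file_guid
             FROM event_tags
            WHERE n_jets >= 3 AND missing_et > 50
            ORDER BY run, event"""
    ).fetchall()
    for run, event, guid in rows:
        print(run, event, guid)
    ```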

  11. Array Databases: Agile Analytics (not just) for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    Baumann, P.; Misev, D.

    2015-12-01

    Gridded data, such as images, image timeseries, and climate datacubes, today are managed separately from the metadata, and with different, restricted retrieval capabilities. While databases are good at metadata modelled in tables, XML hierarchies, or RDF graphs, they traditionally do not support multi-dimensional arrays. This gap is being closed by Array Databases, pioneered by the scalable rasdaman ("raster data manager") array engine. Its declarative query language, rasql, extends SQL with array operators which are optimized and parallelized on the server side. Installations can easily be mashed up securely, thereby enabling large-scale location-transparent query processing in federations. Domain experts value the integration with their commonly used tools, leading to a quick learning curve. Earth, Space, and Life sciences, but also Social sciences as well as business have massive amounts of data and complex analysis challenges that are answered by rasdaman. As of today, rasdaman is mature and in operational use on hundreds of Terabytes of timeseries datacubes, with transparent query distribution across more than 1,000 nodes. Additionally, its concepts have shaped international Big Data standards in the field, including the forthcoming array extension to ISO SQL, many of which are meanwhile supported by both open-source and commercial systems. In the geo field, rasdaman is the reference implementation for the Open Geospatial Consortium (OGC) Big Data standard, WCS, now also under adoption by ISO. Further, rasdaman is in the final stage of OSGeo incubation. In this contribution we present array queries a la rasdaman, describe the architecture and novel optimization and parallelization techniques introduced in 2015, and put this in context of the intercontinental EarthServer initiative which utilizes rasdaman for enabling agile analytics on Petascale datacubes.
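
    A minimal sketch of the server-side query idea: the snippet below submits an OGC WCPS query (the web-service counterpart of rasql) to a rasdaman-style endpoint and saves the encoded result, so only the small answer, not the datacube, crosses the network. The endpoint URL, coverage name and axis labels are assumptions for illustration.

    ```python
    import requests

    # Placeholder WCPS endpoint and coverage name; the server evaluates the
    # query and returns only the (small) encoded result.
    ENDPOINT = "https://example.org/rasdaman/ows"

    wcps_query = """
    for $c in (AvgTemperatureCube)
    return encode(
        $c[Lat(45:55), Long(5:15), ansi("2014-01-01":"2014-12-31")],
        "netcdf")
    """

    resp = requests.get(
        ENDPOINT,
        params={
            "service": "WCS",
            "version": "2.0.1",
            "request": "ProcessCoverages",
            "query": wcps_query,
        },
        timeout=60,
    )
    resp.raise_for_status()
    with open("subset.nc", "wb") as f:
        f.write(resp.content)
    ```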

  12. A central database for the Global Terrestrial Network for Permafrost (GTN-P)

    NASA Astrophysics Data System (ADS)

    Elger, Kirsten; Lanckman, Jean-Pierre; Lantuit, Hugues; Karlsson, Ævar Karl; Johannsson, Halldór

    2013-04-01

    The Global Terrestrial Network for Permafrost (GTN-P) is the primary international observing network for permafrost sponsored by the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS), and managed by the International Permafrost Association (IPA). It monitors the Essential Climate Variable (ECV) permafrost that consists of permafrost temperature and active-layer thickness, with the long-term goal of obtaining a comprehensive view of the spatial structure, trends, and variability of changes in the active layer and permafrost. The network's two international monitoring components are (1) CALM (Circumpolar Active Layer Monitoring) and (2) the Thermal State of Permafrost (TSP), which is made up of an extensive borehole network covering all permafrost regions. Both programs have been thoroughly overhauled during the International Polar Year 2007-2008 and extended their coverage to provide a truly circumpolar network stretching over both hemispheres. GTN-P has gained considerable visibility in the science community in providing the baseline against which models are globally validated and incorporated in climate assessments. Yet it has until now been operated on a voluntary basis, and is now being redesigned to meet the increasing expectations from the science community. To update the network's objectives and deliver the best possible products to the community, the IPA organized a workshop to define the user's needs and requirements for the production, archival, storage and dissemination of the permafrost data products it manages. From the beginning, GTN-P data was governed by an open data policy with free data access via the World Wide Web. The existing data, however, is far from homogeneous: it is not yet optimized for databases, there is no framework for data reporting or archival and data documentation is incomplete. As a result, and despite the utmost relevance of permafrost in the Earth's climate system, the data has not been used by as many researchers as intended by the initiators of these global programs. The European Union project PAGE21 created opportunities to develop this central database for GTN-P data during the duration of the project and beyond. The database aims to be the one location where the researcher can find data, metadata and information for all relevant parameters for a specific site. Each component of the Data Management System (DMS), including parameters, data levels and metadata formats, was developed in cooperation with GTN-P and the IPA. The general framework of the GTN-P DMS is based on an object-oriented model (OOM) and implemented into a spatial database. To ensure interoperability and enable potential inter-database search, field names follow international metadata standards. The outputs of the DMS will be tailored to the needs of the modeling community but also to those of other stakeholders. In particular, new products will be developed in partnership with the IPA and other relevant international organizations to raise awareness on permafrost in the policy-making arena. The DMS will be released to a broader public in May 2013 and we expect to have the first active data upload - via an online interface - after 2013's summer field season.

  13. The DNA Data Bank of Japan launches a new resource, the DDBJ Omics Archive of functional genomics experiments.

    PubMed

    Kodama, Yuichi; Mashima, Jun; Kaminuma, Eli; Gojobori, Takashi; Ogasawara, Osamu; Takagi, Toshihisa; Okubo, Kousaku; Nakamura, Yasukazu

    2012-01-01

    The DNA Data Bank of Japan (DDBJ; http://www.ddbj.nig.ac.jp) maintains and provides archival, retrieval and analytical resources for biological information. The central DDBJ resource consists of public, open-access nucleotide sequence databases including raw sequence reads, assembly information and functional annotation. Database content is exchanged with EBI and NCBI within the framework of the International Nucleotide Sequence Database Collaboration (INSDC). In 2011, DDBJ launched two new resources: the 'DDBJ Omics Archive' (DOR; http://trace.ddbj.nig.ac.jp/dor) and BioProject (http://trace.ddbj.nig.ac.jp/bioproject). DOR is an archival database of functional genomics data generated by microarray and highly parallel new generation sequencers. Data are exchanged between the ArrayExpress at EBI and DOR in the common MAGE-TAB format. BioProject provides an organizational framework to access metadata about research projects and the data from the projects that are deposited into different databases. In this article, we describe major changes and improvements introduced to the DDBJ services, and the launch of two new resources: DOR and BioProject.

  14. A geospatial database model for the management of remote sensing datasets at multiple spectral, spatial, and temporal scales

    NASA Astrophysics Data System (ADS)

    Ifimov, Gabriela; Pigeau, Grace; Arroyo-Mora, J. Pablo; Soffer, Raymond; Leblanc, George

    2017-10-01

    In this study, the development and implementation of a geospatial database model for the management of multiscale datasets encompassing airborne imagery and associated metadata are presented. To develop the multi-source geospatial database we have used a Relational Database Management System (RDBMS) on a Structured Query Language (SQL) server which was then integrated into ArcGIS and implemented as a geodatabase. The acquired datasets were compiled, standardized, and integrated into the RDBMS, where logical associations between different types of information were linked (e.g. location, date, and instrument). Airborne data, at different processing levels (digital numbers through geocorrected reflectance), were implemented in the geospatial database where the datasets are linked spatially and temporally. An example dataset consisting of airborne hyperspectral imagery, collected for inter- and intra-annual vegetation characterization and detection of potential hydrocarbon seepage events over pipeline areas, is presented. Our work provides a model for the management of airborne imagery, which is a challenging aspect of data management in remote sensing, especially when large volumes of data are collected.
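
    A minimal relational sketch of the linkage described (location, date and instrument tied to image products at different processing levels) is given below. The table and column names are invented, and the actual system used an SQL Server RDBMS integrated with an ArcGIS geodatabase rather than SQLite.

    ```python
    import sqlite3

    # Illustrative schema only: flights, instruments and image products
    # linked by date, site and instrument.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE instrument (
        instrument_id INTEGER PRIMARY KEY,
        name TEXT
    );
    CREATE TABLE flight (
        flight_id INTEGER PRIMARY KEY,
        flight_date TEXT,
        site_name TEXT,
        instrument_id INTEGER REFERENCES instrument(instrument_id)
    );
    CREATE TABLE image_product (
        product_id INTEGER PRIMARY KEY,
        flight_id INTEGER REFERENCES flight(flight_id),
        processing_level TEXT,     -- e.g. raw digital numbers vs. geocorrected reflectance
        file_path TEXT
    );
    """)
    conn.execute("INSERT INTO instrument VALUES (1, 'hyperspectral sensor')")
    conn.execute("INSERT INTO flight VALUES (10, '2016-07-15', 'pipeline corridor A', 1)")
    conn.execute("INSERT INTO image_product VALUES (100, 10, 'L2 reflectance', '/data/L2/f10.h5')")

    # Query all products for one site, joined back to flight and instrument metadata.
    for row in conn.execute("""
            SELECT f.flight_date, i.name, p.processing_level, p.file_path
              FROM image_product p
              JOIN flight f      ON p.flight_id = f.flight_id
              JOIN instrument i  ON f.instrument_id = i.instrument_id
             WHERE f.site_name = 'pipeline corridor A'"""):
        print(row)
    ```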

  15. The CMS DBS query language

    NASA Astrophysics Data System (ADS)

    Kuznetsov, Valentin; Riley, Daniel; Afaq, Anzar; Sekhri, Vijay; Guo, Yuyi; Lueking, Lee

    2010-04-01

    The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this, we have added a generalized query system in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to the underlying database. We will describe the design of the query system, provide details of the language components, and give an overview of how this component fits into the overall data discovery system architecture.
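
    The core mechanism described, discovering join conditions from a graph representation of the schema so that users never write joins themselves, can be sketched as a breadth-first search over table-to-table edges. The schema graph, join conditions and generated query below are invented stand-ins; the real DBS Query Language also involves an ANTLR-built parser, which is omitted here.

    ```python
    from collections import deque

    # Hypothetical schema graph: each edge carries the join condition between two tables.
    SCHEMA = {
        ("dataset", "block"): "dataset.id = block.dataset_id",
        ("block", "file"): "block.id = file.block_id",
        ("file", "run"): "file.id = run.file_id",
    }
    GRAPH = {}
    for (a, b), cond in SCHEMA.items():
        GRAPH.setdefault(a, []).append((b, cond))
        GRAPH.setdefault(b, []).append((a, cond))

    def join_path(start, goal):
        """Breadth-first search for the chain of join conditions linking two tables."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            table, conds = queue.popleft()
            if table == goal:
                return conds
            for nxt, cond in GRAPH.get(table, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, conds + [cond]))
        return None

    # A user request like "find dataset where run = 123" becomes a joined SQL query.
    conds = join_path("dataset", "run")
    sql = ("SELECT dataset.name FROM dataset, block, file, run WHERE "
           + " AND ".join(conds) + " AND run.number = 123")
    print(sql)
    ```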

  16. Assessing Metadata Quality of a Federally Sponsored Health Data Repository.

    PubMed

    Marc, David T; Beattie, James; Herasevich, Vitaly; Gatewood, Laël; Zhang, Rui

    2016-01-01

    The U.S. Federal Government developed HealthData.gov to disseminate healthcare datasets to the public. Metadata is provided for each dataset and is the sole source of information to find and retrieve data. This study employed automated quality assessments of the HealthData.gov metadata published from 2012 to 2014 to measure completeness, accuracy, and consistency of applying standards. The results demonstrated that metadata published in earlier years had lower completeness, accuracy, and consistency. Also, metadata that underwent modifications following their original creation were of higher quality. HealthData.gov did not uniformly apply the widely accepted Dublin Core Metadata Initiative standard to its metadata. These findings suggested that the HealthData.gov metadata suffered from quality issues, particularly related to information that wasn't frequently updated. The results supported the need for policies to standardize metadata and contributed to the development of automated measures of metadata quality.
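
    An automated completeness assessment of the kind described can be as simple as counting how many of a set of expected elements are populated in each record. The sketch below uses invented records and a small Dublin Core-style field list; it illustrates the approach, not the study's actual metrics.

    ```python
    # Illustrative completeness check against a small set of Dublin Core-style
    # elements; the records and required fields are invented for the example.
    REQUIRED = ["title", "description", "identifier", "modified", "publisher", "license"]

    records = [
        {"title": "Hospital charges", "identifier": "ds-001", "modified": "2014-06-01"},
        {"title": "Immunization rates", "description": "County-level rates",
         "identifier": "ds-002", "modified": "2013-02-11", "publisher": "CDC",
         "license": "public domain"},
    ]

    def completeness(record):
        filled = sum(1 for field in REQUIRED if record.get(field))
        return filled / len(REQUIRED)

    for rec in records:
        missing = [field for field in REQUIRED if not rec.get(field)]
        print(f"{rec['identifier']}: {completeness(rec):.0%} complete, missing {missing}")
    ```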

  17. Assessing Metadata Quality of a Federally Sponsored Health Data Repository

    PubMed Central

    Marc, David T.; Beattie, James; Herasevich, Vitaly; Gatewood, Laël; Zhang, Rui

    2016-01-01

    The U.S. Federal Government developed HealthData.gov to disseminate healthcare datasets to the public. Metadata is provided for each dataset and is the sole source of information to find and retrieve data. This study employed automated quality assessments of the HealthData.gov metadata published from 2012 to 2014 to measure completeness, accuracy, and consistency of applying standards. The results demonstrated that metadata published in earlier years had lower completeness, accuracy, and consistency. Also, metadata that underwent modifications following their original creation were of higher quality. HealthData.gov did not uniformly apply the widely accepted Dublin Core Metadata Initiative standard to its metadata. These findings suggested that the HealthData.gov metadata suffered from quality issues, particularly related to information that wasn’t frequently updated. The results supported the need for policies to standardize metadata and contributed to the development of automated measures of metadata quality. PMID:28269883

  18. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies.

    PubMed

    Cserhati, Matyas F; Pandey, Sanjit; Beaudoin, James J; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33,017,407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. © The Author(s) 2015. Published by Oxford University Press.

  19. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s, the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large-scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.

  20. Metadata-driven Clinical Data Loading into i2b2 for Clinical and Translational Science Institutes.

    PubMed

    Post, Andrew R; Pai, Akshatha K; Willard, Richard; May, Bradley J; West, Andrew C; Agravat, Sanjay; Granite, Stephen J; Winslow, Raimond L; Stephens, David S

    2016-01-01

    Clinical and Translational Science Award (CTSA) recipients need to create research data marts from their clinical data warehouses, through research data networks and the use of i2b2 and SHRINE technologies. These data marts may have different data requirements and representations, thus necessitating separate extract, transform and load (ETL) processes for populating each mart. Maintaining duplicative procedural logic for each ETL process is onerous. We have created an entirely metadata-driven ETL process that can be customized for different data marts through separate configurations, each stored in an extension of i2b2's ontology database schema. We extended our previously reported and open-source Eureka! Clinical Analytics software with this capability. The same software has created i2b2 data marts for several projects, the largest being the nascent Accrual for Clinical Trials (ACT) network, for which it has loaded over 147 million facts about 1.2 million patients.
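
    The essence of a metadata-driven ETL process is that a per-mart configuration, rather than per-mart procedural code, tells one generic loader how to map source rows into facts. The sketch below illustrates only that pattern; the configuration keys, column names and fact structure are invented and do not reflect Eureka! Clinical Analytics or the i2b2 ontology schema.

    ```python
    # A per-data-mart configuration drives one generic loader; in the real
    # system this mapping lives in an extension of the i2b2 ontology schema.
    MART_CONFIG = {
        "act_demo": {
            "source_field_map": {            # source column -> fact slot
                "patient_id": "patient_num",
                "icd10_code": "concept_cd",
                "diagnosis_date": "start_date",
            },
            "concept_prefix": "ICD10:",
        }
    }

    def to_facts(rows, mart):
        cfg = MART_CONFIG[mart]
        for row in rows:
            fact = {dst: row[src] for src, dst in cfg["source_field_map"].items()}
            fact["concept_cd"] = cfg["concept_prefix"] + fact["concept_cd"]
            yield fact

    source_rows = [
        {"patient_id": 42, "icd10_code": "E11.9", "diagnosis_date": "2015-03-02"},
    ]
    for fact in to_facts(source_rows, "act_demo"):
        print(fact)   # a real loader would insert this into the fact table
    ```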

  1. Metadata-driven Clinical Data Loading into i2b2 for Clinical and Translational Science Institutes

    PubMed Central

    Post, Andrew R.; Pai, Akshatha K.; Willard, Richard; May, Bradley J.; West, Andrew C.; Agravat, Sanjay; Granite, Stephen J.; Winslow, Raimond L.; Stephens, David S.

    2016-01-01

    Clinical and Translational Science Award (CTSA) recipients need to create research data marts from their clinical data warehouses, through research data networks and the use of i2b2 and SHRINE technologies. These data marts may have different data requirements and representations, thus necessitating separate extract, transform and load (ETL) processes for populating each mart. Maintaining duplicative procedural logic for each ETL process is onerous. We have created an entirely metadata-driven ETL process that can be customized for different data marts through separate configurations, each stored in an extension of i2b2's ontology database schema. We extended our previously reported and open-source Eureka! Clinical Analytics software with this capability. The same software has created i2b2 data marts for several projects, the largest being the nascent Accrual for Clinical Trials (ACT) network, for which it has loaded over 147 million facts about 1.2 million patients. PMID:27570667

  2. EXTRACT: interactive extraction of environment metadata and term suggestion for metagenomic sample annotation.

    PubMed

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual sample annotation is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/. © The Author(s) 2016. Published by Oxford University Press.
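
    For illustration only, the toy tagger below shows the simplest form of the term-spotting idea: matching a small hand-made dictionary of environment and organism terms against free text. EXTRACT itself combines published named entity recognition methods and full ontologies, so this is a conceptual sketch, not its algorithm.

    ```python
    import re

    # Toy term dictionary; the real system uses published NER methods and
    # full ontologies rather than a hand-made list like this.
    TERMS = {
        "seawater": "environment",
        "hydrothermal vent": "environment",
        "sediment": "environment",
        "Escherichia coli": "organism",
    }

    def tag(text):
        hits = []
        for term, category in TERMS.items():
            for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
                hits.append((m.start(), m.group(0), category))
        return sorted(hits)

    sample = "Samples were collected from sediment near a hydrothermal vent in deep seawater."
    for offset, term, category in tag(sample):
        print(f"{offset:3d}  {category:12s} {term}")
    ```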

  3. Small values in big data: The continuing need for appropriate metadata

    USGS Publications Warehouse

    Stow, Craig A.; Webster, Katherine E.; Wagner, Tyler; Lottig, Noah R.; Soranno, Patricia A.; Cha, YoonKyung

    2018-01-01

    Compiling data from disparate sources to address pressing ecological issues is increasingly common. Many ecological datasets contain left-censored data – observations below an analytical detection limit. Studies from single and typically small datasets show that common approaches for handling censored data — e.g., deletion or substituting fixed values — result in systematic biases. However, no studies have explored the degree to which the documentation and presence of censored data influence outcomes from large, multi-sourced datasets. We describe left-censored data in a lake water quality database assembled from 74 sources and illustrate the challenges of dealing with small values in big data, including detection limits that are absent, range widely, and show trends over time. We show that substitutions of censored data can also bias analyses using ‘big data’ datasets, that censored data can be effectively handled with modern quantitative approaches, but that such approaches rely on accurate metadata that describe treatment of censored data from each source.
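
    The substitution bias and the 'modern quantitative approaches' that avoid it can be sketched with a censored-likelihood fit. The example below simulates lognormal concentrations with a single detection limit and compares a DL/2 substitution against a maximum-likelihood estimate in which censored values enter through the normal CDF on the log scale; the data and detection limit are invented.

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)

    # Simulated lognormal concentrations with a single detection limit (DL).
    DL = 0.8
    x = rng.lognormal(mean=0.0, sigma=1.0, size=200)
    censored = x < DL                      # values reported only as "< DL"

    # Common (biased) shortcut: substitute DL/2 for censored values.
    x_sub = np.where(censored, DL / 2.0, x)
    print("substitution estimate of log-mean:", round(float(np.mean(np.log(x_sub))), 3))

    # Censored maximum likelihood on the log scale: detected values contribute
    # the normal density, censored values the CDF evaluated at log(DL).
    logx = np.log(x[~censored])
    n_cens = int(censored.sum())

    def neg_loglik(params):
        mu, log_sd = params
        sd = np.exp(log_sd)               # keep the scale strictly positive
        ll = stats.norm.logpdf(logx, mu, sd).sum()
        ll += n_cens * stats.norm.logcdf(np.log(DL), mu, sd)
        return -ll

    res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    print("censored-MLE estimate of log-mean:", round(float(res.x[0]), 3))
    ```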

  4. TokSearch: A search engine for fusion experimental data

    DOE PAGES

    Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.; ...

    2018-04-01

    At a typical fusion research site, experimental data is stored using archive technologies that deal with each discharge as an independent set of data. These technologies (e.g. MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow for efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either MATLAB or Python.
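
    The pattern described, an arbitrary user-defined extraction applied in parallel across many shots, can be sketched with the Python standard library alone. The per-shot reader below is a fake stand-in; the real TokSearch reads MDSplus/HDF5 archives and exposes its own query API in MATLAB and Python.

    ```python
    from multiprocessing import Pool

    def read_signal(shot, name):
        """Stand-in for an archive read (MDSplus/HDF5 in a real system)."""
        return [shot * 0.001 * i for i in range(5)]   # fake time trace

    def per_shot_query(shot):
        """Arbitrary user-defined extraction: here, the peak of one signal."""
        trace = read_signal(shot, "plasma_current")
        return shot, max(trace)

    if __name__ == "__main__":
        shots = range(160_000, 160_020)
        with Pool(processes=4) as pool:
            for shot, peak in pool.map(per_shot_query, shots):
                print(shot, round(peak, 3))
    ```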

  5. TokSearch: A search engine for fusion experimental data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sammuli, Brian S.; Barr, Jayson L.; Eidietis, Nicholas W.

    At a typical fusion research site, experimental data is stored using archive technologies that deal with each discharge as an independent set of data. These technologies (e.g. MDSplus or HDF5) are typically supplemented with a database that aggregates metadata for multiple shots to allow for efficient querying of certain predefined quantities. Often, however, a researcher will need to extract information from the archives, possibly for many shots, that is not available in the metadata store or otherwise indexed for quick retrieval. To address this need, a new search tool called TokSearch has been added to the General Atomics TokSys control design and analysis suite [1]. This tool provides the ability to rapidly perform arbitrary, parallelized queries of archived tokamak shot data (both raw and analyzed) over large numbers of shots. The TokSearch query API borrows concepts from SQL, and users can choose to implement queries in either MATLAB or Python.

  6. The CMIP5 Model Documentation Questionnaire: Development of a Metadata Retrieval System for the METAFOR Common Information Model

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry

    2010-05-01

    The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a Python parser processes the XML files generated by the mind maps, (3) Django (Python) is used to generate the dynamic structure and content of the web-based questionnaire from processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (Python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who would never ordinarily interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire reflects what scientists want to know about the models. Keywords: metadata, CMIP5, automatic information capture, tool development

  7. Integration of upper air data in the MeteoSwiss Data Warehouse

    NASA Astrophysics Data System (ADS)

    Musa, M.; Haeberli, Ch.; Ruffieux, D.

    2010-09-01

    Over the last 10 years MeteoSwiss established a Data Warehouse in order to get one single, integrated data platform for all kinds of meteorological and climatological data. In the MeteoSwiss Data Warehouse, data and metadata are held in a metadata-driven relational database. To reach this goal, we started with the integration of the actual and historical data from our surface stations in a first step, including routines for aggregation and calculation and the implementation of enhanced Quality Control tools. In 2008 we started with the integration of actual and historical upper air data like soundings (PTU, Wind and Ozone), any kind of profilers like wind profiler or radiometer, profiles calculated from numerical weather models and AMDAR data in the Data Warehouse. The dataset also includes high-resolution sounding data from the station Payerne and TEMP data from 20 European stations since 1942. A critical point was to work out a concept for the general architecture which could deal with all different types of data. While integrating the data itself, all metadata of the aerological station Payerne was transferred and imported into the central metadata repository of the Data Warehouse. The implementation of the real-time and daily QC tools as well as the routines for aggregation and calculation were realized in an analogous way to the surface data. The Quality Control tools include plausibility tests like limit tests, consistency tests at the same level and vertical consistency tests. From the beginning it was the aim to support the MeteoSwiss integration strategy which deals with all aspects of integration like various observing technologies and platforms, observing systems outside MeteoSwiss and the data and metadata itself. This kind of integration comprises all aspects of "Enterprise Data Integration". After the integration, the historical as well as the actual upper air data are now available for the climatologists and meteorologists with standardized access for data retrieving and visualization. We are convinced that making these data accessible to scientists is a good contribution to a better understanding of high-resolution climatology.
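
    The plausibility tests mentioned (limit tests, same-level consistency tests and vertical consistency tests) lend themselves to a compact sketch. The profile values and thresholds below are invented for illustration and are not MeteoSwiss QC rules.

    ```python
    # Invented sounding levels: (pressure hPa, temperature C, dew point C).
    profile = [
        (1000, 12.3, 10.1),
        (925,   8.0,  7.5),
        (850,   4.2,  9.0),   # dew point above temperature -> inconsistent
        (700,  -3.1, -8.0),
    ]

    def limit_test(t, lo=-90.0, hi=60.0):
        """Gross limit test on temperature."""
        return lo <= t <= hi

    def level_consistency(t, td):
        """Same-level consistency: dew point must not exceed temperature."""
        return td <= t

    def vertical_consistency(profile, max_jump=12.0):
        """Flag unphysical temperature jumps between adjacent levels."""
        flags = [True]
        for (_, t_prev, _), (_, t, _) in zip(profile, profile[1:]):
            flags.append(abs(t - t_prev) <= max_jump)
        return flags

    vflags = vertical_consistency(profile)
    for (p, t, td), vok in zip(profile, vflags):
        print(p, "limit:", limit_test(t), "level:", level_consistency(t, td), "vertical:", vok)
    ```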

  8. A digital repository with an extensible data model for biobanking and genomic analysis management.

    PubMed

    Izzo, Massimiliano; Mortola, Francesco; Arnulfo, Gabriele; Fato, Marco M; Varesio, Luigi

    2014-01-01

    Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations and increasing data sharing among institutions. Single standardization is not feasible and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobanks management. We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web based digital repository with a data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples of over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can appreciably improve interdisciplinary research collaboration and makes it possible to track patients' clinical records, sample management information, and genomic data. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid.
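
    The two-entity model described, processes grouping sequential events with each event carrying user-defined metadata and optional files, is easiest to picture as a concrete JSON instance. The example below is invented for illustration and does not reproduce the repository's actual field names.

    ```python
    import json

    # Minimal, invented instance of the process/event model described above.
    process = {
        "type": "process",
        "name": "neuroblastoma cohort study",
        "patient_id": "P-0912",
        "events": [
            {
                "type": "event",
                "name": "tumor biopsy",
                "date": "2013-05-14",
                "metadata": {"tissue": "adrenal", "preservation": "FFPE"},
                "files": [],
            },
            {
                "type": "event",
                "name": "microarray analysis",
                "date": "2013-06-02",
                "metadata": {"platform": "expression array", "replicates": 2},
                "files": ["grid://biobank/arrays/P-0912_expr.cel"],
            },
        ],
    }
    print(json.dumps(process, indent=2))
    ```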

  9. Generation of Multiple Metadata Formats from a Geospatial Data Repository

    NASA Astrophysics Data System (ADS)

    Hudspeth, W. B.; Benedict, K. K.; Scott, S.

    2012-12-01

    The Earth Data Analysis Center (EDAC) at the University of New Mexico is partnering with the CYBERShARE and Environmental Health Group from the Center for Environmental Resource Management (CERM), located at the University of Texas, El Paso (UTEP), the Biodiversity Institute at the University of Kansas (KU), and the New Mexico Geo- Epidemiology Research Network (GERN) to provide a technical infrastructure that enables investigation of a variety of climate-driven human/environmental systems. Two significant goals of this NASA-funded project are: a) to increase the use of NASA Earth observational data at EDAC by various modeling communities through enabling better discovery, access, and use of relevant information, and b) to expose these communities to the benefits of provenance for improving understanding and usability of heterogeneous data sources and derived model products. To realize these goals, EDAC has leveraged the core capabilities of its Geographic Storage, Transformation, and Retrieval Engine (Gstore) platform, developed with support of the NSF EPSCoR Program. The Gstore geospatial services platform provides general purpose web services based upon the REST service model, and is capable of data discovery, access, and publication functions, metadata delivery functions, data transformation, and auto-generated OGC services for those data products that can support those services. Central to the NASA ACCESS project is the delivery of geospatial metadata in a variety of formats, including ISO 19115-2/19139, FGDC CSDGM, and the Proof Markup Language (PML). This presentation details the extraction and persistence of relevant metadata in the Gstore data store, and their transformation into multiple metadata formats that are increasingly utilized by the geospatial community to document not only core library catalog elements (e.g. title, abstract, publication data, geographic extent, projection information, and database elements), but also the processing steps used to generate derived modeling products. In particular, we discuss the generation and service delivery of provenance, or trace of data sources and analytical methods used in a scientific analysis, for archived data. We discuss the workflows developed by EDAC to capture end-to-end provenance, the storage model for those data in a delivery format independent data structure, and delivery of PML, ISO, and FGDC documents to clients requesting those products.

  10. A digital repository with an extensible data model for biobanking and genomic analysis management

    PubMed Central

    2014-01-01

    Motivation Molecular biology laboratories require extensive metadata to improve data collection and analysis. The heterogeneity of the collected metadata grows as research evolves into international multi-disciplinary collaborations and increasing data sharing among institutions. Single standardization is not feasible and it becomes crucial to develop digital repositories with flexible and extensible data models, as in the case of modern integrated biobanks management. Results We developed a novel data model in JSON format to describe heterogeneous data in a generic biomedical science scenario. The model is built on two hierarchical entities: processes and events, roughly corresponding to research studies and analysis steps within a single study. A number of sequential events can be grouped in a process building up a hierarchical structure to track patient and sample history. Each event can produce new data. Data is described by a set of user-defined metadata, and may have one or more associated files. We integrated the model in a web based digital repository with a data grid storage to manage large data sets located in geographically distinct areas. We built a graphical interface that allows authorized users to define new data types dynamically, according to their requirements. Operators compose queries on metadata fields using a flexible search interface and run them on the database and on the grid. We applied the digital repository to the integrated management of samples, patients and medical history in the BIT-Gaslini biobank. The platform currently manages 1800 samples of over 900 patients. Microarray data from 150 analyses are stored on the grid storage and replicated on two physical resources for preservation. The system is equipped with data integration capabilities with other biobanks for worldwide information sharing. Conclusions Our data model enables users to continuously define flexible, ad hoc, and loosely structured metadata, for information sharing in specific research projects and purposes. This approach can appreciably improve interdisciplinary research collaboration and makes it possible to track patients' clinical records, sample management information, and genomic data. The web interface allows the operators to easily manage, query, and annotate the files, without dealing with the technicalities of the data grid. PMID:25077808

  11. Partnerships To Mine Unexploited Sources of Metadata.

    ERIC Educational Resources Information Center

    Reynolds, Regina Romano

    This paper discusses the metadata created for other purposes as a potential source of bibliographic data. The first section addresses collecting metadata by means of templates, including the Nordic Metadata Project's Dublin Core Metadata Template. The second section considers potential partnerships for re-purposing metadata for bibliographic use,…

  12. A Window to the World: Lessons Learned from NASA's Collaborative Metadata Curation Effort

    NASA Astrophysics Data System (ADS)

    Bugbee, K.; Dixon, V.; Baynes, K.; Shum, D.; le Roux, J.; Ramachandran, R.

    2017-12-01

    Well-written descriptive metadata adds value by making data easier to discover and increases data use by providing context on the appropriateness of its use. While many data centers acknowledge the importance of correct, consistent and complete metadata, allocating resources to curate existing metadata is often difficult. To lower resource costs, many data centers seek guidance on best practices for curating metadata but struggle to identify those recommendations. In order to assist data centers in curating metadata and to also develop best practices for creating and maintaining metadata, NASA has formed a collaborative effort to improve the Earth Observing System Data and Information System (EOSDIS) metadata in the Common Metadata Repository (CMR). This effort has taken significant steps in building consensus around metadata curation best practices. However, this effort has also revealed gaps in EOSDIS enterprise policies and procedures within the core metadata curation task. This presentation will explore the mechanisms used for building consensus on metadata curation, the gaps identified in policies and procedures, the lessons learned from collaborating with both the data centers and metadata curation teams, and the proposed next steps for the future.

  13. The AMMA database

    NASA Astrophysics Data System (ADS)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaigns datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analysis and forecasts, and from research simulations. The outputs are processed as the satellite products are. Before accessing the data, any user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and the usage for commercial applications. Some collaboration between data producers and users, and the mention of the AMMA project in any publication is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data from both data centres using a single web portal. This website is composed of different modules: - Registration: forms to register, read and sign the data use charter when a user visits for the first time - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria like location, time, parameters... The request can concern local, satellite and model data. - Documentation: catalogue of all the available data and their metadata. These tools have been developed using standard, free languages and software: - Linux system with an Apache web server and a Tomcat application server; - J2EE tools: JSF and Struts frameworks, hibernate; - relational database management systems: PostgreSQL and MySQL; - OpenLDAP directory. In order to facilitate access to the data for African scientists, the complete system has been mirrored at AGHRYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through one or the other of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.

  14. samiDB: A Prototype Data Archive for Big Science Exploration

    NASA Astrophysics Data System (ADS)

    Konstantopoulos, I. S.; Green, A. W.; Cortese, L.; Foster, C.; Scott, N.

    2015-04-01

    samiDB is an archive, database, and query engine to serve the spectra, spectral hypercubes, and high-level science products that make up the SAMI Galaxy Survey. Based on the versatile Hierarchical Data Format (HDF5), samiDB does not depend on relational database structures and hence lightens the setup and maintenance load imposed on science teams by metadata tables. The code, written in Python, covers the ingestion, querying, and exporting of data as well as the automatic setup of an HTML schema browser. samiDB serves as a maintenance-light data archive for Big Science and can be adopted and adapted by science teams that lack the means to hire professional archivists to set up the data back end for their projects.
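
    As a rough illustration of the HDF5-based design, the sketch below stores a small hypercube with its metadata as HDF5 attributes and then "queries" it by walking the group tree. The group paths, dataset names and attribute keys are invented for illustration and do not reflect the actual samiDB layout.

```python
# A minimal sketch assuming the h5py and numpy libraries; all names are invented.
import h5py
import numpy as np

with h5py.File("sami_demo.h5", "w") as archive:
    galaxy = archive.create_group("SAMI/9999")          # hypothetical galaxy ID
    cube = galaxy.create_dataset("blue_cube", data=np.zeros((5, 5, 10)))
    cube.attrs["units"] = "erg/s/cm^2/A"                 # metadata live as attributes,
    cube.attrs["grating"] = "580V"                       # so no relational tables are needed

# Querying is plain tree traversal plus attribute filtering.
with h5py.File("sami_demo.h5", "r") as archive:
    for name, dset in archive["SAMI/9999"].items():
        grating = dset.attrs.get("grating")
        if isinstance(grating, bytes):                   # older h5py may return bytes
            grating = grating.decode()
        if grating == "580V":
            print(name, dset.shape, dict(dset.attrs))
```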

  15. Evaluating and Evolving Metadata in Multiple Dialects

    NASA Astrophysics Data System (ADS)

    Kozimor, J.; Habermann, T.; Powers, L. A.; Gordon, S.

    2016-12-01

    Despite many long-term homogenization efforts, communities continue to develop focused metadata standards along with related recommendations and (typically) XML representations (aka dialects) for sharing metadata content. Different representations easily become obstacles to sharing information because each representation generally requires a set of tools and skills that are designed, built, and maintained specifically for that representation. In contrast, community recommendations are generally described, at least initially, at a more conceptual level and are more easily shared. For example, most communities agree that dataset titles should be included in metadata records although they write the titles in different ways. This situation has led to the development of metadata repositories that can ingest and output metadata in multiple dialects. As an operational example, the NASA Common Metadata Repository (CMR) includes three different metadata dialects (DIF, ECHO, and ISO 19115-2). These systems raise a new question for metadata providers: if I have a choice of metadata dialects, which should I use and how do I make that decision? We have developed a collection of metadata evaluation tools that can be used to evaluate metadata records in many dialects for completeness with respect to recommendations from many organizations and communities. We have applied these tools to over 8000 collection and granule metadata records in four different dialects. This large collection of identical content in multiple dialects enables us to address questions about metadata and dialect evolution and to answer those questions quantitatively. We will describe those tools and results from evaluating the NASA CMR metadata collection.
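
    The evaluation idea can be pictured as a mapping from conceptual recommendations (e.g. "a record should have a title") to the place each dialect stores that concept, with completeness computed per record. The toy sketch below is not the actual tool; the dialect names come from the abstract, but the element paths and the two-field recommendation are invented for illustration.

```python
# Toy completeness check across dialects; paths and recommendations are invented.
import xml.etree.ElementTree as ET

DIALECT_PATHS = {
    "DIF":  {"title": "Entry_Title", "abstract": "Summary/Abstract"},
    "ECHO": {"title": "LongName",    "abstract": "Description"},
}
RECOMMENDED = ["title", "abstract"]

def completeness(record_xml: str, dialect: str) -> float:
    root = ET.fromstring(record_xml)
    paths = DIALECT_PATHS[dialect]
    found = 0
    for concept in RECOMMENDED:
        element = root.find(paths[concept])
        if element is not None and (element.text or "").strip():
            found += 1
    return found / len(RECOMMENDED)

print(completeness("<DIF><Entry_Title>MODIS L1B</Entry_Title></DIF>", "DIF"))  # 0.5
```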

  16. Nuclear data made easily accessible through the Notre Dame Nuclear Database

    NASA Astrophysics Data System (ADS)

    Khouw, Timothy; Lee, Kevin; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani

    2014-09-01

    In 1994, the NNDC revolutionized nuclear research by providing a colorful, clickable, searchable database over the internet. Over the last twenty years, web technology has evolved dramatically. Our project, the Notre Dame Nuclear Database, aims to provide a more comprehensive and broadly searchable interactive body of data. The database can be searched by an array of filters that includes metadata such as the facility where a measurement is made, the author(s), or the date of publication for the datum of interest. The user interface takes full advantage of HTML, a web markup language, CSS (cascading style sheets, which define the aesthetics of the website), and JavaScript, a language that can process complex data. A command-line interface is also supported that interacts with the database directly from a user's local machine, providing single-command access to data. This is possible through the use of a standardized API (application programming interface) that relies upon well-defined filtering variables to produce customized search results. We offer an innovative chart of nuclides utilizing scalable vector graphics (SVG) to deliver users an unsurpassed level of interactivity supported on all computers and mobile devices. We will present a functional demo of our database at the conference.
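
    The command-line access pattern described above amounts to composing a filtered query URL against the database API. The snippet below only sketches that idea; the endpoint and parameter names are hypothetical and are not the actual Notre Dame Nuclear Database API.

```python
# Hypothetical endpoint and filter names, for illustration only.
from urllib.parse import urlencode

BASE = "https://example.edu/nuclear-database/api/search"
filters = {
    "facility": "NSL",               # facility where the measurement was made
    "author": "Aprahamian",          # author filter
    "published_after": "2010-01-01", # date-of-publication filter
}
print(BASE + "?" + urlencode(filters))
# A single-command CLI wrapper would fetch this URL and print the returned records.
```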

  17. Acoustic Metadata Management and Transparent Access to Networked Oceanographic Data Sets

    DTIC Science & Technology

    2011-09-30

    Roberts in Pat Halpin’s lab, integrating the Marine Geospatial Ecology (GeoEco) toolset into our database services. While there is a steep...noise bands. The lower box at each site denotes the 1-6 kHz band while the upper box denotes 6-96 kHz band. Lad seamount has deployments at two sites...N00014-11-1-0697 http://cetus.ucsd.edu

  18. Perspectives in astrophysical databases

    NASA Astrophysics Data System (ADS)

    Frailis, Marco; de Angelis, Alessandro; Roberto, Vito

    2004-07-01

    Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed to extract information from such large data sets. This calls for an approach to data management that emphasizes the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and the interpretability of results. In this study we review some possible solutions.

  19. JADDS - towards a tailored global atmospheric composition data service for CAMS forecasts and reanalysis

    NASA Astrophysics Data System (ADS)

    Stein, Olaf; Schultz, Martin G.; Rambadt, Michael; Saini, Rajveer; Hoffmann, Lars; Mallmann, Daniel

    2017-04-01

    Global model data of atmospheric composition produced by the Copernicus Atmosphere Monitoring Service (CAMS) have been collected at FZ Jülich since 2010 and serve as boundary conditions for Regional Air Quality (RAQ) modellers world-wide. RAQ models need time-resolved meteorological as well as chemical lateral boundary conditions for their individual model domains. While the meteorological data usually come from well-established global forecast systems, the chemical boundary conditions are not always well defined. In the past, many models used 'climatic' boundary conditions for the tracer concentrations, which can lead to significant concentration biases, particularly for tracers with longer lifetimes which can be transported over long distances (e.g. over the whole northern hemisphere) with the mean wind. The Copernicus approach utilizes extensive near-realtime data assimilation of atmospheric composition data observed from space, which gives additional reliability to the global modelling data and is well received by the RAQ communities. An existing Web Coverage Service (WCS) for sharing these individually tailored model results is currently being re-engineered to make use of a modern, scalable database technology in order to improve performance, enhance flexibility, and allow the operation of catalogue services. The new Jülich Atmospheric Data Distributions Server (JADDS) adheres to the Web Coverage Service WCS 2.0 standard as defined by the Open Geospatial Consortium (OGC). This enables user groups to flexibly define the datasets they need by selecting a subset of chemical species or by restricting the geographical boundaries or the length of the time series. The data is made available in the form of different catalogues stored locally on our server. In addition, the Jülich OWS Interface (JOIN) provides interoperable web services allowing for easy download and visualization of datasets delivered from WCS servers via the internet. We will present the prototype JADDS server and address the major issues identified when relocating large four-dimensional datasets into a RASDAMAN raster array database. So far, RASDAMAN support for data in netCDF format is limited with respect to metadata related to variables and axes. For community-wide accepted solutions, selected data coverages shall result in downloadable netCDF files including metadata complying with the netCDF CF Metadata Conventions standard (http://cfconventions.org/). This can be achieved by adding custom metadata elements for RASDAMAN bands (model levels) on data ingestion. Furthermore, an optimization strategy for ingestion of several TB of 4D model output data will be outlined.
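
    The kind of tailored request the service is meant to answer can be sketched as a WCS 2.0 KVP GetCoverage call with trim subsets on the spatial and time axes; the server URL, coverage identifier and axis labels below are assumptions made for illustration and will differ on the operational server.

```python
# Sketch of a WCS 2.0 GetCoverage request; URL, coverageId and axis names are assumed.
from urllib.parse import urlencode

params = [
    ("service", "WCS"),
    ("version", "2.0.1"),
    ("request", "GetCoverage"),
    ("coverageId", "CAMS_O3"),                                # hypothetical coverage name
    ("subset", "Lat(30.0,60.0)"),                             # regional domain, latitude
    ("subset", "Long(-15.0,35.0)"),                           # regional domain, longitude
    ("subset", 'ansi("2016-01-01T00:00:00Z","2016-01-02T00:00:00Z")'),  # time window
    ("format", "application/netcdf"),                         # CF-style netCDF output
]
print("https://example.org/jadds/wcs?" + urlencode(params))
```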

  20. EOS ODL Metadata On-line Viewer

    NASA Astrophysics Data System (ADS)

    Yang, J.; Rabi, M.; Bane, B.; Ullman, R.

    2002-12-01

    We have recently developed and deployed an EOS ODL metadata on-line viewer. The EOS ODL metadata viewer is a web server that takes 1) an EOS metadata file in Object Description Language (ODL) and 2) parameters, such as which metadata to view and what style of display to use, and returns an HTML or XML document displaying the requested metadata in the requested style. This tool was developed to address widespread complaints from the science community that EOS Data and Information System (EOSDIS) metadata files in ODL are difficult to read; it allows users to upload and view an ODL metadata file in different styles using a web browser. Users can choose to view all of the metadata or only part of it, such as Collection metadata, Granule metadata, or Unsupported metadata. Choices of display styles include 1) Web: a mouseable display with tabs and turn-down menus, 2) Outline: formatted and colored text, suitable for printing, 3) Generic: simple indented text, a direct representation of the underlying ODL metadata, and 4) None: no stylesheet is applied and the XML generated by the converter is returned directly. Not all display styles are implemented for all the metadata choices. For example, the Web style is only implemented for Collection and Granule metadata groups with known attribute fields, but not for Unsupported, Other, and All metadata. The overall strategy of the ODL viewer is to transform an ODL metadata file into viewable HTML in two steps. The first step is to convert the ODL metadata file to XML using a Java-based parser/translator called ODL2XML. The second step is to transform the XML to HTML using stylesheets. Both operations are done on the server side. This allows considerable flexibility in the final result and is highly portable across platforms. Perl CGI behind the Apache web server is used to run the Java ODL2XML, and then run the results through an XSLT processor. The EOS ODL viewer can be accessed from either a PC or a Mac using Internet Explorer 5.0+ or Netscape 4.7+.
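
    The two-step server-side pipeline (ODL to XML, then XML to HTML via a stylesheet) can be sketched very roughly as follows. This is not the ODL2XML parser or the EOSDIS stylesheets: it assumes flat "KEY = value" ODL lines, uses Python with the lxml library instead of Java and Perl CGI, and applies a deliberately trivial XSLT.

```python
# A minimal sketch of the ODL -> XML -> HTML idea, assuming the lxml library.
from lxml import etree

odl_text = """SHORTNAME = MOD021KM
VERSIONID = 5"""

# Step 1: convert flat ODL key/value pairs into XML elements.
root = etree.Element("metadata")
for line in odl_text.strip().splitlines():
    key, _, value = line.partition("=")
    etree.SubElement(root, key.strip()).text = value.strip()

# Step 2: render the XML as HTML with a (trivial) XSLT stylesheet.
xslt = etree.XML("""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/metadata">
    <html><body><ul>
      <xsl:for-each select="*">
        <li><xsl:value-of select="name()"/>: <xsl:value-of select="."/></li>
      </xsl:for-each>
    </ul></body></html>
  </xsl:template>
</xsl:stylesheet>""")
print(etree.XSLT(xslt)(etree.ElementTree(root)))
```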

  1. ISO 19115 Experiences in NASA's Earth Observing System (EOS) ClearingHOuse (ECHO)

    NASA Astrophysics Data System (ADS)

    Cechini, M. F.; Mitchell, A.

    2011-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing earth science data. As science research and the gathered data increase in complexity, so do the complexity and importance of descriptive metadata. To meet these growing needs, the required metadata models utilize richer and more mature metadata attributes. Categorizing, standardizing, and promulgating these metadata models to a politically, geographically, and scientifically diverse community is a difficult process. An integral component of metadata management within NASA's Earth Observing System Data and Information System (EOSDIS) is the Earth Observing System (EOS) ClearingHOuse (ECHO). ECHO is the core metadata repository for the EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. ECHO has undertaken an internal restructuring to meet the changing needs of scientists, the consistent advancement in technology, and the advent of new standards such as ISO 19115. These improvements were based on the following tenets for data discovery and retrieval: + There exists a set of 'core' metadata fields recommended for data discovery. + There exists a set of users who will require the entire metadata record for advanced analysis. + There exists a set of users who will require a 'core' set of metadata fields for discovery only. + There will never be a cessation of new formats or a total retirement of all old formats. + Users should be presented metadata in a consistent format of their choosing. In order to address the previously listed items, ECHO's new metadata processing paradigm utilizes the following approach: + Identify a cross-format set of 'core' metadata fields necessary for discovery. + Implement format-specific indexers to extract the 'core' metadata fields into an optimized query capability. + Archive the original metadata in its entirety for presentation to users requiring the full record. + Provide on-demand translation of 'core' metadata to any supported result format. Lessons learned by the ECHO team while implementing its new metadata approach to support usage of the ISO 19115 standard will be presented. These lessons learned highlight some discovered strengths and weaknesses in the ISO 19115 standard as it is introduced to an existing metadata processing system.
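
    The format-specific indexer tenet can be sketched schematically: each supported dialect contributes a small extractor that maps a native record to the shared 'core' discovery fields, while the original record is archived untouched for users who need the full detail. This is not ECHO's actual code; the field names and simplified dict-based records below are illustrative assumptions.

```python
# Schematic only: invented field names, simplified records as Python dicts.
CORE_FIELDS = ("short_name", "temporal_start", "spatial_bbox")

def index_echo10(record):                   # illustrative ECHO-style extractor
    return {"short_name": record.get("ShortName"),
            "temporal_start": record.get("Temporal", {}).get("BeginningDateTime"),
            "spatial_bbox": record.get("Spatial", {}).get("BoundingRectangle")}

def index_iso19115(record):                 # illustrative ISO-style extractor
    ident = record.get("identificationInfo", {})
    return {"short_name": ident.get("title"),
            "temporal_start": ident.get("beginPosition"),
            "spatial_bbox": ident.get("geographicBoundingBox")}

INDEXERS = {"ECHO10": index_echo10, "ISO19115-2": index_iso19115}

def ingest(native_format, record, archive, index):
    archive.append((native_format, record))        # keep the full original record
    index.append(INDEXERS[native_format](record))  # index only the core fields

archive, index = [], []
ingest("ECHO10", {"ShortName": "MOD021KM"}, archive, index)
print(index[0]["short_name"])   # -> MOD021KM
```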

  2. Creating context for the experiment record. User-defined metadata: investigations into metadata usage in the LabTrove ELN.

    PubMed

    Willoughby, Cerys; Bird, Colin L; Coles, Simon J; Frey, Jeremy G

    2014-12-22

    The drive toward more transparency in research, the growing willingness to make data openly available, and the reuse of data to maximize the return on research investment all increase the importance of being able to find information and make links to the underlying data. The use of metadata in Electronic Laboratory Notebooks (ELNs) to curate experiment data is an essential ingredient for facilitating discovery. The University of Southampton has developed a Web browser-based ELN that enables users to add their own metadata to notebook entries. A survey of these notebooks was completed to assess user behavior and patterns of metadata usage within ELNs, while user perceptions and expectations were gathered through interviews and user-testing activities within the community. The findings, together with feedback from the user community, indicate that while a few groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users make little attempt to use metadata or adopt only a "minimum required" approach, thereby endangering their ability to recover data in the future. To investigate whether the patterns of metadata use in LabTrove were unusual, a series of surveys was undertaken to investigate metadata usage in a variety of platforms supporting user-defined metadata. These surveys also provided the opportunity to investigate whether interface designs in these other environments might inform strategies for encouraging metadata creation and more effective use of metadata in LabTrove.

  3. ASDC Collaborations and Processes to Ensure Quality Metadata and Consistent Data Availability

    NASA Astrophysics Data System (ADS)

    Trapasso, T. J.

    2017-12-01

    With the introduction of new tools, faster computing, and less expensive storage, increased volumes of data are expected to be managed with existing or fewer resources. Metadata management is becoming a heightened challenge from the increase in data volume, resulting in more metadata records needing to be curated for each product. To address metadata availability and completeness, NASA ESDIS has taken significant strides with the creation of the Unified Metadata Model (UMM) and Common Metadata Repository (CMR). The UMM helps address hurdles created by the increasing number of metadata dialects, and the CMR provides a primary repository for metadata so that required metadata fields can be served through a growing number of tools and services. However, metadata quality remains an issue as metadata is not always inherent to the end-user. In response to these challenges, the NASA Atmospheric Science Data Center (ASDC) created the Collaboratory for quAlity Metadata Preservation (CAMP) and defined the Product Lifecycle Process (PLP) to work congruently. CAMP is unique in that it provides science team members a UI to directly supply metadata that is complete, compliant, and accurate for their data products. This replaces back-and-forth communication that often results in misinterpreted metadata. Upon review by ASDC staff, metadata is submitted to CMR for broader distribution through Earthdata. Further, approval of science team metadata in CAMP automatically triggers the ASDC PLP workflow to ensure appropriate services are applied throughout the product lifecycle. This presentation will review the design elements of CAMP and PLP as well as demonstrate interfaces to each. It will show the benefits that CAMP and PLP provide to the ASDC that could potentially benefit additional NASA Earth Science Data and Information System (ESDIS) Distributed Active Archive Centers (DAACs).

  4. Metadata squared: enhancing its usability for volunteered geographic information and the GeoWeb

    USGS Publications Warehouse

    Poore, Barbara S.; Wolf, Eric B.; Sui, Daniel Z.; Elwood, Sarah; Goodchild, Michael F.

    2013-01-01

    The Internet has brought many changes to the way geographic information is created and shared. One aspect that has not changed is metadata. Static spatial data quality descriptions were standardized in the mid-1990s and cannot accommodate the current climate of data creation where nonexperts are using mobile phones and other location-based devices on a continuous basis to contribute data to Internet mapping platforms. The usability of standard geospatial metadata is being questioned by academics and neogeographers alike. This chapter analyzes current discussions of metadata to demonstrate how the media shift that is occurring has affected requirements for metadata. Two case studies of metadata use are presented—online sharing of environmental information through a regional spatial data infrastructure in the early 2000s, and new types of metadata that are being used today in OpenStreetMap, a map of the world created entirely by volunteers. Changes in metadata requirements are examined for usability, the ease with which metadata supports coproduction of data by communities of users, how metadata enhances findability, and how the relationship between metadata and data has changed. We argue that traditional metadata associated with spatial data infrastructures is inadequate and suggest several research avenues to make this type of metadata more interactive and effective in the GeoWeb.

  5. Evolutions in Metadata Quality

    NASA Astrophysics Data System (ADS)

    Gilman, J.

    2016-12-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This talk will cover how we encourage metadata authors to improve the metadata through the use of integrated rubrics of metadata quality and outreach efforts. In addition we'll demonstrate Humanizers, a technique for dealing with the symptoms of metadata issues. Humanizers allow CMR administrators to identify specific metadata issues that are fixed at runtime when the data is indexed. An example Humanizer is the aliasing of processing level "Level 1" to "1" to improve consistency across collections. The CMR currently indexes 35K collections and 300M granules.
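
    A humanizer of the kind mentioned (aliasing "Level 1" to "1") can be pictured as a small set of rules applied to field values at index time, leaving the provider's stored record untouched. The sketch below is a toy rendering of that idea, not CMR code, and the rule format is invented.

```python
# Toy humanizer: alias rules applied at index time; the rule format is invented.
HUMANIZERS = [
    {"field": "processing_level", "from": "Level 1", "to": "1"},
    {"field": "platform", "from": "AQUA", "to": "Aqua"},
]

def humanize(record):
    indexed = dict(record)                  # never mutate the provider's record
    for rule in HUMANIZERS:
        if indexed.get(rule["field"]) == rule["from"]:
            indexed[rule["field"]] = rule["to"]
    return indexed

print(humanize({"processing_level": "Level 1", "platform": "AQUA"}))
# -> {'processing_level': '1', 'platform': 'Aqua'}
```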

  6. Metadata Means Communication: The Challenges of Producing Useful Metadata

    NASA Astrophysics Data System (ADS)

    Edwards, P. N.; Batcheller, A. L.

    2010-12-01

    Metadata are increasingly perceived as an important component of data sharing systems. For instance, metadata accompanying atmospheric model output may indicate the grid size, grid type, and parameter settings used in the model configuration. We conducted a case study of a data portal in the atmospheric sciences using in-depth interviews, document review, and observation. Our analysis revealed a number of challenges in producing useful metadata. First, creating and managing metadata required considerable effort and expertise, yet responsibility for these tasks was ill-defined and diffused among many individuals, leading to errors, failure to capture metadata, and uncertainty about the quality of the primary data. Second, metadata ended up stored in many different forms and software tools, making it hard to manage versions and transfer between formats. Third, the exact meanings of metadata categories remained unsettled and misunderstood even among a small community of domain experts -- an effect we expect to be exacerbated when scientists from other disciplines wish to use these data. In practice, we found that metadata problems due to these obstacles are often overcome through informal, personal communication, such as conversations or email. We conclude that metadata serve to communicate the context of data production from the people who produce data to those who wish to use it. Thus while formal metadata systems are often public, critical elements of metadata (those embodied in informal communication) may never be recorded. Therefore, efforts to increase data sharing should include ways to facilitate inter-investigator communication. Instead of tackling metadata challenges only on the formal level, we can improve data usability for broader communities by better supporting metadata communication.

  7. Inheritance rules for Hierarchical Metadata Based on ISO 19115

    NASA Astrophysics Data System (ADS)

    Zabala, A.; Masó, J.; Pons, X.

    2012-04-01

    ISO 19115 has mainly been used to describe metadata for datasets and services. Furthermore, the ISO 19115 standard (as well as the new draft ISO 19115-1) includes a conceptual model that allows metadata to be described at different levels of granularity structured in hierarchical levels, both in aggregated resources, particularly series and datasets, and in more disaggregated resources such as types of entities (feature type), types of attributes (attribute type), entities (feature instances) and attributes (attribute instances). In theory, applying a complete metadata structure to all hierarchical levels of metadata, from the whole series down to an individual feature attribute, is possible, but storing all metadata at all levels is completely impractical. An inheritance mechanism is needed to store each metadata and quality element at the optimum hierarchical level and to allow easy and efficient documentation of metadata, both in an Earth observation scenario such as multiband imagery from a multi-satellite mission and in a complex vector topographic map that includes several feature types separated into layers (e.g. administrative limits, contour lines, building polygons, road lines, etc.). Moreover, because maps are traditionally split into tiles for handling at detailed scales, or because of satellite characteristics, each of the previous thematic layers (e.g. 1:5000 roads for a country) or bands (Landsat-5 TM coverage of the Earth) is divided into several parts (sheets or scenes, respectively). According to the hierarchy in ISO 19115, the definition of general metadata can be supplemented by spatially specific metadata that, when required, either inherits or overrides the general case (G.1.3). Annex H of the standard states that only metadata exceptions are defined at lower levels, so it is not necessary to generate the full registry of metadata for each level but only to link particular values to the general values that they inherit. Conceptually the metadata registry is complete for each hierarchical level, but at the implementation level most metadata elements are not stored at both levels but only at the more generic one. This communication defines a metadata system that covers four levels, describes which metadata have to support series-layer inheritance and in which way, and explains how the hierarchical levels are defined and stored. Metadata elements are classified according to the type of inheritance between products, series, tiles and datasets. The classification is explained and exemplified using core metadata elements. The communication also presents a metadata viewer and editing tool that uses the described model to propagate metadata elements and to show the user a complete set of metadata for each level in a transparent way. This tool is integrated in the MiraMon GIS software.
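
    The exception-plus-inheritance rule can be illustrated compactly: each hierarchical level stores only the metadata elements it overrides, and a lookup walks up the hierarchy until a value is found. The sketch below is a generic illustration under that assumption, not the system described in the communication; the levels and element names are invented.

```python
# Generic sketch of hierarchical metadata inheritance; names are invented.
class MetadataNode:
    def __init__(self, name, parent=None, **exceptions):
        self.name, self.parent, self.exceptions = name, parent, exceptions

    def get(self, element):
        node = self
        while node is not None:
            if element in node.exceptions:      # overridden at this level
                return node.exceptions[element]
            node = node.parent                  # otherwise inherit from above
        return None

series  = MetadataNode("1:5000 roads", lineage="Digitised from orthophotos")
sheet_7 = MetadataNode("sheet 7", parent=series, lineage="Field-surveyed update")
sheet_8 = MetadataNode("sheet 8", parent=series)

print(sheet_7.get("lineage"))   # exception stored at the sheet level
print(sheet_8.get("lineage"))   # inherited from the series
```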

  8. The role of metadata in managing large environmental science datasets. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melton, R.B.; DeVaney, D.M.; French, J. C.

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  9. Building Format-Agnostic Metadata Repositories

    NASA Astrophysics Data System (ADS)

    Cechini, M.; Pilone, D.

    2010-12-01

    This presentation will discuss the problems that surround persisting and discovering metadata in multiple formats; a set of tenets that must be addressed in a solution; and NASA’s Earth Observing System (EOS) ClearingHOuse’s (ECHO) proposed approach. In order to facilitate cross-discipline data analysis, Earth Scientists will potentially interact with more than one data source. The most common data discovery paradigm relies on services and/or applications facilitating the discovery and presentation of metadata. What may not be common are the formats in which the metadata are expressed. As the number of sources and datasets utilized for research increases, it becomes more likely that a researcher will encounter conflicting metadata formats. Metadata repositories, such as the EOS ClearingHOuse (ECHO), along with data centers, must identify ways to address this issue. In order to define the solution to this problem, the following tenets are identified: - There exists a set of ‘core’ metadata fields recommended for data discovery. - There exists a set of users who will require the entire metadata record for advanced analysis. - There exists a set of users who will require a ‘core’ set of metadata fields for discovery only. - There will never be a cessation of new formats or a total retirement of all old formats. - Users should be presented metadata in a consistent format. ECHO has undertaken an effort to transform its metadata ingest and discovery services in order to support the growing set of metadata formats. In order to address the previously listed items, ECHO’s new metadata processing paradigm utilizes the following approach: - Identify a cross-format set of ‘core’ metadata fields necessary for discovery. - Implement format-specific indexers to extract the ‘core’ metadata fields into an optimized query capability. - Archive the original metadata in its entirety for presentation to users requiring the full record. - Provide on-demand translation of ‘core’ metadata to any supported result format. With this identified approach, the Earth Scientist is provided with a consistent data representation as they interact with a variety of datasets that utilize multiple metadata formats. They are then able to focus their efforts on the more critical research activities which they are undertaking.
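
    The on-demand translation tenet can be sketched as a set of renderers that turn the indexed 'core' fields into whichever supported result format a client requests, so consumers always see a consistent shape regardless of the native dialect. This is schematic, not ECHO code; the field names and output formats are illustrative.

```python
# Schematic on-demand translation of core fields; names are illustrative.
import json
import xml.etree.ElementTree as ET

def to_json(core):
    return json.dumps(core)

def to_xml(core):
    root = ET.Element("result")
    for key, value in core.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

TRANSLATORS = {"json": to_json, "xml": to_xml}

core = {"short_name": "MOD021KM", "version": "6"}
print(TRANSLATORS["xml"](core))   # <result><short_name>MOD021KM</short_name>...
```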

  10. Making Metadata Better with CMR and MMT

    NASA Technical Reports Server (NTRS)

    Gilman, Jason Arthur; Shum, Dana

    2016-01-01

    Ensuring complete, consistent and high quality metadata is a challenge for metadata providers and curators. The CMR and MMT systems give metadata providers and curators options to build in metadata quality from the start and to assess and improve the quality of existing metadata.

  11. Evolution in Metadata Quality: Common Metadata Repository's Role in NASA Curation Efforts

    NASA Technical Reports Server (NTRS)

    Gilman, Jason; Shum, Dana; Baynes, Katie

    2016-01-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This poster covers how we use humanizers, a technique for dealing with the symptoms of metadata issues, as well as our plans for future metadata validation enhancements. The CMR currently indexes 35K collections and 300M granules.

  12. Department of the Interior metadata implementation guide—Framework for developing the metadata component for data resource management

    USGS Publications Warehouse

    Obuch, Raymond C.; Carlino, Jennifer; Zhang, Lin; Blythe, Jonathan; Dietrich, Christopher; Hawkinson, Christine

    2018-04-12

    The Department of the Interior (DOI) is a Federal agency with over 90,000 employees across 10 bureaus and 8 agency offices. Its primary mission is to protect and manage the Nation’s natural resources and cultural heritage; provide scientific and other information about those resources; and honor its trust responsibilities or special commitments to American Indians, Alaska Natives, and affiliated island communities. Data and information are critical in day-to-day operational decision making and scientific research. DOI is committed to creating, documenting, managing, and sharing high-quality data and metadata in and across its various programs that support its mission. Documenting data through metadata is essential in realizing the value of data as an enterprise asset. The completeness, consistency, and timeliness of metadata affect users’ ability to search for and discover the most relevant data for the intended purpose, and facilitate the interoperability and usability of these data among DOI bureaus and offices. Fully documented metadata describe data usability, quality, accuracy, provenance, and meaning. Across DOI, there are different maturity levels and phases of information and metadata management implementations. The Department has organized a committee consisting of bureau-level points of contact to collaborate on the development of more consistent, standardized, and more effective metadata management practices and guidance to support this shared mission and the information needs of the Department. DOI’s metadata implementation plans establish key roles and responsibilities associated with metadata management processes, procedures, and a series of actions defined in three major metadata implementation phases including: (1) Getting started—Planning Phase, (2) Implementing and Maintaining Operational Metadata Management Phase, and (3) the Next Steps towards Improving Metadata Management Phase. DOI’s phased approach for metadata management addresses some of the major data and metadata management challenges that exist across the diverse missions of the bureaus and offices. All employees who create, modify, or use data are involved with data and metadata management. Identifying, establishing, and formalizing the roles and responsibilities associated with metadata management are key to institutionalizing a framework of best practices, methodologies, processes, and common approaches throughout all levels of the organization; these are the foundation for effective data resource management. For executives and managers, metadata management strengthens their overarching views of data assets, holdings, and data interoperability; and clarifies how metadata management can help accelerate compliance with multiple policy mandates. For employees, data stewards, and data professionals, formalized metadata management will help with the consistency of definitions and approaches addressing data discoverability, data quality, and data lineage. In addition to data professionals and others associated with information technology, data stewards and program subject matter experts take on important metadata management roles and responsibilities as data flow through their respective business and science-related workflows. The responsibilities of establishing, practicing, and governing the actions associated with their specific metadata management roles are critical to successful metadata implementation.

  13. The ChArMEx database

    NASA Astrophysics Data System (ADS)

    Ferré, Helene; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Descloitres, Jacques; Fleury, Laurence; Focsa, Loredana; Henriot, Nicolas; Mastrorillo, Laurence; Mière, Arnaud; Vermeulen, Anne

    2014-05-01

    The Chemistry-Aerosol Mediterranean Experiment (ChArMEx, http://charmex.lsce.ipsl.fr/) aims at a scientific assessment of the present and future state of the atmospheric environment in the Mediterranean Basin, and of its impacts on the regional climate, air quality, and marine biogeochemistry. The project includes long-term monitoring of environmental parameters, intensive field campaigns, use of satellite data and modelling studies. Therefore ChArMEx scientists produce and need access to a wide diversity of data. In this context, the objective of the database task is to organize data management, the distribution system and services, for example by facilitating the exchange of information and stimulating collaboration between researchers within the ChArMEx community and beyond. The database relies on a strong collaboration between the OMP and ICARE data centres and has been set up in the framework of the Mediterranean Integrated Studies at Regional And Local Scales (MISTRALS) program data portal. All the data produced by or of interest for the ChArMEx community will be documented in the data catalogue and accessible through the database website: http://mistrals.sedoo.fr/ChArMEx. At present, the ChArMEx database contains about 75 datasets, including 50 in situ datasets (2012 and 2013 campaigns, Ersa background monitoring station), 25 model outputs (dust model intercomparison, MEDCORDEX scenarios), and a high resolution emission inventory over the Mediterranean. Many in situ datasets have been inserted into a relational database in order to enable more accurate data selection and download of different datasets in a shared format. The database website offers different tools: - A registration procedure which enables any scientist to accept the data policy and apply for a user database account. - A data catalogue that complies with international metadata standards (ISO 19115-19139; INSPIRE European Directive; Global Change Master Directory Thesaurus). - Metadata forms to document observations or products that will be provided to the database. - A search tool to browse the catalogue using thematic, geographic and/or temporal criteria. - A shopping-cart web interface to order in situ data files. - A web interface to select and access homogenized datasets. Interoperability between the two data centres is being set up using the OPeNDAP protocol. The data portal will soon propose user-friendly access to satellite products managed by the ICARE data centre (SEVIRI, TRMM, PARASOL...). In order to meet the operational needs of the airborne and ground-based observational teams during the ChArMEx 2012 and 2013 campaigns, a day-to-day chart and report display website has been developed too: http://choc.sedoo.org. It offers a convenient way to browse weather conditions and chemical composition during the campaign periods.

  14. Making Interoperability Easier with the NASA Metadata Management Tool

    NASA Astrophysics Data System (ADS)

    Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.

    2016-12-01

    ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because it can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool to develop ISO-compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO 19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection-level metadata by evaluating it against a variety of metadata standards. The tool provides users with clear guidance on how to change their metadata in order to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that maps cleanly to ISO 19115. This allows metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end-user tests and reviews to continually refine the tool, the model and the ISO mappings. This process allows for continual improvement and evolution to meet the community's needs.

  15. A database of paleoceanographic sediment cores from the North Pacific, 1951-2016

    NASA Astrophysics Data System (ADS)

    Borreggine, Marisa; Myhre, Sarah E.; Mislan, K. Allison S.; Deutsch, Curtis; Davis, Catherine V.

    2017-09-01

    We assessed sediment coring, data acquisition, and publications from the North Pacific (north of 30° N) from 1951 to 2016. There are 2134 sediment cores collected by American, French, Japanese, Russian, and international research vessels across the North Pacific (including the Pacific subarctic gyre, Alaskan gyre, Japan margin, and California margin; 1391 cores), the Sea of Okhotsk (271 cores), the Bering Sea (123 cores), and the Sea of Japan (349 cores) reported here. All existing metadata associated with these sediment cores are documented here, including coring date, location, core number, cruise number, water depth, vessel metadata, and coring technology. North Pacific sediment core age models are built with isotope stratigraphy, radiocarbon dating, magnetostratigraphy, biostratigraphy, tephrochronology, % opal, color, and lithological proxies. Here, we evaluate the iterative generation of each published age model and provide comprehensive documentation of the dating techniques used, along with sedimentation rates and age ranges. We categorized cores according to the availability of a variety of proxy evidence, including biological (e.g., benthic and planktonic foraminifera assemblages), geochemical (e.g., major trace element concentrations), isotopic (e.g., bulk sediment nitrogen, oxygen, and carbon isotopes), and stratigraphic (e.g., preserved laminations) proxies. This database is a unique resource to the paleoceanographic and paleoclimate communities and provides cohesive accessibility to sedimentary sequences, age model development, and proxies. The data set is publicly available through PANGAEA at https://doi.org/10.1594/PANGAEA.875998.

  16. Services for Emodnet-Chemistry Data Products

    NASA Astrophysics Data System (ADS)

    Santinelli, Giorgio; Hendriksen, Gerrit; Barth, Alexander

    2016-04-01

    In the framework of the Emodnet Chemistry lot, data products from regional leaders were made available in order to transform the information into a database. This has been done using functions and scripts that read so-called enriched ODV files and insert the data directly into a cloud relational geodatabase. The main table is the observations table, which contains the main data and metadata associated with the enriched ODV files. A particular implementation of data loading is used in order to improve on-the-fly computational speed. Data from the Baltic Sea, North Sea, Mediterranean, Black Sea and part of the Atlantic region have been entered into the geodatabase and are consequently instantly available from the OceanBrowser Emodnet portal. Furthermore, Deltares has developed an application that provides additional visualisation services for the aggregated and validated data collections. The visualisations are produced by making use of part of the OpenEarthTool stack (http://www.openearth.eu), by the integration of Web Feature Services and by the implementation of Web Processing Services. The goal is the generation of server-side plots of timeseries, profiles, timeprofiles and maps of selected parameters from data sets of selected stations. Regional data collections are retrieved using the Emodnet Chemistry cloud relational geodatabase. The spatial resolution in time and the intensity of data availability for selected parameters are shown using Web Service requests via the OceanBrowser Emodnet Web portal. OceanBrowser also shows station reference codes, which are used to establish a link for additional metadata, further data shopping and download.

  17. A synthetic dataset for evaluating soft and hard fusion algorithms

    NASA Astrophysics Data System (ADS)

    Graham, Jacob L.; Hall, David L.; Rimland, Jeffrey

    2011-06-01

    There is an emerging demand for the development of data fusion techniques and algorithms that are capable of combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span the soft and hard data sets. This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources, corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other critical metadata.

  18. Habitat-Lite: A GSC case study based on free text terms for environmental metadata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyrpides, Nikos; Hirschman, Lynette; Clark, Cheryl

    2008-04-01

    There is an urgent need to capture metadata on the rapidly growing number of genomic, metagenomic and related sequences, such as 16S ribosomal genes. This need is a major focus within the Genomic Standards Consortium (GSC), and Habitat is a key metadata descriptor in the proposed 'Minimum Information about a Genome Sequence' (MIGS) specification. The goal of the work described here is to provide a light-weight, easy-to-use (small) set of terms ('Habitat-Lite') that captures high-level information about habitat while preserving a mapping to the recently launched Environment Ontology (EnvO). Our motivation for building Habitat-Lite is to meet the needs of multiple users, such as annotators curating these data, database providers hosting the data, and biologists and bioinformaticians alike who need to search and employ such data in comparative analyses. Here, we report a case study based on semi-automated identification of terms from GenBank and GOLD. We estimate that the terms in the initial version of Habitat-Lite would provide useful labels for over 60% of the kinds of information found in the GenBank isolation-source field, and around 85% of the terms in the GOLD habitat field. We present a revised version of Habitat-Lite and invite the community's feedback on its further development in order to provide a minimum list of terms to capture high-level habitat information and to provide classification bins needed for future studies.
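
    The semi-automated binning of free-text isolation-source and habitat strings can be pictured as keyword rules that map raw text onto a small set of high-level terms, with anything unmatched left for manual curation. The rules and bins below are invented examples for illustration; they are not the Habitat-Lite vocabulary itself.

```python
# Illustrative keyword binning; the rules and bins are invented, not Habitat-Lite.
HABITAT_RULES = {
    "soil":   ["soil", "rhizosphere", "compost"],
    "marine": ["seawater", "ocean", "marine sediment"],
    "host":   ["gut", "feces", "blood", "tissue"],
}

def bin_habitat(isolation_source):
    text = isolation_source.lower()
    for habitat, keywords in HABITAT_RULES.items():
        if any(keyword in text for keyword in keywords):
            return habitat
    return "unclassified"            # candidate for manual curation

print(bin_habitat("Rhizosphere soil from maize field"))   # -> soil
print(bin_habitat("deep subsurface brine"))               # -> unclassified
```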

  19. MEMOPS: data modelling and automatic code generation.

    PubMed

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
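
    The core model-driven idea is to declare attributes once in an abstract model and generate accessors with validity checking rather than writing them by hand. The toy below illustrates only that principle; the model, type names and generation strategy are invented and bear no relation to the actual Memops UML model or its generated APIs.

```python
# Toy model-driven code generation; the model and generated class are invented.
MODEL = {
    "Peak": {                        # a hypothetical data type
        "height": {"type": float},
        "assignment": {"type": str},
    }
}

def generate_class(name, attributes):
    def make_property(attr, spec):
        def getter(self):
            return self.__dict__.get(attr)
        def setter(self, value):
            if not isinstance(value, spec["type"]):   # generated validity check
                raise TypeError(f"{attr} must be {spec['type'].__name__}")
            self.__dict__[attr] = value
        return property(getter, setter)
    return type(name, (object,),
                {a: make_property(a, s) for a, s in attributes.items()})

Peak = generate_class("Peak", MODEL["Peak"])
p = Peak()
p.height = 1.5           # passes the generated check
# p.assignment = 42      # would raise TypeError from the generated setter
```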

  20. GraphMeta: Managing HPC Rich Metadata in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Chen, Yong; Carns, Philip

    High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This 'rich' metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges on the underlying infrastructure to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
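
    The graph model for rich metadata can be pictured as files, jobs and users forming nodes that carry both POSIX-style and user-defined attributes, with provenance expressed as labelled edges. The sketch below is a plain in-memory illustration of that model only; it says nothing about GraphMeta's partitioning algorithm or write-optimised storage engine, and all identifiers are invented.

```python
# In-memory illustration of graph-structured rich metadata; identifiers are invented.
from collections import defaultdict

nodes = {}                                  # node id -> attribute dict
edges = defaultdict(list)                   # node id -> [(label, target id)]

def add_node(node_id, **attrs):
    nodes[node_id] = attrs

def add_edge(src, label, dst):
    edges[src].append((label, dst))

add_node("file:/proj/run42/out.h5", size=2**30, owner="alice", campaign="run42")
add_node("job:sim-1187", walltime="02:15:00")
add_edge("file:/proj/run42/out.h5", "generated_by", "job:sim-1187")

# Provenance query: which job produced this file?
for label, target in edges["file:/proj/run42/out.h5"]:
    if label == "generated_by":
        print(target, nodes[target])
```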

  1. A Services-Oriented Architecture for Water Observations Data

    NASA Astrophysics Data System (ADS)

    Maidment, D. R.; Zaslavsky, I.; Valentine, D.; Tarboton, D. G.; Whitenack, T.; Whiteaker, T.; Hooper, R.; Kirschtel, D.

    2009-04-01

    Water observations data are time series of measurements made at point locations of water level, flow, and quality, and corresponding data for climatic observations at point locations such as gaged precipitation and weather variables. A services-oriented architecture has been built for such information for the United States that has three components: hydrologic information servers, hydrologic information clients, and a centralized metadata cataloging system. These are connected using web services for observations data and metadata defined by an XML-based language called WaterML. A Hydrologic Information Server can be built by storing observations data in a relational database schema in the CUAHSI Observations Data Model, in which case web services access to the data and metadata is automatically provided by query functions for WaterML that are wrapped around the relational database within a web server. A Hydrologic Information Server can also be constructed by custom-programming an interface to an existing water agency web site so that it responds to the same queries by producing data in WaterML as do the CUAHSI Observations Data Model based servers. A Hydrologic Information Client is one which can interpret and ingest WaterML metadata and data. We have two client applications for Excel and ArcGIS and have shown how WaterML web services can be ingested into programming environments such as Matlab and Visual Basic. HIS Central, maintained at the San Diego Supercomputer Center, is a repository of observational metadata for WaterML web services which presently indexes 342 million data values measured at 1.75 million locations. This is the largest catalog of water observations data for the United States presently in existence. As more observation networks join what we term the "CUAHSI Water Data Federation", and the system accommodates a growing number of sites, measured parameters, applications, and users, rapid and reliable access to large heterogeneous hydrologic data repositories becomes critical. The CUAHSI HIS solution to the scalability and heterogeneity challenges has several components. Structural differences across the data repositories are addressed by building a standard services foundation for the exchange of hydrologic data, as derived from a common information model for observational data measured at stationary points and its implementation as a relational schema (ODM) and an XML schema (WaterML). Semantic heterogeneity is managed by mapping water quantity, water quality, and other parameters collected by government agencies and academic projects to a common ontology. The WaterML-compliant web services are indexed in a community services registry called HIS Central (hiscentral.cuahsi.org). Once a web service is registered in HIS Central, its metadata (site and variable characteristics, period of record for each variable at each site, etc.) is harvested and appended to the central catalog. The catalog is further updated as the service publisher associates the variables in the published service with ontology concepts. After this, the newly published service becomes available for spatial and semantics-based queries from online and desktop client applications developed by the project. Hydrologic system server software is now deployed at more than a dozen locations in the United States and Australia.
To provide rapid access to data summaries, in particular for several nation-wide data repositories including EPA STORET, USGS NWIS, and USDA SNOTEL, we convert the observation data catalogs and databases with harvested data values into special representations that support high-performance analysis and visualization. The construction of OLAP (Online Analytical Processing) cubes, often called data cubes, is an approach to organizing and querying large multi-dimensional data collections. We have applied the OLAP techniques, as implemented in Microsoft SQL Server 2005/2008, to the analysis of the catalogs from several agencies. OLAP analysis results reflect geography and history of observation data availability from USGS NWIS, EPA STORET, and USDA SNOTEL repositories, and spatial and temporal dynamics of the available measurements for several key nutrient-related parameters. Our experience developing the CUAHSI HIS cyberinfrastructure demonstrated that efficient integration of hydrologic observations from multiple government and academic sources requires a range of technical approaches focused on managing different components of data heterogeneity and system scalability. While this submission addresses technical aspects of developing a national-scale information system for hydrologic observations, the challenges of explicating shared semantics of hydrologic observations and building a community of HIS users and developers remain critical in constructing a nation-wide federation of water data services.
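
    A client in this architecture ingests WaterML returned by the web services and turns it into usable time series. The sketch below parses a heavily simplified WaterML-like response; real WaterML documents carry XML namespaces, site and variable metadata, and data qualifiers that are omitted here for brevity.

```python
# Schematic client-side ingest of a simplified WaterML-like response.
import xml.etree.ElementTree as ET

waterml_like = """\
<timeSeriesResponse>
  <timeSeries>
    <variable><variableName>Discharge</variableName><unit>cfs</unit></variable>
    <values>
      <value dateTime="2009-03-01T00:00:00">12.4</value>
      <value dateTime="2009-03-02T00:00:00">13.1</value>
    </values>
  </timeSeries>
</timeSeriesResponse>"""

root = ET.fromstring(waterml_like)
name = root.findtext(".//variableName")
series = [(v.get("dateTime"), float(v.text)) for v in root.iter("value")]
print(name, series)   # Discharge [('2009-03-01T00:00:00', 12.4), ...]
```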

  2. Preserving Geological Samples and Metadata from Polar Regions

    NASA Astrophysics Data System (ADS)

    Grunow, A.; Sjunneskog, C. M.

    2011-12-01

    The Office of Polar Programs at the National Science Foundation (NSF-OPP) has long recognized the value of preserving earth science collections due to the inherent logistical challenges and financial costs of collecting geological samples from Polar Regions. NSF-OPP established two national facilities to make Antarctic geological samples and drill cores openly and freely available for research. The Antarctic Marine Geology Research Facility (AMGRF) at Florida State University was established in 1963 and archives Antarctic marine sediment cores, dredge samples and smear slides along with ship logs. The United States Polar Rock Repository (USPRR) at Ohio State University was established in 2003 and archives polar rock samples, marine dredges, unconsolidated materials and terrestrial cores, along with associated materials such as field notes, maps, raw analytical data, paleomagnetic cores, thin sections, microfossil mounts, microslides and residues. The existence of the AMGRF and USPRR helps to minimize redundant sample collecting, lessen the environmental impact of polar field work, facilitate field logistics planning and comply with the data sharing requirements of the Antarctic Treaty. USPRR acquires collections through donations from institutions and scientists and then makes these samples available as no-cost loans for research, education and museum exhibits. The AMGRF acquires sediment cores from US-based and international collaborative drilling projects in Antarctica. Destructive research techniques are allowed on the loaned samples and loan requests are accepted from any accredited scientific institution in the world. Currently, the USPRR has more than 22,000 cataloged rock samples available to scientists from around the world. All cataloged samples are relabeled with a USPRR number, weighed, photographed and measured for magnetic susceptibility. Many aspects of the sample metadata are included in the database, e.g. geographical location, sample description, collector, rock age, formation, section location and multimedia images, as well as structural data, field observations, logistics, surface features, etc. The metadata are entered into a commercial, museum-based database called EMu. The AMGRF houses more than 25,000 m of deep-sea cores and drill cores as well as nearly 3,000 meters of rotary-cored geological material from Antarctica. Detailed information on the sediment cores, including location and sediment composition, is available in cruise reports posted on the AMGRF website. Researchers may access the sample collections through the online websites (http://www-bprc.mps.ohio-state.edu/emuwebusprr and http://www.arf.fsu.edu). Searches may be done using multiple search terms or by use of the mapping feature. The on-line databases provide an essential resource for proposal preparation, pilot studies and other sample-based research that should make fieldwork more efficient.

  3. Environmental System Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE) - A New U.S. DOE Data Archive

    NASA Astrophysics Data System (ADS)

    Agarwal, D.; Varadharajan, C.; Cholia, S.; Snavely, C.; Hendrix, V.; Gunter, D.; Riley, W. J.; Jones, M.; Budden, A. E.; Vieglais, D.

    2017-12-01

    The ESS-DIVE archive is a new U.S. Department of Energy (DOE) data archive designed to provide long-term stewardship and use of data from observational, experimental, and modeling activities in the earth and environmental sciences. The ESS-DIVE infrastructure is constructed with the long-term vision of enabling broad access to and usage of the DOE-sponsored data stored in the archive. It is designed as a scalable framework that incentivizes data providers to contribute well-structured, high-quality data to the archive and that enables the user community to easily build data processing, synthesis, and analysis capabilities using those data. The key innovations in our design include: (1) application of user-experience research methods to understand the needs of users and data contributors; (2) support for early data archiving during project data QA/QC and before public release; (3) a focus on implementation of data standards in collaboration with the community; (4) support for community-built tools for data search, interpretation, analysis, and visualization; (5) a data fusion database to support search of the data extracted from submitted packages and data available in partner data systems such as the Earth System Grid Federation (ESGF) and DataONE; and (6) support for archiving of data packages that are not to be released to the public. ESS-DIVE data contributors will be able to archive and version their data and metadata, obtain data DOIs, search for and access ESS data and metadata via web and programmatic portals, and provide data and metadata in standardized forms. The ESS-DIVE archive and catalog will be federated with other existing catalogs, allowing cross-catalog metadata search and data exchange with existing systems, including DataONE's Metacat search. ESS-DIVE is operated by a multidisciplinary team from Berkeley Lab, the National Center for Ecological Analysis and Synthesis (NCEAS), and DataONE. The primary data copies are hosted at DOE's NERSC supercomputing facility with replicas at DataONE nodes.

  4. Academic Research Library as Broker in Addressing Interoperability Challenges for the Geosciences

    NASA Astrophysics Data System (ADS)

    Smith, P., II

    2015-12-01

    Data capture is an important process in the research lifecycle. Complete descriptive and representative information about the data or database is necessary during data collection, whether in the field or in the research lab. The National Science Foundation's (NSF) Public Access Plan (2015) mandates that federally funded projects make their research data more openly available. Developing, implementing, and integrating metadata workflows into the research process of the data lifecycle facilitates improved data access while also addressing interoperability challenges for the geosciences, such as data description and representation. Lack of metadata or data curation can contribute to (1) semantic, (2) ontology, and (3) data integration issues within and across disciplinary domains and projects. Researchers on some EarthCube-funded projects have identified these issues as gaps, which can lead to interoperability problems in data access, discovery, and integration between domain-specific and general data repositories. Academic research libraries have expertise in providing long-term discovery and access through the use of metadata standards and in providing access to research data, datasets, and publications via institutional repositories. Metadata crosswalks, open archival information systems (OAIS), trusted repositories, the Data Seal of Approval, persistent URLs, and the linking of data, objects, resources, and publications in institutional repositories and digital content management systems are common components in the library discipline. These components contribute to a library perspective on data access and discovery that can benefit the geosciences. The USGS Community for Data Integration (CDI) has developed the Science Support Framework (SSF) for data management and integration within its community of practice, contributing to improved understanding of the Earth's physical and biological systems. The USGS CDI SSF can be used as a reference model for EarthCube-funded projects, with academic research libraries facilitating the data and information assets components of the USGS CDI SSF via institutional repositories and/or digital content management. This session will explore the USGS CDI SSF for cross-discipline collaboration considerations from a library perspective.

  5. University of TX Bureau of Economic Geology's Core Research Centers: The Time is Right for Registering Physical Samples and Assigning IGSN's - Workflows, Stumbling Blocks, and Successes.

    NASA Astrophysics Data System (ADS)

    Averett, A.; DeJarnett, B. B.

    2016-12-01

    The University of Texas Bureau of Economic Geology (BEG) serves as the geological survey for Texas and operates three geological sample repositories that house well over 2 million boxes of geological samples (cores and cuttings) and an abundance of geoscience data (geophysical logs, thin sections, geochemical analyses, etc.). Material is accessible and searchable online, and it is publicly available to the geological community for research and education. Patrons access information about our collection by using our online core and log database (SQL format). BEG is currently undertaking a large project to: 1) improve the internal accuracy of metadata associated with the collection; 2) enhance the capabilities of the database for BEG curators and researchers as well as our external patrons; and 3) ensure easy and efficient navigation for patrons through our online portal. As part of this project, BEG is in the early stages of planning to export the metadata for its collection into SESAR (System for Earth Sample Registration) and to have IGSNs (International GeoSample Numbers) assigned to its samples. Education regarding the value of IGSNs and an external registry (SESAR) has been crucial to receiving management support for the project, because the concept and potential benefits of registering samples in a registry outside the institution were not well known prior to this project. Potential benefits such as increased discoverability, repository recognition in publications, and interoperability were presented. The project was well received by management, and BEG fully supports the effort to register our physical samples with SESAR. Since BEG is only in the initial phase of this project, any stumbling blocks, workflow issues, successes/failures, etc. can only be predicted at this point, but by mid-December BEG expects to have several concrete issues to present in the session. Currently, our most pressing issue involves establishing the most efficient workflow for exporting large amounts of metadata in a format that SESAR can easily ingest, and determining how this can best be accomplished with very few BEG staff assigned to the project.

  6. Integration of Web-based and PC-based clinical research databases.

    PubMed

    Brandt, C A; Sun, K; Charpentier, P; Nadkarni, P M

    2004-01-01

    We have created a Web-based repository or data library of information about measurement instruments used in studies of multi-factorial geriatric health conditions (the Geriatrics Research Instrument Library - GRIL) based upon existing features of two separate clinical study data management systems. GRIL allows browsing, searching, and selecting measurement instruments based upon criteria such as keywords and areas of applicability. Measurement instruments selected can be printed and/or included in an automatically generated standalone microcomputer database application, which can be downloaded by investigators for use in data collection and data management. Integration of database applications requires the creation of a common semantic model, and mapping from each system to this model. Various database schema conflicts at the table and attribute level must be identified and resolved prior to integration. Using a conflict taxonomy and a mapping schema facilitates this process. Critical conflicts at the table level that required resolution included name and relationship differences. A major benefit of integration efforts is the sharing of features and cross-fertilization of applications created for similar purposes in different operating environments. Integration of applications mandates some degree of metadata model unification.
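
The following sketch illustrates, with invented system and table names, the kind of table-level name mapping implied by the conflict taxonomy and mapping schema described above; it is not the GRIL implementation.

        # Hypothetical table-level mapping from two source systems onto a common
        # semantic model. All system, table, and target names here are invented.
        COMMON_MODEL_MAP = {
            # (system, source_table)        -> common-model table
            ("web_system", "instrument"):      "measurement_instrument",
            ("pc_system",  "instruments_tbl"): "measurement_instrument",
            ("web_system", "study_subject"):   "participant",
            ("pc_system",  "patient"):         "participant",
        }

        def to_common_table(system: str, source_table: str) -> str:
            """Resolve a source table name to its common-model equivalent."""
            try:
                return COMMON_MODEL_MAP[(system, source_table)]
            except KeyError:
                raise ValueError(
                    f"Unmapped table {source_table!r} in {system!r}: a name or "
                    "relationship conflict must be resolved before integration."
                )

        print(to_common_table("pc_system", "patient"))  # -> participant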

  7. Building a Database for a Quantitative Model

    NASA Technical Reports Server (NTRS)

    Kahn, C. Joseph; Kleinhammer, Roger

    2014-01-01

    A database can greatly benefit a quantitative analysis. The defining characteristic of a quantitative risk, or reliability, model is the use of failure estimate data. Models can easily contain a thousand Basic Events, relying on hundreds of individual data sources. Obviously, entering so much data by hand will eventually lead to errors. Less obviously, entering data this way does not help link the Basic Events to their data sources. The best way to organize large amounts of data on a computer is with a database. But a model does not require a large, enterprise-level database with dedicated developers and administrators. A database built in Excel can be quite sufficient. A simple spreadsheet database can link every Basic Event to the individual data source selected for it. This database can also contain the manipulations appropriate for how the data are used in the model. These manipulations include stressing factors based on use and maintenance cycles, dormancy, unique failure modes, the modeling of multiple items as a single "Super component" Basic Event, and Bayesian updating based on flight and testing experience. A simple, unique metadata field in both the model and the database provides a link from any Basic Event in the model to its data source and all relevant calculations. The credibility of the entire model often rests on the credibility and traceability of the data.
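
As a purely illustrative sketch (not the authors' spreadsheet), the snippet below links Basic Events to failure-data sources through a shared metadata key and applies a simple stressing factor; every identifier, rate, and factor is hypothetical.

        # Link Basic Events to their data sources via a shared metadata key and
        # apply illustrative manipulations. All values below are invented.
        import pandas as pd

        basic_events = pd.DataFrame({
            "basic_event": ["BE-VALVE-001", "BE-PUMP-004"],
            "source_key":  ["SRC-17", "SRC-02"],    # metadata key shared with the database
            "duty_cycle":  [0.25, 1.00],            # fraction of mission time in use
        })

        data_sources = pd.DataFrame({
            "source_key":    ["SRC-02", "SRC-17"],
            "base_rate":     [3.0e-6, 1.2e-5],      # failures per hour from the source
            "stress_factor": [1.0, 2.5],            # e.g. environment or maintenance stress
        })

        # The join on the metadata key keeps every Basic Event traceable to its source.
        model = basic_events.merge(data_sources, on="source_key", how="left")
        model["effective_rate"] = model["base_rate"] * model["stress_factor"] * model["duty_cycle"]
        print(model[["basic_event", "source_key", "effective_rate"]])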

  8. Viral taxonomy needs a spring clean; its exploration era is over.

    PubMed

    Gibbs, Adrian J

    2013-08-09

    The International Committee on Taxonomy of Viruses has recently changed its approved definition of a viral species, and also discontinued work on its database of virus descriptions. These events indicate that the exploration era of viral taxonomy has ended; over the past century the principles of viral taxonomy have been established, the tools for phylogenetic inference invented, and the ultimate discriminatory data required for taxonomy, namely gene sequences, are now readily available. Further changes would make viral taxonomy more informative. First, the status of a 'taxonomic species' with an italicized name should only be given to viruses that are specifically linked with a single 'type genomic sequence' like those in the NCBI Reference Sequence Database. Secondly all approved taxa should be predominately monophyletic, and uninformative higher taxa disendorsed. These are 'quality assurance' measures and would improve the value of viral nomenclature to its users. The ICTV should also promote the use of a public database, such as Wikipedia, to replace the ICTV database as a store of the primary metadata of individual viruses, and should publish abstracts of the ICTV Reports in that database, so that they are 'Open Access'.

  9. A DBMS architecture for global change research

    NASA Astrophysics Data System (ADS)

    Hachem, Nabil I.; Gennert, Michael A.; Ward, Matthew O.

    1993-08-01

    The goal of this research is the design and development of an integrated system for the management of very large scientific databases, cartographic/geographic information processing, and exploratory scientific data analysis for global change research. The system will represent both spatial and temporal knowledge about natural and man-made entities on the earth's surface, following an object-oriented paradigm. A user will be able to derive, modify, and apply procedures to perform operations on the data, including comparison, derivation, prediction, validation, and visualization. This work represents an effort to extend database technology with an intrinsic class of operators that is extensible and responds to the growing needs of scientific research. Of significance is the integration of many diverse forms of data into the database, including cartography, geography, hydrography, hypsography, images, and urban planning data. Equally important is the maintenance of metadata, that is, data about the data, such as coordinate transformation parameters, map scales, and audit trails of previous processing operations. This project will impact the fields of geographic information systems and global change research as well as the database community. It will provide an integrated database management testbed for scientific research, and a testbed for the development of analysis tools to understand and predict global change.

  10. Documentation Resources on the ESIP Wiki

    NASA Technical Reports Server (NTRS)

    Habermann, Ted; Kozimor, John; Gordon, Sean

    2017-01-01

    The ESIP community includes data providers and users that communicate with one another through datasets and metadata that describe them. Improving this communication depends on consistent high-quality metadata. The ESIP Documentation Cluster and the wiki play an important central role in facilitating this communication. We will describe and demonstrate sections of the wiki that provide information about metadata concept definitions, metadata recommendation, metadata dialects, and guidance pages. We will also describe and demonstrate the ISO Explorer, a tool that the community is developing to help metadata creators.

  11. Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.

    PubMed

    Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C

    2018-01-17

    Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata is essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats; the second uses the file format and metadata model developed for the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM would appear to provide an advantage over consumer image file formats for metadata, as it includes all the patient, study, and technical metadata necessary to use images clinically. Consumer image file formats, in contrast, include only technical metadata and need to be used in conjunction with another actor, for example an electronic medical record, to supply the patient and study metadata. The use of DICOM may have some ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.
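
A minimal sketch of the contrast described above, assuming the pydicom and Pillow libraries are installed; the file paths are hypothetical placeholders, and the specific tags printed are only examples of the patient/study context that DICOM carries and EXIF does not.

        # Contrast DICOM metadata with consumer-format EXIF metadata (illustrative).
        import pydicom
        from PIL import Image, ExifTags

        # DICOM carries patient and study context alongside the pixel data.
        ds = pydicom.dcmread("lesion_001.dcm")          # hypothetical file
        print(ds.get("PatientID"), ds.get("StudyDate"), ds.get("Modality"))

        # EXIF holds technical acquisition tags only (camera model, exposure, ...);
        # patient and study metadata must come from another actor such as an EMR.
        img = Image.open("lesion_001.jpg")              # hypothetical file
        for tag_id, value in img.getexif().items():
            print(ExifTags.TAGS.get(tag_id, tag_id), value)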

  12. ISO, FGDC, DIF and Dublin Core - Making Sense of Metadata Standards for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Jones, P. R.; Ritchey, N. A.; Peng, G.; Toner, V. A.; Brown, H.

    2014-12-01

    Metadata standards provide common definitions of metadata fields for information exchange across user communities. Despite the broad adoption of metadata standards for Earth science data, there are still heterogeneous and incompatible representations of information due to differences between the many standards in use and in how each standard is applied. Federal agencies are required to manage and publish metadata in different metadata standards and formats for various data catalogs. In 2014, the NOAA National Climatic Data Center (NCDC) managed metadata for its scientific datasets in ISO 19115-2 in XML, GCMD Directory Interchange Format (DIF) in XML, DataCite Schema in XML, Dublin Core in XML, and Data Catalog Vocabulary (DCAT) in JSON, with more standards and profiles of standards planned. Of these standards, the ISO 19115-series metadata is the most complete and feature-rich, and for this reason it is used by NCDC as the source for the other metadata standards. We will discuss the capabilities of metadata standards and how these standards are being implemented to document datasets. Successful implementations include developing translations and displays using XSLTs, creating links to related data and resources, documenting dataset lineage, and establishing best practices. Benefits, gaps, and challenges will be highlighted with suggestions for improved approaches to metadata storage and maintenance.
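
A sketch of applying an XSLT crosswalk to an ISO 19115-2 record, in the spirit of the translations mentioned above; the stylesheet and record filenames are hypothetical and this is not NCDC's production pipeline.

        # Apply an XSLT crosswalk (e.g. ISO 19115-2 -> Dublin Core) with lxml.
        from lxml import etree

        iso_record = etree.parse("dataset_iso19115-2.xml")           # source ISO metadata
        crosswalk = etree.XSLT(etree.parse("iso_to_dublincore.xsl"))  # crosswalk stylesheet

        dublin_core = crosswalk(iso_record)
        print(str(dublin_core))   # serialized translated record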

  13. From the inside-out: Retrospectives on a metadata improvement process to advance the discoverability of NASA's earth science data

    NASA Astrophysics Data System (ADS)

    Hernández, B. E.; Bugbee, K.; le Roux, J.; Beaty, T.; Hansen, M.; Staton, P.; Sisco, A. W.

    2017-12-01

    Earth observation (EO) data collected as part of NASA's Earth Observing System Data and Information System (EOSDIS) are now searchable via the Common Metadata Repository (CMR). The Analysis and Review of CMR (ARC) Team at Marshall Space Flight Center has been tasked with reviewing all NASA metadata records in the CMR (approximately 7,000 records). Each collection-level record and its constituent granule-level metadata are reviewed for both completeness and compliance with the CMR's set of metadata standards, as specified in the Unified Metadata Model (UMM). NASA's Distributed Active Archive Centers (DAACs) have been harmonizing priority metadata records within the context of the inter-agency federal Big Earth Data Initiative (BEDI), which seeks to improve the discoverability, accessibility, and usability of EO data. Thus, the first phase of this project consists of reviewing BEDI metadata records, while the second phase will consist of reviewing the remaining non-BEDI records in the CMR. This presentation will discuss the ARC team's findings in terms of the overall quality of BEDI records across all DAACs as well as compliance with UMM standards. For instance, only a fifth of the collection-level metadata fields needed correction, compared to a quarter of the granule-level fields. It should be noted that the degree to which DAACs' metadata did not comply with the UMM standards may reflect multiple factors, such as recent changes in the UMM standards and the use of different metadata formats (e.g. DIF 10, ECHO 10, ISO 19115-1) across the DAACs. Insights, constructive criticism, and lessons learned from this metadata review process will be contributed by both ORNL and SEDAC. Further inquiry along these lines may yield insights that improve the metadata curation process moving forward. In terms of the broader implications for metadata compliance with the UMM standards, this research has shown that a large proportion of the prioritized collections have already been made compliant, although the process of improving metadata quality is ongoing and iterative. Further research is also warranted into whether the gains in metadata quality are also driving gains in data use.

  14. Forum Guide to Metadata: The Meaning behind Education Data. NFES 2009-805

    ERIC Educational Resources Information Center

    National Forum on Education Statistics, 2009

    2009-01-01

    The purpose of this guide is to empower people to more effectively use data as information. To accomplish this, the publication explains what metadata are; why metadata are critical to the development of sound education data systems; what components comprise a metadata system; what value metadata bring to data management and use; and how to…

  15. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    ERIC Educational Resources Information Center

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  16. A novel framework for assessing metadata quality in epidemiological and public health research settings

    PubMed Central

    McMahon, Christiana; Denaxas, Spiros

    2016-01-01

    Metadata are critical in epidemiological and public health research. However, a lack of biomedical metadata quality frameworks and limited awareness of the implications of poor quality metadata render data analyses problematic. In this study, we created and evaluated a novel framework to assess the metadata quality of epidemiological and public health research datasets. We performed a literature review and surveyed stakeholders to enhance our understanding of biomedical metadata quality assessment. The review identified 11 studies and nine quality dimensions, none of which were specifically aimed at biomedical metadata. In total, 96 individuals completed the survey; of those who submitted data, most assessed metadata quality only sometimes, and eight did not at all. Our framework has four sections: a) general information; b) tools and technologies; c) usability; and d) management and curation. We evaluated the framework using three test cases and sought expert feedback. The framework can assess biomedical metadata quality systematically and robustly. PMID:27570670

  17. A novel framework for assessing metadata quality in epidemiological and public health research settings.

    PubMed

    McMahon, Christiana; Denaxas, Spiros

    2016-01-01

    Metadata are critical in epidemiological and public health research. However, a lack of biomedical metadata quality frameworks and limited awareness of the implications of poor quality metadata render data analyses problematic. In this study, we created and evaluated a novel framework to assess the metadata quality of epidemiological and public health research datasets. We performed a literature review and surveyed stakeholders to enhance our understanding of biomedical metadata quality assessment. The review identified 11 studies and nine quality dimensions, none of which were specifically aimed at biomedical metadata. In total, 96 individuals completed the survey; of those who submitted data, most assessed metadata quality only sometimes, and eight did not at all. Our framework has four sections: a) general information; b) tools and technologies; c) usability; and d) management and curation. We evaluated the framework using three test cases and sought expert feedback. The framework can assess biomedical metadata quality systematically and robustly.

  18. Adapting the CUAHSI Hydrologic Information System to OGC standards

    NASA Astrophysics Data System (ADS)

    Valentine, D. W.; Whitenack, T.; Zaslavsky, I.

    2010-12-01

    The CUAHSI Hydrologic Information System (HIS) provides web and desktop client access to hydrologic observations via water data web services using an XML schema called “WaterML”. The WaterML 1.x specification and the corresponding Water Data Services have been the backbone of the HIS service-oriented architecture (SOA) and have been adopted for serving hydrologic data by several federal agencies and many academic groups. The central discovery service, HIS Central, is based on a metadata catalog that references 4.7 billion observations, organized as 23 million data series from 1.5 million sites from 51 organizations. Observations data are published using HydroServer nodes that have been deployed at 18 organizations. Usage of HIS increased eightfold from 2008 to 2010, doubling from 1,600 data series a day in 2009 to 3,600 data series a day in the first half of 2010. The HIS Central metadata catalog currently harvests information from 56 Water Data Services. We collaborate on the catalog updates with two federal partners, USGS and US EPA: their data series are periodically reloaded into the HIS metadata catalog. We are pursuing two main development directions in the HIS project: cloud-based computing, and further compliance with Open Geospatial Consortium (OGC) standards. The goal of moving to cloud computing is to provide a scalable collaborative system with simpler deployment and less dependence on hardware maintenance and staff. This move requires re-architecting the information models underlying the metadata catalog and the Water Data Services so that they are independent of the underlying relational database model, allowing for implementation on both relational databases and cloud-based processing systems. Cloud-based HIS Central resources can be managed collaboratively; partners share responsibility for their metadata by publishing data series information into the centralized catalog. Publishing data series will use REST-based service interfaces, like OData, as the basis for ingesting data series information into a cloud-hosted catalog. The future HIS services involve providing information via OGC standards that will allow observational data access from commercial GIS applications. Use of standards will allow tools to access observational data from other projects that use the same standards, such as the Ocean Observatories Initiative, and will allow tools from such projects to be integrated into the HIS toolset. With international collaborators, we have been developing a water information exchange language called “WaterML 2.0” which will be used to deliver observations data over OGC Sensor Observation Services (SOS). A software stack of OGC standard services will provide access to HIS information. In addition to SOS, Web Mapping and Feature Services (WMS and WFS) will provide access to location information. Catalog Services for the Web (CSW) will provide a catalog for water information that is both centralized and distributed. We intend the OGC standards to supplement, rather than replace, the existing HIS service interfaces. The ultimate goal of this development is to expand access to hydrologic observations data and to create an environment where these data can be seamlessly integrated with standards-compliant data resources.
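
The snippet below is an illustrative key-value-pair GetObservation request against an OGC Sensor Observation Service of the kind described; the endpoint URL and the offering/property identifiers are hypothetical placeholders, not actual CUAHSI HIS services.

        # Illustrative SOS 2.0 KVP GetObservation request returning a WaterML 2.0 document.
        import requests

        params = {
            "service": "SOS",
            "version": "2.0.0",
            "request": "GetObservation",
            "offering": "example:discharge-offering",          # hypothetical identifiers
            "observedProperty": "example:streamflow",
            "temporalFilter": "om:phenomenonTime,2010-01-01T00:00:00Z/2010-01-31T23:59:59Z",
            "responseFormat": "http://www.opengis.net/waterml/2.0",
        }
        response = requests.get("https://example.org/sos", params=params, timeout=30)
        response.raise_for_status()
        print(response.text[:500])   # beginning of the observations document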

  19. The interoperability skill of the Geographic Portal of the ISPRA - Geological Survey of Italy

    NASA Astrophysics Data System (ADS)

    Pia Congi, Maria; Campo, Valentina; Cipolloni, Carlo; Delogu, Daniela; Ventura, Renato; Battaglini, Loredana

    2010-05-01

    The Geographic Portal of the Geological Survey of Italy (ISPRA), available at http://serviziogeologico.apat.it/Portal, was planned according to the standard criteria of the INSPIRE Directive. ArcIMS services and, at the same time, WMS and WFS services were implemented to satisfy different clients. For each database and web service, metadata were written in accordance with ISO 19115. The management architecture of the portal allows it to encode client input and output requests both in ArcXML and in GML. Web applications and web services were developed for each database owned by the Land Protection and Georesources Department, covering the geological map at the 1:50,000 scale (CARG Project) and at the 1:100,000 scale, the IFFI landslide inventory, the boreholes recorded pursuant to Law 464/84, the large-scale geological map and all raster-format maps. The portal published so far is at an experimental stage, but it will reach its final version through the development of a new graphical interface. The WMS and WFS services, including metadata, will be re-designed. The soundness of the methodology and of the applied standards allows us to look ahead to further developments. In addition, it must be borne in mind that the capabilities of the new geological standard language (GeoSciML), which is already incorporated in the deployed web services, will allow better display and querying of geological data in accordance with interoperability principles. The characteristics of geological data demand, for cartographic mapping, specific symbol libraries that are not yet available in a WMS service. This is another aspect concerning standards for geological information. Therefore, the following have been produced so far: a library of geological symbols to be used for printing, with a sketch of system colors, and a library for displaying data on screen, which almost completely solves the problems of point and area coverage data (including oriented symbols) but still presents problems for linear data (possible solutions: ArcIMS services from ArcMap projects or a specific SLD implementation for WMS services); an update of the "Guidelines for the supply of geological data", to be published shortly; and the official involvement of the Geological Survey of Italy in the IUGS-CGI working group for the development and testing of the new GeoSciML language with WMS/WFS services. Geographic information is made discoverable through metadata that can be distributed online, so that search engines can find it through specialized searches. The metadata collected in catalogs are structured according to a standard (ISO 19135). The catalogs provide a common interface to locate, view and query data and metadata services, web services and other resources. While working in a growing sector of environmental knowledge, the focus is on securing the participation of other parties that can enrich the available information content, so as to arrive at a true portal of national interest, especially for disaster management.

  20. Assembly: a resource for assembled genomes at NCBI

    PubMed Central

    Kitts, Paul A.; Church, Deanna M.; Thibaud-Nissen, Françoise; Choi, Jinna; Hem, Vichet; Sapojnikov, Victor; Smith, Robert G.; Tatusova, Tatiana; Xiang, Charlie; Zherikov, Andrey; DiCuccio, Michael; Murphy, Terence D.; Pruitt, Kim D.; Kimchi, Avi

    2016-01-01

    The NCBI Assembly database (www.ncbi.nlm.nih.gov/assembly/) provides stable accessioning and data tracking for genome assembly data. The model underlying the database can accommodate a range of assembly structures, including sets of unordered contig or scaffold sequences, bacterial genomes consisting of a single complete chromosome, or complex structures such as a human genome with modeled allelic variation. The database provides an assembly accession and version to unambiguously identify the set of sequences that make up a particular version of an assembly, and tracks changes to updated genome assemblies. The Assembly database reports metadata such as assembly names, simple statistical reports of the assembly (number of contigs and scaffolds, contiguity metrics such as contig N50, total sequence length and total gap length) as well as the assembly update history. The Assembly database also tracks the relationship between an assembly submitted to the International Nucleotide Sequence Database Consortium (INSDC) and the assembly represented in the NCBI RefSeq project. Users can find assemblies of interest by querying the Assembly Resource directly or by browsing available assemblies for a particular organism. Links in the Assembly Resource allow users to easily download sequence and annotations for current versions of genome assemblies from the NCBI genomes FTP site. PMID:26578580
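
As a sketch of the programmatic access mentioned above, the snippet below uses Biopython's Entrez module to search the Assembly resource; the e-mail address is a placeholder required by NCBI usage policy, and the exact structure of the summary response may vary between Entrez versions.

        # Locate assemblies for an organism via NCBI Entrez (illustrative sketch).
        from Bio import Entrez

        Entrez.email = "you@example.org"   # placeholder; NCBI asks for a real address

        handle = Entrez.esearch(db="assembly", term="Escherichia coli[Organism]", retmax=5)
        result = Entrez.read(handle)
        handle.close()
        print("Matching assembly UIDs:", result["IdList"])

        # Summary metadata (assembly accession, name, statistics, ...) for one record.
        summary_handle = Entrez.esummary(db="assembly", id=result["IdList"][0])
        print(summary_handle.read()[:500])   # raw XML summary, truncated
        summary_handle.close()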

  1. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    PubMed

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

    Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous, and solutions face diverse challenges of cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN had published 192 mammalian, plant and protein life sciences databases containing 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data covered by this database integration framework is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface for secure access to each fragment of linked and raw life sciences data from programming languages popular among bioinformaticians, such as Perl and Ruby. Researchers have successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents such as ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.

  2. Towards Data Value-Level Metadata for Clinical Studies.

    PubMed

    Zozus, Meredith Nahm; Bonner, Joseph

    2017-01-01

    While several standards for metadata describing clinical studies exist, comprehensive metadata to support traceability of data from clinical studies has not been articulated. We examine uses of metadata in clinical studies. We examine and enumerate seven sources of data value-level metadata in clinical studies inclusive of research designs across the spectrum of the National Institutes of Health definition of clinical research. The sources of metadata inform categorization in terms of metadata describing the origin of a data value, the definition of a data value, and operations to which the data value was subjected. The latter is further categorized into information about changes to a data value, movement of a data value, retrieval of a data value, and data quality checks, constraints or assessments to which the data value was subjected. The implications of tracking and managing data value-level metadata are explored.
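
The record below is purely illustrative: it arranges the categories enumerated above (origin, definition, and operations applied to a data value) as a nested structure, with every field name and value invented for the example.

        # Illustrative data value-level metadata record; all fields are hypothetical.
        value_metadata = {
            "value": 132,                                   # the data value itself (e.g., systolic BP)
            "origin": {
                "source": "case report form v2, item 14",
                "collected_by": "site 07 coordinator",
                "collected_on": "2016-03-04",
            },
            "definition": {
                "variable": "SYSBP",
                "units": "mmHg",
                "permissible_range": [60, 250],
            },
            "operations": {
                "changes":   [{"on": "2016-03-10", "from": 123, "to": 132, "reason": "transcription error"}],
                "movement":  ["EDC export 2016-04-01", "analysis dataset build 2016-05-15"],
                "retrieval": ["interim analysis query 2016-06-02"],
                "quality_checks": ["range check passed", "cross-form consistency check passed"],
            },
        }
        print(value_metadata["operations"]["changes"][0])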

  3. Image BOSS: a biomedical object storage system

    NASA Astrophysics Data System (ADS)

    Stacy, Mahlon C.; Augustine, Kurt E.; Robb, Richard A.

    1997-05-01

    Researchers using biomedical images have data management needs which are oriented perpendicular to clinical PACS. The image BOSS system is designed to permit researchers to organize and select images based on research topic, image metadata, and a thumbnail of the image. Image information is captured from existing images in a Unix based filesystem, stored in an object oriented database, and presented to the user in a familiar laboratory notebook metaphor. In addition, the ImageBOSS is designed to provide an extensible infrastructure for future content-based queries directly on the images.

  4. Structure constrained by metadata in networks of chess players.

    PubMed

    Almeira, Nahuel; Schaigorodsky, Ana L; Perotti, Juan I; Billoni, Orlando V

    2017-11-09

    Chess is an emblematic sport that stands out because of its age, popularity and complexity. It has served to study human behavior from the perspective of a wide number of disciplines, from cognitive skills such as memory and learning, to aspects like innovation and decision-making. Given that extensive documentation of chess games played throughout history is available, it is possible to perform detailed and statistically significant studies about this sport. Here we use one of the most extensive chess databases in the world to construct two networks of chess players. One of the networks includes games that were played over-the-board and the other contains games played on the Internet. We study the main topological characteristics of the networks, such as degree distribution and correlations, transitivity and community structure. We complement the structural analysis by incorporating players' level of play as node metadata. Although both networks are topologically different, we show that in both cases players gather in communities according to their expertise and that an emergent rich-club structure, composed by the top-rated players, is also present.
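
A toy sketch of the kind of analysis described above, using networkx on an invented edge list with invented Elo ratings attached as node metadata; it is not the chess database or the study's pipeline.

        # Degree, transitivity, and communities on a toy graph with rating metadata.
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        G = nx.Graph()
        G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "C")])

        # Attach players' level of play as node metadata (hypothetical ratings).
        elo = {"A": 2500, "B": 2480, "C": 2510, "D": 1900, "E": 1850}
        nx.set_node_attributes(G, elo, name="elo")

        print("degrees:", dict(G.degree()))
        print("transitivity:", nx.transitivity(G))

        # Community structure, which can then be compared against the rating metadata.
        for community in greedy_modularity_communities(G):
            ratings = [G.nodes[p]["elo"] for p in community]
            print(sorted(community), "mean Elo:", sum(ratings) / len(ratings))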

  5. František Nábělek's Iter Turcico-Persicum 1909-1910 - database and digitized herbarium collection.

    PubMed

    Kempa, Matúš; Edmondson, John; Lack, Hans Walter; Smatanová, Janka; Marhold, Karol

    2016-01-01

    The Czech botanist František Nábělek (1884-1965) explored the Middle East in 1909-1910, visiting what are now Israel, Palestine, Jordan, Syria, Lebanon, Iraq, Bahrain, Iran and Turkey. He described four new genera, 78 species, 69 varieties and 38 forms of vascular plants, most of these in his work Iter Turcico-Persicum (1923-1929). The main herbarium collection of Iter Turcico-Persicum comprises 4163 collection numbers (some with duplicates), altogether 6465 specimens. It is currently deposited in the herbarium SAV. In addition, some fragments and duplicates are found in B, E, W and WU. The whole collection at SAV was recently digitized and both images and metadata are available via web portal www.nabelek.sav.sk, and through JSTOR Global Plants and the Biological Collection Access Service. Most localities were georeferenced and the web portal provides a mapping facility. Annotation of specimens is available via the AnnoSys facility. For each specimen a CETAF stable identifier is provided enabling the correct reference to the image and metadata.

  6. Quantifying auditory temporal stability in a large database of recorded music.

    PubMed

    Ellis, Robert J; Duan, Zhiyan; Wang, Ye

    2014-01-01

    "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature"), none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS)--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.

  7. A data discovery index for the social sciences

    PubMed Central

    Krämer, Thomas; Klas, Claus-Peter; Hausstein, Brigitte

    2018-01-01

    This paper describes a novel search index for social and economic research data, one that enables users to search up-to-date references for data holdings in these disciplines. The index can be used for comparative analysis of publication of datasets in different areas of social science. The core of the index is the da|ra registration agency’s database for social and economic data, which contains high-quality searchable metadata from registered data publishers. Research data’s metadata records are harvested from data providers around the world and included in the index. In this paper, we describe the currently available indices on social science datasets and their shortcomings. Next, we describe the motivation behind and the purpose for the data discovery index as a dedicated and curated platform for finding social science research data and gesisDataSearch, its user interface. Further, we explain the harvesting, filtering and indexing procedure and give usage instructions for the dataset index. Lastly, we show that the index is currently the most comprehensive and most accessible collection of social science data descriptions available. PMID:29633988
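
The sketch below assumes that provider metadata are harvested over OAI-PMH, a protocol commonly used for this kind of index, and uses the sickle client library; the endpoint URL is a hypothetical placeholder, and the abstract above does not specify the harvesting protocol actually used.

        # Harvest Dublin Core metadata records over OAI-PMH (illustrative sketch).
        from sickle import Sickle

        harvester = Sickle("https://example.org/oai")   # hypothetical provider endpoint
        records = harvester.ListRecords(metadataPrefix="oai_dc", ignore_deleted=True)

        for i, record in enumerate(records):
            meta = record.metadata                      # Dublin Core fields as a dict of lists
            print(meta.get("title"), meta.get("identifier"))
            if i >= 9:                                  # only a small sample here
                break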

  8. Managing an Archive of Images

    NASA Technical Reports Server (NTRS)

    Andres, Vince; Walter, David; Hallal, Charles; Jones, Helene; Callac, Chris

    2004-01-01

    The SSC Multimedia Archive is an automated electronic system to manage images, acquired both by film and digital cameras, for the Public Affairs Office (PAO) at Stennis Space Center (SSC). Previously, the image archive was based on film photography and utilized a manual system that, by today's standards, had become inefficient and expensive. Now the SSC Multimedia Archive, based on a server at SSC, contains both catalogs and images for pictures taken both digitally and with a traditional, film-based camera, along with metadata about each image. After a "shoot," a photographer downloads the images into the database. Members of the PAO can use a Web-based application to search, view and retrieve images, approve images for publication, and view and edit metadata associated with the images. Approved images are archived and cross-referenced with appropriate descriptions and information. Security is provided by allowing administrators to explicitly grant personnel access privileges only to the components of the system that they need (e.g., only photographers may upload images, and only PAO-designated employees may approve images).

  9. František Nábělek’s Iter Turcico-Persicum 1909–1910 – database and digitized herbarium collection

    PubMed Central

    Kempa, Matúš; Edmondson, John; Lack, Hans Walter; Smatanová, Janka; Marhold, Karol

    2016-01-01

    The Czech botanist František Nábělek (1884−1965) explored the Middle East in 1909-1910, visiting what are now Israel, Palestine, Jordan, Syria, Lebanon, Iraq, Bahrain, Iran and Turkey. He described four new genera, 78 species, 69 varieties and 38 forms of vascular plants, most of these in his work Iter Turcico-Persicum (1923−1929). The main herbarium collection of Iter Turcico-Persicum comprises 4163 collection numbers (some with duplicates), altogether 6465 specimens. It is currently deposited in the herbarium SAV. In addition, some fragments and duplicates are found in B, E, W and WU. The whole collection at SAV was recently digitized and both images and metadata are available via web portal www.nabelek.sav.sk, and through JSTOR Global Plants and the Biological Collection Access Service. Most localities were georeferenced and the web portal provides a mapping facility. Annotation of specimens is available via the AnnoSys facility. For each specimen a CETAF stable identifier is provided enabling the correct reference to the image and metadata. PMID:28127245

  10. Managing Complex Change in Clinical Study Metadata

    PubMed Central

    Brandt, Cynthia A.; Gadagkar, Rohit; Rodriguez, Cesar; Nadkarni, Prakash M.

    2004-01-01

    In highly functional metadata-driven software, the interrelationships within the metadata become complex, and maintenance becomes challenging. We describe an approach to metadata management that uses a knowledge-base subschema to store centralized information about metadata dependencies and use cases involving specific types of metadata modification. Our system borrows ideas from production-rule systems in that some of this information is a high-level specification that is interpreted and executed dynamically by a middleware engine. Our approach is implemented in TrialDB, a generic clinical study data management system. We review approaches that have been used for metadata management in other contexts and describe the features, capabilities, and limitations of our system. PMID:15187070

  11. Collaborative Data Publication Utilizing the Open Data Repository's (ODR) Data Publisher

    NASA Technical Reports Server (NTRS)

    Stone, N.; Lafuente, B.; Bristow, T.; Keller, R. M.; Downs, R. T.; Blake, D.; Fonda, M.; Dateo, C.; Pires, A.

    2017-01-01

    Introduction: For small communities in diverse fields such as astrobiology, publishing and sharing data can be a difficult challenge. While large, homogenous fields often have repositories and existing data standards, small groups of independent researchers have few options for publishing standards and data that can be utilized within their community. In conjunction with teams at NASA Ames and the University of Arizona, the Open Data Repository's (ODR) Data Publisher team has been conducting ongoing pilots to assess the needs of diverse research groups and to develop software to allow them to publish and share their data collaboratively. Objectives: The ODR's Data Publisher aims to provide an easy-to-use and easy-to-implement software tool that will allow researchers to create and publish database templates and related data. The end product will facilitate both human-readable interfaces (web-based with embedded images, files, and charts) and machine-readable interfaces utilizing semantic standards. Characteristics: The Data Publisher software runs on the standard LAMP (Linux, Apache, MySQL, PHP) stack to provide the widest server base available. The software is based on Symfony (www.symfony.com), which provides a robust framework for creating extensible, object-oriented software in PHP. The software interface consists of a template designer where individual or master database templates can be created. A master database template can be shared by many researchers to provide a common metadata standard that will set a compatibility standard for all derivative databases. Individual researchers can then extend their instance of the template with custom fields, file storage, or visualizations that may be unique to their studies. This allows groups to create compatible databases for data discovery and sharing purposes while still providing the flexibility needed to meet the needs of scientists in rapidly evolving areas of research. Research: As part of this effort, a number of pilot and test projects are currently in progress. The Astrobiology Habitable Environments Database Working Group is developing a shared database standard using the ODR's Data Publisher and has a number of example databases where astrobiology data are shared. Soon these databases will be integrated via the template-based standard. Work with this group helps determine what data researchers in these diverse fields need to share and archive. Additionally, this pilot helps determine what standards are viable for sharing these types of data, ranging from internally developed standards to existing open standards such as the Dublin Core (http://dublincore.org) and Darwin Core (http://rs.tdwg.org) metadata standards. Further studies are ongoing with the University of Arizona Department of Geosciences where a number of mineralogy databases are being constructed within the ODR Data Publisher system. Conclusions: Through the ongoing pilots and discussions with individual researchers and small research teams, a definition of the tools desired by these groups is coming into focus. As the software development moves forward, the goal is to meet the publication and collaboration needs of these scientists in an unobtrusive and functional way.

  12. Metazen – metadata capture for metagenomes

    PubMed Central

    2014-01-01

    Background As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusions Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility. PMID:25780508

  13. Metazen - metadata capture for metagenomes.

    PubMed

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; Glass, Elizabeth; Wilke, Andreas; Meyer, Folker

    2014-01-01

    As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  14. Improving Access to NASA Earth Science Data through Collaborative Metadata Curation

    NASA Astrophysics Data System (ADS)

    Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.

    2017-12-01

    The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.
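
The CMR exposes a public search API; the sketch below issues a simple keyword search for collection records. The endpoint and parameters reflect the publicly documented search interface as generally understood, but treat the exact fields as assumptions and consult the current API documentation before relying on them.

        # Keyword search against the CMR collections search endpoint (illustrative).
        import requests

        resp = requests.get(
            "https://cmr.earthdata.nasa.gov/search/collections.json",
            params={"keyword": "sea surface temperature", "page_size": 5},
            timeout=30,
        )
        resp.raise_for_status()

        for entry in resp.json().get("feed", {}).get("entry", []):
            print(entry.get("id"), "-", entry.get("title"))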

  15. CMR Metadata Curation

    NASA Technical Reports Server (NTRS)

    Shum, Dana; Bugbee, Kaylin

    2017-01-01

    This talk explains the ongoing metadata curation activities in the Common Metadata Repository. It explores tools that exist today which are useful for building quality metadata and also opens up the floor for discussions on other potentially useful tools.

  16. First results of MAO NASU SS bodies photographic archive digitizing

    NASA Astrophysics Data System (ADS)

    Pakuliak, L.; Andruk, V.; Shatokhina, S.; Golovnya, V.; Yizhakevych, O.; Kulyk, I.

    2013-05-01

    The MAO NASU glass archive comprises about 1800 photographic plates with planets and their satellites (including nearly 80 images of Uranus, Pluto and Neptune), about 1700 plates with minor planets and about 900 plates with comets. The plates were taken during 1949-1999 using 11 telescopes with different focal lengths, mostly the Double Wide-angle Astrograph (F/D=2000/400) and the Double Long-focus Astrograph (F/D=5500/400) of MAO NASU. Observational sites are Kyiv and Lviv (Ukraine), Biurakan (Armenia), Abastumani (Georgia), Mt. Maidanak (Uzbekistan) and Quito (Ecuador). Tables summarize the numbers of plates, subdivided by year and object. The database with plate metadata (DBGPA) is openly accessible on the MAO computer cluster (http://gua.db.ukr-vo.org). The database accumulates the archives of four Ukrainian observatories involved in the UkrVO national project. Together with the archive management system, the database serves as a test area for the Joint Digital Archive (JDA), the core of the UkrVO.

  17. MetaboLights: An Open-Access Database Repository for Metabolomics Data.

    PubMed

    Kale, Namrata S; Haug, Kenneth; Conesa, Pablo; Jayseelan, Kalaivani; Moreno, Pablo; Rocca-Serra, Philippe; Nainala, Venkata Chandrasekhar; Spicer, Rachel A; Williams, Mark; Li, Xuefei; Salek, Reza M; Griffin, Julian L; Steinbeck, Christoph

    2016-03-24

    MetaboLights is the first general purpose, open-access database repository for cross-platform and cross-species metabolomics research at the European Bioinformatics Institute (EMBL-EBI). Based upon the open-source ISA framework, MetaboLights provides Metabolomics Standard Initiative (MSI) compliant metadata and raw experimental data associated with metabolomics experiments. Users can upload their study datasets into the MetaboLights Repository. These studies are then automatically assigned a stable and unique identifier (e.g., MTBLS1) that can be used for publication reference. The MetaboLights Reference Layer associates metabolites with metabolomics studies in the archive and is extensively annotated with data fields such as structural and chemical information, NMR and MS spectra, target species, metabolic pathways, and reactions. The database is manually curated with no specific release schedules. MetaboLights is also recommended by journals for metabolomics data deposition. This unit provides a guide to using MetaboLights, downloading experimental data, and depositing metabolomics datasets using user-friendly submission tools. Copyright © 2016 John Wiley & Sons, Inc.

  18. The Gene Expression Omnibus Database.

    PubMed

    Clough, Emily; Barrett, Tanya

    2016-01-01

    The Gene Expression Omnibus (GEO) database is an international public repository that archives and freely distributes high-throughput gene expression and other functional genomics data sets. Created in 2000 as a worldwide resource for gene expression studies, GEO has evolved with rapidly changing technologies and now accepts high-throughput data for many other data applications, including those that examine genome methylation, chromatin structure, and genome-protein interactions. GEO supports community-derived reporting standards that specify provision of several critical study elements including raw data, processed data, and descriptive metadata. The database not only provides access to data for tens of thousands of studies, but also offers various Web-based tools and strategies that enable users to locate data relevant to their specific interests, as well as to visualize and analyze the data. This chapter includes detailed descriptions of methods to query and download GEO data and use the analysis and visualization tools. The GEO homepage is at http://www.ncbi.nlm.nih.gov/geo/.
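
    For readers who want a programmatic starting point alongside the Web-based tools described above, the short sketch below searches the GEO DataSets (gds) Entrez database through Biopython. It assumes Biopython is installed, uses a placeholder e-mail address, and assumes the esummary documents expose an "Accession" field; it is illustrative rather than a substitute for the chapter's own methods.

      # Illustrative sketch of querying GEO via NCBI E-utilities (Biopython).
      # The e-mail address and search term are placeholders, and the "Accession"
      # summary field is an assumption about the gds document summaries.
      from Bio import Entrez

      Entrez.email = "you@example.org"  # NCBI asks for a contact address

      def search_geo(term, retmax=5):
          """Return GEO accessions matching a search of the gds database."""
          handle = Entrez.esearch(db="gds", term=term, retmax=retmax)
          ids = Entrez.read(handle)["IdList"]
          handle.close()
          if not ids:
              return []
          handle = Entrez.esummary(db="gds", id=",".join(ids))
          summaries = Entrez.read(handle)
          handle.close()
          return [doc.get("Accession", "") for doc in summaries]

      # Example: series examining genome methylation in human samples.
      print(search_geo('"genome methylation" AND "Homo sapiens"[Organism]'))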

  19. The Gene Expression Omnibus database

    PubMed Central

    Clough, Emily; Barrett, Tanya

    2016-01-01

    The Gene Expression Omnibus (GEO) database is an international public repository that archives and freely distributes high-throughput gene expression and other functional genomics data sets. Created in 2000 as a worldwide resource for gene expression studies, GEO has evolved with rapidly changing technologies and now accepts high-throughput data for many other data applications, including those that examine genome methylation, chromatin structure, and genome–protein interactions. GEO supports community-derived reporting standards that specify provision of several critical study elements including raw data, processed data, and descriptive metadata. The database not only provides access to data for tens of thousands of studies, but also offers various Web-based tools and strategies that enable users to locate data relevant to their specific interests, as well as to visualize and analyze the data. This chapter includes detailed descriptions of methods to query and download GEO data and use the analysis and visualization tools. The GEO homepage is at http://www.ncbi.nlm.nih.gov/geo/. PMID:27008011

  20. Our journey to digital curation of the Jeghers Medical Index.

    PubMed

    Gawdyda, Lori; Carter, Kimbroe; Willson, Mark; Bedford, Denise

    2017-07-01

    Harold Jeghers, a well-known medical educator of the twentieth century, maintained a print collection of about one million medical articles from the late 1800s to the 1990s. This case study discusses how a print collection of these articles was transformed to a digital database. Staff in the Jeghers Medical Index, St. Elizabeth Youngstown Hospital, converted paper articles to Adobe portable document format (PDF)/A-1a files. Optical character recognition was used to obtain searchable text. The data were then incorporated into a specialized database. Lastly, articles were matched to PubMed bibliographic metadata through automation and human review. An online database of the collection was ultimately created. The collection was made part of a discovery search service, and semantic technologies have been explored as a method of creating access points. This case study shows how a small medical library made medical writings of the nineteenth and twentieth centuries available in electronic format for historic or semantic research, highlighting the efficiencies of contemporary information technology.

  1. Implementing a Community-Driven Cyberinfrastructure Platform for the Paleo- and Rock Magnetic Scientific Fields that Generalizes to Other Geoscience Disciplines

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Jarboe, N.; Koppers, A. A.; Tauxe, L.; Constable, C.

    2013-12-01

    EarthRef.org is a geoscience umbrella website for several databases and data and model repository portals. These portals, unified in the mandate to preserve their respective data and promote scientific collaboration in their fields, are also disparate in their schemata. The Magnetics Information Consortium (http://earthref.org/MagIC/) is a grass-roots cyberinfrastructure effort envisioned by the paleo- and rock magnetic scientific community to archive their wealth of peer-reviewed raw data and interpretations from studies on natural and synthetic samples and relies on a partially strict subsumptive hierarchical data model. The Geochemical Earth Reference Model (http://earthref.org/GERM/) portal focuses on the chemical characterization of the Earth and relies on two data schemata: a repository of peer-reviewed reservoir geochemistry, and a database of partition coefficients for rocks, minerals, and elements. The Seamount Biogeosciences Network (http://earthref.org/SBN/) encourages the collaboration between the diverse disciplines involved in seamount research and includes the Seamount Catalog (http://earthref.org/SC/) of bathymetry and morphology. All of these portals also depend on the EarthRef Reference Database (http://earthref.org/ERR/) for publication reference metadata and the EarthRef Digital Archive (http://earthref.org/ERDA/), a generic repository of data objects and their metadata. The development of the new MagIC Search Interface (http://earthref.org/MagIC/search/) centers on a reusable platform designed to be flexible enough for largely heterogeneous datasets and to scale up to datasets with tens of millions of records. The HTML5 web application and Oracle 11g database residing at the San Diego Supercomputer Center (SDSC) support the online contribution and editing of complex datasets in a spreadsheet environment and the browsing and filtering of these contributions in the context of thousands of other datasets. EarthRef.org is in the process of implementing this platform across all of its data portals in spite of the wide variety of data schemata and is dedicated to serving the geoscience community with as little effort from the end-users as possible.

  2. Regulations in the field of Geo-Information

    NASA Astrophysics Data System (ADS)

    Felus, Y.; Keinan, E.; Regev, R.

    2013-10-01

    The geomatics profession has gone through a major revolution during the last two decades with the emergence of advanced GNSS, GIS and remote sensing technologies. These technologies have changed the core principles and working procedures of geomatics professionals. For this reason, surveying and mapping regulations, standards and specifications should be updated to reflect these changes. In Israel, the "Survey Regulations" is the principal document that regulates professional activities in four key areas: geodetic control, mapping, cadastre and geographic information systems. Licensed surveyors and mapping professionals in Israel are required to work according to those regulations. This year a new set of regulations has been published, which includes a few major amendments, as follows. In the Geodesy chapter, horizontal control is officially based on the Israeli network of Continuously Operating GNSS Reference Stations (CORS). The regulations were phrased in a manner that allows minor datum changes to the CORS stations due to Earth crustal movements. Moreover, the regulations permit the use of GNSS for low-accuracy height measurements. In the Cadastre chapter, the most critical change is the move to a Coordinate Based Cadastre (CBC). Each parcel corner point is ranked according to its quality (accuracy and clarity of definition). The highest ranking for a parcel corner is 1: a point with a rank of 1 is defined by its coordinates alone, and any contradicting evidence is inferior to the coordinate values. Cadastral information is stored and managed via the National Cadastral Databases. In the Mapping and GIS chapter, traditional paper maps (ranked by scale) are replaced by digital maps or spatial databases, which are ranked by their quality level. Quality level is determined (similarly to the ISO 19157 standard) by logical consistency, completeness, positional accuracy, attribute accuracy, temporal accuracy and usability. Metadata is another critical component of any spatial database: every component of a map should have a metadata identification, even if the map was compiled from multiple sources. The regulations permit the use of advanced sensors and mapping techniques, including LIDAR and digital cameras that have been certified and meet the defined criteria. The article reviews these new regulations and the decisions that led to them.

  3. Geospatial data infrastructure: The development of metadata for geo-information in China

    NASA Astrophysics Data System (ADS)

    Xu, Baiquan; Yan, Shiqiang; Wang, Qianju; Lian, Jian; Wu, Xiaoping; Ding, Keyong

    2014-03-01

    Stores of geoscience records are in constant flux. These stores are continually added to by new information, ideas and data, which are frequently revised. The geoscience record is constrained by human thought and by the technology available for handling information. Conventional methods strive, with limited success, to keep geoscience records readily accessible and renewable. The information system must adapt to the diversity of ideas and data in geoscience and to their changes through time. In China, more than 400,000 items of important geological data have been collected and produced in geological work during the last two decades, including oil, natural gas and marine data, mine exploration, geophysical, geochemical and remote sensing data, and important local geological survey and research reports. Numerous geospatial databases have been formed and stored in the National Geological Archives (NGA) in formats including MapGIS, ArcGIS, ArcINFO, metafile, raster, SQL Server, Access and JPEG. But there is no effective way to warrant that the quality of the information is adequate, in theory and in practice, for decision making. The need for fast, reliable, accurate and up-to-date information provided to the Geographic Information System (GIS) communities is becoming insistent for all geoinformation producers and users in China. Since 2010, a series of geoinformation projects has been carried out under the leadership of the Ministry of Land and Resources (MLR), including (1) integration, update and maintenance of geoinformation databases; (2) standards research on clusterization and industrialization of information services; (3) platform construction for geological data sharing; (4) construction of key borehole databases; and (5) product development for information services. A "Nine-System" basic framework has been proposed for the development and improvement of the geospatial data infrastructure, focused on the construction of the cluster organization, cluster service, convergence, database, product, policy, technology, standard and infrastructure systems. The development of geoinformation stores and services puts forward the need for a Geospatial Data Infrastructure (GDI) in China. In this paper, some of the ideas envisaged in the development of metadata in China are discussed.

  4. Exposing USGS sample collections for broader discovery and access: collaboration between ScienceBase, IEDA:SESAR, and Paleobiology Database

    NASA Astrophysics Data System (ADS)

    Hsu, L.; Bristol, S.; Lehnert, K. A.; Arko, R. A.; Peters, S. E.; Uhen, M. D.; Song, L.

    2014-12-01

    The U.S. Geological Survey (USGS) is an exemplar of the need for improved cyberinfrastructure for its vast holdings of invaluable physical geoscience data. Millions of discrete paleobiological and geological specimens lie in USGS warehouses and at the Smithsonian Institution. These specimens serve as the basis for many geologic maps and geochemical databases, and are a potential treasure trove of new scientific knowledge. The extent of this treasure is virtually unknown and inaccessible outside a small group of paleogeoscientists and geochemists. A team from the USGS, the Integrated Earth Data Applications (IEDA) facility, and the Paleobiology Database (PBDB) is working to expose information on paleontological and geochemical specimens for discovery by scientists and citizens. This project uses existing infrastructure of the System for Earth Sample Registration (SESAR) and PBDB, which already contains much of the fundamental data schema necessary to accommodate USGS records. The project is also developing a new Linked Data interface for the USGS National Geochemical Database (NGDB). The International Geo Sample Number (IGSN) is the identifier that links samples between all systems. For paleontological specimens, SESAR and PBDB will be the primary repositories for USGS records, with a data syncing process to archive records within the USGS ScienceBase system. The process began with mapping the metadata fields necessary for USGS collections to the existing SESAR and PBDB data structures, while aligning them with the Observations & Measurements and Darwin Core standards. New functionality needed in SESAR included links to a USGS locality registry, fossil classifications, a spatial qualifier attribution for samples with sensitive locations, and acknowledgement of data and metadata licensing. The team is developing a harvesting mechanism to periodically transfer USGS records from within PBDB and SESAR to ScienceBase. For the NGDB, the samples are being registered with IGSNs in SESAR and the geochemical data are being published as Linked Data. This system allows the USGS collections to benefit from disciplinary and institutional strengths of the participating resources, while simultaneously increasing the discovery, accessibility, and citation of USGS physical collection holdings.

  5. Development of the EarthChem Geochronology and Thermochronology database: Collaboration of the EarthChem and EARTHTIME efforts

    NASA Astrophysics Data System (ADS)

    Walker, J. D.; Ash, J. M.; Bowring, J.; Bowring, S. A.; Deino, A. L.; Kislitsyn, R.; Koppers, A. A.

    2009-12-01

    One of the most onerous tasks in the rigorous development of data reporting and databases for geochronological and thermochronological studies is to fully capture all of the metadata needed to completely document both the analytical work and the interpretation effort. This information is available in the data reduction programs used by researchers, but has proven difficult to harvest into either publications or databases. For this reason, the EarthChem and EARTHTIME efforts are collaborating to foster the next generation of data management and discovery for age information by integrating data reporting with data reduction. EarthChem is a community-driven effort to facilitate the discovery, access, and preservation of geochemical data of all types and to support research and enable new and better science. EARTHTIME is also a community-initiated project whose aim is to foster the next generation of high-precision geochronology and thermochronology. In addition, collaboration with the CRONUS effort for cosmogenic radionuclides is in progress. EarthChem workers have met with groups working on the Ar-Ar, U-Pb, and (U-Th)/He systems to establish data reporting requirements as well as XML schemas to be used for transferring data from reduction programs to the database. At present, we have prototype systems working for the U-Pb_Redux, ArArCalc, MassSpec, and Helios programs. In each program, the user can select to upload data and metadata to the GEOCHRON system hosted at EarthChem. There are two additional requirements for upload. The first is a unique identifier (IGSN), obtained from the SESAR system either manually or via web services contained within the reduction program. The second is that the user selects whether the sample is to be available for discovery (public) or remain hidden (private). Search for data at the GEOCHRON portal can be done using age, method, mineral, or location parameters. Data can be downloaded in the full XML format for ingestion back into the reduction program or as abbreviated tables.

  6. Creating Access Points to Instrument-Based Atmospheric Data: Perspectives from the ARM Metadata Manager

    NASA Astrophysics Data System (ADS)

    Troyan, D.

    2016-12-01

    The Atmospheric Radiation Measurement (ARM) program has been collecting data from instruments in diverse climate regions for nearly twenty-five years. These data are made available to all interested parties at no cost via specially designed tools found on the ARM website (www.arm.gov). Metadata is created and applied to the various datastreams to facilitate information retrieval using the ARM website, the ARM Data Discovery Tool, and data quality reporting tools. Over the last year, the Metadata Manager - a relatively new position within the ARM program - created two documents that summarize the state of ARM metadata processes: ARM Metadata Workflow and ARM Metadata Standards. These documents serve as guides to the creation and management of ARM metadata. With many of ARM's data functions spread around the Department of Energy national laboratory complex, and with many of the original architects of the metadata structure no longer working for ARM, these documents have become increasingly important for resolving issues ranging from data flow bottlenecks and inaccurate metadata to improving data discovery and organizing web pages. This presentation will provide some examples from the workflow and standards documents. The examples will illustrate the complexity of the ARM metadata processes and the efficiency with which the metadata team works towards achieving the goal of providing access to data collected under the auspices of the ARM program.

  7. Efficient processing of MPEG-21 metadata in the binary domain

    NASA Astrophysics Data System (ADS)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across the different communities, and plenty of commercial and open-source tools for processing and transforming it are available on the market. However, all of these tools have one thing in common: they operate on plain-text-encoded metadata, which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such metadata when it is encoded using MPEG's Binary Format for Metadata (BiM), without additional en-/decoding overhead, i.e., within the binary domain. To this end, we have developed an event-based push parser for BiM-encoded metadata which transforms the metadata using a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.
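
    As a rough, text-domain analogy to the binary-domain approach described above, the sketch below applies an event-driven, push-style transformation to a small XML metadata document: elements are dropped or rewritten as parse events stream through. The element names are invented, and the real BiM parser operates on bit patterns rather than text.

      # Text-domain analogy to an event-based push transformation of metadata;
      # element names are invented and this does not implement BiM itself.
      import io
      import xml.sax
      from xml.sax.saxutils import XMLGenerator

      SOURCE = """<Description>
        <Title>Clip 42</Title>
        <Internal>debug-info</Internal>
        <Duration unit="s">12</Duration>
      </Description>"""

      class DropAndRewrite(xml.sax.ContentHandler):
          """Push-style transform: events stream in, a filtered document streams out."""
          def __init__(self, out, drop=("Internal",)):
              super().__init__()
              self.writer = XMLGenerator(out)
              self.drop = set(drop)
              self.skipping = 0

          def startElement(self, name, attrs):
              if name in self.drop or self.skipping:
                  self.skipping += 1           # drop this element and its subtree
                  return
              if name == "Duration":           # rewrite an attribute on the fly
                  attrs = {"unit": "seconds"}
              self.writer.startElement(name, attrs)

          def endElement(self, name):
              if self.skipping:
                  self.skipping -= 1
                  return
              self.writer.endElement(name)

          def characters(self, content):
              if not self.skipping:
                  self.writer.characters(content)

      out = io.StringIO()
      xml.sax.parseString(SOURCE.encode(), DropAndRewrite(out))
      print(out.getvalue())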

  8. A model for enhancing Internet medical document retrieval with "medical core metadata".

    PubMed

    Malet, G; Munoz, F; Appleyard, R; Hersh, W

    1999-01-01

    Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and MEDLINE-type content descriptions. The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines.
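
    To make the idea concrete, the sketch below shows one way a Dublin-Core-style record extended with MeSH subject headings could be attached to a Web document as meta tags. The element names, values and rendering are illustrative only and do not reproduce the authors' actual medical core metadata element set.

      # Illustrative Dublin-Core-style record with MeSH subject headings; the
      # field names and values are examples, not the authors' defined element set.
      medical_core_record = {
          "dc:title": "Management of community-acquired pneumonia in adults",
          "dc:creator": "Example Clinic Guidelines Committee",
          "dc:date": "1999-01-01",
          "dc:type": "practice guideline",
          "dc:identifier": "http://example.org/guidelines/cap-adults",
          # MeSH descriptors used as controlled subject terms
          "dc:subject": ["Pneumonia", "Community-Acquired Infections",
                         "Anti-Bacterial Agents"],
      }

      def to_meta_tags(record):
          """Render the record as HTML <meta> tags for embedding in a document."""
          tags = []
          for name, value in record.items():
              values = value if isinstance(value, list) else [value]
              tags.extend(f'<meta name="{name}" content="{v}">' for v in values)
          return "\n".join(tags)

      print(to_meta_tags(medical_core_record))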

  9. The I-Cleen Project (inquiring on Climate & Energy). Enhancing AN Enquiry-Based Approach to Earth System Sciences in Italian Classrooms

    NASA Astrophysics Data System (ADS)

    Cattadori, M.

    2010-12-01

    In recent years, the Italian school system has undergone slow but deep transformations. One negative consequence, documented by specific studies, has been a further weakening of the use of inquiry-based educational practices (or lesson types) by science teachers, in a school setting already traditionally little inclined towards them. The I-CLEEN project (Inquiring on CLimate & Energy, www.icleen.museum) was started in 2008 to counter this trend, on the initiative of a staff of science teachers from different regions, all with many years of experience, coordinated and supported by a local museum, the Natural Science Museum of Trento (Trento, Italy). I-CLEEN is a free collaborative tool for Italian teachers, aimed at supporting and enhancing inquiry-based teaching of topics related to climate and energy and, more generally, to the Earth system sciences. The project grew out of what its creators experienced and accomplished in Italy within the education and outreach program of ANDRILL (ANtarctic geological DRILLing). The core of the project is a database of resources potentially useful to teachers preparing an inquiry-based lesson. Resources are selected by the staff following a specific selection policy. There are also ready-to-use lessons, prepared according to a specific editorial standard and composed of a paper for the teacher and a paper for the student. Technically, the database is an information gateway, and it is constantly enriched through critical research into teachers' practices and the most valuable international educational web projects. Resources are published in Italian or in bilingual (Italian-English) format, always with the explicit authorization of the authors and under a Creative Commons license when possible. This contribution describes the service, which has been online since December 2009 and is characterized by a distinctive use of information technologies: all the components of the project (site, resource database, publishers and users) and their respective activities (editing, publishing, cataloguing, administration of web content and users) are handled by a single open-source web platform, Liferay, implemented specifically for this project. The study of international projects and reference standards was also careful and broad, both in designing and developing the service (the DESIRE project, Development of a European Service for Information on Research and Education) and in creating the metadata (the DCMI standard, Dublin Core Metadata Initiative, and the LOM standard, Learning Object Metadata, IEEE 1484.12.1-2002). Thanks to these features, a request was recently made (June 2010) for the I-CLEEN database to interoperate with that of the LRE project, the major information gateway for educational resources in the European Union.

  10. Developing Cyberinfrastructure Tools and Services for Metadata Quality Evaluation

    NASA Astrophysics Data System (ADS)

    Mecum, B.; Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Metadata and data quality are at the core of reusable and reproducible science. While great progress has been made over the years, much of the metadata collected only addresses data discovery, covering concepts such as titles and keywords. Improving metadata beyond the discoverability plateau means documenting detailed concepts within the data such as sampling protocols, instrumentation used, and variables measured. Given that metadata commonly do not describe their data at this level, how might we improve the state of things? Giving scientists and data managers easy to use tools to evaluate metadata quality that utilize community-driven recommendations is the key to producing high-quality metadata. To achieve this goal, we created a set of cyberinfrastructure tools and services that integrate with existing metadata and data curation workflows which can be used to improve metadata and data quality across the sciences. These tools work across metadata dialects (e.g., ISO19115, FGDC, EML, etc.) and can be used to assess aspects of quality beyond what is internal to the metadata such as the congruence between the metadata and the data it describes. The system makes use of a user-friendly mechanism for expressing a suite of checks as code in popular data science programming languages such as Python and R. This reduces the burden on scientists and data managers to learn yet another language. We demonstrated these services and tools in three ways. First, we evaluated a large corpus of datasets in the DataONE federation of data repositories against a metadata recommendation modeled after existing recommendations such as the LTER best practices and the Attribute Convention for Dataset Discovery (ACDD). Second, we showed how this service can be used to display metadata and data quality information to data producers during the data submission and metadata creation process, and to data consumers through data catalog search and access tools. Third, we showed how the centrally deployed DataONE quality service can achieve major efficiency gains by allowing member repositories to customize and use recommendations that fit their specific needs without having to create de novo infrastructure at their site.
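
    Because the service described above lets checks be written as ordinary Python or R functions, a minimal, hypothetical check suite might look like the sketch below. The check names, thresholds and metadata fields are invented for illustration and are not the DataONE recommendation itself.

      # Hypothetical metadata-quality checks in the spirit described above; names,
      # thresholds and fields are illustrative, not an official recommendation.
      def check_title(meta):
          return len(meta.get("title", "").split()) >= 5, \
                 "title should be descriptive (>= 5 words)"

      def check_abstract(meta):
          return len(meta.get("abstract", "")) >= 100, \
                 "abstract should be at least 100 characters"

      def check_attributes_documented(meta):
          attrs = meta.get("attributes", [])
          ok = bool(attrs) and all(a.get("definition") for a in attrs)
          return ok, "every measured variable needs a definition"

      CHECKS = [check_title, check_abstract, check_attributes_documented]

      def run_checks(meta):
          """Run each check and report pass/fail with its guidance message."""
          return [(fn.__name__, *fn(meta)) for fn in CHECKS]

      example = {"title": "Soil moisture, example site, 2015",
                 "abstract": "Too short.",
                 "attributes": [{"name": "vwc",
                                 "definition": "volumetric water content"}]}
      for name, passed, message in run_checks(example):
          print(f"{name}: {'PASS' if passed else 'FAIL'} - {message}")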

  11. The New Online Metadata Editor for Generating Structured Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Shrestha, B.; Palanisamy, G.; Hook, L.; Killeffer, T.; Boden, T.; Cook, R. B.; Zolly, L.; Hutchison, V.; Frame, M. T.; Cialella, A. T.; Lazer, K.

    2014-12-01

    Nobody is better suited to "describe" data than the scientist who created it. This "description" of the data is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about their research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [1][2]. Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. How is OME helping big data centers like the ORNL DAAC? The ORNL DAAC is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers managed by the ESDIS Project. The ORNL DAAC archives data produced by NASA's Terrestrial Ecology Program. The DAAC provides data and information relevant to biogeochemical dynamics, ecological data, and environmental processes, critical for understanding the dynamics relating to the biological components of the Earth's environment. The data produced, archived and analyzed typically reach a scale of multiple petabytes, which makes data discoverability very challenging. Without proper metadata associated with the data, it is difficult to find the data you are looking for, and equally difficult to use and understand them. OME will allow data centers like the ORNL DAAC to produce meaningful, high-quality, standards-based, descriptive information about their data products, in turn helping with data discoverability and interoperability. References: [1] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94. [2] Wilson, Bruce E., et al. "Mercury Toolset for Spatiotemporal Metadata." NASA Technical Reports Server (NTRS) (2010).
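
    The general step of turning form-style fields into an XML metadata record can be sketched with the standard library as below; the element names are invented for illustration and do not follow OME's actual output schema.

      # Sketch of serializing form-style metadata fields to XML; the element
      # names are invented and do not follow a specific standard's schema.
      import xml.etree.ElementTree as ET

      def build_metadata_xml(fields):
          """Build a flat XML record from a dictionary of form fields."""
          root = ET.Element("metadata")
          for name, value in fields.items():
              ET.SubElement(root, name).text = str(value)
          return ET.tostring(root, encoding="unicode")

      form = {
          "project": "Example Terrestrial Ecology Campaign",
          "parameter": "net primary productivity",
          "temporal_coverage": "2010-01-01/2012-12-31",
          "site": "Example flux tower, 45.20 N, 68.74 W",
      }
      print(build_metadata_xml(form))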

  12. A data model and database for high-resolution pathology analytical image informatics.

    PubMed

    Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel

    2011-01-01

    The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model, which addresses these challenges, and demonstrates its implementation in a relational database system. This paper describes a data model, referred to as Pathology Analytic Imaging Standards (PAIS), and a database implementation, which are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). (1) Development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information. (2) Development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slides tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications such as shape and texture and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei. 
Modeling and managing pathology image analysis results in a database provides immediate benefits for the value and usability of data in a research study. The database provides powerful query capabilities, which are otherwise difficult or cumbersome to support through other approaches such as programming languages. Standardized, semantically annotated data representations and interfaces also make it possible to share image data and analysis results more efficiently.
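
    A very reduced, self-contained illustration of the kind of spatial query mentioned above is given below: bounding boxes of segmented objects are stored in SQLite and intersected with a query window. The table layout is invented, far simpler than the PAIS model, and a production system would use true spatial indexing rather than plain comparisons.

      # Self-contained sketch of a bounding-box query over segmented-object
      # records; the schema is invented and much simpler than the PAIS model.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE markup (
                          id INTEGER PRIMARY KEY, slide TEXT, label TEXT,
                          min_x REAL, min_y REAL, max_x REAL, max_y REAL)""")
      conn.executemany(
          "INSERT INTO markup VALUES (?, ?, ?, ?, ?, ?, ?)",
          [(1, "slide-007", "nucleus", 120, 80, 135, 95),
           (2, "slide-007", "nucleus", 400, 310, 420, 330),
           (3, "slide-007", "region", 100, 60, 500, 400)])

      # Find all markups whose boxes intersect a query window on one slide.
      qx0, qy0, qx1, qy1 = 110, 70, 150, 100
      rows = conn.execute(
          """SELECT id, label FROM markup
             WHERE slide = ? AND max_x >= ? AND min_x <= ?
                   AND max_y >= ? AND min_y <= ?""",
          ("slide-007", qx0, qx1, qy0, qy1)).fetchall()
      print(rows)  # rows 1 and 3 intersect the window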

  13. The Magnetics Information Consortium (MagIC)

    NASA Astrophysics Data System (ADS)

    Johnson, C.; Constable, C.; Tauxe, L.; Koppers, A.; Banerjee, S.; Jackson, M.; Solheid, P.

    2003-12-01

    The Magnetics Information Consortium (MagIC) is a multi-user facility to establish and maintain a state-of-the-art relational database and digital archive for rock and paleomagnetic data. The goal of MagIC is to make such data generally available and to provide an information technology infrastructure for these and other research-oriented databases run by the international community. As its name implies, MagIC will not be restricted to paleomagnetic or rock magnetic data only, although MagIC will focus on these kinds of information during its setup phase. MagIC will be hosted under EarthRef.org at http://earthref.org/MAGIC/ where two "integrated" web portals will be developed, one for paleomagnetism (currently functional as a prototype that can be explored via the http://earthref.org/databases/PMAG/ link) and one for rock magnetism. The MagIC database will store all measurements and their derived properties for studies of paleomagnetic directions (inclination, declination) and their intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). Ultimately, this database will allow researchers to study "on the internet" and to download important data sets that display paleo-secular variations in the intensity of the Earth's magnetic field over geological time, or that display magnetic data in typical Zijderveld, hysteresis/FORC and various magnetization/remanence diagrams. The MagIC database is completely integrated in the EarthRef.org relational database structure and thus benefits significantly from already-existing common database components, such as the EarthRef Reference Database (ERR) and Address Book (ERAB). The ERR allows researchers to find complete sets of literature resources as used in GERM (Geochemical Earth Reference Model), REM (Reference Earth Model) and MagIC. The ERAB contains addresses for all contributors to the EarthRef.org databases, and also for those who participated in data collection, archiving and analysis in the magnetic studies. Integration with these existing components will guarantee direct traceability to the original sources of the MagIC data and metadata. The MagIC database design focuses around the general workflow that results in the determination of typical paleomagnetic and rock magnetic analyses. This ensures that individual data points can be traced between the actual measurements and their associated specimen, sample, site, rock formation and locality. This permits a distinction between original and derived data, where the actual measurements are performed at the specimen level, and data at the sample level and higher are then derived products in the database. These relations will also allow recalculation of derived properties, such as site means, when new data becomes available for a specific locality. Data contribution to the MagIC database is critical in achieving a useful research tool. We have developed a standard data and metadata template that can be used to provide all data at the same time as publication. Software tools are provided to facilitate easy population of these templates. The tools allow for the import/export of data files in a delimited text format, and they provide some advanced functionality to validate data and to check internal coherence of the data in the template. During and after publication these standardized MagIC templates will be stored in the ERR database of EarthRef.org from where they can be downloaded at all times. 
Finally, the contents of these template files will be automatically parsed into the online relational database.

  14. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    USGS Publications Warehouse

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

    Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcGIS Desktop to facilitate a semi-automated workflow to create and update metadata records in ESRI's 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple graphical user interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations a significant amount of time and cost by facilitating the development of metadata compliant with the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata for ESRI software users. A working version of the tool is now available for ESRI ArcGIS Desktop, versions 10.0, 10.1, and 10.2 (downloadable at http://www.sciencebase.gov/metadatawizard).
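
    The auto-population step can be illustrated outside of ArcGIS with a standalone sketch that derives FGDC-style bounding coordinates and a feature count from a list of point features. The point list is a stand-in for a real layer, the "featurecount" element is invented, and the real tool works as a Python toolbox inside ESRI ArcGIS Desktop, which is not reproduced here.

      # Standalone sketch of auto-deriving spatial-domain metadata from data, in
      # the spirit of the Metadata Wizard; no arcpy, and the feature list is a
      # stand-in for a real geospatial layer. "featurecount" is an invented tag.
      import xml.etree.ElementTree as ET
      from datetime import date

      points = [(-105.3, 40.1), (-104.9, 40.6), (-105.1, 39.9)]  # (lon, lat)

      def spatial_domain(coords):
          """Build an FGDC-style <spdom><bounding> element from coordinates."""
          lons = [lon for lon, _ in coords]
          lats = [lat for _, lat in coords]
          spdom = ET.Element("spdom")
          bounding = ET.SubElement(spdom, "bounding")
          for tag, value in (("westbc", min(lons)), ("eastbc", max(lons)),
                             ("northbc", max(lats)), ("southbc", min(lats))):
              ET.SubElement(bounding, tag).text = f"{value:.4f}"
          return spdom

      record = ET.Element("metadata")
      record.append(spatial_domain(points))
      ET.SubElement(record, "featurecount").text = str(len(points))
      ET.SubElement(record, "metd").text = date.today().isoformat()  # metadata date
      print(ET.tostring(record, encoding="unicode"))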

  15. USGIN ISO metadata profile

    NASA Astrophysics Data System (ADS)

    Richard, S. M.

    2011-12-01

    The USGIN project has drafted and is using a specification for the use of ISO 19115/19119/19139 metadata, recommendations for simple metadata content, and a proposal for a URI scheme to identify resources using resolvable http URIs (see http://lab.usgin.org/usgin-profiles). The principal target use case is a catalog in which resources can be registered and described by data providers for discovery by users. We are currently using the ESRI Geoportal (Open Source), with configuration files for the USGIN profile. The metadata offered by the catalog must provide sufficient content to guide search engines to locate requested resources, to describe the resource content, provenance, and quality so users can determine if the resource will serve for the intended usage, and finally to enable human users and software clients to obtain or access the resource. In order to achieve an operational federated catalog system, provisions in the ISO specification must be restricted and usage clarified to reduce the heterogeneity of 'standard' metadata and service implementations, such that a single client can search against different catalogs and the metadata returned by catalogs can be parsed reliably to locate required information. Usage of the complex ISO 19139 XML schema allows for a great deal of structured metadata content, but the heterogeneity in approaches to content encoding has hampered development of sophisticated client software that can take advantage of the rich metadata; the lack of such clients in turn reduces the motivation for metadata producers to produce content-rich metadata. If the only significant use of the detailed, structured metadata is to format it into text for people to read, then the detailed information could be put in free-text elements and be just as useful. In order for complex metadata encoding and content to be useful, there must be clear and unambiguous conventions on the encoding that are utilized by the community that wishes to take advantage of advanced metadata content. The use cases for the detailed content must be well understood, and the degree of metadata complexity should be determined by the requirements of those use cases. The ISO standard provides sufficient flexibility that relatively simple metadata records can be created that will serve for text-indexed search/discovery, resource evaluation by a user reading text content from the metadata, and access to the resource via http, ftp, or well-known service protocols (e.g. THREDDS; OGC WMS, WFS, WCS).

  16. Mobile object retrieval in server-based image databases

    NASA Astrophysics Data System (ADS)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information of the images on site, image retrieval systems are becoming more and more popular as a way to search for similar objects in a user's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, the scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images from the database and highlights the visual information they have in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.

  17. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow, and their comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means for exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD). This paper describes Mercury's capabilities with multiple metadata formats in general and, more specifically, the results of our OAI-PMH implementations and the lessons learned. References: [1] R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. [2] R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics, DOI: 10.1007/s12145-010-0073-0 (2010). [3] Devarakonda, R.; Palanisamy, G.; Green, J.; Wilson, B. E. "Mercury: An Example of Effective Software Reuse for Metadata Management Data Discovery and Access", Eos Trans. AGU, 89(53), Fall Meet. Suppl., IN11A-1019 (2008).
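
    A minimal harvester for the protocol described above can be written with the standard library alone, as sketched below: it issues ListRecords requests with the oai_dc metadata prefix and follows resumption tokens. The base URL is a placeholder, and error handling and incremental (datestamp-based) harvesting are omitted.

      # Minimal OAI-PMH harvesting sketch (ListRecords, Dublin Core, resumption
      # tokens). The base URL is a placeholder, not a real endpoint.
      import urllib.parse
      import urllib.request
      import xml.etree.ElementTree as ET

      OAI = "{http://www.openarchives.org/OAI/2.0/}"
      DC = "{http://purl.org/dc/elements/1.1/}"
      BASE_URL = "https://repository.example.org/oai"  # placeholder endpoint

      def harvest_titles(base_url):
          """Yield dc:title values from every record the repository exposes."""
          params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
          while True:
              url = base_url + "?" + urllib.parse.urlencode(params)
              with urllib.request.urlopen(url) as resp:
                  root = ET.parse(resp).getroot()
              for record in root.iter(OAI + "record"):
                  title = record.find(".//" + DC + "title")
                  if title is not None:
                      yield title.text
              token = root.find(".//" + OAI + "resumptionToken")
              if token is None or not (token.text or "").strip():
                  break
              params = {"verb": "ListRecords",
                        "resumptionToken": token.text.strip()}

      # for title in harvest_titles(BASE_URL):
      #     print(title)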

  18. Controls and variability of solute and sedimentary fluxes in Antarctic and sub-Antarctic Environments

    NASA Astrophysics Data System (ADS)

    Zwolinski, Zbigniew

    2015-04-01

    The SEDIBUD book "Source-to-Sink Fluxes in Undisturbed Cold Environments", currently in preparation (edited by Achim A. Beylich, John C. Dixon and Zbigniew Zwolinski and published by Cambridge University Press), summarizes and synthesizes the achievements of the International Association of Geomorphologists' (I.A.G./A.I.G.) Working Group SEDIBUD (Sediment Budgets in Cold Environments), which has been active since 2005 (http://www.geomorph.org/wg/wgsb.html). The book comprises five parts, one of which, "Sub-Antarctic and Antarctic Environments", describes two different environments, namely oceanic and continental ones. Each chapter presents results of research on the environmental drivers and rates of contemporary solute and sedimentary fluxes at selected sites. Apart from describing the environmental conditions of the whole continent of Antarctica and the sub-Antarctic islands (Zb.Zwolinski, M.Kejna, A.N.Lastochkin, A.Zhirov, S.Boltramovich), this part of the book characterizes terrestrial polar oases free from multi-year ice and snow covers (Zb.Zwolinski). The detailed results of geoecological and sedimentological research come from different parts of Antarctica. The Antarctic continental shelf (E.Isla) is an example of a sub-Antarctic oceanic environment. The South Shetlands, especially King George Island (Zb.Zwolinski, M.Kejna, G.Rachlewicz, I.Sobota, J.Szpikowski), are an example of a sub-Antarctic terrestrial environment. The Antarctic Peninsula (G.Vieira, M.Francelino, J.C.Fernandes) and the surroundings of the McMurdo Dry Valleys (W.B.Lyons, K.A.Welch, J.Levy, A.Fountain, D.McKnight) are examples of Antarctic continental environments. The key goals of the Antarctic and sub-Antarctic book chapters are to (i) identify the main environmental drivers and rates of contemporary solute and sedimentary fluxes, and (ii) model possible effects of projected climate change on solute and sedimentary fluxes in cold climate environments. Solute and sediment transport in the streams of the analyzed environments is constrained by the relatively short water runoff season, which typically lasts from a few weeks to a maximum of four months during the austral summer for Antarctic and sub-Antarctic regions, respectively. Because of the high intensity of mechanical and chemical weathering processes, solute and sediment transport is rather high within Antarctic environments. Weathering rates on slopes and the magnitude of fluvial transport in relatively short streams control the intensity of denudational processes. Both mechanical and chemical denudation vary greatly across sub-Antarctic and Antarctic environments. To generate accurate predictions of fluvial and denudational processes we must fully understand the actual geoecological processes, which in some places are changing rapidly, e.g., on the Antarctic Peninsula and the sub-Antarctic islands.

  19. An Approach to Information Management for AIR7000 with Metadata and Ontologies

    DTIC Science & Technology

    2009-10-01

    metadata. We then propose an approach based on Semantic Technologies, including the Resource Description Framework (RDF) and Upper Ontologies, for the ... mandating specific metadata schemas can result in interoperability problems. For example, many standards within the ADO mandate the use of XML for metadata ... such problems, we propose an architecture in which different metadata schemes can interoperate. By using RDF (Resource Description Framework) as a
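
    Although only fragments of the abstract survive above, its central idea of using RDF so that different metadata schemes can interoperate can be sketched as follows, assuming the third-party rdflib package; the two source schemes, their field names and the example URIs are invented.

      # Sketch of mapping two differently structured records into one RDF graph
      # (Dublin Core terms) so they can be queried uniformly; assumes rdflib, and
      # the source schemes, field names and URIs are invented for illustration.
      from rdflib import Graph, Literal, URIRef
      from rdflib.namespace import DCTERMS

      scheme_a = {"doc_id": "a-001", "doc_title": "Maritime patrol sensor log"}
      scheme_b = {"uri": "b-17", "name": "Mission imagery index",
                  "made": "2009-10-01"}

      g = Graph()

      subj_a = URIRef("http://example.org/resource/" + scheme_a["doc_id"])
      g.add((subj_a, DCTERMS.title, Literal(scheme_a["doc_title"])))

      subj_b = URIRef("http://example.org/resource/" + scheme_b["uri"])
      g.add((subj_b, DCTERMS.title, Literal(scheme_b["name"])))
      g.add((subj_b, DCTERMS.created, Literal(scheme_b["made"])))

      # Both records can now be queried through one vocabulary, e.g. all titles.
      for subject, title in g.subject_objects(DCTERMS.title):
          print(subject, title)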

  20. Making Interoperability Easier with NASA's Metadata Management Tool (MMT)

    NASA Technical Reports Server (NTRS)

    Shum, Dana; Reese, Mark; Pilone, Dan; Baynes, Katie

    2016-01-01

    While the ISO-19115 collection level metadata format meets many users' needs for interoperable metadata, it can be cumbersome to create it correctly. Through the MMT's simple UI experience, metadata curators can create and edit collections which are compliant with ISO-19115 without full knowledge of the NASA Best Practices implementation of ISO-19115 format. Users are guided through the metadata creation process through a forms-based editor, complete with field information, validation hints and picklists. Once a record is completed, users can download the metadata in any of the supported formats with just 2 clicks.

  1. Metazen – metadata capture for metagenomes

    DOE PAGES

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; ...

    2014-12-08

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.
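
    A hypothetical sketch of the mandatory-field validation step described in the conclusion is shown below; the field list is illustrative and is not Metazen's actual metadata template.

      # Hypothetical mandatory-field validation for a metagenome sample, in the
      # spirit of Metazen; the field list is illustrative, not the real template.
      MANDATORY_FIELDS = ("project_name", "sample_name", "collection_date",
                          "lat_lon", "env_biome", "seq_method")

      def validate_sample(sample):
          """Return (is_valid, missing-or-empty mandatory fields)."""
          missing = [f for f in MANDATORY_FIELDS
                     if not str(sample.get(f, "")).strip()]
          return not missing, missing

      sample = {"project_name": "Lake sediment survey",
                "sample_name": "LS-2014-03",
                "collection_date": "2014-06-12",
                "lat_lon": "63.42 N 10.39 E",
                "env_biome": "freshwater lake sediment"}
      ok, missing = validate_sample(sample)
      print("valid" if ok else "missing: " + ", ".join(missing))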

  2. Metazen – metadata capture for metagenomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, Jared; Harrison, Travis; Paczian, Tobias

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  3. WMT: The CSDMS Web Modeling Tool

    NASA Astrophysics Data System (ADS)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can design a model from a set of components, edit component parameters, save models to a web-accessible server, share saved models with the community, submit runs to an HPC system, and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db (database of component, model, and simulation metadata and output), wmt-api (configure and connect components), and wmt-exe (launch simulations on remote execution servers). The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server where it is stored and from which a user can download it as a single compressed archive file.
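
    To illustrate the coupling step that the JSON-encoded component metadata enables, the sketch below checks whether every "uses" port in a two-component model is satisfied by another component's "provides" port. The JSON keys and component entries are assumptions for illustration and do not reproduce the actual wmt-db schema.

      # Hedged sketch of checking component coupling from JSON-encoded metadata
      # of the kind the WMT database layer serves; keys and entries are invented.
      import json

      payload = json.loads("""
      {
        "components": [
          {"name": "river_model", "provides": ["water_discharge", "sediment_load"],
           "uses": [], "parameters": {"run_duration": 100}},
          {"name": "delta_builder", "provides": ["shoreline_position"],
           "uses": ["water_discharge", "sediment_load"], "parameters": {"dt": 1.0}}
        ]
      }
      """)

      def unmet_uses(components):
          """Return, per component, the 'uses' ports no other component provides."""
          provided = {p for c in components for p in c["provides"]}
          return {c["name"]: [u for u in c["uses"] if u not in provided]
                  for c in components}

      for name, missing in unmet_uses(payload["components"]).items():
          print(name, "ready to couple" if not missing else f"missing: {missing}")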

  4. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2005-01-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer lies in different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists, we examine the divergent evolution of metadata standards for digital music and digital images and observe that the processes differ in interesting ways according to their intent. Specifically, music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus, therefore, is not on metadata for cultural heritage institutions or the publishing industry; it is limited to the personal libraries growing on our hard drives. This bottom-up approach to file management, combined with p2p distribution, radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images, doing image metadata and file management the MP3 way, and considers the likelihood of success.

  5. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2004-12-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer lies in the different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists, we examine the divergent evolution of metadata standards for digital music and digital images and observe that the processes differ in interesting ways according to their intent. Specifically, music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry; it is limited to the personal libraries growing on our hard drives. This bottom-up approach to file management, combined with p2p distribution, radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images, doing image metadata and file management the MP3 way, and considers the likelihood of success.
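
    To make the "manage images the MP3 way" idea concrete, the sketch below reads and rewrites the metadata embedded in an MP3 file using the mutagen library. It is a minimal illustration of tag-based personal file management under assumed file paths and tag values; it is not code proposed by the paper.

        # Minimal sketch of bottom-up file management driven by embedded tags,
        # using the mutagen library (pip install mutagen). Paths are illustrative.
        from mutagen.easyid3 import EasyID3

        def retag(path: str, artist: str, album: str) -> None:
            """Read the ID3 tags embedded in an MP3 and normalize two fields."""
            tags = EasyID3(path)
            print("before:", dict(tags))
            tags["artist"] = artist
            tags["album"] = album
            tags.save()  # the metadata travels with the file itself

        retag("library/track01.mp3", "Example Artist", "Example Album")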

  6. The Role of Metadata Standards in EOSDIS Search and Retrieval Applications

    NASA Technical Reports Server (NTRS)

    Pfister, Robin

    1999-01-01

    Metadata standards play a critical role in data search and retrieval systems. Metadata tie software to data so that the data can be processed, stored, searched, retrieved and distributed. Without metadata these actions are not possible. The process of populating metadata to describe science data is an important service to the end-user community, allowing a user who is unfamiliar with the data to easily find and learn about a particular dataset before making an order decision. Once a good set of standards is in place, the accuracy with which data search can be performed depends on the degree to which metadata standards are adhered to during product definition. NASA's Earth Observing System Data and Information System (EOSDIS) provides examples of how metadata standards are used in data search and retrieval.

  7. openPDS: protecting the privacy of metadata through SafeAnswers.

    PubMed

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services has made possible the large-scale collection of personal metadata. Information about individuals' locations, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata, however, has yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties; it has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be shared directly, individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.

  8. openPDS: Protecting the Privacy of Metadata through SafeAnswers

    PubMed Central

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S.; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services has made possible the large-scale collection of personal metadata. Information about individuals' locations, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata, however, has yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties; it has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be shared directly, individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research. PMID:25007320
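
    The SafeAnswers idea, returning a low-dimensional answer computed against the metadata inside the personal data store instead of releasing the metadata itself, can be sketched as follows. The record layout and the example question are invented for illustration and do not reflect the openPDS API.

        # Sketch of the SafeAnswers pattern: the computation runs against the raw
        # personal metadata and only a low-dimensional answer leaves the data store.
        # The record format and the example question are hypothetical.
        from collections import Counter
        from typing import Iterable

        def most_visited_city(location_log: Iterable[dict]) -> str:
            """Answer a coarse question without exposing the location metadata."""
            counts = Counter(record["city"] for record in location_log)
            city, _ = counts.most_common(1)[0]
            return city  # only this single value is shared with the service

        raw_metadata = [  # stays inside the personal data store
            {"timestamp": "2014-01-01T08:00", "city": "Boston"},
            {"timestamp": "2014-01-01T19:30", "city": "Boston"},
            {"timestamp": "2014-01-02T12:10", "city": "New York"},
        ]
        print(most_visited_city(raw_metadata))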

  9. Development of climate data storage and processing model

    NASA Astrophysics Data System (ADS)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and for analysis of the impact of climate change on socio-economic processes at local and regional scales. The model is based on a "shared nothing" distributed computing architecture and assumes a computing network in which each node is independent and self-sufficient. Each node runs dedicated software for processing and visualizing geospatial data and provides programming interfaces for communicating with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data are represented by collections of netCDF files stored in a directory hierarchy within a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed as part of the storage and processing model. It contains descriptions of the spatio-temporal features of the datasets available for processing and their locations, as well as descriptions and run options of the software components for data analysis and visualization. Together, the model and the metadata database will provide a reliable technological basis for the development of a high-performance virtual research environment for climate and environmental monitoring.
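
    A bare-bones version of such a metadata database, registering the location and spatio-temporal extent of each netCDF file in SQLite, might look like the sketch below. The table layout, directory path, and the extent-extraction step are assumptions made for illustration, not the system described in the abstract.

        # Sketch of a metadata catalogue for netCDF collections held on one node.
        # The schema, the data root, and extract_extent() are hypothetical.
        import os
        import sqlite3

        def extract_extent(path: str) -> tuple:
            """Placeholder: a real system would open the netCDF file (e.g. with
            the netCDF4 library) and read its bounding box and time range."""
            return (-180.0, 180.0, -90.0, 90.0, "1950-01-01", "2015-12-31")

        conn = sqlite3.connect("metadata.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS datasets (
                            path TEXT PRIMARY KEY,
                            lon_min REAL, lon_max REAL, lat_min REAL, lat_max REAL,
                            time_start TEXT, time_end TEXT)""")

        for root, _dirs, files in os.walk("/data/climate"):  # hypothetical root
            for name in files:
                if name.endswith(".nc"):
                    path = os.path.join(root, name)
                    conn.execute("INSERT OR REPLACE INTO datasets VALUES (?,?,?,?,?,?,?)",
                                 (path, *extract_extent(path)))
        conn.commit()
        # A search service can now answer "which files cover this region and
        # period?" from the catalogue without opening the data files themselves.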

  10. The RD-Connect Registry & Biobank Finder: a tool for sharing aggregated data and metadata among rare disease researchers.

    PubMed

    Gainotti, Sabina; Torreri, Paola; Wang, Chiuhui Mary; Reihs, Robert; Mueller, Heimo; Heslop, Emma; Roos, Marco; Badowska, Dorota Mazena; de Paulis, Federico; Kodra, Yllka; Carta, Claudio; Martìn, Estrella Lopez; Miller, Vanessa Rangel; Filocamo, Mirella; Mora, Marina; Thompson, Mark; Rubinstein, Yaffa; Posada de la Paz, Manuel; Monaco, Lucia; Lochmüller, Hanns; Taruscio, Domenica

    2018-05-01

    In rare disease (RD) research, there is a huge need to systematically collect biomaterials and phenotypic and genomic data in a standardized way and to make them findable, accessible, interoperable and reusable (FAIR). RD-Connect is a six-year global infrastructure project, initiated in November 2012, that links genomic data with patient registries, biobanks, and clinical bioinformatics tools to create a central research resource for RDs. Here, we present the RD-Connect Registry & Biobank Finder, a tool that helps RD researchers find RD biobanks and registries and provides information on the availability and accessibility of content in each database. The finder concentrates information that is currently scattered across different repositories (inventories, websites, scientific journals, technical reports, etc.), including aggregated data and metadata from participating databases. Aggregated data provided by the finder, if appropriately checked, can be used by researchers who are trying to estimate the prevalence of an RD, to organize a clinical trial on an RD, or to estimate the volume of patients seen by different clinical centers. The finder is also a portal to other RD-Connect tools, providing a link to the RD-Connect Sample Catalogue, a large inventory of RD biological samples available in participating biobanks for RD research. There are several kinds of users and potential uses for the RD-Connect Registry & Biobank Finder, including researchers in academia and industry working on questions of basic, translational, and/or clinical research. As of November 2017, the finder is populated with aggregated data for 222 registries and 21 biobanks.

  11. Seabed photographs, sediment texture analyses, and sun-illuminated sea floor topography in the Stellwagen Bank National Marine Sanctuary region off Boston, Massachusetts

    USGS Publications Warehouse

    Valentine, Page C.; Gallea, Leslie B.; Blackwood, Dann S.; Twomey, Erin R.

    2010-01-01

    The U.S. Geological Survey, in collaboration with the National Oceanic and Atmospheric Administration's National Marine Sanctuary Program, conducted seabed mapping and related research in the Stellwagen Bank National Marine Sanctuary region from 1993 to 2004. The mapped area is approximately 3,700 km² (1,100 nmi²) in size and was subdivided into 18 quadrangles. An extensive series of sea-floor maps of the region based on multibeam sonar surveys has been published as paper maps and online in digital format (PDF, EPS, PS). In addition, 2,628 seabed-sediment samples were collected and analyzed and are included in the usSEABED: Atlantic Coast Offshore Surficial Sediment Data Release. This report presents for viewing and downloading the more than 10,600 still seabed photographs that were acquired during the project. The digital images are provided in thumbnail, medium (1536 x 1024 pixels), and high (3071 x 2048 pixels) resolution. The images can be viewed by quadrangle on the U.S. Geological Survey Woods Hole Coastal and Marine Science Center's photograph database. Photograph metadata are embedded in each image in Exchangeable Image File Format (EXIF) and are also provided in spreadsheet format. Published digital topographic maps and descriptive text for seabed features are included here for downloading and serve as context for the photographs. An interactive topographic map for each quadrangle shows the locations of photograph stations, and each location is linked to the photograph database. This map also shows stations where seabed sediment was collected for texture analysis; the results of grain-size analysis and the associated metadata are presented in spreadsheet format.
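
    Because the photograph metadata are embedded as EXIF, they can be read programmatically; the snippet below uses the Pillow imaging library and an invented file path as a minimal, unofficial example of inspecting those embedded tags.

        # Minimal sketch of reading the EXIF metadata embedded in a photograph.
        # Requires Pillow (pip install Pillow); the file path is illustrative.
        from PIL import Image
        from PIL.ExifTags import TAGS

        def print_exif(path: str) -> None:
            """Print the human-readable EXIF tag names and values in an image."""
            exif = Image.open(path).getexif()
            for tag_id, value in exif.items():
                print(TAGS.get(tag_id, tag_id), ":", value)

        print_exif("quadrangle_01/station_042.jpg")  # hypothetical photograph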

  12. The Future of ECHO: Evaluating Open Source Possibilities

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Gilman, J.; Baynes, K.; Mitchell, A. E.

    2012-12-01

    NASA's Earth Observing System ClearingHOuse (ECHO) is a format-agnostic metadata repository supporting over 3000 collections and 100M science granules. ECHO exposes FTP and RESTful Data Ingest APIs in addition to both SOAP and RESTful search and order capabilities. Built on top of ECHO is a human-facing search and order web application named Reverb. ECHO processes hundreds of orders, tens of thousands of searches, and 1-2M ingest actions each week. As ECHO's holdings, metadata format support, and visibility have increased, the ECHO team has received requests from non-NASA entities for copies of ECHO that can be run locally against their data holdings. ESDIS and the ECHO team have begun investigating various deployment and open-sourcing models that can balance the real constraints faced by the ECHO project with the benefits of providing ECHO capabilities to a broader set of users and providers. This talk will discuss several release and open-source models being investigated by the ECHO team, along with the impacts those models are expected to have on the project. We discuss:
    - Addressing complex deployment or setup issues for potential users
    - Models of vetting code contributions
    - Balancing external (public) user requests against those of our primary partners
    - Preparing project code for public release, including navigating licensing issues related to leveraged libraries
    - Dealing with non-free project dependencies such as commercial databases
    - Dealing with sensitive aspects of project code such as database passwords, authentication approaches, security through obscurity, etc.
    - Ongoing support for the released code, including increased testing demands, bug fixes, security fixes, and new features.

  13. Reliable and Persistent Identification of Linked Data Elements

    NASA Astrophysics Data System (ADS)

    Wood, David

    Linked Data techniques rely upon common terminology in a manner similar to a relational database's reliance on a schema. Linked Data terminology anchors metadata descriptions and facilitates navigation of information. Common vocabularies ease the human, social tasks of understanding datasets sufficiently to construct queries and help to relate otherwise disparate datasets. Vocabulary terms must, when using the Resource Description Framework, be grounded in URIs. A current best practice on the World Wide Web is to serve vocabulary terms as Uniform Resource Locators (URLs) and present both human-readable and machine-readable representations to the public. Linked Data terminology published to the World Wide Web may be used by others without reference or notification to the publishing party. That presents a problem: vocabulary publishers take on an implicit responsibility to maintain and publish their terms via the URLs originally assigned, regardless of the inconvenience such a responsibility may cause. Over the course of years, people change jobs, publishing organizations change Internet domain names, computers change IP addresses, and systems administrators publish old material in new ways. Clearly, a mechanism is required to manage Web-based vocabularies over the long term. This chapter places Linked Data vocabularies in context with the wider concepts of metadata in general and metadata on the Web in particular. Persistent identifier mechanisms are reviewed, with a particular emphasis on Persistent URLs, or PURLs. PURLs and PURL services are discussed in the context of Linked Data. Finally, historic weaknesses of PURLs are resolved by the introduction of a federation of PURL services to address needs specific to Linked Data.
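
    The dual human-readable/machine-readable publication pattern for vocabulary terms can be exercised with ordinary HTTP content negotiation, as in the sketch below; the term URL is a made-up placeholder and the snippet is not tied to any particular PURL service.

        # Sketch of dereferencing a vocabulary term URL with content negotiation,
        # asking for a machine-readable RDF representation. The URL is hypothetical.
        import requests

        TERM_URL = "http://purl.example.org/vocab/sedimentFlux"  # placeholder PURL

        resp = requests.get(TERM_URL,
                            headers={"Accept": "text/turtle"},  # RDF serialization
                            allow_redirects=True, timeout=30)
        print("final URL after redirects:", resp.url)
        print("served content type:", resp.headers.get("Content-Type"))
        # A browser sending Accept: text/html would typically be redirected to the
        # human-readable documentation page for the same term instead.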

  14. Progress in defining a standard for file-level metadata

    NASA Technical Reports Server (NTRS)

    Williams, Joel; Kobler, Ben

    1996-01-01

    In the following narrative, metadata required to locate a file on a tape or a collection of tapes will be referred to as file-level metadata. This paper describes the rationale for, and the history of, the effort to define a standard for this metadata.

  15. Achieving interoperability for metadata registries using comparative object modeling.

    PubMed

    Park, Yu Rang; Kim, Ju Han

    2010-01-01

    Achieving data interoperability between organizations relies upon agreed meaning and representation (metadata) of data. For managing and registering metadata, many organizations in various domains have built metadata registries (MDRs) based on the international standard for the MDR framework, ISO/IEC 11179. Following this trend, two public MDRs in the biomedical domain have been created: the United States Health Information Knowledgebase (USHIK) and the cancer Data Standards Registry and Repository (caDSR), from the U.S. Department of Health & Human Services and the National Cancer Institute (NCI), respectively. Most MDRs are implemented with indiscriminate extensions intended to satisfy organization-specific needs and to work around the semantic and structural limitations of ISO/IEC 11179. As a result, it is difficult to achieve interoperability among multiple MDRs. In this paper, we propose an integrated metadata object model for achieving interoperability among multiple MDRs. To evaluate this model, we developed an XML Schema Definition (XSD)-based metadata exchange format and created an XSD-based metadata exporter supporting both the integrated metadata object model and organization-specific MDR formats.
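
    As a loose illustration of an XSD-backed exchange format for ISO/IEC 11179-style metadata, the sketch below serializes one registered data element to XML with Python's standard library; the element and attribute names are invented and do not reproduce the authors' schema.

        # Sketch of exporting an ISO/IEC 11179-style data element as an XML
        # exchange document. Element/attribute names are hypothetical, not the
        # authors' XSD.
        import xml.etree.ElementTree as ET

        def export_data_element(identifier: str, name: str, value_domain: str) -> bytes:
            """Build a small XML document describing one registered data element."""
            root = ET.Element("DataElement", attrib={"identifier": identifier})
            ET.SubElement(root, "Name").text = name
            ET.SubElement(root, "ValueDomain").text = value_domain
            return ET.tostring(root, encoding="utf-8", xml_declaration=True)

        print(export_data_element("DE-0001", "Patient birth date", "ISO 8601 date").decode())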

  16. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests from users of log-in nodes are processed. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients that execute interactive tasks, and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
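
    A toy version of the queueing scheme, with separate interactive and batch queues drained into one prioritized metadata queue, could look like the sketch below; the priority policy (interactive before batch) and the request format are assumptions made for illustration, not the system described in this record.

        # Toy sketch of merging interactive-client and batch-client metadata
        # requests into one prioritized queue. Priorities and request strings
        # are hypothetical.
        import heapq
        import itertools
        from collections import deque

        interactive_queue = deque(["stat /home/alice", "open results.txt"])
        batch_queue = deque(["create /scratch/job42/output.dat"])

        metadata_queue: list = []   # heap of (priority, sequence, request)
        seq = itertools.count()     # tie-breaker preserves arrival order

        def drain(src: deque, priority: int) -> None:
            """Move requests from a client queue into the shared metadata queue."""
            while src:
                heapq.heappush(metadata_queue, (priority, next(seq), src.popleft()))

        drain(interactive_queue, priority=0)  # example policy: interactive first
        drain(batch_queue, priority=1)

        while metadata_queue:
            _prio, _n, request = heapq.heappop(metadata_queue)
            print("metadata server handles:", request)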

  17. Making metadata usable in a multi-national research setting.

    PubMed

    Ellul, Claire; Foord, Joanna; Mooney, John

    2013-11-01

    SECOA (Solutions for Environmental Contrasts in Coastal Areas) is a multi-national research project examining the effects of human mobility on urban settlements in fragile coastal environments. This paper describes the setting up of a SECOA metadata repository for non-specialist researchers such as environmental scientists and tourism experts. Conflicting usability requirements of two groups - metadata creators and metadata users - are identified along with associated limitations of current metadata standards. A description is given of a configurable metadata system designed to grow as the project evolves. This work is of relevance for similar projects such as INSPIRE. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    NASA Astrophysics Data System (ADS)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR, or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information is very different from one geo-portal to another, and even for similar 3D resources within the same geo-portal. The inventory considered 971 data resources related to elevation. 51% of them were from three geo-portals running at the Canadian federal and municipal levels whose metadata resources did not consider 3D models by any definition. Among the remaining 49%, which do refer to 3D models, different definitions of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on knowledge and current practices in 3D modeling and in 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are grouped into three packages: General and Complementary, covering contextual and structural information, and Availability, covering the transition from storage to delivery format. The proposed metadata-set is compared with the Canadian Geospatial Data Infrastructure (CGDI) metadata, which is an implementation of the North American Profile of ISO 19115. The comparison analyzes the two metadata sets against three simulated scenarios about discovering needed 3D geospatial datasets. Regarding specific metadata about 3D geospatial models, the proposed metadata-set has six additional classes covering geometric dimension, level of detail, geometric modeling, topology, and appearance information. In addition, classes on data acquisition, preparation, and modeling, and on physical availability, have been specialized for 3D geospatial models.
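
    To give a flavour of what the 3D-specific additions could capture in practice, the sketch below models them as fields of a simple record; the field names, types, and example values are loose interpretations of the abstract, not the proposed metadata profile itself.

        # Loose sketch of 3D-specific discovery metadata named in the abstract
        # (geometric dimension, level of detail, geometric modeling, topology,
        # appearance). Field names, types, and values are interpretations only.
        from dataclasses import dataclass

        @dataclass
        class ThreeDModelMetadata:
            title: str
            geometric_dimension: str   # e.g. "2.5D surface" or "volumetric 3D"
            level_of_detail: str       # e.g. "LOD2 buildings"
            geometric_modeling: str    # e.g. "boundary representation"
            has_topology: bool
            appearance: str            # e.g. "textured facades"

        record = ThreeDModelMetadata(
            title="Example city building models",
            geometric_dimension="volumetric 3D",
            level_of_detail="LOD2",
            geometric_modeling="boundary representation",
            has_topology=True,
            appearance="textured facades",
        )
        print(record)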

  19. Interactive Visualization Systems and Data Integration Methods for Supporting Discovery in Collections of Scientific Information

    DTIC Science & Technology

    2011-05-01

    iTunes illustrate the difference between the centralized approach of digital library systems and the distributed approach of container file formats...metadata in a container file format. Apple’s iTunes uses a centralized metadata approach and allows users to maintain song metadata in a single...one iTunes library to another the metadata must be copied separately or reentered in the new library. This demonstrates the utility of storing metadata

  20. Collaborative Metadata Curation in Support of NASA Earth Science Data Stewardship

    NASA Technical Reports Server (NTRS)

    Sisco, Adam W.; Bugbee, Kaylin; le Roux, Jeanne; Staton, Patrick; Freitag, Brian; Dixon, Valerie

    2018-01-01

    A growing collection of NASA Earth science data is archived and distributed by EOSDIS’s 12 Distributed Active Archive Centers (DAACs). Each collection and granule is described by a metadata record housed in the Common Metadata Repository (CMR). Multiple metadata standards are in use, and core elements of each are mapped to and from a common model, the Unified Metadata Model (UMM). This work is carried out by the Analysis and Review of CMR (ARC) Team.
