Science.gov

Sample records for abstract metadata record

  1. IEEE conference record -- Abstracts

    SciTech Connect

    Not Available

    1994-01-01

    This conference covers the following areas: computational plasma physics; vacuum electronics; basic phenomena in fully ionized plasmas; plasma, electron, and ion sources; environmental/energy issues in plasma science; space plasmas; plasma processing; ball lightning/spherical plasma configurations; fast wave devices; magnetic fusion; basic phenomena in partially ionized plasma; dense plasma focus; plasma diagnostics; basic phenomena in weakly ionized gases; fast opening switches; MHD; fast z-pinches and x-ray lasers; intense ion and electron beams; laser-produced plasmas; microwave plasma interactions; EM and ETH launchers; solid state plasmas and switches; intense beam microwaves; and plasmas for lighting. Separate abstracts were prepared for 416 papers in this conference.

  2. Improving Metadata Compliance for Earth Science Data Records

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Chang, O.; Foster, D.

    2014-12-01

    One of the recurring challenges of creating earth science data records is to ensure a consistent level of metadata compliance at the granule level, where important details of contents, provenance, producer, and data references are necessary to obtain a sufficient level of understanding. These details are important not just for individual data consumers but also for autonomous software systems. Two of the most popular metadata standards at the granule level are the Climate and Forecast (CF) Metadata Conventions and the Attribute Conventions for Dataset Discovery (ACDD). Many data producers have implemented one or both of these models, including the Group for High Resolution Sea Surface Temperature (GHRSST) for their global SST products and the Ocean Biology Processing Group for NASA ocean color and SST products. While both the CF and ACDD models contain various levels of metadata richness, the actual "required" attributes are quite small in number. Metadata at the granule level becomes much more useful when recommended or optional attributes are implemented that document spatial and temporal ranges, lineage and provenance, sources, keywords, references, etc. In this presentation we report on a new open source tool to check the compliance of netCDF and HDF5 granules to the CF and ACDD metadata models. The tool, written in Python, was originally implemented to support metadata compliance for netCDF records as part of NOAA's Integrated Ocean Observing System. It outputs standardized scoring for metadata compliance for both CF and ACDD, produces an objective summary weight, and can be run against remote records via OPeNDAP calls. Originally a command-line tool, we have extended it to provide a user-friendly web interface. Reports on metadata testing are grouped in hierarchies that make it easier to track flaws and inconsistencies in the record. We have also extended it to support explicit metadata structures and semantic syntax for the GHRSST project that can be
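
    A minimal sketch of the kind of attribute check such a tool performs is shown below, assuming the netCDF4 Python package; the attribute lists are an illustrative subset of ACDD, and the score is a simplified stand-in for the tool's objective summary weight.

    ```python
    # Minimal sketch of an ACDD-style attribute check for a netCDF granule.
    # The attribute lists are an illustrative subset of the ACDD
    # recommendation, not the full convention.
    from netCDF4 import Dataset

    ACDD_REQUIRED = ["title", "summary", "keywords"]
    ACDD_RECOMMENDED = ["time_coverage_start", "time_coverage_end",
                        "geospatial_lat_min", "geospatial_lat_max",
                        "creator_name", "license"]

    def check_acdd(path):
        ds = Dataset(path)
        present = set(ds.ncattrs())          # global attributes of the granule
        ds.close()
        missing_req = [a for a in ACDD_REQUIRED if a not in present]
        missing_rec = [a for a in ACDD_RECOMMENDED if a not in present]
        # simplified summary weight: required attributes count double
        score = (len(ACDD_REQUIRED) - len(missing_req) +
                 0.5 * (len(ACDD_RECOMMENDED) - len(missing_rec)))
        return score, missing_req, missing_rec

    score, req, rec = check_acdd("granule.nc")
    print(f"score={score}, missing required={req}, missing recommended={rec}")
    ```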

  3. IEEE conference record--Abstracts

    SciTech Connect

    Not Available

    1992-01-01

    The following topics were covered in this meeting: basic plasma phenomena and plasma waves; plasma diagnostics; space plasma diagnostics; magnetic fusion; electron, ion and plasma sources; intense electron and ion beams; intense beam microwaves; fast wave M/W devices; microwave plasma interactions; plasma focus; ultrafast Z-pinches; plasma processing; electrical gas discharges; fast opening switches; magnetohydrodynamics; electromagnetic and electrothermal launchers; x-ray lasers; computational plasma science; solid state plasmas and switches; environmental/energy issues in plasma science; vacuum electronics; plasmas for lighting; gaseous electronics; and ball lightning and other spherical plasmas. Separate abstracts were prepared for 278 papers of this conference.

  4. Open Access Metadata, Catalogers, and Vendors: The Future of Cataloging Records

    ERIC Educational Resources Information Center

    Flynn, Emily Alinder

    2013-01-01

    The open access (OA) movement is working to transform scholarly communication around the world, but this philosophy can also apply to metadata and cataloging records. While some notable, large academic libraries, such as Harvard University, the University of Michigan, and the University of Cambridge, released their cataloging records under OA…

  5. CCR+: Metadata Based Extended Personal Health Record Data Model Interoperable with the ASTM CCR Standard

    PubMed Central

    Park, Yu Rang; Yoon, Young Jo; Jang, Tae Hun; Seo, Hwa Jeong

    2014-01-01

    Objectives Extension of the standard model while retaining compliance with it is a challenging issue because there is currently no method for semantically or syntactically verifying an extended data model. A metadata-based extended model, named CCR+, was designed and implemented to achieve interoperability between standard and extended models. Methods A multilayered validation method was devised to validate the standard and extended models. The American Society for Testing and Materials (ASTM) Continuity of Care Record (CCR) standard was selected to evaluate the CCR+ model; two CCR and one CCR+ XML files were evaluated. Results In total, 188 metadata elements were extracted from the ASTM CCR standard; these metadata are semantically interconnected and registered in the metadata registry. An extended-data-model-specific validation file was generated from these metadata. This file can be used in a smartphone application (Health Avatar CCR+) as part of a multilayered validation. The new CCR+ model was successfully evaluated via a patient-centric exchange scenario involving multiple hospitals, with the results supporting both syntactic and semantic interoperability between the standard CCR model and the extended CCR+ model. Conclusions A feasible method for delivering an extended model that complies with the standard model is presented herein. There is a great need to extend static standard models such as the ASTM CCR in various domains; the methods presented here represent an important reference for achieving interoperability between standard and extended models. PMID:24627817
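
    The multilayered idea can be sketched as below, assuming the lxml package: layer one validates syntax against the standard XSD, layer two checks extension elements against a registry of known metadata items. The file names, extension namespace, and registry contents are hypothetical.

    ```python
    # Two-layer validation sketch in the spirit of the CCR+ approach.
    # Layer 1: syntactic check against the standard schema.
    # Layer 2: semantic check of extension elements against a registry.
    # Schema/file names, the namespace, and the registry are hypothetical.
    from lxml import etree

    EXT_NS = "urn:example:ccr-plus"            # hypothetical extension namespace
    REGISTRY = {"BloodPressure", "StepCount"}  # items registered in the MDR

    def validate(xml_path, xsd_path):
        schema = etree.XMLSchema(etree.parse(xsd_path))
        doc = etree.parse(xml_path)
        syntactic_ok = schema.validate(doc)    # layer 1: syntax
        unknown = [etree.QName(el).localname   # layer 2: semantics
                   for el in doc.iter()
                   if isinstance(el.tag, str)
                   and etree.QName(el).namespace == EXT_NS
                   and etree.QName(el).localname not in REGISTRY]
        return syntactic_ok, unknown

    ok, unknown = validate("patient_ccr_plus.xml", "ccr.xsd")
    print("syntax valid:", ok, "| unregistered extensions:", unknown)
    ```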

  6. Electronic Health Records Data and Metadata: Challenges for Big Data in the United States.

    PubMed

    Sweet, Lauren E; Moulaison, Heather Lea

    2013-12-01

    This article, written by researchers studying metadata and standards, represents a fresh perspective on the challenges of electronic health records (EHRs) and serves as a primer for big data researchers new to health-related issues. Primarily, we argue for the importance of the systematic adoption of standards in EHR data and metadata as a way of promoting big data research and benefiting patients. EHRs have the potential to include a vast amount of longitudinal health data, and metadata provides the formal structures to govern that data. In the United States, electronic medical records (EMRs) are part of the larger EHR. EHR data is submitted by a variety of clinical data providers and potentially by the patients themselves. Because data input practices are not necessarily standardized, and because of the multiplicity of current standards, basic interoperability in EHRs is hindered. Some of the issues with EHR interoperability stem from the complexities of the data they include, which can be both structured and unstructured. A number of controlled vocabularies are available to data providers. The continuity of care document standard will provide interoperability in the United States between the EMR and the larger EHR, potentially making data input by providers directly available to other providers. The data involved is nonetheless messy. In particular, the use of competing vocabularies such as the Systematized Nomenclature of Medicine-Clinical Terms, MEDCIN, and locally created vocabularies inhibits large-scale interoperability for structured portions of the records, and unstructured portions, although potentially not machine readable, remain essential. Once EMRs for patients are brought together as EHRs, the EHRs must be managed and stored. Adequate documentation should be created and maintained to assure the secure and accurate use of EHR data. There are currently a few notable international standards initiatives for EHRs. Organizations such as Health Level Seven

  7. Metadata Leadership

    ERIC Educational Resources Information Center

    Tennant, Roy

    2004-01-01

    Libraries must increasingly accommodate bibliographic records encoded with a variety of standards and emerging standards, including Dublin Core, MODS, and VRA Core. The problem is that many libraries still rely solely on MARC and AACR2. The best-trained professionals to lead librarians through the metadata maze are catalogers. Catalogers…

  8. Factors Affecting Accuracy of Data Abstracted from Medical Records

    PubMed Central

    Zozus, Meredith N.; Pieper, Carl; Johnson, Constance M.; Johnson, Todd R.; Franklin, Amy; Smith, Jack; Zhang, Jiajie

    2015-01-01

    Objective Medical record abstraction (MRA) is often cited as a significant source of error in research data, yet MRA methodology has rarely been the subject of investigation. Lack of a common framework has hindered application of the extant literature in practice, and, until now, there were no evidence-based guidelines for ensuring data quality in MRA. We aimed to identify the factors affecting the accuracy of data abstracted from medical records and to generate a framework for data quality assurance and control in MRA. Methods Candidate factors were identified from published reports of MRA. Content validity of the top candidate factors was assessed via a four-round two-group Delphi process with expert abstractors with experience in clinical research, registries, and quality improvement. The resulting coded factors were categorized into a control theory-based framework of MRA. Coverage of the framework was evaluated using the recent published literature. Results Analysis of the identified articles yielded 292 unique factors that affect the accuracy of abstracted data. Delphi processes overall refuted three of the top factors identified from the literature based on importance and five based on reliability (six total factors refuted). Four new factors were identified by the Delphi. The generated framework demonstrated comprehensive coverage. Significant underreporting of MRA methodology in recent studies was discovered. Conclusion The framework generated from this research provides a guide for planning data quality assurance and control for studies using MRA. The large number and variability of factors indicate that while prospective quality assurance likely increases the accuracy of abstracted data, monitoring the accuracy during the abstraction process is also required. Recent studies reporting research results based on MRA rarely reported data quality assurance or control measures, and even less frequently reported data quality metrics with research results. Given

  9. Use of KLV to combine metadata, camera sync, and data acquisition into a single video record

    NASA Astrophysics Data System (ADS)

    Hightower, Paul

    2015-05-01

    SMPTE has designed significant data spaces into each frame that may be used to store time stamps and other time-sensitive data. There are metadata spaces in both the analog equivalent of the horizontal blanking, referred to as the Horizontal Ancillary (HANC) space, and in the analog equivalent of the vertical interval blanking lines, referred to as the Vertical Ancillary (VANC) space. The HANC space is very crowded with many data types, including information about frame rate and format, 16 channels of audio sound bites, copyright controls, billing information, and more than 2,000 other elements. The VANC space is relatively unused by cinema and broadcasters, which makes it a prime target for use in test, surveillance and other specialized applications. Taking advantage of the SMPTE structures, one can design and implement custom data gathering and recording systems while maintaining full interoperability with standard equipment. The VANC data space can be used to capture image-relevant data and can be used to overcome transport latency and diminished image quality introduced by the use of compression.
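
    A KLV (Key-Length-Value) triplet of the kind used to pack such data into the VANC space can be encoded as below; a minimal Python sketch in which the 16-byte key is a placeholder rather than a registered SMPTE Universal Label.

    ```python
    # Minimal KLV (Key-Length-Value) encoder. Lengths use SMPTE-style BER
    # encoding: one byte for values under 128, otherwise a long form.
    # The demo key is a placeholder, not a registered SMPTE Universal Label.
    def ber_length(n: int) -> bytes:
        if n < 128:                                   # short form: one byte
            return bytes([n])
        body = n.to_bytes((n.bit_length() + 7) // 8, "big")
        return bytes([0x80 | len(body)]) + body       # long form

    def klv(key: bytes, value: bytes) -> bytes:
        assert len(key) == 16                         # SMPTE ULs are 16 bytes
        return key + ber_length(len(value)) + value

    DEMO_KEY = bytes.fromhex("060e2b34" + "00" * 12)  # placeholder label
    packet = klv(DEMO_KEY, b"2015-05-01T12:00:00.000Z")
    print(packet.hex())
    ```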

  10. Metadata Activities in Biology

    SciTech Connect

    San Gil, Inigo; Hutchison, Vivian; Frame, Mike; Palanisamy, Giri

    2010-01-01

    The National Biological Information Infrastructure program has advanced the biological sciences ability to standardize, share, integrate and synthesize data by making the metadata program a core of its activities. Through strategic partnerships, a series of crosswalks for the main biological metadata specifications have enabled data providers and international clearinghouses to aggregate and disseminate tens of thousands of metadata sets describing petabytes of data records. New efforts at the National Biological Information Infrastructure are focusing on better metadata creation and curation tools, semantic mediation for data discovery and other curious initiatives.

  11. Simplified Metadata Curation via the Metadata Management Tool

    NASA Astrophysics Data System (ADS)

    Shum, D.; Pilone, D.

    2015-12-01

    The Metadata Management Tool (MMT) is the newest capability developed as part of NASA Earth Observing System Data and Information System's (EOSDIS) efforts to simplify metadata creation and improve metadata quality. The MMT was developed via an agile methodology, taking into account inputs from GCMD's science coordinators and other end-users. In its initial release, the MMT uses the Unified Metadata Model for Collections (UMM-C) to allow metadata providers to easily create and update collection records in the ISO-19115 format. Through a simplified UI experience, metadata curators can create and edit collections without full knowledge of the NASA Best Practices implementation of ISO-19115 format, while still generating compliant metadata. More experienced users are also able to access raw metadata to build more complex records as needed. In future releases, the MMT will build upon recent work done in the community to assess metadata quality and compliance with a variety of standards through application of metadata rubrics. The tool will provide users with clear guidance as to how to easily change their metadata in order to improve their quality and compliance. Through these features, the MMT allows data providers to create and maintain compliant and high quality metadata in a short amount of time.

  12. USGIN ISO metadata profile

    NASA Astrophysics Data System (ADS)

    Richard, S. M.

    2011-12-01

    …content. The use cases for the detailed content must be well understood, and the degree of metadata complexity should be determined by requirements for those use cases. The ISO standard provides sufficient flexibility that relatively simple metadata records can be created that will serve for text-indexed search/discovery, resource evaluation by a user reading text content from the metadata, and access to the resource via http, ftp, or well-known service protocols (e.g. Thredds; OGC WMS, WFS, WCS).

  13. IEEE International conference on plasma science: Conference record--Abstracts

    SciTech Connect

    Not Available

    1993-01-01

    The conference covered the following topics: basic plasma physics; vacuum electronics; gaseous and electrical gas discharges; laser-produced plasma; space plasmas; computational plasma science; plasma diagnostics; electron, ion and plasma sources; intense electron and ion beams; intense beam microwaves; fast wave M/W devices; microwave-plasma interactions; magnetic fusion; MHD; plasma focus; ultrafast z-pinches and x-ray lasers; plasma processing; fast-opening switches; EM and ETH launchers; solid-state plasmas and switches; plasmas for lighting; ball lightning and spherical plasma configurations; and environmental/energy issues. Separate abstracts were prepared for 379 items in this conference.

  14. The RBV metadata catalog

    NASA Astrophysics Data System (ADS)

    André, François; Brissebrat, Guillaume; Fleury, Laurence; Gaillardet, Jérôme; Nord, Guillaume

    2014-05-01

    RBV (Réseau des Bassins Versants) is an initiative to consolidate the national efforts made by more than 15 elementary observatories belonging to various French research institutions (CNRS, Universities, INRA, IRSTEA, IRD) that study river and drainage basins. RBV is part of a global initiative to create a network of observatories for investigating Earth's surface processes. The RBV Metadata Catalogue aims to give a unified vision of the work produced by every observatory to both the members of the RBV network and any external person involved in this domain of research. Another goal is to share this information with other catalogues through compliance with the ISO19115 standard and the INSPIRE directive and through the ability to be harvested (globally or partially). Metadata management is heterogeneous among observatories. The catalogue is designed to face this situation with the following main features: - Multiple input methods: metadata records in the catalogue can either be entered with the graphical user interface, harvested from an existing catalogue, or imported from an information system through simplified web services. - Three hierarchical levels: metadata records may describe either an observatory in general, one of its experimental sites, or a dataset produced by instruments. - Multilingualism: metadata can be entered in several configurable languages. The catalogue provides many other features, such as search and browse mechanisms to find or discover records. The RBV metadata catalogue associates a CSW metadata server (Geosource) with a JEE application. The CSW server is in charge of the persistence of the metadata, while the JEE application both wraps CSW calls and defines the user interface. The latter is built with the GWT Framework to offer a rich client application with a fully ajaxified navigation. The catalogue is accessible at the following address: http://portailrbv.sedoo.fr/ Next steps will target the following points: - Description of sensors in accordance

  15. The RBV metadata catalog

    NASA Astrophysics Data System (ADS)

    Andre, Francois; Fleury, Laurence; Gaillardet, Jerome; Nord, Guillaume

    2015-04-01

    RBV (Réseau des Bassins Versants) is a French initiative to consolidate the national efforts made by more than 15 elementary observatories funded by various research institutions (CNRS, INRA, IRD, IRSTEA, Universities) that study river and drainage basins. The RBV Metadata Catalogue aims at giving a unified vision of the work produced by every observatory to both the members of the RBV network and any external person interested in this domain of research. Another goal is to share this information with other existing metadata portals. Metadata management is heterogeneous among observatories, ranging from absence to mature harvestable catalogues. Here, we would like to explain the strategy used to design a state-of-the-art catalogue facing this situation. Main features are as follows: - Multiple input methods: metadata records in the catalogue can either be entered with the graphical user interface, harvested from an existing catalogue, or imported from an information system through simplified web services. - Hierarchical levels: metadata records may describe either an observatory, one of its experimental sites, or a single dataset produced by one instrument. - Multilingualism: metadata can be easily entered in several configurable languages. - Compliance with standards: the back-office part of the catalogue is based on a CSW metadata server (Geosource), which ensures ISO19115 compatibility and the ability to be harvested (globally or partially). Ongoing tasks focus on the use of SKOS thesauri and SensorML descriptions of the sensors. - Ergonomics: the user interface is built with the GWT Framework to offer a rich client application with a fully ajaxified navigation. - Source code sharing: the work has led to the development of reusable components which can be used to quickly create new metadata forms in other GWT applications. You can visit the catalogue (http://portailrbv.sedoo.fr/) or contact us by email at rbv@sedoo.fr.
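
    For example, such a Geosource/CSW back end can typically be harvested with a few lines of Python, assuming the OWSLib package; the endpoint path below is a guess based on the portal address and may differ from the actual RBV deployment.

    ```python
    # Sketch of harvesting records from a CSW endpoint such as the
    # Geosource server behind the RBV catalogue, using OWSLib.
    # The endpoint path is a guess and may differ from the real deployment.
    from owslib.csw import CatalogueServiceWeb

    csw = CatalogueServiceWeb("http://portailrbv.sedoo.fr/geosource/srv/eng/csw")
    csw.getrecords2(maxrecords=10)          # fetch the first page of records
    for ident, rec in csw.records.items():
        print(ident, "|", rec.title)
    ```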

  16. Lessons in Medical Record Abstraction from the Prostate, Lung, Colorectal, and Ovarian (PLCO) National Screening Trial.

    PubMed

    Bazzi, Latifa; Lamerato, Lois E; Varner, Julie; Shambaugh, Vicki L; Cordes, Jill E; Ragard, Lawrence R; Marcus, Pamela M

    2015-01-01

    The most rigorous and accurate approach to evaluating clinical events in cancer screening studies is to use data obtained through medical record abstraction (MRA). Although MRA is complex, the particulars of the procedure, such as the specific training and quality assurance processes, challenges of implementation, and other factors that influence the quality of abstraction, are usually not described in reports of studies that employed the technique. In this paper, we present the details of MRA activities used in the Prostate, Lung, Colorectal, and Ovarian (PLCO) Cancer Screening Trial, which used MRA to determine primary and secondary outcomes and collect data on other clinical events. We describe triggers of the MRA cycle and the specific tasks that were part of the abstraction process. We also discuss training and certification of abstracting staff, and technical methods and communication procedures used for data quality assurance. We include discussion of challenges faced and lessons learned.

  17. Mercury Toolset for Spatiotemporal Metadata

    NASA Technical Reports Server (NTRS)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  18. Mercury Toolset for Spatiotemporal Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
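
    The harvest-then-centrally-index pattern described in both Mercury abstracts can be reduced to a toy sketch; the records below are invented, and a production deployment would use a full search engine rather than an in-memory dictionary.

    ```python
    # Toy sketch of the harvest-then-index pattern: metadata records are
    # gathered from many providers, then one inverted index serves all
    # searches. The records are invented placeholders.
    from collections import defaultdict

    harvested = [
        {"id": "ornl-001", "title": "Soil respiration flux data"},
        {"id": "nasa-042", "title": "Sea surface temperature granules"},
    ]

    index = defaultdict(set)
    for rec in harvested:                      # periodic re-indexing step
        for word in rec["title"].lower().split():
            index[word].add(rec["id"])

    def search(term):                          # fielded search, simplified
        return sorted(index.get(term.lower(), set()))

    print(search("temperature"))               # -> ['nasa-042']
    ```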

  19. Metadata in Scientific Dialects

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2011-12-01

    Discussions of standards in the scientific community have been compared to religious wars for many years. The only things scientists agree on in these battles are either "standards are not useful" or "everyone can benefit from using my standard". Instead of achieving the goal of facilitating interoperable communities, in many cases the standards have served to build yet another barrier between communities. Some important progress towards diminishing these obstacles has been made in the data layer with the merger of the NetCDF and HDF scientific data formats. The universal adoption of XML as the standard for representing metadata and the recent adoption of ISO metadata standards by many groups around the world suggests that similar convergence is underway in the metadata layer. At the same time, scientists and tools will likely need support for native tongues for some time. I will describe an approach that combines re-usable metadata "components" and RESTful web services that provide those components in many dialects. This approach uses advanced XML concepts of referencing and linking to construct complete records that include reusable components and builds on the ISO Standards as the "unabridged dictionary" that encompasses the content of many other dialects.
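
    A toy sketch of the component-reuse idea follows, assuming the lxml package: elements carrying xlink:href references are replaced by the reusable components they point to. The component identifiers and the in-memory component store stand in for the RESTful services described above.

    ```python
    # Sketch of assembling a complete record from reusable components via
    # xlink:href references. The component store and identifiers are
    # hypothetical stand-ins for a RESTful component service.
    from lxml import etree

    XLINK_HREF = "{http://www.w3.org/1999/xlink}href"
    COMPONENTS = {
        "cito:contact-noaa": "<contact><org>NOAA</org></contact>",
    }

    record = etree.fromstring(
        '<record xmlns:xlink="http://www.w3.org/1999/xlink">'
        '<contactRef xlink:href="cito:contact-noaa"/></record>')

    for el in list(record.iter()):
        href = el.get(XLINK_HREF)
        if href:                                # pull in the shared component
            el.getparent().replace(el, etree.fromstring(COMPONENTS[href]))

    print(etree.tostring(record).decode())
    ```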

  1. Metadata Evaluation and Improvement Case Studies

    NASA Astrophysics Data System (ADS)

    Habermann, T.; Kozimor, J.; Powers, L. A.; Gordon, S.

    2015-12-01

    Tools have been developed for evaluating metadata records and collections for completeness in terms of specific recommendations or organizational goals, and for providing guidance on improving the compliance of metadata with those recommendations. These tools have been applied using several metadata recommendations (OGC-CSW, DataCite, NASA Unified Metadata Model) and metadata dialects used by several organizations: Climate Data Initiative metadata from NASA DAACs in ECHO, DIF, and ISO 19115-2; US Geological Survey metadata from ScienceBase in CSDGM; and ACADIS metadata from NCAR's Earth Observation Lab in ISO 19115-2. The results of this work are designed to help managers understand metadata recommendations (e.g. OGC Catalog Services for the Web, DataCite, and others) and the impact of those recommendations in terms of the dialects used in their organizations (e.g. DIF, CSDGM, ISO). They include comparisons between metadata recommendations and dialect capabilities, scoring of metadata records in terms of the amount of missing content, and identification of specific improvement strategies for particular collections. This information is included in the Earth Science Information Partnership (ESIP) Wiki to encourage broad dissemination and participation.
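
    The scoring idea can be illustrated with a few lines of Python; the recommendation concept list (DataCite-like) and the sample record are illustrative only, not the actual rubrics used in this work.

    ```python
    # Simplified completeness scoring: compare one record's fields against
    # a recommendation's concept list. The DataCite-like concepts and the
    # sample record are illustrative placeholders.
    RECOMMENDATION = ["Identifier", "Creator", "Title", "Publisher",
                      "PublicationYear", "ResourceType"]

    record = {"Identifier": "doi:10.0000/example",
              "Title": "Example dataset",
              "Creator": "Doe, J."}

    present = [c for c in RECOMMENDATION if record.get(c)]
    missing = [c for c in RECOMMENDATION if not record.get(c)]
    print(f"completeness: {len(present)}/{len(RECOMMENDATION)}")
    print("improvement targets:", ", ".join(missing))
    ```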

  2. Master Metadata Repository and Metadata-Management System

    NASA Technical Reports Server (NTRS)

    Armstrong, Edward; Reed, Nate; Zhang, Wen

    2007-01-01

    A master metadata repository (MMR) software system manages the storage and searching of metadata pertaining to data from national and international satellite sources of the Global Ocean Data Assimilation Experiment (GODAE) High Resolution Sea Surface Temperature Pilot Project [GHRSST-PP]. These sources produce a total of hundreds of data files daily, each file classified as one of more than ten data products representing global sea-surface temperatures. The MMR is a relational database wherein the metadata are divided into granule-level records [denoted file records (FRs)] for individual satellite files and collection-level records [denoted data set descriptions (DSDs)] that describe metadata common to all the files from a specific data product. FRs and DSDs adhere to the NASA Directory Interchange Format (DIF). The FRs and DSDs are contained in separate subdatabases linked by a common field. The MMR is configured in MySQL database software with custom Practical Extraction and Reporting Language (PERL) programs to validate and ingest the metadata records. The database contents are converted into the Federal Geographic Data Committee (FGDC) standard format by use of the Extensible Markup Language (XML). A Web interface enables users to search for availability of data from all sources.
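
    The two-level FR/DSD design can be sketched schematically as below, with sqlite3 standing in for MySQL; the table and column names are hypothetical simplifications of the MMR schema.

    ```python
    # Schematic of the MMR's two-table design: collection-level DSDs and
    # granule-level FRs linked by a common field. sqlite3 stands in for
    # MySQL; table and column names are hypothetical simplifications.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE dsd (entry_id TEXT PRIMARY KEY, title TEXT);
    CREATE TABLE fr  (file_name TEXT PRIMARY KEY,
                      entry_id  TEXT REFERENCES dsd(entry_id),
                      start_time TEXT, stop_time TEXT);
    """)
    db.execute("INSERT INTO dsd VALUES ('GHRSST-L2P-AVHRR', 'L2P SST from AVHRR')")
    db.execute("INSERT INTO fr VALUES ('sst_20070101.nc', 'GHRSST-L2P-AVHRR',"
               " '2007-01-01T00:00Z', '2007-01-01T12:00Z')")

    # find all granules (FRs) belonging to one data product (DSD)
    for row in db.execute("""SELECT fr.file_name, dsd.title FROM fr
                             JOIN dsd USING (entry_id)"""):
        print(row)
    ```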

  3. Mining the Metadata Quarries.

    ERIC Educational Resources Information Center

    Sutton, Stuart A., Ed.; Guenther, Rebecca; McCallum, Sally; Greenberg, Jane; Tennis, Joseph T.; Jun, Wang

    2003-01-01

    This special section of the "Bulletin" includes an introduction and the following articles: "New Metadata Standards for Digital Resources: MODS (Metadata Object and Description Schema) and METS (Metadata Encoding and Transmission Standard)"; "Metadata Generation: Processes, People and Tools"; "Data Collection for Controlled Vocabulary…

  4. Applications of the LBA-ECO Metadata Warehouse

    NASA Astrophysics Data System (ADS)

    Wilcox, L.; Morrell, A.; Griffith, P. C.

    2006-05-01

    The LBA-ECO Project Office has developed a system to harvest and warehouse metadata resulting from the Large-Scale Biosphere Atmosphere Experiment in Amazonia. The harvested metadata is used to create dynamically generated reports, available at www.lbaeco.org, which facilitate access to LBA-ECO datasets. The reports are generated for specific controlled vocabulary terms (such as an investigation team or a geospatial region), and are cross-linked with one another via these terms. This approach creates a rich contextual framework enabling researchers to find datasets relevant to their research. It maximizes data discovery by association and provides a greater understanding of the scientific and social context of each dataset. For example, our website provides a profile (e.g. participants, abstract(s), study sites, and publications) for each LBA-ECO investigation. Linked from each profile is a list of associated registered dataset titles, each of which link to a dataset profile that describes the metadata in a user-friendly way. The dataset profiles are generated from the harvested metadata, and are cross-linked with associated reports via controlled vocabulary terms such as geospatial region. The region name appears on the dataset profile as a hyperlinked term. When researchers click on this link, they find a list of reports relevant to that region, including a list of dataset titles associated with that region. Each dataset title in this list is hyperlinked to its corresponding dataset profile. Moreover, each dataset profile contains hyperlinks to each associated data file at its home data repository and to publications that have used the dataset. We also use the harvested metadata in administrative applications to assist quality assurance efforts. These include processes to check for broken hyperlinks to data files, automated emails that inform our administrators when critical metadata fields are updated, dynamically generated reports of metadata records that link

  5. Metazen – metadata capture for metagenomes

    DOE PAGES

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; Glass, Elizabeth; Wilke, Andreas; Meyer, Folker

    2014-12-08

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  6. Metazen – metadata capture for metagenomes

    SciTech Connect

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; Glass, Elizabeth; Wilke, Andreas; Meyer, Folker

    2014-12-08

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  7. Metazen – metadata capture for metagenomes

    PubMed Central

    2014-01-01

    Background As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusions Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility. PMID:25780508

  8. Predicting structured metadata from unstructured metadata

    PubMed Central

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data or data about the data—defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation (LDA) model to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms can be most accurately predicted using the TF-IDF approach, followed by LDA, with both outperforming the majority-vote baseline. While some accuracy is lost through the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is still a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets, and it has implications for researchers interested in biomedical metadata curation and metadata prediction. Database URL: http://www.yeastgenome.org/
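
    A condensed sketch of the two approaches, assuming scikit-learn, is shown below; the GEO-like descriptions, labels, and the predicted element are invented toy data, not the paper's actual experimental setup.

    ```python
    # Sketch of the two approaches: TF-IDF features feed a classifier
    # directly, while LDA topics give a lower-dimensional alternative.
    # The toy descriptions and the target label are invented placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["expression profiling of mouse liver tissue",
             "RNA-seq of human cancer cell lines",
             "microarray study of mouse brain development",
             "human blood transcriptome after vaccination"]
    labels = ["mouse", "human", "mouse", "human"]  # structured term to predict

    tfidf_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    lda_clf = make_pipeline(CountVectorizer(),
                            LatentDirichletAllocation(n_components=2,
                                                      random_state=0),
                            LogisticRegression())

    for name, clf in [("tf-idf", tfidf_clf), ("lda", lda_clf)]:
        clf.fit(texts, labels)
        print(name, clf.predict(["mouse retina expression profiling"]))
    ```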

  9. A Quantitative Categorical Analysis of Metadata Elements in Image-Applicable Metadata Schemas.

    ERIC Educational Resources Information Center

    Greenberg, Jane

    2001-01-01

    Reports on a quantitative categorical analysis of metadata elements in the Dublin Core, VRA (Visual Resource Association) Core, REACH (Record Export for Art and Cultural Heritage), and EAD (Encoded Archival Description) metadata schemas, all of which can be used for organizing and describing images. Introduces a new schema comparison methodology…

  10. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, I implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
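
    The per-user metadata search can be sketched with the pymongo package as below; the connection URI, database, and field names are invented placeholders, and the FUSE layer is omitted.

    ```python
    # Sketch of per-user metadata search: each user gets a separate
    # collection holding only records they may read, queried by standard
    # attributes and user-defined tags. Names and URI are placeholders.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder URI
    db = client["gpfs_metadata"]
    user_coll = db["jdoe"]                  # one collection per user

    user_coll.insert_one({"path": "/archive/jdoe/run42.h5",
                          "size": 123456,
                          "tags": ["simulation", "2012"]})

    # metadata search: a standard attribute plus a user-defined tag
    for doc in user_coll.find({"size": {"$gt": 100000}, "tags": "simulation"}):
        print(doc["path"])
    ```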

  11. Handling Metadata in a Neurophysiology Laboratory

    PubMed Central

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G.; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework. PMID:27486397

  12. Handling Metadata in a Neurophysiology Laboratory.

    PubMed

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework. PMID:27486397
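
    For illustration, the sketch below mirrors the Document/Section/Property hierarchy that frameworks such as odML use, as plain Python dataclasses rather than the actual python-odml API; all section names and values are invented examples.

    ```python
    # Plain-Python stand-in mirroring an odML-style Section/Property
    # hierarchy, showing how metadata from different sources merge into
    # one tree. This deliberately avoids the real python-odml API; all
    # names and values are invented examples.
    from dataclasses import dataclass, field

    @dataclass
    class Property:
        name: str
        value: object
        unit: str = ""

    @dataclass
    class Section:
        name: str
        properties: list = field(default_factory=list)
        sections: list = field(default_factory=list)

    doc = Section("ReachingTask-Session1")
    rec = Section("Recording", properties=[
        Property("SamplingRate", 30000, "Hz"),
        Property("ChannelCount", 96),
    ])
    subj = Section("Subject", properties=[Property("Species", "Macaca mulatta")])
    doc.sections += [rec, subj]

    def walk(sec, depth=0):                  # flatten the tree for inspection
        print("  " * depth + sec.name)
        for p in sec.properties:
            print("  " * (depth + 1) + f"{p.name} = {p.value} {p.unit}".rstrip())
        for s in sec.sections:
            walk(s, depth + 1)

    walk(doc)
    ```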

  13. Metadata for Web Resources: How Metadata Works on the Web.

    ERIC Educational Resources Information Center

    Dillon, Martin

    This paper discusses bibliographic control of knowledge resources on the World Wide Web. The first section sets the context of the inquiry. The second section covers the following topics related to metadata: (1) definitions of metadata, including metadata as tags and as descriptors; (2) metadata on the Web, including general metadata systems,…

  14. DataFinder: Using Ontologies and Reasoning to Enhance Metadata Search

    NASA Astrophysics Data System (ADS)

    Russ, T. A.; Chalupsky, H.

    2005-12-01

    …intermediate data products used in seismic wave propagation simulations. It is the output of a 3-dimensional wave propagation velocity model of the subsurface geology. This mesh can be computed using any one of several models which have different characteristics. Each velocity mesh has metadata that records the specific model used. But the models also fall into general classes such as 1-dimensional and 3-dimensional models. Such classification of models can be used to allow retrieval of velocity meshes created by any 3-dimensional model, without the user being required to specify all of the particular model names. This provides an abstraction over the attributes and values stored with the dataset. Other mappings are used to present a uniform interface to metadata information that is stored using different attribute names or even different organizational schemes. Some data products produced as the output of a computational workflow retain all of the metadata information, whereas others rely on following a chain of references to the intermediate products. By using an ontology as an abstraction layer, DataFinder thus allows users to locate end data products using domain-level descriptions instead of program-specific and varied metadata attributes.
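
    The retrieval-by-class idea can be sketched as query expansion over a small model hierarchy; a toy illustration in Python, where the class and model names are invented placeholders rather than DataFinder's actual ontology.

    ```python
    # Toy sketch of ontology-backed retrieval: a query for the general
    # class "3D-velocity-model" expands to the specific model names found
    # in each mesh's metadata. Class and model names are placeholders.
    SUBCLASSES = {                       # tiny class hierarchy (ontology)
        "velocity-model": ["1D-velocity-model", "3D-velocity-model"],
        "3D-velocity-model": ["CVM-S4", "CVM-H"],
        "1D-velocity-model": ["Hadley-Kanamori"],
    }

    def expand(cls):                     # transitive closure over subclasses
        names = {cls}
        for sub in SUBCLASSES.get(cls, []):
            names |= expand(sub)
        return names

    meshes = [{"id": "mesh-01", "model": "CVM-H"},
              {"id": "mesh-02", "model": "Hadley-Kanamori"}]

    wanted = expand("3D-velocity-model")
    print([m["id"] for m in meshes if m["model"] in wanted])  # -> ['mesh-01']
    ```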

  15. Metadata management staging system

    SciTech Connect

    2013-08-01

    Django application providing a user interface for building a file and metadata management system. An evolution of our Node.js and CouchDB metadata management system, this one focuses on server functionality and uses a well-documented, rational, RESTful API for data access.

  16. Visualization of JPEG Metadata

    NASA Astrophysics Data System (ADS)

    Malik Mohamad, Kamaruddin; Deris, Mustafa Mat

    There is a lot more information embedded in a JPEG image than just the graphics. Visualization of this metadata would help digital forensic investigators view embedded data, including in corrupted images where no graphics can be displayed, in order to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors and extraction tools are already available, but they mostly focus on visualizing the attribute information of JPEG Exif. However, none has visualized metadata by consolidating the markers summary, header structure, Huffman table and quantization table in a single program. In this paper, metadata visualization is done by developing a program that is able to summarize all existing markers, the header structure, the Huffman table and the quantization table in a JPEG file. The result shows that visualization of metadata helps in viewing the hidden information within a JPEG file more easily.
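
    A minimal marker walker of the kind such a tool builds on is sketched below, using only the Python standard library; it lists each marker segment and its length up to the start-of-scan marker, following the standard JPEG segment layout.

    ```python
    # Minimal JPEG marker walker: list each marker segment and its length
    # up to start-of-scan, where entropy-coded image data begins.
    import struct

    STANDALONE = {0xD8, 0xD9, 0x01} | set(range(0xD0, 0xD8))  # no length field

    def list_markers(path):
        with open(path, "rb") as f:
            data = f.read()
        i = 0
        while i + 1 < len(data):
            if data[i] != 0xFF:              # lost sync; stop the sketch here
                break
            marker = data[i + 1]
            if marker in STANDALONE:
                print(f"FF{marker:02X} (standalone)")
                i += 2
                continue
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            print(f"FF{marker:02X} length={length}")
            if marker == 0xDA:               # start of scan: image data follows
                break
            i += 2 + length                  # length includes its own 2 bytes

    list_markers("sample.jpg")
    ```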

  17. IEEE conference record -- abstracts: 1995 IEEE international conference on plasma science

    SciTech Connect

    1995-12-31

    Topics covered at this meeting are: computational plasma physics; slow wave devices; basic phenomena in fully ionized plasmas; microwave-plasma interactions; space plasmas; fast wave devices; plasma processing; plasma, ion, and electron sources; vacuum microelectronics; basic phenomena in partially ionized gases; microwave systems; plasma diagnostics; magnetic fusion theory/experiment; fast opening switches; laser-produced plasmas; dense plasma focus; intense ion and electron beams; plasmas for lighting; fast z-pinches and x-ray lasers; intense beam microwaves; ball lightning/spherical plasma configuration; environmental plasma science; EM and ETH launchers; and environmental/energy issues in plasma science. Separate abstracts were prepared for most of the individual papers.

  18. No More Metadata!

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2014-05-01

    For well-known technologically motivated reasons, communities have developed the distinction between data and metadata. Mainly this was because data were too big to analyze, and often too complex as well. Therefore, metadata were established as a kind of summary which allows browsing and search, albeit only on the criteria preselected by the metadata provider. The result is that metadata are considered smart, queryable, and agile, whereas the underlying data typically are seen as big, difficult to understand and interpret, and unavailable for analysis. Common sense has it that, in general, data should be touched upon only once a meaningful focusing and downsizing of the topical dataset has been achieved through elaborate metadata retrieval. With the advent of Big Data technology we are in a position to overcome this age-old digital divide. Utilizing NewSQL concepts, query techniques go beyond the classical set paradigm and can also handle large graphs and arrays. Access and retrieval can be accomplished on a high semantic level. In our presentation we show, using the example of array data, how the data/metadata divide can be effectively eliminated today. We will do so by showing queries that combine metadata and ground-truth data retrieval in SQL and XQuery.

  19. IEEE conference record -- Abstracts: 1996 IEEE international conference on plasma science

    SciTech Connect

    1996-12-31

    This meeting covered the following topics: space plasmas; non-equilibrium plasma processing; computer simulation of vacuum power tubes; vacuum microelectronics; microwave systems; basic phenomena in partially ionized gases -- gaseous electronics, electrical discharges; ball lightning/spherical plasma configuration; plasma diagnostics; plasmas for lighting; dense plasma focus; intense ion and electron beams; plasma, ion, and electron sources; flat panel displays; fast z-pinches and x-ray lasers; environmental/energy issues in plasma science; thermal plasma processing; computational plasma physics; magnetic confinement fusion; microwave-plasma interactions; space plasma engineering; EM and ETH launchers; fast wave devices; intense beam microwaves; slow wave devices; space plasma measurements; basic phenomena in fully ionized plasma -- waves, instabilities, plasma theory, etc; plasma closing switches; fast opening switches; and laser-produced plasma. Separate abstracts were prepared for most papers in this conference.

  20. What Metadata Principles Apply to Scientific Data?

    NASA Astrophysics Data System (ADS)

    Mayernik, M. S.

    2014-12-01

    Information researchers and professionals based in the library and information science fields often approach their work through developing and applying defined sets of principles. For example, for over 100 years, the evolution of library cataloging practice has largely been driven by debates (which are still ongoing) about the fundamental principles of cataloging and how those principles should manifest in rules for cataloging. Similarly, the development of archival research and practices over the past century has proceeded hand-in-hand with the emergence of principles of archival arrangement and description, such as maintaining the original order of records and documenting provenance. This project examines principles related to the creation of metadata for scientific data. The presentation will outline: 1) how understandings and implementations of metadata can range broadly depending on the institutional context, and 2) how metadata principles developed by the library and information science community might apply to metadata developments for scientific data. The development and formalization of such principles would contribute to the development of metadata practices and standards in a wide range of institutions, including data repositories, libraries, and research centers. Shared metadata principles would potentially be useful in streamlining data discovery and integration, and would also benefit the growing efforts to formalize data curation education.

  1. A micro-remote centered compliance suspension for contact recording head (abstract)

    NASA Astrophysics Data System (ADS)

    Nakao, M.; Sugiyama, S.; Hatamura, Y.; Hamaguchi, T.; Watanabe, K.

    1996-04-01

    Nowadays contact recording has become one of the most important mechanisms for high-density recording in HDDs. The contact recording head is always subject to friction when sliding. If the friction causes vibration of the sliding head, the swaying of the head is likely to lead to some undesirable bit shifts. The authors previously measured the head movement in sliding with a Watrous-type suspension, where it was verified that the front edge of the head fell down to the disk surface because the rotational center of the pitching motion of the head was located above the sliding surface, inducing heavy sticking. In order to prevent the sticking from occurring, we have already proposed an RCC (remote centered compliance) suspension (15 mm in thickness) which consists of a pair of inclined plates. With this suspension, the front edge of the head is raised up because of the enhanced location of the rotational center, which is now below the sliding surface. In this article we present a newly designed micro-RCC suspension (125 μm in thickness) for an actual small MR head (1×1×0.5 mm) for contact recording. It has two pairs of inclined plates to handle the two-axis friction caused by the seeking and tracking motions of the head, respectively. This suspension is fabricated from a 125 μm thick sheet of polyimide using an ultraviolet laser beam. We evaluate its movement in sliding at low speed (50 μm/s) and at high speed (2 m/s) under a 10 mN load on a sputtered disk. The normal and frictional forces are measured by a micro two-axis force sensor (0.01 mN resolution) with a parallel-plate structure, and the pitching motion of the head is measured by an inclination sensor by means of laser reflection angle measurement (10 μrad resolution). From the experiment at low speed, we have clarified that the head yields a stable friction (0.15±0.02 mN) and has a nose-up attitude (0 to +100 μrad). In addition, from the evaluation at high speed, we have

  2. Data, Metadata - Who Cares?

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

    There is a traditional saying that metadata are understandable, semantic-rich, and searchable. Data, on the other hand, are big, with no accessible semantics, and just downloadable. Not only has this led to an imbalance of search support from a user perspective, but also, underneath, to a deep technology divide, often with relational databases used for metadata and bespoke archive solutions for data. Our vision is that this barrier will be overcome, and data and metadata will become searchable alike, leveraging the potential of semantic technologies in combination with scalability technologies. Ultimately, in this vision, ad-hoc processing and filtering will no longer be distinguished, forming a uniformly accessible data universe. In the European EarthServer initiative, we work towards this vision by federating database-style raster query languages with metadata search and geo broker technology. We present the approach taken, how it can leverage OGC standards, the benefits envisaged, and first results.
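
    A rough client-side sketch of what federating a database-style raster query with metadata-driven subsetting could look like, written against the OGC WCPS language that EarthServer builds on. The endpoint URL, coverage name, and request parameters are placeholders, not the actual EarthServer deployment.

    ```python
    import requests

    # Hypothetical WCS/WCPS endpoint; not a real EarthServer service URL.
    ENDPOINT = "https://example.org/rasdaman/ows"

    # A WCPS query: server-side trimming of a raster coverage by space and
    # time, returning an encoded subset instead of the whole array.
    wcps = """
    for $c in (MeanSeaSurfaceTemp)
    return encode(
        $c[Lat(40:60), Long(-10:10), ansi("2012-01-01":"2012-12-31")],
        "netcdf")
    """

    resp = requests.get(ENDPOINT, params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": wcps,
    })
    resp.raise_for_status()
    with open("subset.nc", "wb") as f:
        f.write(resp.content)  # the data half; metadata search would run alongside
    ```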

  3. ATLAS Metadata Task Force

    SciTech Connect

    ATLAS Collaboration; Costanzo, D.; Cranshaw, J.; Gadomski, S.; Jezequel, S.; Klimentov, A.; Lehmann Miotto, G.; Malon, D.; Mornacchi, G.; Nemethy, P.; Pauly, T.; von der Schmitt, H.; Barberis, D.; Gianotti, F.; Hinchliffe, I.; Mapelli, L.; Quarrie, D.; Stapnes, S.

    2007-04-04

    This document provides an overview of the metadata needed to characterize ATLAS event data at different levels (a complete run, data streams within a run, luminosity blocks within a run, and individual events).
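
    As a rough illustration of the levels listed above, the sketch below models run, stream, and luminosity-block metadata as plain data classes; all names are invented here and are not the actual ATLAS schema.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class LuminosityBlock:
        number: int
        data_quality: str          # e.g. a per-block quality flag

    @dataclass
    class Stream:
        name: str                  # a data stream within the run
        lumi_blocks: List[LuminosityBlock] = field(default_factory=list)

    @dataclass
    class Run:
        run_number: int
        streams: List[Stream] = field(default_factory=list)

    run = Run(12345, [Stream("physics", [LuminosityBlock(1, "good")])])
    ```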

  4. Metadata, PICS and Quality.

    ERIC Educational Resources Information Center

    Armstrong, C. J.

    1997-01-01

    Discusses PICS (Platform for Internet Content Selection), the Centre for Information Quality Management (CIQM), and metadata. Highlights include filtering networked information; the quality of information; and standardizing search engines. (LRW)

  5. Streamlining geospatial metadata in the Semantic Web

    NASA Astrophysics Data System (ADS)

    Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola

    2016-04-01

    In the geospatial realm, data annotation and discovery rely on a number of ad-hoc formats and protocols. These were created to enable domain-specific use cases for which generalized search is not feasible. Metadata are at the heart of the discovery process, and nevertheless they are often neglected or encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Linked Open Data (LOD) movement has so far not induced a consistent, interlinked baseline in the geospatial domain. In a nutshell, datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans and are duplicated on different systems, seldom consistently. Instead, our workflow for metadata management envisages i) editing via customizable web-based forms, ii) encoding of records in any XML application profile, iii) translation into RDF (involving the semantic lift of metadata records), and finally iv) storage of the metadata as RDF and back-translation into the original XML format with added semantics-aware features. Phase iii) hinges on relating resource metadata to RDF data structures that represent keywords from code lists and controlled vocabularies, toponyms, researchers, institutes, and virtually any description one can retrieve (or directly publish) in the LOD Cloud. In the context of a distributed Spatial Data Infrastructure (SDI) built on free and open-source software, we detail phases iii) and iv) of our workflow for the semantics-aware management of geospatial metadata.
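
    A minimal sketch of the semantic lift in phase iii): a plain keyword from an XML metadata record is related to a URI from a controlled vocabulary so the record links into the LOD Cloud. It uses rdflib; the record URI, vocabulary namespace, and concept mapping are invented for illustration.

    ```python
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCAT, DCTERMS, RDF

    # Hypothetical controlled vocabulary and keyword-to-concept mapping.
    VOCAB = Namespace("https://example.org/vocabulary/")
    keyword_to_uri = {"hydrography": VOCAB["hydrography"]}

    g = Graph()
    record = URIRef("https://example.org/metadata/dataset-42")
    g.add((record, RDF.type, DCAT.Dataset))
    g.add((record, DCTERMS.title, Literal("Lake bathymetry survey")))
    # The lift: replace the free-text keyword with a dereferenceable concept.
    g.add((record, DCTERMS.subject, keyword_to_uri["hydrography"]))

    print(g.serialize(format="turtle"))
    ```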

  6. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    PubMed Central

    2013-01-01

    Abstract Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions, such as Adobe Bridge, and online tools, such as Flickr, also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated field in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily, with automatic metadata (EXIF, XMP and IPTC) extraction and mapping. PMID:24723768
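
    The module itself is PHP/Drupal, but the extract-and-map pattern it automates can be sketched in a few lines of Python with Pillow; the file name, EXIF fields, and target form fields below are illustrative assumptions, not the module's actual mapping.

    ```python
    from PIL import Image
    from PIL.ExifTags import TAGS

    def embedded_metadata(path):
        """Return the image's EXIF tags as a name -> value dict."""
        exif = Image.open(path).getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    # A customised mapping from embedded fields to form fields (hypothetical).
    mapping = {"Artist": "creator", "Copyright": "licence"}

    meta = embedded_metadata("specimen.jpg")
    form_values = {dst: meta[src] for src, dst in mapping.items() if src in meta}
    print(form_values)
    ```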

  7. Mercury Metadata Toolset

    SciTech Connect

    2009-09-08

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury (version 3.0) was developed during 2007 and released in early 2008. This Mercury 3.0 version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS delivery of search results, and ready customization to meet the needs of the multiple projects which use Mercury. For the end users, Mercury provides a single portal to very quickly search for data and information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data.
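
    A minimal sketch of the harvest-then-index pattern described above, using OAI-PMH as one common harvesting protocol; the endpoint is a placeholder, and whether a given contributing server speaks OAI-PMH is an assumption here rather than a statement about Mercury's actual harvesters.

    ```python
    import requests
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    resp = requests.get("https://example.org/oai",            # placeholder endpoint
                        params={"verb": "ListRecords",
                                "metadataPrefix": "oai_dc"})
    root = ET.fromstring(resp.content)

    index = []  # stand-in for the centralized index
    for rec in root.iter(OAI + "record"):
        index.append({
            "title": rec.findtext(f".//{DC}title", default=""),
            "identifier": rec.findtext(f".//{DC}identifier", default=""),
        })
    print(len(index), "records harvested")
    ```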

  9. Metadata aided run selection at ATLAS

    NASA Astrophysics Data System (ADS)

    Buckingham, R. M.; Gallas, E. J.; C-L Tseng, J.; Viegas, F.; Vinek, E.; ATLAS Collaboration

    2011-12-01

    Management of the large volume of data collected by any large-scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web-based interface called "runBrowser" makes these Conditions Metadata available as a run-based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections is complete, runBrowser produces a human-readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (the event-level Metadata browser) or other ATLAS run or event processing services.

  10. A Metadata Action Language

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Clancy, Dan (Technical Monitor)

    2001-01-01

    The data management problem comprises data processing and data tracking. Data processing is the creation of new data based on existing data sources. Data tracking consists of storing metadata descriptions of available data. This paper addresses the data management problem by casting it as an AI planning problem. Actions are data-processing commands, plans are dataflow programs, and goals are metadata descriptions of desired data products. Data manipulation is simply plan generation and execution, and a key component of data tracking is inferring the effects of an observed plan. We introduce a new action language for data management domains, called ADILM. We discuss the connection between data processing and information integration and show how a language for the latter must be modified to support the former. The paper also discusses information gathering within a data-processing framework, and shows how ADILM metadata expressions are a generalization of Local Completeness.

  11. A standard for measuring metadata quality in spectral libraries

    NASA Astrophysics Data System (ADS)

    Rasaiah, B.; Jones, S. D.; Bellman, C.

    2013-12-01

    There is an urgent need within the international remote sensing community to establish a metadata standard for field spectroscopy that ensures high-quality, interoperable metadata sets that can be archived and shared efficiently within Earth observation data sharing systems. Metadata are an important component in the cataloguing and analysis of in situ spectroscopy datasets because of their central role in identifying and quantifying the quality and reliability of spectral data and the products derived from them. This paper presents approaches to measuring metadata completeness and quality in spectral libraries to determine the reliability, interoperability, and re-usability of a dataset. We explore quality parameters that meet the unique requirements of in situ spectroscopy datasets across many campaigns, and examine the challenges of ensuring that data creators, owners, and users maintain a high level of data integrity throughout the lifecycle of a dataset. Issues such as field measurement methods, instrument calibration, and data representativeness are investigated. The proposed metadata standard incorporates expert recommendations that include metadata protocols critical to all campaigns, and those that are restricted to campaigns for specific target measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. Approaches towards an operational and logistically viable implementation of a quality standard are discussed. This paper also proposes a way forward for adapting and enhancing current geospatial metadata standards to the unique requirements of field spectroscopy metadata quality.

  12. Localisation Standards and Metadata

    NASA Astrophysics Data System (ADS)

    Anastasiou, Dimitra; Vázquez, Lucia Morado

    In this paper we describe a localisation process and focus on localisation standards. Localisation standards provide a common framework for localisers, including authors, translators, engineers, and publishers. Standards with rich semantic metadata generally facilitate, accelerate, and improve the localisation process. We focus particularly on the XML Localisation Interchange File Format (XLIFF), and present our experiment and results. An HTML file, after being converted into XLIFF, travels through different commercial localisation tools, and as a result data as well as metadata are stripped away. Interoperability between file formats and applications is a key issue for localisation, and thus we stress how this can be achieved.

  13. Metadata Standards and Workflow Systems

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2012-12-01

    All modern workflow systems include mechanisms for recording inputs, outputs, and processes. These descriptions can include the details required to reproduce the workflows exactly and, in some cases, can include virtual images of the hardware and operating system. There are several ongoing and emerging standards for representing these detailed workflows, including the Open Provenance Model (OPM) and W3C PROV. At the same time, the ISO metadata standards include a simple provenance or lineage model that covers many important elements of workflows. The ISO model could play a critical role in sharing and discovering workflow information for collections, and perhaps in recording some details in granules. In order for this goal to be reached, the connections between the detailed standards and ISO must be understood and conventions for using them must be developed.
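
    As a concrete point of comparison, the snippet below records a one-step workflow in W3C PROV terms using the Python prov package; the identifiers are invented, and an ISO lineage record would carry a summarized form of the same relationships.

    ```python
    from prov.model import ProvDocument

    doc = ProvDocument()
    doc.add_namespace("ex", "http://example.org/")

    raw = doc.entity("ex:raw-granule")          # workflow input
    product = doc.entity("ex:l2-product")       # workflow output
    regrid = doc.activity("ex:regridding")      # the process itself

    doc.used(regrid, raw)
    doc.wasGeneratedBy(product, regrid)
    doc.wasDerivedFrom(product, raw)

    print(doc.get_provn())   # PROV-N rendering of the lineage
    ```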

  14. Multi-facetted Metadata - Describing datasets with different metadata schemas at the same time

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Klump, Jens; Bertelmann, Roland

    2013-04-01

    Inspired by the wish to re-use research data, a lot of work is being done to bring the data systems of the earth sciences together. Discovery metadata are disseminated to data portals to allow the building of customized indexes of catalogued dataset items. Data that were once acquired in the context of a scientific project are open for reappraisal and can now be used by scientists who were not part of the original research team. To make data re-use easier, measurement methods and measurement parameters must be documented in an application metadata schema and described in a written publication. Linking datasets to publications - as DataCite [1] does - again requires a specific metadata schema, and every new use context of the measured data may require yet another metadata schema, sharing only a subset of information with the metadata already present. To cope with this problem of metadata schema diversity in our common data repository at GFZ Potsdam, we established a solution to store file-based research data and describe these with an arbitrary number of metadata schemas. The core component of the data repository is an eSciDoc infrastructure that provides versioned container objects, called eSciDoc [2] "items". The eSciDoc content model allows assigning files to "items" and adding any number of metadata records to these "items". The eSciDoc items can be submitted, revised, and finally published, which makes the data and metadata available worldwide through the internet. GFZ Potsdam uses eSciDoc to support its scientific publishing workflow, including mechanisms for data review in peer review processes by providing temporary web links for external reviewers who do not have credentials to access the data. Based on the eSciDoc API, panMetaDocs [3] provides a web portal for data management in research projects. PanMetaDocs, which is based on panMetaWorks [4], is a PHP-based web application that allows data to be described with any XML-based schema. It uses the eSciDoc infrastructures

  15. Mining scientific data archives through metadata generation

    SciTech Connect

    Springmeyer, R.; Werner, N.; Long, J.

    1997-04-01

    Data analysis and management tools typically have not supported the documenting of data, so scientists must manually maintain all information pertaining to the context and history of their work. This metadata is critical to effective retrieval and use of the masses of archived data, yet little of it exists on-line or in an accessible format. Exploration of archived legacy data typically proceeds as a laborious process, using commands to navigate through file structures on several machines. This file-at-a-time approach needs to be replaced with a model that represents data as collections of interrelated objects. The tools that support this model must focus attention on data while hiding the complexity of the computational environment. This problem was addressed by developing a tool for exploring large amounts of data in UNIX directories via automatic generation of metadata summaries. This paper describes the model for metadata summaries of collections and the Data Miner tool for interactively traversing directories and automatically generating metadata that serves as a quick overview and index to the archived data. The summaries include thumbnail images as well as links to the data, related directories, and other metadata. Users may personalize the metadata by adding a title and abstract to the summary, which is presented as an HTML page viewed with a World Wide Web browser. We have designed summaries for 3 types of collections of data: contents of a single directory; virtual directories that represent relations between scattered files; and groups of related calculation files. By focusing on the scientists' view of the data mining task, we have developed techniques that assist in the "detective work" of mining without requiring knowledge of mundane details about formats and commands. Experiences in working with scientists to design these tools are recounted.
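
    The core of the summary-generation idea can be sketched as a directory walk that emits an HTML overview; the real Data Miner adds thumbnails, links between related directories, and user-supplied titles and abstracts, none of which are reproduced in this sketch, and the root path is illustrative.

    ```python
    import os
    import html

    def summarize(root):
        """Walk a directory tree and build a simple HTML index of its files."""
        rows = []
        for dirpath, _, filenames in os.walk(root):
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                size = os.path.getsize(path)
                rows.append(f"<li>{html.escape(path)} ({size} bytes)</li>")
        return "<html><body><ul>\n" + "\n".join(rows) + "\n</ul></body></html>"

    with open("summary.html", "w") as f:
        f.write(summarize("/data/archive"))   # illustrative path
    ```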

  16. An Enterprise Ontology Building the Bases for Automatic Metadata Generation

    NASA Astrophysics Data System (ADS)

    Thönssen, Barbara

    'Information Overload' or 'Document Deluge' is a problem enterprises and Public Administrations alike are still dealing with. Although commercial products for Enterprise Content or Records Management have been available for more than two decades, they have not caught on, especially in Small and Medium Enterprises and Public Administrations. Because of the wide range of document types and formats, full-text indexing is not sufficient, but assigning metadata manually is not feasible either. Thus, automatic, format-independent generation of metadata for (public) enterprise documents is needed. Using context to infer metadata automatically has been researched, for example, for web documents and learning objects. If (public) enterprise objects were modelled in a 'machine understandable' way, they could provide the context for automatic metadata generation. The approach introduced in this paper is to model that context (the (public) enterprise objects) in an ontology and to use the ontology to infer content-related metadata.

  17. The Metadata Coverage Index (MCI): A standardized metric for quantifying database metadata richness.

    PubMed

    Liolios, Konstantinos; Schriml, Lynn; Hirschman, Lynette; Pagani, Ioanna; Nosrat, Bahador; Sterk, Peter; White, Owen; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; Kyrpides, Nikos C; Field, Dawn

    2012-07-30

    Variability in the extent of the descriptions of data ('metadata') held in public repositories forces users to assess the quality of records individually, which rapidly becomes impractical. Scoring records on the richness of their description provides a simple, objective proxy measure for quality that enables filtering in support of downstream analysis. Pivotally, such descriptions should spur on improvements. Here, we introduce such a measure, the 'Metadata Coverage Index' (MCI): the percentage of available fields actually filled in a record or description. MCI scores can be calculated across a database, for individual records, or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example, to filter, rank, or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics for funded projects; or to quantify the value added by curation. Here we demonstrate the utility of MCI scores using metadata from the Genomes Online Database (GOLD), including records compliant with the 'Minimum Information about a Genome Sequence' (MIGS) standard developed by the Genomic Standards Consortium. We discuss challenges and address further applications of MCI scores: to show improvements in annotation quality over time, to inform the work of standards bodies and repository providers on the usability and popularity of their products, and to assess and credit the work of curators. Such an index provides a step towards putting metadata capture practices and, in the future, standards compliance into a quantitative and objective framework.
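
    The metric itself is a direct ratio, as in this minimal sketch; the field names and record are invented, not actual GOLD/MIGS fields.

    ```python
    def mci(record, available_fields):
        """Percentage of available fields actually filled in a record."""
        filled = sum(1 for f in available_fields
                     if record.get(f) not in (None, "", []))
        return 100.0 * filled / len(available_fields)

    fields = ["organism", "habitat", "collection_date", "sequencing_method"]
    record = {"organism": "E. coli", "habitat": "soil",
              "collection_date": "", "sequencing_method": None}
    print(f"MCI = {mci(record, fields):.0f}%")   # MCI = 50%
    ```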

  18. The Year of Metadata.

    ERIC Educational Resources Information Center

    Wason, Tom; Griffin, Steve

    1997-01-01

    Users of the World Wide Web have recognized the need for better search strategies and mechanisms. This article discusses the Educom NLII Instructional Management System (IMS) Project specifications and Java-based software for metadata that will provide information about Web-based materials not currently obtainable with traditional search engines.…

  19. A Pan-European and Cross-Discipline Metadata Portal

    NASA Astrophysics Data System (ADS)

    Widmann, Heinrich; Thiemann, Hannes; Lautenschlager, Michael

    2014-05-01

    In recent years, significant investments have been made to create a pan-European e-infrastructure supporting multiple and diverse research communities. This led to the establishment of the community-driven European Data Infrastructure (EUDAT) project, which implements services to tackle the specific challenges of international and interdisciplinary research data management. The EUDAT metadata service B2FIND plays a central role in this context as a repository and search portal for the diverse metadata collected from heterogeneous sources. For this we built up a comprehensive joint metadata catalogue and an open data portal, and we offer support for new communities interested in publishing their data within EUDAT. The implemented metadata ingestion workflow consists of three steps. First, the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards the raw metadata records are converted and mapped to unified key-value dictionaries. The semantic mapping of the non-uniform, community-specific metadata to homogeneously structured datasets is the most subtle and challenging task. Finally, the mapped records are uploaded as datasets to the catalogue and displayed in the portal. The homogenisation of the different community-specific data models and vocabularies enables not only the uniform presentation of these datasets as tables of field-value pairs but also faceted, spatial, and temporal search in the B2FIND metadata portal. Furthermore, the service provides transparent access to the scientific data objects through the references given in the metadata. We present here the functionality and features of the B2FIND service and give an outlook on further developments.
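
    The second step, mapping raw community-specific records onto unified key-value dictionaries, might look like the sketch below; the source field names and target facets are invented, not B2FIND's actual mapping rules.

    ```python
    def map_record(raw):
        """Map one harvested record onto the unified facet dictionary."""
        return {
            "title": raw.get("dc:title") or raw.get("name", ""),
            "creator": raw.get("dc:creator", ""),
            "temporal_coverage": raw.get("time_range", ""),
            "spatial_coverage": raw.get("bbox", ""),
        }

    raw = {"dc:title": "Soil moisture, site A", "time_range": "2010/2012"}
    print(map_record(raw))
    ```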

  20. Comparative Study of Metadata Standards and Metadata Repositories

    NASA Astrophysics Data System (ADS)

    Pahuja, Gunjan

    2011-12-01

    A lot of work is being accomplished in the national and international standards communities to reach a consensus on standardizing metadata and repositories for organizing the metadata. Descriptions of several metadata standards and their importance to statistical agencies are provided in this paper. Existing repositories based on these standards help to promote interoperability between organizations, systems, and people. Repositories are vehicles for collecting, managing, comparing, reusing, and disseminating the designs, specifications, procedures, and outputs of systems, e.g. statistical surveys.

  1. Publishing NASA Metadata as Linked Open Data for Semantic Mashups

    NASA Astrophysics Data System (ADS)

    Manipon, G. M.; Wilson, B. D.; Hua, H.

    2013-12-01

    Data providers are now publishing more metadata in more interoperable forms, e.g. Atom/RSS 'casts', as Linked Open Data (LOD), or as ISO Metadata records. A major effort on the part of NASA's Earth Science Data and Information System (ESDIS) project is the aggregation of metadata that enables greater data interoperability among scientific data sets regardless of source or application. Both the Earth Observing System (EOS) ClearingHOuse (ECHO) and the Global Change Master Directory (GCMD) repositories contain metadata records for NASA (and other) datasets and provided services. These records contain typical fields for each dataset (or software service) such as the source, creation date, cognizant institution, related access URL's, and domain & variable keywords to enable discovery. Under a NASA ACCESS grant, we demonstrated how to publish the ECHO and GCMD dataset and services metadata as LOD in the RDF format. Both sets of metadata are now queryable at SPARQL endpoints and available for integration into 'semantic mashups' in the browser. It is straightforward to transform sets of XML metadata, including ISO 19139, into simple RDF and then later refine and improve the RDF predicates by reusing known namespaces such as Dublin Core, GeoRSS, etc. All scientific metadata should be part of the LOD world. In addition, we developed an 'instant' drill-down and browse interface that provides faceted navigation so that the user can discover and explore the 25,000 datasets and 3000 services. Figure 1 shows the first version of the interface for 'instant drill down' into the ECHO datasets. The available facets and the free-text search box appear in the left panel, and the instantly updated results for the dataset search appear in the right panel. The user can constrain the value of a metadata facet simply by clicking on a word (or phrase) in the 'word cloud' of values for each facet. The display section for each dataset includes the important metadata fields, a full
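
    Once the metadata sit behind a SPARQL endpoint, they can be queried programmatically, e.g. with SPARQLWrapper as sketched below; the endpoint URL and the choice of predicate are assumptions for illustration, not the actual ECHO/GCMD deployment.

    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://example.org/sparql")   # placeholder endpoint
    sparql.setQuery("""
        PREFIX dc: <http://purl.org/dc/elements/1.1/>
        SELECT ?dataset ?title WHERE {
            ?dataset dc:title ?title .
            FILTER regex(?title, "sea surface temperature", "i")
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["dataset"]["value"], "-", row["title"]["value"])
    ```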

  2. Publishing NASA Metadata as Linked Open Data for Semantic Mashups

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook

    2014-05-01

    Data providers are now publishing more metadata in more interoperable forms, e.g. Atom or RSS 'casts', as Linked Open Data (LOD), or as ISO Metadata records. A major effort on the part of NASA's Earth Science Data and Information System (ESDIS) project is the aggregation of metadata that enables greater data interoperability among scientific data sets regardless of source or application. Both the Earth Observing System (EOS) ClearingHOuse (ECHO) and the Global Change Master Directory (GCMD) repositories contain metadata records for NASA (and other) datasets and provided services. These records contain typical fields for each dataset (or software service) such as the source, creation date, cognizant institution, related access URL's, and domain and variable keywords to enable discovery. Under a NASA ACCESS grant, we demonstrated how to publish the ECHO and GCMD dataset and services metadata as LOD in the RDF format. Both sets of metadata are now queryable at SPARQL endpoints and available for integration into "semantic mashups" in the browser. It is straightforward to reformat sets of XML metadata, including ISO, into simple RDF and then later refine and improve the RDF predicates by reusing known namespaces such as Dublin Core, GeoRSS, etc. All scientific metadata should be part of the LOD world. In addition, we developed an "instant" drill-down and browse interface that provides faceted navigation so that the user can discover and explore the 25,000 datasets and 3000 services. The available facets and the free-text search box appear in the left panel, and the instantly updated results for the dataset search appear in the right panel. The user can constrain the value of a metadata facet simply by clicking on a word (or phrase) in the "word cloud" of values for each facet. The display section for each dataset includes the important metadata fields, a full description of the dataset, potentially some related URL's, and a "search" button that points to an Open

  3. Partnerships To Mine Unexploited Sources of Metadata.

    ERIC Educational Resources Information Center

    Reynolds, Regina Romano

    This paper discusses the metadata created for other purposes as a potential source of bibliographic data. The first section addresses collecting metadata by means of templates, including the Nordic Metadata Project's Dublin Core Metadata Template. The second section considers potential partnerships for re-purposing metadata for bibliographic use,…

  4. Metadata Standards in Theory and Practice: The Human in the Loop

    NASA Astrophysics Data System (ADS)

    Yarmey, L.; Starkweather, S.

    2013-12-01

    Metadata standards are meant to enable interoperability through common, well-defined structures and are a foundation for broader cyberinfrastructure efforts. Standards are central to emerging technologies such as metadata brokering tools supporting distributed data search. However, metadata standards in practice are often poor indicators of standardized, readily interoperable metadata. The International Arctic Systems for Observing the Atmosphere (IASOA) data portal provides discovery and access tools for aggregated datasets from ten long-term international Arctic atmospheric observing stations. The Advanced Cooperative Arctic Data and Information Service (ACADIS) Arctic Data Explorer brokers metadata to provide distributed data search across Arctic repositories. Both the IASOA data portal and the Arctic Data Explorer rely on metadata and metadata standards to support value-add services. Challenges have included: translating between different standards despite existing crosswalks, diverging implementation practices of the same standard across communities, changing metadata practices over time and the associated backwards compatibility, reconciling metadata created by data providers with standards, lack of community-accepted definitions for key terms (e.g. 'project'), integrating controlled vocabularies, and others. Metadata record 'validity' or compliance with a standard has been insufficient for interoperability. To overcome these challenges, both projects committed significant work to integrate and offer services over already 'standards compliant' metadata. Both efforts have shown that the 'human-in-the-loop' is still required to fulfill the lofty theoretical promises of metadata standards. In this talk, we 1) summarize the real-world experiences of two data discovery portals working with metadata in standard form, and 2) offer lessons learned for others who work with and rely on metadata and metadata standards.

  5. Cytometry metadata in XML

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communications in Medicine (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: 1) The part of MIFlowCyt that describes the Experimental Overview, including the specimen and substantial parts of several other major elements, has been implemented as CytometryML XML schemas (www.cytometryml.org). 2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or a report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata as well as the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and, together with the binary data, to be stored in an EPUB container. This will facilitate formatting, data mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrate a publication's adherence to the MIFlowCyt standard and promote interoperability, and should also result in the textual and numeric data being published using web technology without any change in composition.

  6. Federating Metadata Catalogs

    NASA Astrophysics Data System (ADS)

    Baru, C.; Lin, K.

    2009-04-01

    The Geosciences Network project (www.geongrid.org) has been developing cyberinfrastructure for data sharing in the Earth Science community based on a service-oriented architecture. The project defines a standard "software stack", which includes a standardized set of software modules and corresponding service interfaces. The system employs Grid certificates for distributed user authentication. The GEON Portal provides online access to these services via a set of portlets. This service-oriented approach has enabled the GEON network to easily expand to new sites and deploy the same infrastructure in new projects. To facilitate interoperation with other distributed geoinformatics environments, service standards are being defined and implemented for catalog services and federated search across distributed catalogs. The need arises because there may be multiple metadata catalogs in a distributed system, for example, for each institution, agency, geographic region, and/or country. Ideally, a geoinformatics user should be able to search across all such catalogs by making a single search request. In this paper, we describe our implementation of such a search capability across federated metadata catalogs in the GEON service-oriented architecture. The GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, thus, can be embedded in any software application. The need for federated catalogs in GEON arises because (i) GEON collaborators at the University of Hyderabad, India have deployed their own catalog, as part of the iGEON-India effort, to register information about local resources for broader access across the network, (ii) GEON collaborators in the GEO Grid (Global Earth Observations Grid) project at AIST, Japan have implemented a catalog for their ASTER data products, and (iii) we have recently deployed a search service to access all data products from the EarthScope project in the US
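
    The fan-out-and-merge shape of such a federated search can be sketched as below; the catalog endpoints, query parameter, and JSON response layout are placeholders, not the actual GEON service interfaces.

    ```python
    import concurrent.futures
    import requests

    CATALOGS = [                      # placeholder catalog endpoints
        "https://catalog-a.example.org/search",
        "https://catalog-b.example.org/search",
    ]

    def query(url, keyword):
        r = requests.get(url, params={"q": keyword}, timeout=10)
        r.raise_for_status()
        return r.json().get("results", [])

    def federated_search(keyword):
        """Send one request to every catalog in parallel and merge the hits."""
        with concurrent.futures.ThreadPoolExecutor() as pool:
            parts = pool.map(lambda url: query(url, keyword), CATALOGS)
        return [hit for part in parts for hit in part]

    print(len(federated_search("basalt")), "hits")
    ```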

  7. Metadata Realities for Cyberinfrastructure: Data Authors as Metadata Creators

    ERIC Educational Resources Information Center

    Mayernik, Matthew Stephen

    2011-01-01

    As digital data creation technologies become more prevalent, data and metadata management are necessary to make data available, usable, sharable, and storable. Researchers in many scientific settings, however, have little experience or expertise in data and metadata management. In this dissertation, I explore the everyday data and metadata…

  8. Java Metadata Facility

    SciTech Connect

    Buttler, D J

    2008-03-06

    The Java Metadata Facility is introduced by Java Specification Request (JSR) 175 [1], and incorporated into the Java language specification [2] in version 1.5 of the language. The specification allows annotations on Java program elements: classes, interfaces, methods, and fields. Annotations give programmers a uniform way to add metadata to program elements that can be used by code checkers, code generators, or other compile-time or runtime components. Annotations are defined by annotation types. These are defined the same way as interfaces, but with the symbol @ preceding the interface keyword. There are additional restrictions on defining annotation types: (1) They cannot be generic; (2) They cannot extend other annotation types or interfaces; (3) Methods cannot have any parameters; (4) Methods cannot have type parameters; (5) Methods cannot throw exceptions; and (6) The return type of methods of an annotation type must be a primitive, a String, a Class, an annotation type, or an array, where the type of the array is restricted to one of the four allowed types. See [2] for additional restrictions and syntax. The methods of an annotation type define the elements that may be used to parameterize the annotation in code. Annotation types may have default values for any of their elements. For example, an annotation that specifies a defect report could initialize an element defining the defect outcome to 'submitted'. Annotations may also have zero elements. This could be used to indicate serializability for a class (as opposed to the current Serializable interface).

  9. The New Online Metadata Editor for Generating Structured Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Shrestha, B.; Palanisamy, G.; Hook, L.; Killeffer, T.; Boden, T.; Cook, R. B.; Zolly, L.; Hutchison, V.; Frame, M. T.; Cialella, A. T.; Lazer, K.

    2014-12-01

    Nobody is better suited to "describe" data than the scientist who created them. This "description" of the data is called metadata. In general terms, metadata represent the who, what, when, where, why, and how of a dataset. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about their research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [1] [2]. Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. How is OME helping big data centers like the ORNL DAAC? The ORNL DAAC is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers managed by the ESDIS Project. The ORNL DAAC archives data produced by NASA's Terrestrial Ecology Program. The DAAC provides data and information relevant to biogeochemical dynamics, ecological data, and environmental processes, critical for understanding the dynamics relating to the biological components of the Earth's environment. Typically, the data produced, archived, and analyzed are at a scale of multiple petabytes, which makes data discoverability very challenging. Without proper metadata associated with the data, it is difficult to find the data you are looking for and equally difficult to use and understand them. OME will allow data centers like the ORNL DAAC to produce meaningful, high-quality, standards-based, descriptive information about their data products, in turn helping with data discoverability and

  10. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    PubMed Central

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    Abstract. The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata. PMID:26158117
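
    The metadata-only access pattern that MSD provides at the study level can be illustrated at the single-file level with pydicom, which can stop parsing before the pixel data; the file name is illustrative.

    ```python
    import pydicom

    # Read only the metadata: stop_before_pixels skips the bulk pixel data.
    ds = pydicom.dcmread("image.dcm", stop_before_pixels=True)
    print(ds.PatientID, ds.StudyDate)

    # A deidentification pass would rewrite attributes like these:
    ds.PatientName = "ANONYMOUS"
    ds.PatientID = "000000"
    ```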

  11. Metadata: A user's view

    SciTech Connect

    Bretherton, F.P.; Singley, P.T.

    1994-12-31

    An analysis is presented of the uses of metadata from four aspects of database operations: (1) search, query, retrieval; (2) ingest, quality control, processing; (3) application-to-application transfer; and (4) storage, archive. Typical degrees of database functionality, ranging from simple file retrieval to interdisciplinary global query with metadatabase-user dialog and involving many distributed autonomous databases, are ranked in approximate order of increasing sophistication of the required knowledge representation. An architecture is outlined for implementing such functionality in many different disciplinary domains utilizing a variety of off-the-shelf database management subsystems and processor software, each specialized to a different abstract data model.

  12. Developing the CUAHSI Metadata Profile

    NASA Astrophysics Data System (ADS)

    Piasecki, M.; Bermudez, L.; Islam, S.; Beran, B.

    2004-12-01

    The Hydrologic Information System (HIS) of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has as one of its goals to improve access to large-volume, high-quality, and heterogeneous hydrologic data sets. This will be attained in part by adopting a community metadata profile to achieve consistent descriptions that will facilitate data discovery. However, common standards are quite general in nature and typically lack domain-specific vocabularies, complicating the adoption of standards for specific communities. We will show and demonstrate the problems encountered in the process of adopting ISO standards to create a CUAHSI metadata profile. The final schema is expressed in a simple metadata format, the Metadata Template File (MTF), to leverage metadata annotation/viewer tools already developed by the San Diego Supercomputer Center. The steps performed to create an MTF starting from ISO 19115:2003 are the following: 1) creation of ontologies using the Web Ontology Language (OWL) for ISO 19115:2003 and related ISO/TC 211 documents; 2) conceptualization in OWL of related hydrologic vocabularies such as NASA's Global Change Master Directory and units from the Hydrologic Handbook; 3) definition of the CUAHSI profile by importing and extending the previous ontologies; 4) explicit creation of the CUAHSI core set; 5) export of the core set to MTF; 6) definition of metadata blocks for arbitrary digital objects (e.g. time series vs. static spatial data) using ISO's methodology for feature cataloguing; and 7) export of metadata blocks to MTF.

  13. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a medical center, the emphasis may shift from the generation of images to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of the medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. In addition, it also focuses on the evaluation of image metadata content and metadata quality management.

  14. HIS Central and the Hydrologic Metadata Catalog

    NASA Astrophysics Data System (ADS)

    Whitenack, T.; Zaslavsky, I.; Valentine, D. W.

    2008-12-01

    The CUAHSI Hydrologic Information System project maintains a comprehensive workflow for publishing hydrologic observations data and registering them to the common Hydrologic Metadata Catalog. Once the data are loaded into a database instance conformant with the CUAHSI HIS Observations Data Model (ODM), the user configures the ODM web service template to point to the new database. After this, the hydrologic data become available via the standard CUAHSI HIS web service interface, which includes both data discovery (GetSites, GetVariables, GetSiteInfo, GetVariableInfo) and data retrieval (GetValues) methods. The observations data can then be further exposed via the global semantics-based search engine called Hydroseek. To register the published observations networks with the global search engine, users can now use the HIS Central application (new in HIS 1.1). With this online application, WaterML-compliant web services can be submitted to the online catalog of data services, along with network metadata and a desired network symbology. Registering services with the HIS Central application triggers a harvester, which uses the services to retrieve additional network metadata from the underlying ODM (information about stations, variables, and periods of record). The next step in the HIS Central application is mapping variable names from the newly registered network to the terms used in the global search ontology. Once these steps are completed, the new observations network is added to the map and becomes available for searching and querying. The number of observations networks registered to the Hydrologic Metadata Catalog at SDSC is constantly growing. At the time of submission, the catalog contains 51 registered networks, with an estimated 1.7 million stations.

  15. Making Metadata Better with CMR and MMT

    NASA Technical Reports Server (NTRS)

    Gilman, Jason Arthur; Shum, Dana

    2016-01-01

    Ensuring complete, consistent, and high-quality metadata is a challenge for metadata providers and curators. The CMR and MMT systems give providers and curators options to build in metadata quality from the start, and also to assess and improve the quality of already existing metadata.

  16. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    ERIC Educational Resources Information Center

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  17. Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses

    PubMed Central

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics – technology for comprehensive detection of small molecules in an organism – lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata, such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data, and permission to attach related information to the metadata, provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently. Metabolonote and related tools are available free of cost at http

  18. ONEMercury: Towards Automatic Annotation of Earth Science Metadata

    NASA Astrophysics Data System (ADS)

    Tuarob, S.; Pouchard, L. C.; Noy, N.; Horsburgh, J. S.; Palanisamy, G.

    2012-12-01

    Earth sciences have become more data-intensive, requiring access to heterogeneous data collected from multiple places, times, and thematic scales. For example, research on climate change may involve exploring and analyzing observational data such as the migration of animals and temperature shifts across the earth, as well as various model-observation inter-comparison studies. Recently, DataONE, a federated data network built to facilitate access to and preservation of environmental and ecological data, has come into existence. ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for discovering and accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple data repositories and makes it searchable via a common search interface built upon cutting-edge search engine technology, allowing users to interact with the system, intelligently filter the search results on the fly, and fetch the data from distributed data sources. Linking data from heterogeneous sources always has a cost. A problem that ONEMercury faces is the varying levels of annotation in the harvested metadata records. Poorly annotated records tend to be missed during the search process as they lack meaningful keywords. Furthermore, such records are not compatible with the advanced search functionality offered by ONEMercury, which requires that a metadata record be semantically annotated. The explosion in the number of metadata records harvested from an increasing number of data repositories makes it impossible to annotate the harvested records manually, underscoring the need for a tool capable of automatically annotating poorly curated metadata records. In this paper, we propose a topic-model (TM) based approach for automatic metadata annotation. Our approach mines topics in the set of well annotated records and suggests keywords for poorly annotated records based on topic similarity. We utilize the
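
    A toy sketch of the proposed approach: fit topics on well-annotated records, assign a poorly annotated record to its dominant topic, and suggest that topic's top terms as keywords. The corpus, model size, and keyword count are illustrative, not the paper's actual configuration.

    ```python
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    annotated = [                     # stand-ins for well-annotated records
        "soil moisture precipitation drought",
        "ocean temperature salinity current",
        "ice sheet glacier mass balance",
    ]
    vec = CountVectorizer()
    X = vec.fit_transform(annotated)
    lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

    # Assign a poorly annotated record to its dominant topic.
    poor = vec.transform(["unlabeled record about ocean salinity"])
    topic = int(np.argmax(lda.transform(poor)))

    # Suggest the matched topic's top terms as keywords.
    terms = vec.get_feature_names_out()
    top = [terms[i] for i in np.argsort(lda.components_[topic])[::-1][:3]]
    print("suggested keywords:", top)
    ```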

  19. Separation of metadata and bulkdata to speed DICOM tag morphing

    NASA Astrophysics Data System (ADS)

    Ismail, Mahmoud; Ning, Yu; Philbin, James

    2014-03-01

    Most medical images are archived and transmitted using the DICOM format. The DICOM information model combines image pixel data and associated metadata into a single object. It is not possible to access the metadata separately from the pixel data. However, there are important use cases that only need access to metadata, and the DICOM format increases the running time of those use cases. Tag morphing is an example of one such use case. Tag or attribute morphing includes insertion, deletion, or modification of one or more of the metadata attributes in a study. It is typically used for order reconciliation on study acquisition, or to localize the Issuer of Patient ID and the Patient ID attributes when data from one Medical Record Number (MRN) domain is transferred to or displayed in a different domain. This work uses the Multi-Series DICOM (MSD) format to reduce the time required for tag morphing. The MSD format separates metadata from pixel data and at the same time eliminates duplicate attributes. MSD stores studies in two files rather than in the many single-frame files typical of DICOM. The first file contains the de-duplicated study metadata, and the second contains pixel data and other bulkdata. A set of experiments was performed in which metadata updates were applied to a set of DICOM studies stored in both the traditional Single Frame DICOM (SFD) format and the MSD format. The time required to perform the updates was recorded for each format. The results show that tag morphing is, on average, more than eight times faster in the MSD format.

  20. Metadata based mediator generation

    SciTech Connect

    Critchlow, T

    1998-03-01

    Mediators are a critical component of any data warehouse, particularly one utilizing partially materialized views; they transform data from its source format to the warehouse representation while resolving semantic and syntactic conflicts. The close relationship between mediators and databases requires a mediator to be updated whenever an associated schema is modified. This maintenance may be a significant undertaking if a warehouse integrates several dynamic data sources. However, failure to quickly perform these updates significantly reduces the reliability of the warehouse because queries do not have access to the most current data. This may result in incorrect or misleading responses, and reduce user confidence in the warehouse. This paper describes a metadata framework, and associated software, designed to automate a significant portion of the mediator generation task and thereby reduce the effort involved in adapting to schema changes. By allowing the DBA to concentrate on identifying the modifications at a high level, instead of reprogramming the mediator, turnaround time is reduced and warehouse reliability is improved.
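
    The generation idea can be sketched as a declarative field mapping compiled into a mediator function, so that a schema change means editing the mapping metadata rather than reprogramming the mediator; the field names and transformations below are invented.

    ```python
    # Declarative mapping metadata: warehouse field -> rule over the source row.
    mapping = {
        "patient_id": lambda src: src["pid"],
        "visit_date": lambda src: src["date"].replace("/", "-"),
    }

    def make_mediator(mapping):
        """Compile mapping metadata into a source-to-warehouse transformer."""
        def mediate(source_row):
            return {dst: rule(source_row) for dst, rule in mapping.items()}
        return mediate

    mediator = make_mediator(mapping)
    print(mediator({"pid": 17, "date": "1998/03/01"}))
    ```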

  1. Semantic Representation of Temporal Metadata in a Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Wang, H.; Rozell, E. A.; West, P.; Zednik, S.; Fox, P. A.

    2011-12-01

    The Virtual Solar-Terrestrial Observatory (VSTO) Portal at vsto.org provides a set of guided workflows to implement use cases designed for solar-terrestrial physics and upper atmospheric science. Semantics are used in VSTO to model abstract instrument and parameter classifications, providing data access to users who lack extended domain-specific vocabularies. The temporal restrictions used in the workflows are currently implemented via RESTful calls to a remote system with access to a SQL-based metadata catalog. In order to provide a greater range of temporal reasoning and search capabilities for the user, we propose an alternative architecture design for the VSTO Portal, in which the temporal metadata is integrated in the domain ontology. We achieve this integration by converting temporal metadata from the headers of raw data files into RDF using the OWL-Time vocabulary. This presentation covers our work with semantic temporal metadata, including our representation using OWL-Time, issues that we have faced in persistent storage, and the performance and scalability of semantic query. We conclude with a discussion of the significance semantic temporal metadata has in virtual observatories.
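
    A small rdflib sketch of the conversion described, expressing one granule's time coverage as an OWL-Time interval (the URIs and timestamps are made up, and the property choices are one plausible reading of the vocabulary):

        # Sketch: represent a data file's temporal coverage in RDF using
        # the OWL-Time vocabulary via rdflib. URIs and timestamps are
        # illustrative only.
        from rdflib import Graph, Literal, Namespace, RDF
        from rdflib.namespace import XSD

        TIME = Namespace("http://www.w3.org/2006/time#")
        EX = Namespace("http://example.org/vsto/")

        g = Graph()
        g.bind("time", TIME)

        interval = EX["granule42/coverage"]
        begin, end = EX["granule42/begin"], EX["granule42/end"]

        g.add((interval, RDF.type, TIME.Interval))
        g.add((interval, TIME.hasBeginning, begin))
        g.add((interval, TIME.hasEnd, end))
        for inst, stamp in [(begin, "2011-01-01T00:00:00Z"),
                            (end, "2011-01-02T00:00:00Z")]:
            g.add((inst, RDF.type, TIME.Instant))
            g.add((inst, TIME.inXSDDateTime,
                   Literal(stamp, datatype=XSD.dateTime)))

        print(g.serialize(format="turtle"))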

  2. Recording and tribological properties of CoNi magnetic films on chemically textured aluminum rigid disk substrates (abstract)

    NASA Astrophysics Data System (ADS)

    Tsuya, N.; Tokushima, T.; Hirayama, Y.; Oka, Y.

    1991-04-01

    In a rigid disk a very smooth surface is desirable for high-density recording, but such a surface tends to stick to the magnetic head. To avoid this difficulty, mechanical texturing (M/T) is widely used. Unfortunately, very low flying heights cannot be achieved with M/T. To improve the flying height, the authors have developed a new texturing process using anodically oxidized aluminum substrates, named chemical texturing (C/T).1 Aluminum anodic oxide films have a regularly arranged honeycomb structure, and uniform, roughness-controlled surfaces were formed by the etching process of chemical texturing. In the present research, the relation between the recording and tribological properties and the etching conditions was investigated. On the C/T substrates, Cr, a longitudinal magnetic CoNi layer, and C were sputtered in inline sputtering equipment. The surface of the sputtered layer was flat (Ra<5 Å) and uniform. Magnetic and electrical properties (coercive force, squareness, overwrite, modulation, and so on) were examined. In spite of the isotropy of the disk surfaces, the modulation caused by the inline sputtering was not observed, and a high coercive force of 1200 Oe was obtained. Tribological properties (glide height, CSS, friction) were measured. Glide height was lower than 0.1 μm, and CSS endurance exceeded 30 000 cycles. In semi-pilot plant production, thousands of C/T disks were prepared. The yield of disks having fewer than 5 missing and/or extra pulses was higher than 95%.

  3. Fresh Wounds: Metadata and Usability Lessons from building the Earthdata Search Client

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Murphy, K. J.; Baynes, K.

    2014-12-01

    Data discovery and accessibility are frequent topics in science conferences but are usually discussed in an abstract XML schema kind-of way. In the course of designing and building the NASA Earthdata Search Client, a "concept-car" discovery client for the new Common Metadata Repository (CMR) and NASA Earthdata, we learned important lessons about usability from user studies and our actual use of science metadata. In this talk we challenge the community with the issues we ran into: the critical usability stumbling blocks for even seasoned researchers, "bug reports" from users that were ultimately usability problems in metadata, the challenges and questions that arise from incorporating "visual metadata", and the state of data access services. We intend to show that high quality metadata and real human usability factors are essential to making critical data accessible.

  4. Metadata-Centric Discovery Service

    NASA Astrophysics Data System (ADS)

    Huang, T.; Chung, N. T.; Gangl, M. E.; Armstrong, E. M.

    2011-12-01

    It is data about data. It is the information describing a picture without looking at the picture. Through the years, the Earth Science community has sought better methods to describe science artifacts in order to improve the quality and efficiency of information exchange. One purpose is to provide information that guides users in identifying the science artifacts of interest to them. The NASA Distributed Active Archive Centers (DAACs) are the building blocks of a data-centric federation, designed to process and archive data from NASA's Earth Observation missions, to distribute those data, and to provide specialized services to users. The Physical Oceanography Distributed Active Archive Center (PO.DAAC), at the Jet Propulsion Laboratory, archives and distributes science artifacts pertaining to the physical state of the ocean. Part of its high-performance operational Data Management and Archive System (DMAS) is a fast data discovery RESTful web service called the Oceanographic Common Search Interface (OCSI). The web service searches and delivers metadata on all data holdings within PO.DAAC. Currently OCSI supports metadata standards such as ISO 19115, OpenSearch, GCMD, and FGDC, with new metadata standards still being added. While we continue to seek the silver bullet of metadata standards, the Earth Science community in fact uses a variety of standards owing to the specific needs of its users and systems. This presentation focuses on the architecture behind OCSI as a reference implementation for building a metadata-centric discovery service.
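
    A client call against a RESTful discovery service of this kind might look like the following sketch; the endpoint URL and parameter names are placeholders, not the actual OCSI interface:

        # Sketch of a client querying a RESTful metadata discovery
        # service. The endpoint and parameter names are placeholders,
        # not the real OCSI interface.
        import requests

        BASE = "https://podaac.example.org/search/dataset"  # placeholder

        resp = requests.get(BASE, params={
            "keyword": "sea surface temperature",
            "format": "iso-19115",   # requested metadata standard
            "itemsPerPage": 10,
        }, timeout=30)
        resp.raise_for_status()
        print(resp.text[:500])  # first part of the returned metadata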

  5. THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA

    SciTech Connect

    Devarakonda, Ranjeet; Shrestha, Biva; Palanisamy, Giri; Hook, Leslie A; Killeffer, Terri S; Boden, Thomas A; Cook, Robert B; Zolly, Lisa; Hutchison, Viv; Frame, Mike; Cialella, Alice; Lazer, Kathy

    2014-01-01

    Nobody is better suited to describe data than the scientist who created it. This description of the data is called metadata. In general terms, metadata represents the who, what, when, where, why, and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about the research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA), and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed or on the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files. As an example, an NGEE Arctic scientist uses OME to register
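
    The form-to-XML step can be sketched with the Python standard library; the element names below are generic placeholders rather than the exact schema OME emits:

        # Sketch: turn web-form field values into an XML metadata record
        # using the standard library. Element names are generic
        # placeholders, not the exact schema OME produces.
        import xml.etree.ElementTree as ET

        form = {
            "title": "Soil temperature, NGEE Arctic site A",
            "investigator": "A. Researcher",
            "start_date": "2013-06-01",
            "end_date": "2013-09-30",
        }

        root = ET.Element("metadata")
        for field, value in form.items():
            ET.SubElement(root, field).text = value

        ET.ElementTree(root).write("record.xml", encoding="utf-8",
                                   xml_declaration=True)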

  6. Italian Polar Metadata System

    NASA Astrophysics Data System (ADS)

    Longo, S.; Nativi, S.; Leone, C.; Migliorini, S.; Mazari Villanova, L.

    2012-04-01

    The Italian Antarctic Research Programme (PNRA) is a government initiative funding and coordinating scientific research activities in polar regions. PNRA manages two scientific stations in Antarctica - Concordia (Dome C), jointly operated with the French Polar Institute "Paul Emile Victor", and Mario Zucchelli (Terra Nova Bay, Southern Victoria Land). In addition, the National Research Council of Italy (CNR) manages one scientific station in the Arctic (Ny-Alesund, Svalbard Islands), named Dirigibile Italia. PNRA started in 1985 with the first Italian expedition in Antarctica. Since then, each research group has collected data autonomously and in different formats, covering biology and medicine, geodetic observations, geophysics, geology, glaciology, physics and atmospheric chemistry, earth-sun relationships and astrophysics, oceanography and the marine environment, chemical contamination, law and geographic science, technology, and multi- and interdisciplinary research. In 2010, the Italian Ministry of Research assigned the scientific coordination of the Programme to CNR, which is in charge of the management and sharing of the scientific results carried out in the framework of the PNRA. Therefore, CNR is establishing a new distributed cyber(e)-infrastructure to collect, manage, publish, and share polar research results. This is a service-based infrastructure building on Web technologies to implement resource (i.e., data, services, and documents) discovery, access, and visualization; in addition, semantic-enabled functionalities will be provided. The architecture applies the "System of Systems" principles to build incrementally on the existing systems by supplementing, but not supplanting, their mandates and governance arrangements. This keeps the existing capacities as autonomous as possible. This cyber(e)-infrastructure implements multi-disciplinary interoperability following

  7. A Generic Metadata Editor Supporting System Using Drupal CMS

    NASA Astrophysics Data System (ADS)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Metadata Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain, such as the Biological Profile of the FGDC CSDGM, or profiles for geopolitical regions, such as the European Profile or the North American Profile of the ISO standards. It is therefore desirable to have a software foundation that supports metadata creation and editing for multiple standards and profiles without reinventing the wheel. We have developed a generic, flexible software system to do just that: to facilitate the support of multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal inter-dependencies on other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator uses the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as an XML blob. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on the metadata record before and after submission. It is trivial to store the metadata record as an actual XML file.

  8. Interoperable Solar Data and Metadata via LISIRD 3

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2015-12-01

    LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space-based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. Because LISIRD 3 incorporates a semantically enabled metadata repository, users see current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, missions, and PIs. The database also enables creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable. The database also opens the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on-demand reformatting of data and timestamps, subsetting and aggregation, and other server-side functionality via a RESTful, OPeNDAP-compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front-end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently, the number and variety of datasets offered by LISIRD has grown to encompass several dozen, with many more to come. This poster will discuss the design and implementation of LISIRD 3, including tools used, capabilities enabled, and issues encountered.

  9. Orthopyroxene as a recorder of lunar crust evolution: An ion microprobe investigation of Mg-suite norites. [Abstract only]

    NASA Technical Reports Server (NTRS)

    Papike, J. J.; Fowler, G. W.; Shearer, C. K.

    1994-01-01

    The lunar Mg suite, which includes dunites, troctolites, and norites, could make up 20-30% of the Moon's crust down to a depth of 60 km. The remainder is largely anorthositic. This report focuses on norites because we have found that the chemical characteristics of orthopyroxene are effective recorders of their parental melt compositions. Many of the samples representing the Mg suite are small and unrepresentative. In addition, they are cumulates and thus are difficult to study by whole-rock techniques. Therefore, we decided to study these rocks by SIMS techniques to analyze a suite of trace elements in orthopyroxene. The 12 norite samples were selected from a recent compilation by Warren, who attempted to select the best candidate samples from the standpoint of their pristine character. Our present database includes greater than 300 superior electron microprobe (EMP) analyses and greater than 50 secondary ion mass spectrometry (SIMS) analyses for 8 rare earth elements (REE), Zr, Y, and Sr. The Mg#s for the parental melts calculated from Mg#s in orthopyroxene show that most melts have Mg#s in the range of 0.36-0.60. This compares with a range of Mg#s for lunar volcanic picritic glass beads of 0.4-0.68. Therefore, although the cumulate whole-rock compositions of the Mg suite can be extremely magnesian, the calculated parental melts are not anomalously high in Mg. A chemical characteristic of the Mg-suite norites that is more difficult to explain is the high KREEP content of the calculated parental melts. The REE contents for the calculated norite parental melts match or exceed the high-K KREEP component of Warren. Therefore, mixing of a KREEP component and a picritic melt cannot, by itself, explain the high estimated REE contents of the melts parental to norites. Advanced crystallization following KREEP incorporation, especially of plagioclase, may also be required.

  10. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    USGS Publications Warehouse

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

    Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata, the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcGIS Desktop to facilitate a semi-automated workflow to create and update metadata records in ESRI's 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple graphical user interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations a significant amount of time and cost by facilitating the development of metadata compliant with the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata for ESRI software users. A working version of the tool is now available for ESRI ArcGIS Desktop, versions 10.0, 10.1, and 10.2 (downloadable at http://www.sciencebase.gov/metadatawizard).

  11. Abstract Painting

    ERIC Educational Resources Information Center

    Henkes, Robert

    1978-01-01

    Abstract art provokes numerous interpretations, and as many misunderstandings. The adolescent reaction is no exception. The procedure described here can help the student to understand the abstract from at least one direction. (Author/RK)

  12. Omics Metadata Management Software (OMMS)

    PubMed Central

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation, and the sequence production methods required for downstream data processing, comparison, interpretation, sharing, and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. The provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible and extensible, and is easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov. PMID:26124554

  13. Metadata improvements driving new tools and services at a NASA data center

    NASA Astrophysics Data System (ADS)

    Moroni, D. F.; Hausman, J.; Foti, G.; Armstrong, E. M.

    2011-12-01

    The NASA Physical Oceanography DAAC (PO.DAAC) is responsible for distributing and maintaining satellite-derived oceanographic data from a number of NASA and non-NASA missions for the physical disciplines of ocean winds, sea surface temperature, ocean topography, and gravity. Currently its holdings consist of over 600 datasets with a data archive in excess of 200 terabytes. The PO.DAAC has recently embarked on a metadata quality and completeness project to migrate, update, and improve metadata records for over 300 public datasets. An interactive database management tool has been developed to allow data scientists to enter, update, and maintain metadata records. This tool communicates directly with PO.DAAC's Data Management and Archiving System (DMAS), which serves as the new archival and distribution backbone as well as a permanent repository of dataset- and granule-level metadata. Although we will briefly discuss the tool, the more important ramifications are the ability to now expose, propagate, and leverage the metadata in a number of ways. First, the metadata are exposed through a faceted and free-text search interface directly from Drupal-based PO.DAAC web pages, allowing for quick browsing and data discovery, especially by "drilling" through the various facet levels that organize datasets by time/space resolution, processing level, sensor, measurement type, etc. Furthermore, the metadata can now be exposed through web services to produce metadata records in a number of different formats such as FGDC and ISO 19115, or potentially propagated to visualization and subsetting tools and other discovery interfaces. The fundamental concept is that the metadata form the essential bridge between the user and the tool or discovery mechanism for a broad range of ocean earth science data records.

  14. Discovering Physical Samples Through Identifiers, Metadata, and Brokering

    NASA Astrophysics Data System (ADS)

    Arctur, D. K.; Hills, D. J.; Jenkyns, R.

    2015-12-01

    Physical samples, particularly in the geosciences, are key to understanding the Earth system, its history, and its evolution. Our record of the Earth as captured by physical samples is difficult to explain and mine for understanding, due to incomplete, disconnected, and evolving metadata content. This is further complicated by differing ways of classifying, cataloguing, publishing, and searching the metadata, especially when specimens do not fit neatly into a single domain—for example, fossils cross disciplinary boundaries (mineral and biological). Sometimes even the fundamental classification systems evolve, such as the geological time scale, triggering daunting processes to update existing specimen databases. Increasingly, we need to consider ways of leveraging permanent, unique identifiers, as well as advancements in metadata publishing that link digital records with physical samples in a robust, adaptive way. An NSF EarthCube Research Coordination Network (RCN) called the Internet of Samples (iSamples) is now working to bridge the metadata schemas for biological and geological domains. We are leveraging the International Geo Sample Number (IGSN) that provides a versatile system of registering physical samples, and working to harmonize this with the DataCite schema for Digital Object Identifiers (DOI). A brokering approach for linking disparate catalogues and classification systems could help scale discovery and access to the many large collections now being managed (sometimes millions of specimens per collection). This presentation is about our community building efforts, research directions, and insights to date.

  15. Metadata management for high content screening in OMERO.

    PubMed

    Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R

    2016-03-01

    High content screening (HCS) experiments create a classic data management challenge: multiple, large sets of heterogeneous structured and unstructured data that must be integrated and linked to produce a set of "final" results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked, and made accessible for users, scientists, collaborators, and, where appropriate, the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org.

  17. Metadata management for high content screening in OMERO

    PubMed Central

    Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R.

    2016-01-01

    High content screening (HCS) experiments create a classic data management challenge—multiple, large sets of heterogeneous structured and unstructured data that must be integrated and linked to produce a set of “final” results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked, and made accessible for users, scientists, collaborators, and, where appropriate, the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. PMID:26476368

  18. Serving Fisheries and Ocean Metadata to Communities Around the World

    NASA Technical Reports Server (NTRS)

    Meaux, Melanie

    2006-01-01

    NASA's Global Change Master Directory (GCMD) assists the oceanographic community in the discovery, access, and sharing of scientific data by serving on-line fisheries and ocean metadata to users around the globe. As of January 2006, the directory holds more than 16,300 Earth Science data descriptions and over 1,300 services descriptions. Of these, nearly 4,000 unique ocean-related metadata records are available to the public, with many having direct links to the data. In 2005, the GCMD averaged over 5 million hits a month, with nearly a half million unique hosts for the year. Through the GCMD portal (http://gcmd.nasa.gov/), users can search vast and growing quantities of data and services using controlled keywords, free-text searches, or a combination of both. Users may now refine a search based on topic, location, instrument, platform, project, data center, and spatial and temporal coverage. The directory also offers data holders a means to post and search their data through customized portals, i.e., online customized subset metadata directories. The discovery metadata standard used is the Directory Interchange Format (DIF), adopted in 1994. This format has evolved to accommodate other national and international standards such as FGDC and ISO 19115. Users can submit metadata through easy-to-use online and offline authoring tools. The directory, which also serves as a coordinating node of the International Directory Network (IDN), has been active at the international, regional, and national level for many years through its involvement with the Committee on Earth Observation Satellites (CEOS), federal agencies (such as NASA, NOAA, and USGS), international agencies (such as IOC/IODE, UN, and JAXA), and partnerships (such as ESIP, IOOS/DMAC, GOSIC, GLOBEC, OBIS, and GoMODP), sharing experience and knowledge related to metadata and/or data management and interoperability.

  19. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that the metadata server has handled, so the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, which also improves metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or otherwise entered a nonoperational state.
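
    A minimal sketch of the client-side mechanism, assuming an invented request/acknowledgement protocol: each sent metadata request is retained in client memory until the MDS confirms durability, so unacknowledged requests can be replayed after a crash:

        # Sketch of client-side request backup for log-less MDS
        # recovery. The request format and replay hook are invented.
        class MetadataClient:
            def __init__(self):
                self.backup = {}      # request id -> request payload
                self.next_id = 0

            def send(self, op, path, **attrs):
                """Back up the request in memory, then (conceptually)
                transmit it to the MDS."""
                rid = self.next_id
                self.next_id += 1
                self.backup[rid] = {"op": op, "path": path, **attrs}
                # ... network send to the MDS would happen here ...
                return rid

            def on_checkpointed(self, rid):
                """MDS signalled the change reached nonvolatile storage."""
                self.backup.pop(rid, None)

            def replay_for_recovery(self, resend):
                """After an MDS crash, resend every pending request."""
                for rid in sorted(self.backup):
                    resend(self.backup[rid])

        client = MetadataClient()
        client.send("create", "/data/run42", mode=0o644)
        client.replay_for_recovery(print)  # resends the pending create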

  20. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that the metadata server has handled, so the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, which also improves metadata-processing performance. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or otherwise entered a nonoperational state. PMID:24892093

  1. Finding Atmospheric Composition (AC) Metadata

    NASA Technical Reports Server (NTRS)

    Strub, Richard F.; Falke, Stefan; Fiakowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg

    2015-01-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2, and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, web services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where the data might be aggregated. This poster reports our experience of the excellence, variety, and challenges we encountered. Conclusions:
    1. The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and web services, rather than any GUI usability improvements, due to the large amount of data in their catalogs.
    2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services.
    3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR.
    4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once.
    5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication. (This is currently being addressed)
    6. Most (if not all

  2. EOS ODL Metadata On-line Viewer

    NASA Astrophysics Data System (ADS)

    Yang, J.; Rabi, M.; Bane, B.; Ullman, R.

    2002-12-01

    We have recently developed and deployed an EOS ODL metadata on-line viewer. The EOS ODL metadata viewer is a web server that takes: 1) an EOS metadata file in Object Description Language (ODL), and 2) parameters, such as which metadata to view and what style of display to use, and returns an HTML or XML document displaying the requested metadata in the requested style. This tool was developed to address widespread complaints from the science community that the EOS Data and Information System (EOSDIS) metadata files in ODL are difficult to read, by allowing users to upload and view an ODL metadata file in different styles using a web browser. Users can choose to view all the metadata or part of the metadata, such as Collection metadata, Granule metadata, or Unsupported metadata. Choices of display styles include: 1) Web: a mouseable display with tabs and turn-down menus; 2) Outline: formatted and colored text, suitable for printing; 3) Generic: simple indented text, a direct representation of the underlying ODL metadata; and 4) None: no stylesheet is applied and the XML generated by the converter is returned directly. Not all display styles are implemented for all the metadata choices. For example, the Web style is only implemented for Collection and Granule metadata groups with known attribute fields, but not for Unsupported, Other, and All metadata. The overall strategy of the ODL viewer is to transform an ODL metadata file into viewable HTML in two steps. The first step converts the ODL metadata file to XML using a Java-based parser/translator called ODL2XML. The second step transforms the XML to HTML using stylesheets. Both operations are done on the server side. This allows a lot of flexibility in the final result, and is highly portable across platforms. Perl CGI behind the Apache web server is used to run the Java ODL2XML and then run the results through an XSLT processor. The EOS ODL viewer can be accessed from either a PC or a Mac using Internet
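
    The XML-to-HTML styling step can be sketched with lxml; the metadata fragment and the stylesheet below are toy stand-ins for ODL2XML's output and the viewer's real stylesheets:

        # Sketch of the XML -> HTML styling step using lxml's XSLT
        # support. The metadata fragment and stylesheet are toy
        # stand-ins, not the viewer's actual schema or styles.
        from lxml import etree

        xml_doc = etree.XML(
            "<Granule><ShortName>MOD021KM</ShortName></Granule>")

        xslt_doc = etree.XML("""\
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/Granule">
            <html><body><b><xsl:value-of select="ShortName"/></b></body></html>
          </xsl:template>
        </xsl:stylesheet>""")

        transform = etree.XSLT(xslt_doc)
        html = transform(xml_doc)
        print(str(html))  # e.g. <html><body><b>MOD021KM</b></body></html>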

  3. MPEG-7: standard metadata for multimedia content

    NASA Astrophysics Data System (ADS)

    Chang, Wo

    2005-08-01

    The eXtensible Markup Language (XML) metadata technology for describing media content has emerged as a dominant means of making media searchable for both human and machine consumption. To realize this promise, many online Web applications are pushing this concept to its fullest potential. However, a good metadata model does require a robust standardization effort so that the metadata content and its structure can reach maximum usage across various applications. An effective media content description technology should also use standard metadata structures, especially when dealing with varied multimedia content. A new metadata technology called MPEG-7 content description has emerged from the ISO MPEG standards body with the charter of defining standard metadata to describe audiovisual content. This paper gives an overview of MPEG-7 technology and the impact it can bring to the next generation of multimedia indexing and retrieval applications.

  4. Evaluating the privacy properties of telephone metadata.

    PubMed

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C

    2016-05-17

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922

  5. Evaluating the privacy properties of telephone metadata

    PubMed Central

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C.

    2016-01-01

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922

  6. Metadata Objects for Linking the Environmental Sciences (MOLES)

    NASA Astrophysics Data System (ADS)

    Lawrence, B.; Cox, S.; Ventouras, S.

    2009-04-01

    MOLES is an information model that provides a framework to support interdisciplinary contextual metadata describing instruments, observation platforms, activities, calibrations, and other aspects of the environment associated with observations and simulations. MOLES has been designed as a bridge between discovery metadata - the conventional stuff of catalogues - and the sort of metadata which scientists traditionally store alongside data within files (and, more rarely, databases) - "header files" and the like. MOLES can also be thought of as both a metadata structure in its own right and a framework for describing and recording the relationships between aspects of the context described in other metadata formats (such as SensorML and the upcoming Metafor Common Information Model). MOLES was originally conceived during the first NERC DataGrid project, in 2002, and is now at V3 in 2009. V3 differs from previous versions in many significant ways: 1) it has been designed in ISO 19103-compliant UML, and an XML schema implementation is delivered via an automated implementation of the ISO 19118/19136 model-driven architecture; 2) it is designed to operate in a Web 2.0 environment, with both an Atom serialisation and an OGC Web Feature Service (WFS) friendly XML serialisation; 3) it leverages the OGC Observations and Measurements specification, complements a range of GML application schemas (in particular GeoSciML and CSML), and supports export of a subset of information in ISO 19115/19139 compliance. A software implementation exploiting MOLES V3 is under development. This will be seeded with hundreds of entries available from the MOLES V2 service currently deployed in the STFC Centre for Environmental Data Archival.

  7. Serving Fisheries and Ocean Metadata to Communities Around the World

    NASA Technical Reports Server (NTRS)

    Meaux, Melanie F.

    2007-01-01

    NASA's Global Change Master Directory (GCMD) assists the oceanographic community in the discovery, access, and sharing of scientific data by serving on-line fisheries and ocean metadata to users around the globe. As of January 2006, the directory holds more than 16,300 Earth Science data descriptions and over 1,300 services descriptions. Of these, nearly 4,000 unique ocean-related metadata records are available to the public, with many having direct links to the data. In 2005, the GCMD averaged over 5 million hits a month, with nearly a half million unique hosts for the year. Through the GCMD portal (http://gcmd.nasa.gov/), users can search vast and growing quantities of data and services using controlled keywords, free-text searches, or a combination of both. Users may now refine a search based on topic, location, instrument, platform, project, data center, spatial and temporal coverage, and data resolution for selected datasets. The directory also offers data holders a means to advertise and search their data through customized portals, which are subset views of the directory. The discovery metadata standard used is the Directory Interchange Format (DIF), adopted in 1988. This format has evolved to accommodate other national and international standards such as FGDC and ISO 19115. Users can submit metadata through easy-to-use online and offline authoring tools. The directory, which also serves as the International Directory Network (IDN), has been providing its services and sharing its experience and knowledge of metadata at the international, national, regional, and local level for many years. Active partners include the Committee on Earth Observation Satellites (CEOS), federal agencies (such as NASA, NOAA, and USGS), international agencies (such as IOC/IODE, UN, and JAXA) and organizations (such as ESIP, IOOS/DMAC, GOSIC, GLOBEC, OBIS, and GoMODP).

  8. RESTful Access to NOAA's Space Weather Data and Metadata

    NASA Astrophysics Data System (ADS)

    Kihn, E. A.; Elespuru, P. R.; Zhizhin, M.

    2010-12-01

    The Space Physics Interactive Data Resource (SPIDR) (http://spidr.ngdc.noaa.gov) is a web-based application for searching, accessing, and interacting with NOAA's space-related data holdings. SPIDR serves as one of several interfaces to the National Geophysical Data Center's archived digital holdings. The SPIDR system, while successful in delivering data and visualization to clients, was also found to be limited in its ability to interact with other programs, to integrate with alternate workflows, and to support multiple user interfaces (UIs). As such, in 2006 the SPIDR development team implemented a SOAP-based interface to SPIDR through which outside developers could make use of the resource. It was our finding, however, that despite our best efforts at documentation, the interface remained elusive to many users. That is to say, a few strong programmers were able to format and use the XML messaging, but in general it did not make the data more accessible. In response, SPIDR has been extended to include a REST-style web services API for all time series data. This provides direct, synchronous, simple programmatic access to over 200 individual parameters representing space weather data directly from the NGDC archive. In addition to the data service, SPIDR has implemented a metadata service that allows users to get Federal Geographic Data Committee (FGDC)-style metadata records describing all available data and stations. This metadata will migrate to the NASA Space Physics Archive Search and Extract (SPASE) style in future versions in order to provide further detail. The combination of data, metadata, and visualization tools available through SPIDR makes it a powerful virtual observatory (VO). When combined with a content-rich metadata system, we have experienced vastly greater user response and usage. This talk will present details of the development as well as lessons learned from 10 years of SPIDR development.
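
    A REST-style time-series fetch might look like the sketch below; the endpoint path and parameter names are illustrative guesses, not SPIDR's documented API:

        # Sketch of a REST-style time-series request. The endpoint path
        # and parameter names are illustrative, not SPIDR's actual API.
        import requests

        url = "http://spidr.example.gov/api/timeseries/kp_index"  # placeholder
        resp = requests.get(url, params={
            "dateFrom": "2010-01-01",
            "dateTo": "2010-01-31",
            "format": "json",
        }, timeout=30)
        resp.raise_for_status()

        for point in resp.json()[:5]:   # assumes a JSON list of samples
            print(point)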

  9. Logic programming and metadata specifications

    NASA Technical Reports Server (NTRS)

    Lopez, Antonio M., Jr.; Saacks, Marguerite E.

    1992-01-01

    Artificial intelligence (AI) ideas and techniques are critical to the development of intelligent information systems that will be used to collect, manipulate, and retrieve the vast amounts of space data produced by 'Missions to Planet Earth.' Natural language processing, inference, and expert systems are at the core of this space application of AI. This paper presents logic programming as an AI tool that can support inference (the ability to draw conclusions from a set of complicated and interrelated facts). It reports on the use of logic programming in the study of metadata specifications for a small problem domain of airborne sensors, and the dataset characteristics and pointers that are needed for data access.

  10. Research Abstracts.

    ERIC Educational Resources Information Center

    Plotnick, Eric

    2001-01-01

    Presents research abstracts from the ERIC Clearinghouse on Information and Technology. Topics include: classroom communication apprehension and distance education; outcomes of a distance-delivered science course; the NASA/Kennedy Space Center Virtual Science Mentor program; survey of traditional and distance learning higher education members;…

  11. Abstract Constructions.

    ERIC Educational Resources Information Center

    Pietropola, Anne

    1998-01-01

    Describes a lesson designed to culminate a year of eighth-grade art classes in which students explore elements of design and space by creating 3-D abstract constructions. Outlines the process of using foam board and markers to create various shapes and optical effects. (DSK)

  12. Integrated Array/Metadata Analytics

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

    Data comes in various forms and types, and integration usually presents a problem that is often simply ignored and solved with ad hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive, or non-existent, in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.
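
    Issuing an integrated array/metadata query from a client might look like the following sketch; the JDBC wiring and, in particular, the SQL/MDA syntax shown are assumptions for illustration, not the exact grammar implemented in ASQLDB:

        # Sketch of issuing an integrated array/metadata query from
        # Python. The connection setup and the SQL/MDA syntax below are
        # illustrative assumptions, not ASQLDB's exact grammar.
        import jaydebeapi  # generic JDBC bridge

        conn = jaydebeapi.connect("org.hsqldb.jdbc.JDBCDriver",
                                  "jdbc:hsqldb:hsql://localhost/asqldb",
                                  ["SA", ""])
        cur = conn.cursor()

        # Hypothetical SQL/MDA: the WHERE clause filters relational
        # metadata rows, while the array expression subsets and
        # averages the linked array column.
        cur.execute("""
            SELECT id, avg_cells(scene[100:200, 100:200])
            FROM   landsat_scenes
            WHERE  acquired >= DATE '2014-01-01'
        """)
        print(cur.fetchall())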

  13. Better Living Through Metadata: Examining Archive Usage

    NASA Astrophysics Data System (ADS)

    Becker, G.; Winkelman, S.; Rots, A.

    2013-10-01

    The primary purpose of an observatory's archive is to provide access to the data through various interfaces. User interactions with the archive are recorded in server logs, which can be used to answer basic questions like: Who has downloaded dataset X? When did she do this? Which tools did she use? The answers to questions like these fill in patterns of data access (e.g., how many times dataset X has been downloaded in the past three years). Analysis of server logs provides metrics of archive usage and provides feedback on interface use which can be used to guide future interface development. The Chandra X-ray Observatory is fortunate in that a database to track data access and downloads has been continuously recording such transactions for years; however, it is overdue for an update. We will detail changes we hope to effect and the differences the changes may make to our usage metadata picture. We plan to gather more information about the geographic location of users without compromising privacy; create improved archive statistics; and track and assess the impact of web “crawlers” and other scripted access methods on the archive. With the improvements to our download tracking we hope to gain a better understanding of the dissemination of Chandra's data; how effectively it is being done; and perhaps discover ideas for new services.
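
    As an illustration of the kind of analysis such server logs support, the sketch below counts downloads per dataset from access-log lines; the log format and the observation-ID pattern are invented:

        # Sketch: derive a simple archive-usage metric (downloads per
        # dataset) from web server access logs. The log format and the
        # dataset-id pattern in the URL are invented for illustration.
        import re
        from collections import Counter

        LOG_LINE = re.compile(r'"GET /archive/download/(?P<obsid>\d+) HTTP')

        def downloads_per_dataset(log_path):
            counts = Counter()
            with open(log_path) as fh:
                for line in fh:
                    m = LOG_LINE.search(line)
                    if m:
                        counts[m.group("obsid")] += 1
            return counts

        # e.g. downloads_per_dataset("access.log").most_common(10)
        # would list the ten most-downloaded observation IDs.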

  14. Data Model and Relational Database Design for Highway Runoff Water-Quality Metadata

    USGS Publications Warehouse

    Granato, Gregory E.; Tessler, Steven

    2001-01-01

    A national highway and urban runoff water-quality metadatabase was developed by the U.S. Geological Survey in cooperation with the Federal Highway Administration as part of the National Highway Runoff Water-Quality Data and Methodology Synthesis (NDAMS). The database was designed to catalog available literature and to document results of the synthesis in a format that would facilitate current and future research on highway and urban runoff. This report documents the design and implementation of the NDAMS relational database, which was designed to provide a catalog of available information and the results of an assessment of the available data. All the citations and the metadata collected during the review process are presented in a stratified metadatabase that contains citations for relevant publications, abstracts (or previa), and report-review metadata for a sample of selected reports that document results of runoff quality investigations. The database is referred to as a metadatabase because it contains information about available data sets rather than a record of the original data. The database contains the metadata needed to evaluate and characterize how valid, current, complete, comparable, and technically defensible published and available information may be when evaluated for application to the different data-quality objectives as defined by decision makers. This is a relational database, in that all information is ultimately linked to a given citation in the catalog of available reports. The main database file contains 86 tables consisting of 29 data tables, 11 association tables, and 46 domain tables. The data tables all link to a particular citation, and each data table is focused on one aspect of the information collected in the literature search and the evaluation of available information. This database is implemented in the Microsoft (MS) Access database software because it is widely used within and outside of government and is familiar to many
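
    The citation-centric layout described (data tables and association tables keyed to a citation, domain tables constraining permissible values) can be sketched as follows; the table and column names are invented, not the actual NDAMS schema:

        # Sketch of the citation-centric relational pattern: data and
        # association tables ultimately key back to a citation, and
        # domain tables constrain permissible values. Names are
        # invented, not the actual NDAMS schema.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE citation (
            citation_id INTEGER PRIMARY KEY,
            title TEXT, year INTEGER);

        CREATE TABLE matrix_domain (           -- domain table
            matrix_code TEXT PRIMARY KEY);     -- e.g. 'runoff'

        CREATE TABLE sample_data (             -- data table
            sample_id INTEGER PRIMARY KEY,
            citation_id INTEGER REFERENCES citation,
            matrix_code TEXT REFERENCES matrix_domain);

        CREATE TABLE citation_site (           -- association table
            citation_id INTEGER REFERENCES citation,
            site_id INTEGER);
        """)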

  15. A Dynamic Metadata Community Profile for CUAHSI

    NASA Astrophysics Data System (ADS)

    Bermudez, L.; Piasecki, M.

    2004-12-01

    Common metadata standards typically lack domain-specific elements, have limited extensibility, and do not always resolve the semantic heterogeneities that can occur in annotations. To facilitate the use and extension of metadata specifications, a methodology called Dynamic Community Profiles (DCP) is presented. The methodology allows element definitions to be overwritten and core elements to be specified as metadata tree paths. DCP uses the Web Ontology Language (OWL), the Resource Description Framework (RDF), and XML syntax to formalize specifications and to create controlled vocabularies in ontologies, which enhances interoperability. This methodology was employed to create a metadata profile for the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). The profile was created by extending the ISO 19115:2003 geographic metadata standard and restricting the permissible values of some elements. The values used as controlled vocabularies were inferred from hydrologic keywords found in the Global Change Master Directory (GCMD) and from measurement units found in the Hydrologic Handbook. Also, a core metadata set for CUAHSI was formally expressed as tree paths, containing the ISO core set plus additional elements. Finally, a tool was developed to test the extension and to allow creation of metadata instances in RDF/XML that conform to the profile. This tool is also able to export the core elements to other schema formats, such as Metadata Template Files (MTF).

  16. Mapping Entry Vocabulary to Unfamiliar Metadata Vocabularies.

    ERIC Educational Resources Information Center

    Buckland, Michael; Chen, Aitao; Chen, Hui-Min; Kim, Youngin; Lam, Byron; Larson, Ray; Norgard, Barbara; Purat, Jacek; Gey, Frederic

    1999-01-01

    Reports on work at the University of California, Berkeley, on the design and development of English-language indices to metadata vocabularies. Discusses the significance of unfamiliar metadata and describes the Entry Vocabulary Module which helps searchers to be more effective and increases the return on the original investment in generating…

  17. Leveraging Metadata to Create Better Web Services

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2012-01-01

    Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…

  18. Digital Initiatives and Metadata Use in Thailand

    ERIC Educational Resources Information Center

    SuKantarat, Wichada

    2008-01-01

    Purpose: This paper aims to provide information about various digital initiatives in libraries in Thailand and especially use of Dublin Core metadata in cataloguing digitized objects in academic and government digital databases. Design/methodology/approach: The author began researching metadata use in Thailand in 2003 and 2004 while on sabbatical…

  19. A Metadata-Rich File System

    SciTech Connect

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first-class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPath-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations, and superior performance on normal file metadata operations.
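The graph data model is the crux: files and their directed relationships are both first-class, queryable objects. The toy Python stand-in below mimics that idea; it is not QFS code, and Quasar's actual syntax is not reproduced:

```python
# Toy stand-in for a metadata-rich file system's graph data model: files
# and directed, attributed relationships are both first-class objects.
# This mimics the idea behind QFS; it is not QFS code.
files = {
    "raw/scan042.dat":    {"instrument": "CT", "patient": "p17"},
    "derived/seg042.img": {"algorithm": "watershed-v2"},
}
links = [
    ("derived/seg042.img", "derivedFrom", "raw/scan042.dat"),
]

def query(attr, value):
    """Find files by user metadata, then follow their relationships."""
    hits = [f for f, md in files.items() if md.get(attr) == value]
    related = [(src, rel, dst) for src, rel, dst in links
               for f in hits if src == f or dst == f]
    return hits, related

print(query("patient", "p17"))
```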

  20. Experiences Using a Meta-Data Based Integration Infrastructure

    SciTech Connect

    Critchlow, T.; Masick, R.; Slezak, T.

    1999-07-08

    A data warehouse that presents data from many of the genomics community data sources in a consistent, intuitive fashion has long been a goal of bioinformatics. Unfortunately, it is one of the goals that has not yet been achieved. One of the major problems encountered by previous attempts has been the high cost of creating and maintaining a warehouse in a dynamic environment. In this abstract we have outlined a meta-data based approach to integrating data sources that begins to address this problem. We have used this infrastructure to successfully integrate new sources into an existing warehouse in substantially less time than would have traditionally been required--and the resulting mediators are more maintainable than the traditionally defined ones would have been. In the final paper, we will describe in greater detail both our architecture and our experiences using this framework. In particular, we will outline the new, XML based representation of the meta-data, describe how the mediator generator works, and highlight other potential uses for the meta-data.

  1. METADATA REGISTRY, ISO/IEC 11179

    SciTech Connect

    Pon, R K; Buttler, D J

    2008-01-03

ISO/IEC-11179 is an international standard that documents the standardization and registration of metadata to make data understandable and shareable. This standardization and registration allows for easier locating, retrieving, and transmitting of data from disparate databases. The standard defines how metadata are conceptually modeled and how they are shared among parties, but does not define how data is physically represented as bits and bytes. The standard consists of six parts. Part 1 provides a high-level overview of the standard and defines the basic element of a metadata registry - a data element. Part 2 defines the procedures for registering classification schemes and classifying administered items in a metadata registry (MDR). Part 3 specifies the structure of an MDR. Part 4 specifies requirements and recommendations for constructing definitions for data and metadata. Part 5 defines how administered items are named and identified. Part 6 defines how administered items are registered and assigned an identifier.
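A loose sketch of the standard's basic building block may help: a data element ties a named concept to a value domain, with the other parts governing its definition, naming, and registration. The field names below paraphrase the standard's concepts and are not its normative metamodel:

```python
from dataclasses import dataclass, field

# Loose sketch of ISO/IEC 11179's basic building block: a data element
# pairs a concept with a representation (value domain). Field names
# paraphrase the standard's concepts; this is not the normative model.
@dataclass
class ValueDomain:
    datatype: str
    permissible_values: list = field(default_factory=list)

@dataclass
class DataElement:
    identifier: str          # Part 5: naming and identification
    name: str
    definition: str          # Part 4: rules for constructing definitions
    value_domain: ValueDomain
    registration_status: str = "Recorded"  # Part 6: registration

sex_code = DataElement(
    identifier="urn:example:de:1234",
    name="Person Sex Code",
    definition="A code representing the sex of a person.",
    value_domain=ValueDomain("string", ["M", "F", "U"]),
)
```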

  2. Collection Metadata Solutions for Digital Library Applications

    NASA Technical Reports Server (NTRS)

    Hill, Linda L.; Janee, Greg; Dolin, Ron; Frew, James; Larsgaard, Mary

    1999-01-01

    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemas that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata "registers" the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system.

  3. Incorporating ISO Metadata Using HDF Product Designer

    NASA Technical Reports Server (NTRS)

    Jelenak, Aleksandar; Kozimor, John; Habermann, Ted

    2016-01-01

The need to store increasing amounts of metadata of varying complexity in HDF5 files is rapidly outgrowing the capabilities of the Earth science metadata conventions currently in use. Until now, data producers have had little choice but to devise ad hoc solutions to this challenge. Such solutions, in turn, pose a wide range of issues for data managers, distributors, and, ultimately, data users. The HDF Group is experimenting with a novel approach: using ISO 19115 metadata objects as a catch-all container for all the metadata that cannot be fitted into the current Earth science data conventions. This presentation will showcase how the HDF Product Designer software can help data producers include various ISO metadata objects in their products.

  4. EUDAT B2FIND : A Cross-Discipline Metadata Service and Discovery Portal

    NASA Astrophysics Data System (ADS)

    Widmann, Heinrich; Thiemann, Hannes

    2016-04-01

The European Data Infrastructure (EUDAT) project aims at a pan-European environment that supports a variety of research communities and individuals in managing the rising tide of scientific data through advanced data management technologies. This led to the establishment of the community-driven Collaborative Data Infrastructure, which implements common data services and storage resources to tackle the basic requirements and the specific challenges of international and interdisciplinary research data management. The metadata service B2FIND plays a central role in this context by providing a simple and user-friendly discovery portal to find research data collections stored in EUDAT data centers or in other repositories. For this we store the diverse metadata collected from heterogeneous sources in a comprehensive joint metadata catalogue and make them searchable in an open data portal. The implemented metadata ingestion workflow consists of three steps. First the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards the raw metadata records are converted and mapped to unified key-value dictionaries as specified by the B2FIND schema. The semantic mapping of the non-uniform, community-specific metadata to homogeneously structured datasets is hereby the most subtle and challenging task. To assure and improve the quality of the metadata, this mapping process is accompanied by iterative and intense exchange with the community representatives, usage of controlled vocabularies and community-specific ontologies, and formal and semantic validation. Finally the mapped and checked records are uploaded as datasets to the catalogue, which is based on the open source data portal software CKAN. CKAN provides a rich RESTful JSON API and uses SOLR for dataset indexing, enabling users to query and search the catalogue. The homogenization of the community-specific data models and vocabularies enables not
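The last two ingestion steps, mapping and upload, can be sketched as follows. CKAN's package_create action is a documented part of its API, but the endpoint URL, API key, and the B2FIND-style field names here are placeholders:

```python
import requests

# Sketch of the last two ingestion steps: map a harvested raw record to a
# flat key-value dictionary, then upload it via CKAN's action API.
# The endpoint URL, API key, and field names are placeholders.
def map_record(raw):
    """Map community-specific metadata to a unified key-value dict."""
    return {
        "name": raw["id"].lower().replace("/", "-"),
        "title": raw.get("Title", "untitled"),
        "notes": raw.get("Description", ""),
        "tags": [{"name": k} for k in raw.get("Keywords", [])],
    }

def upload(dataset, ckan_url="https://ckan.example.org", api_key="XXX"):
    """Create the dataset through CKAN's package_create action."""
    r = requests.post(
        f"{ckan_url}/api/3/action/package_create",
        json=dataset,
        headers={"Authorization": api_key},
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["result"]

raw = {"id": "comm/DS-1", "Title": "Ocean pH series", "Keywords": ["pH"]}
# upload(map_record(raw))  # requires a real CKAN instance and API key
```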

  5. Metadata-driven Ad Hoc Query of Patient Data

    PubMed Central

    Deshpande, Aniruddha M.; Brandt, Cynthia; Nadkarni, Prakash M.

    2002-01-01

    Clinical study data management systems (CSDMSs) have many similarities to clinical patient record systems (CPRSs) in their focus on recording clinical parameters. Requirements for ad hoc query interfaces for both systems would therefore appear to be highly similar. However, a clinical study is concerned primarily with collective responses of groups of subjects to standardized therapeutic interventions for the same underlying clinical condition. The parameters that are recorded in CSDMSs tend to be more diverse than those required for patient management in non-research settings, because of the greater emphasis on questionnaires for which responses to each question are recorded separately. The differences between CSDMSs and CPRSs are reflected in the metadata that support the respective systems' operation, and need to be reflected in the query interfaces. The authors describe major revisions of their previously described CSDMS ad hoc query interface to meet CSDMS needs more fully, as well as its porting to a Web-based platform. PMID:12087118

  6. MPEG-7 meta-data enhanced encoder system for embedded systems

    NASA Astrophysics Data System (ADS)

    Asai, Kohtaro; Nishikawa, Hirofumi; Kudo, Daiki; Divakaran, Ajay

    2004-01-01

We describe an MPEG-7 Meta-Data enhanced Audio-Visual Encoder system that targets DVD recorders. We extract features in the compressed domain from both video and audio, which allows us to add the meta-data extraction without altering the hardware architecture of the encoder core. Our feature extraction algorithms are simple, and thus implementable through a simple combination of software and hardware on the integrated DVD chip. The primary application of the meta-data is video summarization, which enables rapid browsing of stored video by the end user. The simplicity of our summarization and feature extraction algorithms enables incorporation of the powerful functionality of smart content navigation through content summarization into the DVD recorder at a low cost.

  7. eXtended MetaData Registry

    2006-10-25

    The purpose of the eXtended MetaData Registry (XMDR) prototype is to demonstrate the feasibility and utility of constructing an extended metadata registry, i.e., one which encompasses richer classification support, facilities for including terminologies, and better support for formal specification of semantics. The prototype registry will also serve as a reference implementation for the revised versions of ISO 11179, Parts 2 and 3 to help guide production implementations.

  8. Science friction: data, metadata, and collaboration.

    PubMed

    Edwards, Paul N; Mayernik, Matthew S; Batcheller, Archer L; Bowker, Geoffrey C; Borgman, Christine L

    2011-10-01

    When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata--usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets--serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable, descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.

  9. ncISO Facilitating Metadata and Scientific Data Discovery

    NASA Astrophysics Data System (ADS)

    Neufeld, D.; Habermann, T.

    2011-12-01

Increasing the usability and availability of climate and oceanographic datasets for environmental research requires improved metadata and tools to rapidly locate and access relevant information for an area of interest. Because of the distributed nature of most environmental geospatial data, a common approach is to use catalog services that support queries on metadata harvested from remote map and data services. A key component to effectively using these catalog services is the availability of high quality metadata associated with the underlying data sets. In this presentation, we examine the use of ncISO and Geoportal as open source tools that can be used to document and facilitate access to ocean and climate data available from Thematic Realtime Environmental Distributed Data Services (THREDDS) data services. Many atmospheric and oceanographic spatial data sets are stored in the Network Common Data Format (netCDF) and served through the Unidata THREDDS Data Server (TDS). NetCDF and THREDDS are becoming increasingly accepted in both the scientific and geographic research communities, as demonstrated by the recent adoption of netCDF as an Open Geospatial Consortium (OGC) standard. One important source for ocean and atmospheric data sets is NOAA's Unified Access Framework (UAF), which serves over 3000 gridded data sets from across NOAA and NOAA-affiliated partners. Due to the large number of datasets, browsing the data holdings to locate data is impractical. Working with Unidata, we have created a new service for the TDS called "ncISO", which allows automatic generation of ISO 19115-2 metadata from attributes and variables in TDS datasets. The ncISO metadata records can be harvested by catalog services such as ESSI-Lab's GI-Cat catalog service and ESRI's Geoportal, which supports query through a number of services, including OpenSearch and Catalog Services for the Web (CSW). ESRI's Geoportal Server provides a number of user friendly search capabilities for end users
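In practice, harvesting one such record is a single HTTP request. The sketch below assumes a TDS with the ncISO service enabled at a /thredds/iso/ path; the host and dataset path are hypothetical:

```python
import requests
import xml.etree.ElementTree as ET

# Fetch the ISO 19115-2 record that ncISO generates for a TDS dataset.
# Server host and dataset path are hypothetical; the /thredds/iso/ URL
# pattern assumes the ncISO service is enabled on the server.
url = "https://tds.example.org/thredds/iso/grid/sst/analysis.nc"
xml_text = requests.get(url, timeout=30).text

ns = {"gmd": "http://www.isotc211.org/2005/gmd",
      "gco": "http://www.isotc211.org/2005/gco"}
root = ET.fromstring(xml_text)
title = root.find(".//gmd:CI_Citation/gmd:title/gco:CharacterString", ns)
print(title.text if title is not None else "no title found")
```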

  10. Automated diagnosis of data-model conflicts using metadata.

    PubMed

    Chen, R O; Altman, R B

    1999-01-01

    The authors describe a methodology for helping computational biologists diagnose discrepancies they encounter between experimental data and the predictions of scientific models. The authors call these discrepancies data-model conflicts. They have built a prototype system to help scientists resolve these conflicts in a more systematic, evidence-based manner. In computational biology, data-model conflicts are the result of complex computations in which data and models are transformed and evaluated. Increasingly, the data, models, and tools employed in these computations come from diverse and distributed resources, contributing to a widening gap between the scientist and the original context in which these resources were produced. This contextual rift can contribute to the misuse of scientific data or tools and amplifies the problem of diagnosing data-model conflicts. The authors' hypothesis is that systematic collection of metadata about a computational process can help bridge the contextual rift and provide information for supporting automated diagnosis of these conflicts. The methodology involves three major steps. First, the authors decompose the data-model evaluation process into abstract functional components. Next, they use this process decomposition to enumerate the possible causes of the data-model conflict and direct the acquisition of diagnostically relevant metadata. Finally, they use evidence statically and dynamically generated from the metadata collected to identify the most likely causes of the given conflict. They describe how these methods are implemented in a knowledge-based system called GRENDEL and show how GRENDEL can be used to help diagnose conflicts between experimental data and computationally built structural models of the 30S ribosomal subunit. PMID:10495098

  11. Sea Level Station Metadata for Tsunami Detection, Warning and Research

    NASA Astrophysics Data System (ADS)

    Stroker, K. J.; Marra, J.; Kari, U. S.; Weinstein, S. A.; Kong, L.

    2007-12-01

High-priority metadata requirements identified at a water level workshop held at the XXIV IUGG Meeting in Perugia will be addressed: consistent, validated, and well defined numbers (e.g. amplitude); exact location of sea level stations; a complete record of sea level data stored in the archive; identifying high-priority sea level stations; and consistent definitions. NOAA's National Geophysical Data Center (NGDC) and the co-located World Data Center for Solid Earth Geophysics (including tsunamis) would hold the archive of the sea level station data and distribute the standard metadata. Currently, NGDC is also archiving and distributing the DART buoy deep-ocean water level data and metadata in standards-based formats.
Kari, Uday S., John J. Marra, and Stuart A. Weinstein, 2006. A Tsunami Focused Data Sharing Framework for Integration of Databases that Describe Water Level Station Specifications. AGU Fall Meeting, 2006. San Francisco, California.
Marra, John J., Uday S. Kari, and Stuart A. Weinstein (in press). A Tsunami Detection and Warning-focused Sea Level Station Metadata Web Service. IUGG XXIV, July 2-13, 2007. Perugia, Italy.

  12. Mercury-metadata data management system

    SciTech Connect

    2008-01-03

Mercury is a federated metadata harvesting, search and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, USGS, and DOE. A major new version of Mercury (version 3.0) was developed during 2007 and released in early 2008. This Mercury 3.0 version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, faceted search, support for RSS delivery of search results, and ready customization to meet the needs of the multiple projects which use Mercury. For the end users, Mercury provides a single portal to very quickly search for data and information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data.

  13. Omics Metadata Management Software v. 1 (OMMS)

    SciTech Connect

    2013-09-09

Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform bioinformatics analyses and information management tasks via a simple and intuitive web-based interface. Several use cases with short-read sequence datasets are provided to showcase the full functionality of the OMMS, from metadata curation tasks, to bioinformatics analyses and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed research teams. Our software was developed with open-source bundles, is flexible, extensible and easily installed and run by operators with general system administration and scripting language literacy.

  14. Scientific Workflows + Provenance = Better (Meta-)Data Management

    NASA Astrophysics Data System (ADS)

    Ludaescher, B.; Cuevas-Vicenttín, V.; Missier, P.; Dey, S.; Kianmajd, P.; Wei, Y.; Koop, D.; Chirigati, F.; Altintas, I.; Belhajjame, K.; Bowers, S.

    2013-12-01

The origin and processing history of an artifact is known as its provenance. Data provenance is an important form of metadata that explains how a particular data product came about, e.g., how and when it was derived in a computational process, which parameter settings and input data were used, etc. Provenance information provides transparency and helps to explain and interpret data products. Other common uses and applications of provenance include quality control, data curation, result debugging, and more generally, 'reproducible science'. Scientific workflow systems (e.g. Kepler, Taverna, VisTrails, and others) provide controlled environments for developing computational pipelines with built-in provenance support. Workflow results can then be explained in terms of workflow steps, parameter settings, input data, etc. using provenance that is automatically captured by the system. Scientific workflows themselves provide a user-friendly abstraction of the computational process and are thus a form of ('prospective') provenance in their own right. The full potential of provenance information is realized when combining workflow-level information (prospective provenance) with trace-level information (retrospective provenance). To this end, the DataONE Provenance Working Group (ProvWG) has developed an extension of the W3C PROV standard, called D-PROV. Whereas PROV provides a 'least common denominator' for exchanging and integrating provenance information, D-PROV adds new 'observables' that describe workflow-level information (e.g., the functional steps in a pipeline), as well as workflow-specific trace-level information (e.g., timestamps for each workflow step executed, the inputs and outputs used, etc.). Using examples, we will demonstrate how the combination of prospective and retrospective provenance provides added value in managing scientific data. The DataONE ProvWG is also developing tools based on D-PROV that allow scientists to get more mileage from provenance metadata
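The retrospective half of this picture can be sketched with the Python prov package, which implements the W3C PROV data model. D-PROV's workflow-level observables are only approximated here as ordinary attributes, and all identifiers are illustrative:

```python
from prov.model import ProvDocument

# Retrospective provenance for one workflow step, expressed in W3C PROV
# with the `prov` package. D-PROV's workflow-level "observables" are only
# approximated here as ordinary attributes; all names are illustrative.
doc = ProvDocument()
doc.add_namespace("ex", "http://example.org/")

raw = doc.entity("ex:raw_temperatures")
grid = doc.entity("ex:gridded_product")
step = doc.activity("ex:regrid_step",
                    other_attributes={"ex:workflowStep": "regrid"})
doc.used(step, raw)                  # input actually consumed
doc.wasGeneratedBy(grid, step)       # output actually produced

print(doc.get_provn())               # human-readable PROV-N serialization
```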

  15. Generation of Multiple Metadata Formats from a Geospatial Data Repository

    NASA Astrophysics Data System (ADS)

    Hudspeth, W. B.; Benedict, K. K.; Scott, S.

    2012-12-01

The Earth Data Analysis Center (EDAC) at the University of New Mexico is partnering with the CYBERShARE and Environmental Health Group from the Center for Environmental Resource Management (CERM), located at the University of Texas, El Paso (UTEP), the Biodiversity Institute at the University of Kansas (KU), and the New Mexico Geo-Epidemiology Research Network (GERN) to provide a technical infrastructure that enables investigation of a variety of climate-driven human/environmental systems. Two significant goals of this NASA-funded project are: a) to increase the use of NASA Earth observational data at EDAC by various modeling communities through enabling better discovery, access, and use of relevant information, and b) to expose these communities to the benefits of provenance for improving understanding and usability of heterogeneous data sources and derived model products. To realize these goals, EDAC has leveraged the core capabilities of its Geographic Storage, Transformation, and Retrieval Engine (Gstore) platform, developed with support of the NSF EPSCoR Program. The Gstore geospatial services platform provides general purpose web services based upon the REST service model, and is capable of data discovery, access, and publication functions, metadata delivery functions, data transformation, and auto-generated OGC services for those data products that can support those services. Central to the NASA ACCESS project is the delivery of geospatial metadata in a variety of formats, including ISO 19115-2/19139, FGDC CSDGM, and the Proof Markup Language (PML). This presentation details the extraction and persistence of relevant metadata in the Gstore data store, and their transformation into multiple metadata formats that are increasingly utilized by the geospatial community to document not only core library catalog elements (e.g. title, abstract, publication data, geographic extent, projection information, and database elements), but also the processing steps used to

  16. Using a linked data approach to aid development of a metadata portal to support Marine Strategy Framework Directive (MSFD) implementation

    NASA Astrophysics Data System (ADS)

    Wood, Chris

    2016-04-01

Under the Marine Strategy Framework Directive (MSFD), EU Member States are mandated to achieve or maintain 'Good Environmental Status' (GES) in their marine areas by 2020, through a series of Programmes of Measures (PoMs). The Celtic Seas Partnership (CSP), an EU LIFE+ project, aims to support policy makers, special-interest groups, users of the marine environment, and other interested stakeholders on MSFD implementation in the Celtic Seas geographical area. As part of this support, a metadata portal has been built to provide a signposting service to datasets that are relevant to MSFD within the Celtic Seas. To ensure that the metadata has the widest possible reach, a linked data approach was employed to construct the database. Although the metadata are stored in a traditional RDBMS, the metadata are exposed as linked data via the D2RQ platform, allowing virtual RDF graphs to be generated. SPARQL queries can be executed against the endpoint, allowing any user to manipulate the metadata. D2RQ's mapping language, based on Turtle, was used to map a wide range of relevant ontologies to the metadata (e.g. the Provenance Ontology (prov-o), Ocean Data Ontology (odo), Dublin Core Elements and Terms (dc & dcterms), Friend of a Friend (foaf), and geospatial ontologies (geo)), allowing users to browse the metadata, either via SPARQL queries or by using D2RQ's HTML interface. The metadata were further enhanced by mapping relevant parameters to the NERC Vocabulary Server, itself built on a SPARQL endpoint. Additionally, a custom web front-end was built to enable users to browse the metadata and express queries through an intuitive graphical user interface that requires no prior knowledge of SPARQL. As well as providing means to browse the data via MSFD-related parameters (Descriptor, Criteria, and Indicator), the metadata records include the dataset's country of origin, the list of organisations involved in the management of the data, and links to any relevant INSPIRE
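A user who knows SPARQL can query such an endpoint directly. In the sketch below the endpoint URL and the msfd: predicate are placeholders, while dcterms is the real Dublin Core Terms vocabulary:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Query a D2RQ-style SPARQL endpoint for dataset titles and originating
# countries. The endpoint URL and the msfd: predicate are placeholders;
# dcterms is the real Dublin Core Terms vocabulary.
sparql = SPARQLWrapper("https://portal.example.org/sparql")
sparql.setQuery("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    PREFIX msfd:    <http://example.org/msfd#>
    SELECT ?title ?country WHERE {
        ?dataset dcterms:title ?title ;
                 msfd:countryOfOrigin ?country .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["title"]["value"], "-", row["country"]["value"])
```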

  17. Multimedia Learning Systems Based on IEEE Learning Object Metadata (LOM).

    ERIC Educational Resources Information Center

    Holzinger, Andreas; Kleinberger, Thomas; Muller, Paul

One of the "hottest" topics in recent information systems and computer science is metadata. Learning Object Metadata (LOM) appears to be a very powerful mechanism for representing metadata, because of the great variety of LOM Objects. This is one of the reasons why the LOM standard is repeatedly cited in projects in the field of eLearning Systems.…

  18. Enhancing SCORM Metadata for Assessment Authoring in E-Learning

    ERIC Educational Resources Information Center

    Chang, Wen-Chih; Hsu, Hui-Huang; Smith, Timothy K.; Wang, Chun-Chia

    2004-01-01

    With the rapid development of distance learning and the XML technology, metadata play an important role in e-Learning. Nowadays, many distance learning standards, such as SCORM, AICC CMI, IEEE LTSC LOM and IMS, use metadata to tag learning materials. However, most metadata models are used to define learning materials and test problems. Few…

  1. Progress in defining a standard for file-level metadata

    NASA Technical Reports Server (NTRS)

    Williams, Joel; Kobler, Ben

    1996-01-01

In the following narrative, metadata required to locate a file on tape or collection of tapes will be referred to as file-level metadata. This paper describes the rationale for and the history of the effort to define a standard for this metadata.

  2. The International Learning Object Metadata Survey

    ERIC Educational Resources Information Center

    Friesen, Norm

    2004-01-01

    A wide range of projects and organizations is currently making digital learning resources (learning objects) available to instructors, students, and designers via systematic, standards-based infrastructures. One standard that is central to many of these efforts and infrastructures is known as Learning Object Metadata (IEEE 1484.12.1-2002, or LOM).…

  3. Tracking Actual Usage: The Attention Metadata Approach

    ERIC Educational Resources Information Center

    Wolpers, Martin; Najjar, Jehad; Verbert, Katrien; Duval, Erik

    2007-01-01

    The information overload in learning and teaching scenarios is a main hindering factor for efficient and effective learning. New methods are needed to help teachers and students in dealing with the vast amount of available information and learning material. Our approach aims to utilize contextualized attention metadata to capture behavioural…

  4. DIRAC File Replica and Metadata Catalog

    NASA Astrophysics Data System (ADS)

    Tsaregorodtsev, A.; Poss, S.

    2012-12-01

File replica and metadata catalogs are essential parts of any distributed data management system, and largely determine its functionality and performance. A new File Catalog (DFC) was developed in the framework of the DIRAC Project that combines both replica and metadata catalog functionality. The DFC design is based on the practical experience with the data management system of the LHCb Collaboration. It is optimized for the most common patterns of catalog usage in order to achieve maximum performance from the user perspective. The DFC supports bulk operations for replica queries and allows quick analysis of the storage usage globally and for each Storage Element separately. It supports flexible ACL rules with plug-ins for various policies that can be adopted by a particular community. The DFC allows various types of metadata associated with files and directories to be stored, and supports efficient queries for data based on complex metadata combinations. Definition of file ancestor-descendant relation chains is also possible. The DFC is implemented in the general DIRAC distributed computing framework following the standard grid security architecture. In this paper we describe the design of the DFC and its implementation details. The performance measurements are compared with other grid file catalog implementations. The experience of using the DFC in the CLIC detector project is discussed.

  5. Digital Preservation and Metadata: History, Theory, Practice.

    ERIC Educational Resources Information Center

    Lazinger, Susan S.

    This book addresses critical issues of digital preservation, providing guidelines for protecting resources from dealing with obsolescence, to responsibilities, methods of preservation, cost, and metadata formats. It also shows numerous national and international institutions that provide frameworks for digital libraries and archives. The first…

  6. Metadata management and semantics in microarray repositories.

    PubMed

    Kocabaş, F; Can, T; Baykal, N

    2011-12-01

The number of microarray and other high-throughput experiments on primary repositories keeps increasing, as do the size and complexity of the results in response to biomedical investigations. Initiatives have been started on standardization of content, object model, exchange format and ontology. However, there are backlogs and an inability to exchange data between microarray repositories, which indicate that there is a great need for a standard format and data management. We have introduced a metadata framework that includes a metadata card and semantic nets that make experimental results visible, understandable and usable. These are encoded in syntax encoding schemes and represented in RDF (Resource Description Framework); they can be integrated with other metadata cards and semantic nets, and can be exchanged, shared and queried. We demonstrated the performance and potential benefits through a case study on a selected microarray repository. We concluded that the backlogs can be reduced and that exchange of information and asking of knowledge discovery questions can become possible with the use of this metadata framework. PMID:24052712

  7. A Rich Metadata Filesystem for Scientific Data

    ERIC Educational Resources Information Center

    Bui, Hoang

    2012-01-01

    As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…

  8. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    SciTech Connect

    Devarakonda, Ranjeet

    2008-01-01

Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal. This backend component is supported by a very flexible

  9. Mercury: An Example of Effective Software Reuse for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2008-12-01

Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. Though originally developed for NASA, the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve the software reusability across the 12 projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture has three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects will use the same harvester scripts but each project will be driven by a set of project-specific configuration files. The harvested files are structured metadata records that are indexed against the search library API consistently, so that it can render various search capabilities such as simple, fielded, spatial and temporal. This backend component is supported by a very flexible

  10. Air Quality Community Catalog and Rich Metadata for GEOSS

    NASA Astrophysics Data System (ADS)

    Robinson, E. M.; Husar, R. B.; Falke, S. R.; Habermann, R. E.

    2009-04-01

The GEOSS Air Quality Community of Practice (CoP) is developing a community catalog and community portals that will facilitate the discovery, access and usage of distributed air quality data. The catalog records contain fields common for all datasets, additional fields using ISO 19115, and a link to DataSpaces for additional, community-contributed metadata. Most fields for data discovery will be extracted from the OGC WMS/WCS GetCapabilities file. DataSpaces, wiki-based web pages, will include extended metadata, lineage and information for better understanding of the data. The value of the DataSpaces comes from the ability to connect the dataset community: users, mediators and providers, through user feedback, discussion and other community contributed content. The community catalog will be harvested through the GEOSS Common Infrastructure (GCI), and the GEO and community portals will facilitate finding and distributing AQ datasets. The Air Quality Community Catalog and Portal components are currently being tested as part of the GEOSS Architecture Implementation Pilot - II (AIP-II).
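As an illustration, discovery fields can be pulled from a WMS GetCapabilities response with OWSLib; the service URL below is a placeholder and the record layout is illustrative, not the CoP's actual catalog schema:

```python
from owslib.wms import WebMapService

# Pull discovery-level fields from a WMS GetCapabilities response, the
# way a community catalog might populate its records. The service URL is
# a placeholder; OWSLib issues the GetCapabilities request internally.
wms = WebMapService("https://aq.example.org/wms", version="1.1.1")

for name, layer in wms.contents.items():
    record = {
        "id": name,
        "title": layer.title,
        "abstract": layer.abstract,
        "bbox_wgs84": layer.boundingBoxWGS84,  # spatial extent for search
    }
    print(record)
```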

  11. Metadata for WIS and WIGOS: GAW Profile of ISO19115 and Draft WIGOS Core Metadata Standard

    NASA Astrophysics Data System (ADS)

    Klausen, Jörg; Howe, Brian

    2014-05-01

The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority to underpin all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). The development of the WIGOS Operational Information Resource (WIR) is central to the WIGOS Framework Implementation Plan (WIGOS-IP). The WIR shall provide information on WIGOS and its observing components, as well as requirements of WMO application areas. An important aspect is the description of the observational capabilities by way of structured metadata. The Global Atmosphere Watch (GAW) is the WMO program addressing the chemical composition and selected physical properties of the atmosphere. Observational data are collected and archived by GAW World Data Centres (WDCs) and related data centres. The Task Team on GAW WDCs (ET-WDC) has developed a profile of the ISO 19115 metadata standard that is compliant with the WMO Information System (WIS) specification for the WMO Core Metadata Profile v1.3. This profile is intended to harmonize certain aspects of the documentation of observations as well as the interoperability of the WDCs. The Inter-Commission Group on WIGOS (ICG-WIGOS) has established the Task Team on WIGOS Metadata (TT-WMD), with representation from all WMO Technical Commissions and the objective of defining the WIGOS Core Metadata. The result of this effort is a draft semantic standard comprising a set of metadata classes that are considered to be of critical importance for the interpretation of observations relevant to WIGOS. The purpose of the presentation is to acquaint the audience with the standard and to solicit informal feedback from experts in the various disciplines of meteorology and climatology. This feedback will help ET-WDC and TT-WMD refine the GAW metadata profile and the draft WIGOS metadata standard, thereby increasing their utility and acceptance.
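A profile of ISO 19115 constrains which elements of the base standard are mandatory and what values they may take. The skeleton below shows the outermost structure of a gmd record with the real ISO namespace URIs; the identifier content is a placeholder and the profile's actual constraints are not reproduced:

```python
import xml.etree.ElementTree as ET

# Skeleton of an ISO 19115 (gmd) record of the kind a WIS/GAW profile
# would constrain. Only fileIdentifier and an identificationInfo stub
# are shown; namespace URIs are real, the content is a placeholder.
GMD = "http://www.isotc211.org/2005/gmd"
GCO = "http://www.isotc211.org/2005/gco"
ET.register_namespace("gmd", GMD)
ET.register_namespace("gco", GCO)

md = ET.Element(f"{{{GMD}}}MD_Metadata")
fid = ET.SubElement(ET.SubElement(md, f"{{{GMD}}}fileIdentifier"),
                    f"{{{GCO}}}CharacterString")
fid.text = "urn:example:gaw:station:XYZ:ozone"

ident = ET.SubElement(md, f"{{{GMD}}}identificationInfo")
# ...a profile dictates which further elements are mandatory here...

print(ET.tostring(md, encoding="unicode"))
```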

  12. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    ERIC Educational Resources Information Center

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  13. Toward More Transparent and Reproducible Omics Studies Through a Common Metadata Checklist and Data Publications

    PubMed Central

    Özdemir, Vural; Martens, Lennart; Hancock, William; Anderson, Gordon; Anderson, Nathaniel; Aynacioglu, Sukru; Baranova, Ancha; Campagna, Shawn R.; Chen, Rui; Choiniere, John; Dearth, Stephen P.; Feng, Wu-Chun; Ferguson, Lynnette; Fox, Geoffrey; Frishman, Dmitrij; Grossman, Robert; Heath, Allison; Higdon, Roger; Hutz, Mara H.; Janko, Imre; Jiang, Lihua; Joshi, Sanjay; Kel, Alexander; Kemnitz, Joseph W.; Kohane, Isaac S.; Kolker, Natali; Lancet, Doron; Lee, Elaine; Li, Weizhong; Lisitsa, Andrey; Llerena, Adrian; MacNealy-Koch, Courtney; Marshall, Jean-Claude; Masuzzo, Paola; May, Amanda; Mias, George; Monroe, Matthew; Montague, Elizabeth; Mooney, Sean; Nesvizhskii, Alexey; Noronha, Santosh; Omenn, Gilbert; Rajasimha, Harsha; Ramamoorthy, Preveen; Sheehan, Jerry; Smarr, Larry; Smith, Charles V.; Smith, Todd; Snyder, Michael; Rapole, Srikanth; Srivastava, Sanjeeva; Stanberry, Larissa; Stewart, Elizabeth; Toppo, Stefano; Uetz, Peter; Verheggen, Kenneth; Voy, Brynn H.; Warnich, Louise; Wilhelm, Steven W.; Yandl, Gregory

    2014-01-01

Biological processes are fundamentally driven by complex interactions between biomolecules. Integrated high-throughput omics studies enable multifaceted views of cells, organisms, or their communities. With the advent of new post-genomics technologies, omics studies are becoming increasingly prevalent; yet the full impact of these studies can only be realized through data harmonization, sharing, meta-analysis, and integrated research. These essential steps require consistent generation, capture, and distribution of metadata. To ensure transparency, facilitate data harmonization, and maximize reproducibility and usability of life sciences studies, we propose a simple common omics metadata checklist. The proposed checklist is built on the rich ontologies and standards already in use by the life sciences community. The checklist will serve as a common denominator to guide experimental design, capture important parameters, and be used as a standard format for stand-alone data publications. The omics metadata checklist and data publications will create efficient linkages between omics data and knowledge-based life sciences innovation and, importantly, allow for appropriate attribution to data generators and infrastructure science builders in the post-genomics era. We ask that the life sciences community test the proposed omics metadata checklist and data publications and provide feedback for their use and improvement. PMID:24456465

  14. The SCEDC Seismic Station Information System software: Database for Populating, Archiving, and Distributing Seismic Station Metadata

    NASA Astrophysics Data System (ADS)

    Chowdhury, F. R.; Yu, E.; Hauksson, E.; Given, D.; Thomas, V. I.; Clayton, R. W.

    2010-12-01

The Station Information System (SIS) is a database-driven system used by the Southern California Seismic Network (SCSN) to store and distribute station metadata. This abstract concerns the User Interface portion of SIS. The User Interface (UI) is a web application that enables authenticated users to view and edit the station metadata. New features have recently been added to SIS in order to facilitate station upgrades for the American Recovery and Reinvestment Act (ARRA). In particular, SIS now stores an extended range of metadata that encompasses not only station/channel response information but also operations and telemetry information. Typical station-operations activities handled by SIS include power-system upgrades and inventory management, such as tracking ownership and firmware settings. The UI also enables users to track and report instrumentation problems that may affect the station/channel response. In addition, progress has been made in developing a framework for storing station telemetry path information. The accelerated ARRA station upgrade schedule requires SCSN operators to log more than one response change per day in SIS; the updated station metadata is automatically generated and available for use by the ANSS Quake Monitoring System (AQMS).

  15. Problems in the Preservation of Electronic Records.

    ERIC Educational Resources Information Center

    Lin, Lim Siew; Ramaiah, Chennupati K.; Wal, Pitt Kuan

    2003-01-01

    Discusses issues related to the preservation of electronic records. Highlights include differences between physical and electronic records; volume of electronic records; physical media; authenticity; migration of electronic records; metadata; legal issues; improved storage media; and projects for preservation of electronic records. (LRW)

  16. A Semantically Enabled Metadata Repository for Solar Irradiance Data Products

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Cox, M.; Lindholm, D. M.; Nadiadi, I.; Traver, T.

    2014-12-01

The Laboratory for Atmospheric and Space Physics (LASP) has been conducting research in atmospheric and space science for over 60 years, and providing the associated data products to the public. LASP has a long history, in particular, of making space-based measurements of the solar irradiance, which serves as crucial input to several areas of scientific research, including solar-terrestrial interactions and atmospheric and climate science. LISIRD, the LASP Interactive Solar Irradiance Data Center, serves these datasets to the public, including solar spectral irradiance (SSI) and total solar irradiance (TSI) data. The LASP extended metadata repository, LEMR, is a database of information about the datasets served by LASP, such as parameters, uncertainties, temporal and spectral ranges, current version, alerts, etc. It serves as the definitive, single source of truth for that information. The database is populated with information garnered via web forms and automated processes. Dataset owners keep the information current and verified for datasets under their purview. This information can be pulled dynamically for many purposes. Web sites such as LISIRD can include this information in web page content as it is rendered, ensuring users get current, accurate information. It can also be pulled to create metadata records in various metadata formats, such as SPASE (for heliophysics) and ISO 19115. Once these records are made available to the appropriate registries, our data will be discoverable by users coming in via those organizations. The database is implemented as an RDF triplestore, a collection of subject-predicate-object triples identifiable with URIs. This capability, coupled with SPARQL over HTTP read access, enables semantic queries over the repository contents. To create the repository we leveraged VIVO, an open source semantic web application, to manage and create new ontologies and populate repository content. A variety of ontologies were used in

  17. PIMMS tools for capturing metadata about simulations

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Devine, Gerard; Tourte, Gregory; Pascoe, Stephen; Lawrence, Bryan; Barjat, Hannah

    2013-04-01

PIMMS (Portable Infrastructure for the Metafor Metadata System) provides a method for consistent and comprehensive documentation of modelling activities that enables the sharing of simulation data and model configuration information. The aim of PIMMS is to package the metadata infrastructure developed by Metafor for CMIP5 so that it can be used by climate modelling groups in UK universities. PIMMS tools capture information about simulations from the design of experiments to the implementation of experiments via simulations that run models. PIMMS uses the Metafor methodology, which consists of a Common Information Model (CIM), Controlled Vocabularies (CV) and software tools. PIMMS software tools provide for the creation and consumption of CIM content via a web services infrastructure and portal developed by the ES-DOC community. PIMMS metadata integrates with the ESGF data infrastructure via the mapping of vocabularies onto ESGF facets. There are three paradigms of PIMMS metadata collection: Model Intercomparison Projects (MIPs), where a standard set of questions is asked of all models, which perform standard sets of experiments; disciplinary-level metadata collection, where a standard set of questions is asked of all models but experiments are specified by users; and bespoke metadata creation, where users define questions about both models and experiments. Examples will be shown of how PIMMS has been configured to suit each of these three paradigms. In each case PIMMS allows users to provide additional metadata beyond that which is asked for in an initial deployment. The primary target for PIMMS is the UK climate modelling community, where it is common practice to reuse model configurations from other researchers. This culture of collaboration exists in part because climate models are very complex, with many variables that can be modified. Therefore it has become common practice to begin a series of experiments by using another climate model configuration as a starting

  18. Grounding Abstractness: Abstract Concepts and the Activation of the Mouth

    PubMed Central

    Borghi, Anna M.; Zarcone, Edoardo

    2016-01-01

    One key issue for theories of cognition is how abstract concepts, such as freedom, are represented. According to the WAT (Words As social Tools) proposal, abstract concepts activate both sensorimotor and linguistic/social information, and their acquisition modality involves the linguistic experience more than the acquisition of concrete concepts. We report an experiment in which participants were presented with abstract and concrete definitions followed by concrete and abstract target-words. When the definition and the word matched, participants were required to press a key, either with the hand or with the mouth. Response times and accuracy were recorded. As predicted, we found that abstract definitions and abstract words yielded slower responses and more errors compared to concrete definitions and concrete words. More crucially, there was an interaction between the target-words and the effector used to respond (hand, mouth). While responses with the mouth were overall slower, the advantage of the hand over the mouth responses was more marked with concrete than with abstract concepts. The results are in keeping with grounded and embodied theories of cognition and support the WAT proposal, according to which abstract concepts evoke linguistic-social information, hence activate the mouth. The mechanisms underlying the mouth activation with abstract concepts (re-enactment of acquisition experience, or re-explanation of the word meaning, possibly through inner talk) are discussed. To our knowledge this is the first behavioral study demonstrating with real words that the advantage of the hand over the mouth is more marked with concrete than with abstract concepts, likely because of the activation of linguistic information with abstract concepts. PMID:27777563

  19. Leveraging Metadata to Create Interactive Images... Today!

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard to create a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates and make it accessible in a user-friendly form on the website, but also embed the same metadata within the image files themselves. Thus, images downloaded from the site will carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
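AVM tags travel inside the image's embedded XMP packet, which is plain XML. A crude but workable extraction is to scan the file bytes for the packet delimiters, as sketched below; the filename is a placeholder, and only the packet's presence is checked rather than specific AVM properties:

```python
# Crude sketch: pull the embedded XMP packet out of an image file and
# check for AVM properties. XMP packets are plain XML delimited by
# <x:xmpmeta> markers; this works on JPEG/TIFF/PNG files that embed XMP.
def extract_xmp(path):
    data = open(path, "rb").read()
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "replace")

xmp = extract_xmp("spitzer_image.jpg")   # placeholder filename
if xmp and "avm:" in xmp:
    print("AVM-tagged image; packet length:", len(xmp))
```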

  20. Analyzing handwriting biometrics in metadata context

    NASA Astrophysics Data System (ADS)

    Scheidat, Tobias; Wolf, Franziska; Vielhauer, Claus

    2006-02-01

    In this article, methods for user recognition by online handwriting are experimentally analyzed using a combination of demographic data of users in relation to their handwriting habits. Online handwriting as a biometric method is characterized by high variations of characteristics that influence the reliability and security of this method. These variations have not been researched in detail so far. Especially in cross-cultural applications it is urgent to reveal the impact of personal background on security aspects in biometrics. Metadata represent the background of writers by introducing cultural, biological and conditional (changing) aspects like first language, country of origin, gender, handedness, and the experiences that influence handwriting and language skills. The goal is the revelation of intercultural impacts on handwriting in order to achieve higher security in biometric systems. In our experiments, in order to achieve relatively high coverage, 48 different handwriting tasks accomplished by 47 users from three countries (Germany, India and Italy) have been investigated with respect to the relations between metadata and biometric recognition performance. For this purpose, hypotheses have been formulated and evaluated using the measurement of well-known recognition error rates from biometrics. The evaluation addressed both system reliability and security threats posed by skilled forgeries. For the latter purpose, a novel forgery type is introduced, which applies the personal metadata to security aspects and includes new methods of security tests. Finally, we formulate recommendations for specific user groups and handwriting samples.

  1. Natural Language Processing Methods for Enhancing Geographic Metadata for Phylogeography of Zoonotic Viruses

    PubMed Central

    Tahsin, Tasnia; Beard, Rachel; Rivera, Robert; Lauder, Rob; Wallstrom, Garrick; Scotch, Matthew; Gonzalez, Graciela

    2014-01-01

    Zoonotic viruses represent emerging or re-emerging pathogens that pose significant public health threats throughout the world. It is therefore crucial to advance current surveillance mechanisms for these viruses through outlets such as phylogeography. Despite the abundance of zoonotic viral sequence data in publicly available databases such as GenBank, phylogeographic analysis of these viruses is often limited by the lack of adequate geographic metadata. However, many GenBank records include references to articles with more detailed information and automated systems may help extract this information efficiently and effectively. In this paper, we describe our efforts to determine the proportion of GenBank records with “insufficient” geographic metadata for seven well-studied viruses. We also evaluate the performance of four different Named Entity Recognition (NER) systems for automatically extracting related entities using a manually created gold-standard. PMID:25717409

  2. Streamlining The Exchange of Metadata through the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH)

    NASA Astrophysics Data System (ADS)

    Ritz, S.; Major, G.; Olsen, L.

    2005-12-01

    NASA's Global Change Master Directory (GCMD) has been collaborating and exchanging science data and services metadata with many U.S. and international partners for several years. These exchanges were primarily manual and extremely time consuming, and are no longer practical because the volume of accessible metadata increases significantly each year. Furthermore, the growing number of accessible metadata repositories has demonstrated the need for a standardized protocol to simplify the harvesting task. The Open Archives Initiative (OAI) answered the call with the Protocol for Metadata Harvesting (PMH), which is making headway in reducing many barriers to the exchange of metadata. By providing a standardized protocol for retrieving metadata from a networked server, it is possible to harvest metadata content without needing to know the server's database architecture. Streamlining the harvesting process is critical because it reduces the time it takes for data producers to deliver their metadata to accessible directories. By using a harvester client capable of issuing OAI-PMH queries to an OAI-PMH server, all or portions of an external metadata database can be retrieved in a fast and efficient manner. The GCMD has developed an OAI-PMH compliant metadata harvester client for interoperating with several of its partners. The harvester client and server will be demonstrated. Testing and operational difficulties experienced in this process will also be discussed, along with current partnerships.
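
    As an illustration of how little a harvester client needs to know about the remote server, the Python sketch below issues a single OAI-PMH ListRecords request and pulls record identifiers out of the response. The endpoint URL is hypothetical, and a production harvester would also follow resumptionToken elements to page through large repositories.

    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://example.org/oai"   # hypothetical OAI-PMH endpoint

    def list_records(metadata_prefix="oai_dc"):
        """Yield record identifiers from one ListRecords response."""
        url = f"{BASE_URL}?verb=ListRecords&metadataPrefix={metadata_prefix}"
        with urllib.request.urlopen(url) as resp:
            tree = ET.parse(resp)
        ns = {"oai": "http://www.openarchives.org/OAI/2.0/"}
        for record in tree.iterfind(".//oai:record", ns):
            ident = record.find("oai:header/oai:identifier", ns)
            if ident is not None:
                yield ident.text

    for identifier in list_records():
        print(identifier)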

  3. Piaget on Abstraction.

    ERIC Educational Resources Information Center

    Moessinger, Pierre; Poulin-Dubois, Diane

    1981-01-01

    Reviews and discusses Piaget's recent work on abstract reasoning. Piaget's distinction between empirical and reflective abstraction is presented; his hypotheses are considered to be metaphorical. (Author/DB)

  4. An Approach to Metadata Generation for Learning Objects

    NASA Astrophysics Data System (ADS)

    Menendez D., Victor; Zapata G., Alfredo; Vidal C., Christian; Segura N., Alejandra; Prieto M., Manuel

    Metadata describe instructional resources and define their nature and use. Metadata are required to guarantee the reusability and interchange of instructional resources in e-Learning systems. However, filling in the many metadata attributes is a hard and complex task for almost all LO developers, and as a consequence many mistakes are made. This can impoverish data quality in the indexing, searching and retrieval processes. We propose a methodology to build Learning Objects from digital resources. The first phase includes automatic preprocessing of resources using techniques from information retrieval. Initial metadata obtained in this first phase are then used to search for similar LOs in order to propose missing metadata. The second phase considers assisted activities that merge computer advice with human decisions. Suggestions are based on the metadata of similar Learning Objects, using fuzzy logic theory.
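
    The second-phase suggestion step can be pictured as a nearest-neighbour lookup: metadata extracted automatically from the new resource are compared against existing Learning Objects, and fields from the most similar record are proposed. The Python sketch below uses Jaccard similarity over keywords; the records and field names are invented for illustration, and the paper's actual method uses fuzzy logic rather than this crisp measure.

    def jaccard(a, b):
        """Crisp stand-in for the paper's fuzzy similarity measure."""
        return len(a & b) / len(a | b) if a | b else 0.0

    EXISTING_LOS = [                                   # illustrative records
        {"keywords": {"algebra", "equations"}, "audience": "secondary"},
        {"keywords": {"python", "loops"},      "audience": "undergraduate"},
    ]

    def suggest_missing(draft, field):
        """Propose a value for a missing field from the most similar LO."""
        best = max(EXISTING_LOS,
                   key=lambda lo: jaccard(draft["keywords"], lo["keywords"]))
        return best[field]

    draft = {"keywords": {"python", "functions"}}      # extracted in phase one
    print(suggest_missing(draft, "audience"))          # -> 'undergraduate'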

  5. Improving the accessibility and re-use of environmental models through provision of model metadata - a scoping study

    NASA Astrophysics Data System (ADS)

    Riddick, Andrew; Hughes, Andrew; Harpham, Quillon; Royse, Katherine; Singh, Anubha

    2014-05-01

    There has been increasing interest from both academic and commercial organisations over recent years in developing hydrologic and other environmental models in response to some of the major challenges facing the environment, for example environmental change and its effects and ensuring water resource security. This has resulted in a significant investment in modelling by many organisations, both in terms of financial resources and intellectual capital. To capitalise on the effort of producing models, it is necessary for the models to be both discoverable and appropriately described; otherwise the effort of producing them will be wasted. However, whilst there are some recognised metadata standards relating to datasets, these may not completely address the needs of modellers, for example regarding input data. There also appears to be a lack of metadata schemes configured to encourage the discovery and re-use of the models themselves. The lack of an established standard for model metadata is considered to be a factor inhibiting the more widespread use of environmental models, particularly the use of linked model compositions which fuse together hydrologic models with models from other environmental disciplines. This poster presents the results of a Natural Environment Research Council (NERC) funded scoping study to understand the requirements of modellers and other end users for metadata about data and models. A user consultation exercise using an on-line questionnaire was undertaken to capture the views of a wide spectrum of stakeholders on how they currently manage metadata for modelling. This provided strong confirmation of our original supposition that there is a lack of systems and facilities to capture metadata about models. A number of specific gaps in current provision for data and model metadata were also identified, including a need for a standard means to record detailed information about the modelling

  6. Mercury: Reusable software application for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2009-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the projects which currently fund its continuing development. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury's architecture includes three major reusable components: a harvester engine, an indexing system and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software is packaged in such a way that all the Mercury projects use the same harvester scripts, with each project driven by a set of configuration files. The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform
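
    The fielded-search capability rests on per-field indexing of the harvested records. A toy version of that indexing step, with invented records and no claim to Mercury's actual implementation, might look like this in Python:

    from collections import defaultdict

    records = [
        {"id": 1, "title": "soil moisture network", "site": "Nevada"},
        {"id": 2, "title": "ocean color scenes",    "site": "Pacific"},
    ]

    # field -> term -> set of record ids: one inverted index per field
    index = defaultdict(lambda: defaultdict(set))
    for rec in records:
        for field, text in rec.items():
            if field == "id":
                continue
            for term in str(text).lower().split():
                index[field][term].add(rec["id"])

    print(sorted(index["title"]["soil"]))    # fielded query -> [1]
    print(sorted(index["site"]["pacific"]))  # -> [2]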

  7. The role of metadata in managing large environmental science datasets. Proceedings

    SciTech Connect

    Melton, R.B.; DeVaney, D.M.; French, J. C.

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  8. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    NASA Astrophysics Data System (ADS)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel and research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by the chief scientists of research cruises in Microsoft Excel® spreadsheet format and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, due to manual metadata entry into individual websites by administrators. The other is differing data types and representations of metadata on each website. To solve these problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, an Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential uptake of metadata from the data management database into the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web

  9. Content Metadata Standards for Marine Science: A Case Study

    USGS Publications Warehouse

    Riall, Rebecca L.; Marincioni, Fausto; Lightsom, Frances L.

    2004-01-01

    The U.S. Geological Survey developed a content metadata standard to meet the demands of organizing electronic resources in the marine sciences for a broad, heterogeneous audience. These metadata standards are used by the Marine Realms Information Bank project, a Web-based public distributed library of marine science from academic institutions and government agencies. The development and deployment of this metadata standard serve as a model, complete with lessons about mistakes, for the creation of similarly specialized metadata standards for digital libraries.

  10. Syntactic and Semantic Validation without a Metadata Management System

    NASA Technical Reports Server (NTRS)

    Pollack, Janine; Gokey, Christopher D.; Kendig, David; Olsen, Lola; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The ability to maintain quality information is essential to securing confidence in any system for which the information serves as a data source. NASA's Global Change Master Directory (GCMD), an online Earth science data locator, holds over 9000 data set descriptions and is in a constant state of flux as metadata are created and updated on a daily basis. In such a system, maintaining the consistency and integrity of these metadata is crucial. The GCMD has developed a metadata management system utilizing XML, controlled vocabulary, and Java technologies to ensure the metadata not only adhere to valid syntax, but also exhibit proper semantics.
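
    The two kinds of check described here, valid syntax and proper semantics, separate cleanly in code. The Python sketch below validates a record for required fields and then tests a value against a controlled vocabulary; the field names and vocabulary terms are illustrative, not the GCMD's own.

    REQUIRED_FIELDS = {"Entry_ID", "Entry_Title", "Science_Keyword"}
    CONTROLLED_KEYWORDS = {                      # illustrative vocabulary
        "OCEANS > SEA SURFACE TEMPERATURE",
        "ATMOSPHERE > CLOUDS",
    }

    def validate(record):
        errors = []
        for field in sorted(REQUIRED_FIELDS - record.keys()):
            errors.append(f"syntax: missing required field {field}")     # syntactic check
        keyword = record.get("Science_Keyword")
        if keyword and keyword not in CONTROLLED_KEYWORDS:
            errors.append(f"semantics: uncontrolled keyword {keyword!r}")  # semantic check
        return errors

    print(validate({"Entry_ID": "X1",
                    "Science_Keyword": "OCEANS > SALINITY"}))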

  11. Automatic identification of abstract online groups

    DOEpatents

    Engel, David W; Gregory, Michelle L; Bell, Eric B; Cowell, Andrew J; Piatt, Andrew W

    2014-04-15

    Online abstract groups, in which members aren't explicitly connected, can be automatically identified by computer-implemented methods. The methods involve harvesting records from social media and extracting content-based and structure-based features from each record. Each record includes a social-media posting and is associated with one or more entities. Each feature is stored on a data storage device and includes a computer-readable representation of an attribute of one or more records. The methods further involve grouping records into record groups according to the features of each record. Further still the methods involve calculating an n-dimensional surface representing each record group and defining an outlier as a record having feature-based distances measured from every n-dimensional surface that exceed a threshold value. Each of the n-dimensional surfaces is described by a footprint that characterizes the respective record group as an online abstract group.

  12. Inheritance rules for Hierarchical Metadata Based on ISO 19115

    NASA Astrophysics Data System (ADS)

    Zabala, A.; Masó, J.; Pons, X.

    2012-04-01

    ISO 19115 has mainly been used to describe metadata for datasets and services. Furthermore, the ISO 19115 standard (as well as the new draft ISO 19115-1) includes a conceptual model that allows metadata to be described at different levels of granularity, structured in hierarchical levels, both in aggregated resources such as series and datasets, and in more disaggregated resources such as types of entities (feature type), types of attributes (attribute type), entities (feature instances) and attributes (attribute instances). In theory it is possible to apply a complete metadata structure to all hierarchical levels, from a whole series down to individual feature attributes, but storing all metadata at all levels is completely impractical. An inheritance mechanism is needed to store each piece of metadata and quality information at the optimum hierarchical level and to allow easy and efficient documentation of metadata, both in an Earth observation scenario such as multi-satellite-mission multiband imagery, and in a complex vector topographic map that includes several feature types separated into layers (e.g. administrative limits, contour lines, building polygons, road lines, etc). Moreover, due to the traditional splitting of maps into tiles for map handling at detailed scales, or due to satellite characteristics, each of these thematic layers (e.g. 1:5000 roads for a country) or bands (a Landsat-5 TM cover of the Earth) is tiled into several parts (sheets or scenes respectively). According to the hierarchy in ISO 19115, the definition of general metadata can be supplemented by spatially specific metadata that, when required, either inherits from or overrides the general case (G.1.3). Annex H of this standard states that only metadata exceptions are defined at lower levels, so it is not necessary to generate the full registry of metadata for each level but to link particular values to the general values that they inherit. Conceptually the metadata
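
    The exception-only storage that Annex H describes amounts to a fallback lookup: each level stores only values that differ from its parent, and a query walks up the hierarchy (feature, feature type, dataset, series) until it finds one. A minimal Python sketch of that resolution, with invented field names:

    class MetadataNode:
        """One hierarchical level; stores only exceptions to its parent."""
        def __init__(self, parent=None, **exceptions):
            self.parent = parent
            self.exceptions = exceptions

        def get(self, field):
            node = self
            while node is not None:
                if field in node.exceptions:
                    return node.exceptions[field]
                node = node.parent
            raise KeyError(field)

    series = MetadataNode(None, license="CC-BY", producer="Agency X")
    scene = MetadataNode(series, cloud_cover="12%")  # local exception only

    print(scene.get("license"))      # inherited from the series level
    print(scene.get("cloud_cover"))  # stored at the scene level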

  13. Astronomy Visualization Metadata (AVM) in Action!

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Gauthier, A.; Christensen, L. L.; Wyatt, R.

    2009-12-01

    The Astronomy Visualization Metadata (AVM) standard offers a flexible way of embedding extensive information about an astronomical image, illustration, or photograph within the image file itself. Such information includes a spread of basic info including title, caption, credit, and subject (based on a taxonomy optimized for outreach needs). Astronomical images may also be fully tagged with color assignments (associating wavelengths/observatories to colors in the image) and coordinate projection information. Here we present a status update on current ongoing projects utilizing AVM. Ongoing tagging efforts include a variety of missions and observers (including Spitzer, Chandra, Hubble, and amateurs), and the metadata is serving as a database schema for content-managed website development (Spitzer). Software packages are utilizing AVM coordinate headers to allow images to be tagged (FITS Liberator) and correctly registered against the sky backdrop on import (e.g. WorldWide Telescope, Google Sky). Museums and planetariums are exploring the use of AVM contextual information to enrich the presentation of images (partners include the American Museum of Natural History and the California Academy of Sciences). Astronomical institutions are adopting the AVM standard (e.g. IVOA) and planning services to embed and catalog AVM-tagged images (IRSA/VAMP, Aladin). More information is available at www.virtualastronomy.org

  14. Metadata Access Tool for Climate and Health

    NASA Astrophysics Data System (ADS)

    Trtanji, J.

    2012-12-01

    The need for health information resources to support climate change adaptation and mitigation decisions is growing, both in the United States and around the world, as the manifestations of climate change become more evident and widespread. In many instances, these information resources are not specific to a changing climate, but have either been developed or are highly relevant for addressing health issues related to existing climate variability and weather extremes. To help address the need for more integrated data, the Interagency Cross-Cutting Group on Climate Change and Human Health, a working group of the U.S. Global Change Research Program, has developed the Metadata Access Tool for Climate and Health (MATCH). MATCH is a gateway to relevant information that can be used to solve problems at the nexus of climate science and public health by facilitating research, enabling scientific collaborations in a One Health approach, and promoting data stewardship that will enhance the quality and application of climate and health research. MATCH is a searchable clearinghouse of publicly available Federal metadata including monitoring and surveillance data sets, early warning systems, and tools for characterizing the health impacts of global climate change. Examples of relevant databases include the Centers for Disease Control and Prevention's Environmental Public Health Tracking System and NOAA's National Climate Data Center's national and state temperature and precipitation data. This presentation will introduce the audience to this new web-based geoportal and demonstrate its features and potential applications.

  15. Identity and privacy. Unique in the shopping mall: on the reidentifiability of credit card metadata.

    PubMed

    de Montjoye, Yves-Alexandre; Radaelli, Laura; Singh, Vivek Kumar; Pentland, Alex Sandy

    2015-01-30

    Large-scale data sets of human behavior have the potential to fundamentally transform the way we fight diseases, design cities, or perform research. Metadata, however, contain sensitive information. Understanding the privacy of these data sets is key to their broad use and, ultimately, their impact. We study 3 months of credit card records for 1.1 million people and show that four spatiotemporal points are enough to uniquely reidentify 90% of individuals. We show that knowing the price of a transaction increases the risk of reidentification by 22%, on average. Finally, we show that even data sets that provide coarse information at any or all of the dimensions provide little anonymity and that women are more reidentifiable than men in credit card metadata. PMID:25635097
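
    The paper's headline figure is a "unicity" measurement: draw a few spatiotemporal points from one person's trace and count how many traces in the data set contain them all. The Python sketch below reproduces the shape of that computation on purely synthetic data; the numbers it prints say nothing about real credit card records.

    import random

    random.seed(0)
    N_USERS, N_SHOPS, N_DAYS, TRACE_LEN, K = 1000, 50, 90, 30, 4

    # synthetic traces: a set of (shop, day) points per user
    traces = {u: {(random.randrange(N_SHOPS), random.randrange(N_DAYS))
                  for _ in range(TRACE_LEN)} for u in range(N_USERS)}

    def unique_with_k_points(user, k=K):
        """True if k random points from this trace match no other user."""
        points = random.sample(sorted(traces[user]), k)
        matches = [u for u, t in traces.items()
                   if all(p in t for p in points)]
        return len(matches) == 1

    unicity = sum(unique_with_k_points(u) for u in traces) / N_USERS
    print(f"fraction uniquely reidentified by {K} points: {unicity:.2f}")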

  17. PoroTomo Subtask 6.3 Nodal Seismometers Metadata

    DOE Data Explorer

    Lesley Parker

    2016-03-28

    Metadata for the nodal seismometer array deployed at the PoroTomo Natural Laboratory in Brady Hot Spring, Nevada during the March 2016 testing. Metadata include location and timing for each instrument, as well as file lists of data to be uploaded in a separate submission.

  18. Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches

    ERIC Educational Resources Information Center

    Forward, Erin; Leahey, Amber; Trimble, Leanne

    2015-01-01

    Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…

  19. To Teach or Not to Teach: The Ethics of Metadata

    ERIC Educational Resources Information Center

    Barnes, Cynthia; Cavaliere, Frank

    2009-01-01

    Metadata is information about computer-generated documents that is often inadvertently transmitted to others. The problems associated with metadata have become more acute over time as word processing and other popular programs have become more receptive to the concept of collaboration. As more people become involved in the preparation of…

  20. Interpreting the ASTM 'content standard for digital geospatial metadata'

    USGS Publications Warehouse

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  1. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal.

    PubMed

    Baker, Ed

    2013-01-01

    Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily, with automatic metadata (EXIF, XMP and IPTC) extraction and mapping. PMID:24723768
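
    The extract-and-map flow the module performs can be sketched in a few lines. The Python version below reads EXIF with Pillow and applies a hypothetical tag-to-field mapping; the real module is a Drupal component that also handles XMP and IPTC, which this sketch does not.

    from PIL import Image
    from PIL.ExifTags import TAGS

    def exif_to_dict(path):
        """Return human-readable EXIF tag names mapped to their values."""
        with Image.open(path) as img:
            raw = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}

    # hypothetical mapping from EXIF tags to image-form fields
    FIELD_MAP = {"Artist": "creator", "Copyright": "licence", "DateTime": "captured"}

    def map_fields(path):
        exif = exif_to_dict(path)
        return {field: exif[tag] for tag, field in FIELD_MAP.items() if tag in exif}

    print(map_fields("specimen.jpg"))   # hypothetical uploaded image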

  2. Psychological Abstracts/BRS.

    ERIC Educational Resources Information Center

    Dolan, Donna R.

    1978-01-01

    Discusses particular problems and possible solutions in searching the Psychological Abstracts database, with special reference to its loading on BRS. Included are examples of typical searches, citations (with or without abstract/annotation), a tabulated search guide to Psychological Abstracts on BRS, and specifications for the database. (Author/JD)

  3. Abstraction and Consolidation

    ERIC Educational Resources Information Center

    Monaghan, John; Ozmantar, Mehmet Fatih

    2006-01-01

    The framework for this paper is a recently developed theory of abstraction in context. The paper reports on data collected from one student working on tasks concerned with absolute value functions. It examines the relationship between mathematical constructions and abstractions. It argues that an abstraction is a consolidated construction that can…

  4. Abstraction and Problem Reformulation

    NASA Technical Reports Server (NTRS)

    Giunchiglia, Fausto

    1992-01-01

    In work done jointly with Toby Walsh, the author has provided a sound theoretical foundation to the process of reasoning with abstraction (GW90c, GW89, GW90b, GW90a). The notion of abstraction formalized in this work can be informally described as: (property 1) the process of mapping a representation of a problem, called (following historical convention (Sac74)) the 'ground' representation, onto a new representation, called the 'abstract' representation, which (property 2) helps deal with the problem in the original search space by preserving certain desirable properties and (property 3) is simpler to handle as it is constructed from the ground representation by "throwing away details". One desirable property preserved by an abstraction is provability; often there is a relationship between provability in the ground representation and provability in the abstract representation. Another can be deduction or, possibly, inconsistency. By 'throwing away details' we usually mean that the problem is described in a language with a smaller search space (for instance a propositional language or a language without variables) in which formulae of the abstract representation are obtained from the formulae of the ground representation by the use of some terminating rewriting technique. Often we require that the use of abstraction results in more efficient reasoning. However, it might simply increase the number of facts asserted (eg. by allowing, in practice, the exploration of deeper search spaces or by implementing some form of learning). Among all abstractions, three very important classes have been identified. They relate the set of facts provable in the ground space to those provable in the abstract space. We call: TI abstractions all those abstractions where the abstractions of all the provable facts of the ground space are provable in the abstract space; TD abstractions all those abstractions where the 'unabstractions' of all the provable facts of the abstract space are

  5. Meta-data based mediator generation

    SciTech Connect

    Critchlaw, T

    1998-06-28

    Mediators are a critical component of any data warehouse; they transform data from source formats to the warehouse representation while resolving semantic and syntactic conflicts. The close relationship between mediators and databases requires a mediator to be updated whenever an associated schema is modified. Failure to quickly perform these updates significantly reduces the reliability of the warehouse because queries do not have access to the most current data. This may result in incorrect or misleading responses, and reduce user confidence in the warehouse. Unfortunately, this maintenance may be a significant undertaking if a warehouse integrates several dynamic data sources. This paper describes a meta-data framework, and associated software, designed to automate a significant portion of the mediator generation task and thereby reduce the effort involved in adapting to schema changes. By allowing the DBA to concentrate on identifying the modifications at a high level, instead of reprogramming the mediator, turnaround time is reduced and warehouse reliability is improved.
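
    The gain from the meta-data framework is that a schema change becomes an edit to a declarative mapping rather than a reprogramming job. As a minimal sketch of that idea (not the paper's actual system; all field names are invented), a mediator can be generated from a mapping table:

    # declarative mapping: warehouse field -> (source field, type cast)
    MAPPING = {
        "patient_id": ("subj_id", str),
        "age_years":  ("age", int),
        "weight_kg":  ("wt", float),
    }

    def make_mediator(mapping):
        """'Compile' the mapping into a row-transformation function."""
        def mediate(source_row):
            return {dst: cast(source_row[src])
                    for dst, (src, cast) in mapping.items() if src in source_row}
        return mediate

    mediator = make_mediator(MAPPING)
    print(mediator({"subj_id": "A-17", "age": "42", "wt": "71.5"}))
    # when the source schema changes, only MAPPING needs editing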

  6. Unified Science Information Model for SoilSCAPE using the Mercury Metadata Search System

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Lu, Kefa; Palanisamy, Giri; Cook, Robert; Santhana Vannan, Suresh; Moghaddam, Mahta; Clewley, Dan; Silva, Agnelo; Akbar, Ruzbeh

    2013-12-01

    SoilSCAPE (Soil moisture Sensing Controller And oPtimal Estimator) introduces a new concept for a smart wireless sensor web technology for optimal measurements of surface-to-depth profiles of soil moisture using in-situ sensors. The objective is to enable a guided and adaptive sampling strategy for the in-situ sensor network to meet the measurement validation objectives of spaceborne soil moisture sensors such as the Soil Moisture Active Passive (SMAP) mission. This work is being carried out at the University of Michigan, the Massachusetts Institute of Technology, the University of Southern California, and Oak Ridge National Laboratory. At Oak Ridge National Laboratory we are using the Mercury metadata search system [1] to build a Unified Information System for the SoilSCAPE project. This unified portal primarily comprises three key pieces: Distributed Search/Discovery; Data Collections and Integration; and Data Dissemination. Mercury, a federally funded software system for metadata harvesting, indexing, and searching, is used for this module. Soil moisture data sources identified as part of this activity, such as SoilSCAPE and FLUXNET (in-situ sensors), AirMOSS (airborne retrieval), and SMAP (spaceborne retrieval), are being indexed and maintained by Mercury. Mercury will be the central repository of data sources for cal/val for soil moisture studies and will provide a mechanism to identify additional data sources. Relevant metadata from existing inventories such as ORNL DAAC, USGS Clearinghouse, ARM, NASA ECHO, GCMD etc. will be brought into this soil-moisture data search/discovery module. The SoilSCAPE [2] metadata records will also be published in broader metadata repositories such as GCMD and data.gov. Mercury can be configured to provide a single portal to soil moisture information contained in disparate data management systems located anywhere on the Internet. Mercury is able to extract metadata systematically from HTML pages or XML files using a variety of

  7. Semantic Metadata for Heterogeneous Spatial Planning Documents

    NASA Astrophysics Data System (ADS)

    Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.

    2016-09-01

    Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.
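
    To make the mechanism concrete, the Python sketch below embeds a planning regulation in an XHTML fragment with RDFa attributes and reads two of them back; the vocabulary URI, property name and zone code are invented, not taken from the Polish planning ontology the article describes.

    import xml.etree.ElementTree as ET

    xhtml = """<div xmlns="http://www.w3.org/1999/xhtml"
         vocab="http://example.org/plan#" typeof="Regulation"
         property="appliesToZone" content="MN/2">
      Buildings in zone MN/2 may not exceed twelve metres in height.
    </div>"""

    # RDFa annotations are ordinary XML attributes, so even a plain XML
    # parser can recover the machine-readable part of the document
    elem = ET.fromstring(xhtml)
    print(elem.get("typeof"), elem.get("content"))   # Regulation MN/2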

  8. Embedding MPEG-7 metadata within a media file format

    NASA Astrophysics Data System (ADS)

    Chang, Wo

    2005-08-01

    Embedding metadata within a media file format has become ever more popular for digital media. Traditional digital media files such as MP3 songs and JPEG photos did not carry any metadata structures to describe the media content until these file formats were extended with ID3 and EXIF. Recently ID3 and EXIF advanced to version 2.4 and version 2.2 respectively, with many new description tags added. Currently, most MP3 players and digital cameras support the latest revisions of these metadata structures as the de-facto standard formats. The benefit of having metadata to describe media content is critical to consumers for viewing and searching that content. However, ID3 and EXIF were designed with very different approaches in terms of syntax, semantics, and data structures. Therefore, these two metadata file formats are not compatible and cannot be utilized in common applications such as a slideshow that plays MP3 music in the background while shuffling through images in the foreground. This paper presents the idea of embedding the international standard ISO/IEC MPEG-7 metadata descriptions inside the rich ISO/IEC MPEG-4 file format container, so that a general metadata framework can be used for image, audio, and video applications.
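
    The container side of the proposal rests on the ISO base media file format's "box" framing: a 4-byte big-endian size, a 4-byte type, then the payload, so an XML document such as an MPEG-7 description can travel as a box of its own. A minimal Python sketch of that framing follows; real MPEG-4 metadata boxes involve more structure than shown here.

    import struct

    def make_box(box_type, payload):
        """Frame a payload as an ISO-BMFF box: size + type + payload."""
        assert len(box_type) == 4
        return struct.pack(">I", 8 + len(payload)) + box_type + payload

    mpeg7 = b"<Mpeg7><Description>...</Description></Mpeg7>"
    box = make_box(b"xml ", mpeg7)        # an 'xml ' box, as nested in 'meta'
    print(len(box), box[4:8])             # total size, box type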

  9. Design and Implementation of a Metadata-rich File System

    SciTech Connect

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
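
    QFS's graph data model can be pictured as files carrying user-defined attributes plus typed links to other files. The toy Python sketch below shows the model and one relationship-plus-attribute query; the Quasar language itself is XPath-extended, which this sketch does not attempt, and all names are invented.

    class FileNode:
        """A file with user-defined attributes and typed links."""
        def __init__(self, name, **attrs):
            self.name, self.attrs, self.links = name, attrs, []

        def link(self, label, other):
            self.links.append((label, other))

    raw = FileNode("scan_001.dat", instrument="CT")
    report = FileNode("scan_001_report.txt", author="radiology")
    report.link("derived_from", raw)

    # query: files derived from anything produced by the CT instrument
    hits = [f.name for f in (raw, report)
            for label, target in f.links
            if label == "derived_from"
            and target.attrs.get("instrument") == "CT"]
    print(hits)   # ['scan_001_report.txt']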

  10. Populating and harvesting metadata in a virtual observatory

    NASA Astrophysics Data System (ADS)

    Walker, Raymond; King, Todd; Joy, Steven; Bargatze, Lee; Chi, Peter; Weygand, James

    Founded in 2007, the Virtual Magnetospheric Observatory (VMO) provides one-stop shopping for data and services useful in magnetospheric research. The VMO's purview includes ground based observations as well as observations from spacecraft. The data, and the services for using and analyzing them, are found at laboratories distributed around the world. The VMO is itself a federated data system with branches at UCLA and the Goddard Space Flight Center (GSFC). These data can be connected by using a common data model. The VMO has selected the Space Physics Archive Search and Extract (SPASE) metadata standard for this purpose. SPASE metadata are collected and stored in distributed registries that are maintained along with the data at the location of the data provider. Populating the registries and extracting the metadata requested for a given study remain major challenges. In general there is little or no money available to data providers to create the metadata and populate the registries. We have taken a two-pronged approach to minimize the effort required to create the metadata and maintain the registries. The first part of the approach is human. We have appointed a group of domain experts called "X-Men". X-Men are experts in both magnetospheric physics and data management. They work closely with data providers to help them prepare the metadata and populate the registries. The second part of our approach is to develop a series of tools to populate and harvest information from the registries. We have developed SPASE editors for high level metadata and adopted the NASA Planetary Data System's Rule Set approach, in which the science data are used to generate detailed-level SPASE metadata. Finally we have developed a unique harvesting system to retrieve metadata from distributed registries in response to user queries.

  11. Metadata Creation, Management and Search System for your Scientific Data

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury Search Systems is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders-of-magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-faceted search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is being used in more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics DOI: 10.1007/s12145-010-0073-0, (2010);

  12. Mapping and converting essential Federal Geographic Data Committee (FGDC) metadata into MARC21 and Dublin Core: towards an alternative to the FGDC Clearinghouse

    USGS Publications Warehouse

    Chandler, A.; Foley, D.; Hafez, A.M.

    2000-01-01

    The purpose of this article is to raise and address a number of issues related to the conversion of Federal Geographic Data Committee metadata into MARC21 and Dublin Core. We present an analysis of 466 FGDC metadata records housed in the National Biological Information Infrastructure (NBII) node of the FGDC Clearinghouse, with special emphasis on the length of fields and the total length of records in this set. One of our contributions is a 34-element crosswalk, a proposal that takes into consideration the constraints of the MARC21 standard as implemented in OCLC's WorldCat and the realities of user behavior.
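
    A crosswalk of this kind is, operationally, a lookup table from FGDC element paths to target fields. The Python fragment below shows the shape of such a table and a conversion step; the element choices are a plausible subset, not the article's actual 34-element proposal.

    # FGDC CSDGM element path -> target fields (illustrative subset)
    CROSSWALK = {
        "idinfo/citation/citeinfo/title":  {"dc": "title",       "marc21": "245$a"},
        "idinfo/descript/abstract":        {"dc": "description", "marc21": "520$a"},
        "idinfo/keywords/theme/themekey":  {"dc": "subject",     "marc21": "653$a"},
    }

    def convert(fgdc_record, target):
        """Map a flat FGDC record (path -> value) onto 'dc' or 'marc21'."""
        return {CROSSWALK[path][target]: value
                for path, value in fgdc_record.items() if path in CROSSWALK}

    sample = {"idinfo/citation/citeinfo/title": "NBII sample record"}
    print(convert(sample, "dc"))       # {'title': 'NBII sample record'}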

  13. BioProject and BioSample databases at NCBI: facilitating capture and organization of metadata

    PubMed Central

    Barrett, Tanya; Clark, Karen; Gevorgyan, Robert; Gorelenkov, Vyacheslav; Gribov, Eugene; Karsch-Mizrachi, Ilene; Kimelman, Michael; Pruitt, Kim D.; Resenchuk, Sergei; Tatusova, Tatiana; Yaschenko, Eugene; Ostell, James

    2012-01-01

    As the volume and complexity of data sets archived at NCBI grow rapidly, so does the need to gather and organize the associated metadata. Although metadata had been collected for some archival databases, there was previously no centralized approach at NCBI for collecting this information and using it across databases. The BioProject database was recently established to facilitate organization and classification of project data submitted to NCBI, EBI and DDBJ databases. It captures descriptive information about research projects that result in high volume submissions to archival databases, ties together related data across multiple archives and serves as a central portal by which to inform users of data availability. Concomitantly, the BioSample database is being developed to capture descriptive information about the biological samples investigated in projects. BioProject and BioSample records link to corresponding data stored in archival repositories. Submissions are supported by a web-based Submission Portal that guides users through a series of forms for input of rich metadata describing their projects and samples. Together, these databases offer improved ways for users to query, locate, integrate and interpret the masses of data held in NCBI's archival repositories. The BioProject and BioSample databases are available at http://www.ncbi.nlm.nih.gov/bioproject and http://www.ncbi.nlm.nih.gov/biosample, respectively. PMID:22139929

  14. Metadata requirements for results of diagnostic imaging procedures: a BIIF profile to support user applications

    NASA Astrophysics Data System (ADS)

    Brown, Nicholas J.; Lloyd, David S.; Reynolds, Melvin I.; Plummer, David L.

    2002-05-01

    A visible digital image is rendered from a set of digital image data. Medical digital image data can be stored in either: (a) a pre-rendered format, corresponding to a photographic print, or (b) an un-rendered format, corresponding to a photographic negative. The appropriate image data storage format and the associated header data (metadata) required by a user of the results of a diagnostic procedure recorded electronically depend on the task(s) to be performed. The DICOM standard provides a rich set of metadata that supports the needs of complex applications. Many end user applications, such as simple report text viewing and display of a selected image, are not so demanding, and generic image formats such as JPEG are sometimes used. However, these are lacking some basic identification requirements. In this paper we make specific proposals for minimal extensions to generic image metadata of value in various domains, which enable safe use in the case of two simple healthcare end user scenarios: (a) viewing of text and a selected JPEG image activated by a hyperlink and (b) viewing of one or more JPEG images together with superimposed text and graphics annotation using a file specified by a profile of the ISO/IEC Basic Image Interchange Format (BIIF).

  15. Loving Those Abstracts

    ERIC Educational Resources Information Center

    Stevens, Lori

    2004-01-01

    The author describes a lesson she did on abstract art with her high school art classes. She passed out a required step-by-step outline of the project process. She asked each of them to look at abstract art. They were to list five or six abstract artists they thought were interesting, narrow their list down to the one most personally intriguing,…

  16. Metadata and data models in the WMO Information System

    NASA Astrophysics Data System (ADS)

    Tandy, Jeremy; Woolf, Andrew; Foreman, Steve; Thomas, David

    2013-04-01

    It is fifty years since the inauguration of the World Weather Watch, through which the WMO (World Meteorological Organization) has coordinated real time exchange of information between national meteorological and hydrological services. At the heart of the data exchange are standard data formats and a dedicated telecommunications system known as the GTS - the Global Telecommunications System. Weather and climate information are now more complex than in the 1960s, and increasingly the information is being used across traditional disciplines. Although the modern GTS still underpins operational weather forecasting, the WMO Information System (WIS) builds on this to make the information more widely visible and more widely accessible. The architecture of WIS is built around three tiers of information provider. National Centres are responsible for sharing information that is gathered nationally, and also for distributing information to users within their country. Many of these are national weather services, but hydrology and oceanography centres have also been designated by some countries. Data Collection or Production Centres have an international role, either collating information from several countries, or generating information that is international in nature (satellite operators are an example). Global Information System Centres have two prime responsibilities: to exchange information between regions, and to publish the global WIS Discovery Metadata Catalogue so that end users can find out what information is available through the WIS. WIS is designed to allow information to be used outside the operational weather community. This means that it has to use protocols and standards that are in general use. The WIS Discovery Metadata records, for example, are expressed using ISO 19115, and in addition to being accessible through the GISCs they are harvested by GEOSS. Weather data are mainly exchanged in formats managed by WMO, but WMO is using GML and the Open Geospatial

  17. Forum Guide to Metadata: The Meaning behind Education Data. NFES 2009-805

    ERIC Educational Resources Information Center

    National Forum on Education Statistics, 2009

    2009-01-01

    The purpose of this guide is to empower people to more effectively use data as information. To accomplish this, the publication explains what metadata are; why metadata are critical to the development of sound education data systems; what components comprise a metadata system; what value metadata bring to data management and use; and how to…

  18. Shared Semantics for Oceanographic Research: Development of Standard ``Cruise-Level'' Metadata

    NASA Astrophysics Data System (ADS)

    Arko, R. A.; Milan, A.; Chandler, C. L.; Miller, S. P.; Ferrini, V.; Mesick, S.; Mize, J.; Paver, C.; Sullivan, B.; Sweeney, A.

    2010-12-01

    There is a general need in the ocean science community for a widely accepted standards-based “cruise-level” metadata profile that describes the basic elements of a seagoing expedition (e.g. cruise identifier, vessel name, operating institution, dates/ports, navigation track, survey targets, science party, funding sources, scientific instruments, daughter platforms, and data sets). The need for such a profile is increasingly urgent as seagoing programs become more complex and interdisciplinary; funding agencies mandate public dissemination of the resulting data; and data centers link post-field/derived products to original field data sets. We are developing a standard implementation for cruise-level metadata that serves the needs of multiple U.S. programs, in an effort to promote interoperability and facilitate collaboration. Testbed development has focused on the Rolling Deck to Repository (R2R) and Extended Continental Shelf (ECS) programs - both tasked with routinely documenting and archiving large volumes of data from a wide array of U.S. research vessels - and draws from the cruise-level metadata profile published by the University-National Oceanographic Laboratory System (UNOLS) Data Management Best Practices Committee in 2008. Our XML implementation is based on the ISO 19115-2:2009 standard for geospatial metadata, with controlled vocabulary terms directly embedded as Uniform Resource Identifier (URI) references that can be validated in e.g. ISO Schematron. Our choice of the ISO standard reflects ANSI's adoption of the ISO 19115 North American Profile in 2009, and the adoption of ISO 19115 by related programs including the Integrated Ocean Drilling Program (IODP) and the SeaDataNet program in Europe. We envision a hierarchical framework where a single “cruise-level” record is linked to multiple “dataset-level” records that may be published independently. Our results published online will include a best practices guide for authoring records

  19. Mathematical Abstraction through Scaffolding

    ERIC Educational Resources Information Center

    Ozmantar, Mehmet Fatih; Roper, Tom

    2004-01-01

    This paper examines the role of scaffolding in the process of abstraction. An activity-theoretic approach to abstraction in context is taken. This examination is carried out with reference to verbal protocols of two 17 year-old students working together on a task connected to sketching the graph of |f(|x|)|. Examination of the data suggests that…

  20. Is It Really Abstract?

    ERIC Educational Resources Information Center

    Kernan, Christine

    2011-01-01

    For this author, one of the most enjoyable aspects of teaching elementary art is the willingness of students to embrace the different styles of art introduced to them. In this article, she describes a project that allows upper-elementary students to learn about abstract art and the lives of some of the master abstract artists, implement the idea…

  1. Designing for Mathematical Abstraction

    ERIC Educational Resources Information Center

    Pratt, Dave; Noss, Richard

    2010-01-01

    Our focus is on the design of systems (pedagogical, technical, social) that encourage mathematical abstraction, a process we refer to as "designing for abstraction." In this paper, we draw on detailed design experiments from our research on children's understanding about chance and distribution to re-present this work as a case study in designing…

  2. Paper Abstract Animals

    ERIC Educational Resources Information Center

    Sutley, Jane

    2010-01-01

    Abstraction is, in effect, a simplification and reduction of shapes with an absence of detail designed to comprise the essence of the more naturalistic images being depicted. Without even intending to, young children consistently create interesting, and sometimes beautiful, abstract compositions. A child's creations, moreover, will always seem to…

  3. Leadership Abstracts, 1995.

    ERIC Educational Resources Information Center

    Johnson, Larry, Ed.

    1995-01-01

    The abstracts in this series provide two-page discussions of issues related to leadership, administration, and teaching in community colleges. The 12 abstracts for Volume 8, 1995, are: (1) "Redesigning the System To Meet the Workforce Training Needs of the Nation," by Larry Warford; (2) "The College President, the Board, and the Board Chair: A…

  4. Concept Formation and Abstraction.

    ERIC Educational Resources Information Center

    Lunzer, Eric A.

    1979-01-01

    This paper examines the nature of concepts and conceptual processes and the manner of their formation. It argues that a process of successive abstraction and systematization is central to the evolution of conceptual structures. Classificatory processes are discussed and three levels of abstraction outlined. (Author/SJL)

  5. Data Abstraction in GLISP.

    ERIC Educational Resources Information Center

    Novak, Gordon S., Jr.

    GLISP is a high-level computer language (based on Lisp and including Lisp as a sublanguage) which is compiled into Lisp. GLISP programs are compiled relative to a knowledge base of object descriptions, a form of abstract datatypes. A primary goal of the use of abstract datatypes in GLISP is to allow program code to be written in terms of objects,…

  6. Leadership Abstracts, Volume 10.

    ERIC Educational Resources Information Center

    Milliron, Mark D., Ed.

    1997-01-01

    The abstracts in this series provide brief discussions of issues related to leadership, administration, professional development, technology, and education in community colleges. Volume 10 for 1997 contains the following 12 abstracts: (1) "On Community College Renewal" (Nathan L. Hodges and Mark D. Milliron); (2) "The Community College Niche in a…

  7. Abstract Datatypes in PVS

    NASA Technical Reports Server (NTRS)

    Owre, Sam; Shankar, Natarajan

    1997-01-01

    PVS (Prototype Verification System) is a general-purpose environment for developing specifications and proofs. This document deals primarily with the abstract datatype mechanism in PVS which generates theories containing axioms and definitions for a class of recursive datatypes. The concepts underlying the abstract datatype mechanism are illustrated using ordered binary trees as an example. Binary trees are described by a PVS abstract datatype that is parametric in its value type. The type of ordered binary trees is then presented as a subtype of binary trees where the ordering relation is also taken as a parameter. We define the operations of inserting an element into, and searching for an element in an ordered binary tree; the bulk of the report is devoted to PVS proofs of some useful properties of these operations. These proofs illustrate various approaches to proving properties of abstract datatype operations. They also describe the built-in capabilities of the PVS proof checker for simplifying abstract datatype expressions.
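
    The flavor of the report's running example translates readily outside PVS. The following is a loose Python analogue (not PVS syntax): a binary tree parametric in its value type, with the ordering relation passed as a parameter, and one of the properties the report proves checked by assertion.

        from dataclasses import dataclass
        from typing import Callable, Generic, Optional, TypeVar

        T = TypeVar("T")

        @dataclass
        class Node(Generic[T]):
            value: T
            left: "Optional[Node[T]]" = None
            right: "Optional[Node[T]]" = None

        def insert(t: Optional[Node[T]], x: T, lt: Callable[[T, T], bool]) -> Node[T]:
            """Insert x, preserving the ordering invariant relative to lt."""
            if t is None:
                return Node(x)
            if lt(x, t.value):
                return Node(t.value, insert(t.left, x, lt), t.right)
            return Node(t.value, t.left, insert(t.right, x, lt))

        def member(t: Optional[Node[T]], x: T, lt: Callable[[T, T], bool]) -> bool:
            """Search, pruning one subtree per step via the ordering relation."""
            if t is None:
                return False
            if lt(x, t.value):
                return member(t.left, x, lt)
            if lt(t.value, x):
                return member(t.right, x, lt)
            return True

        # One property of the kind proved in PVS: whatever is inserted is found.
        t: Optional[Node[int]] = None
        for v in (5, 2, 8, 2):
            t = insert(t, v, lambda a, b: a < b)
        assert all(member(t, v, lambda a, b: a < b) for v in (5, 2, 8))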

  8. Abstract coherent categories.

    PubMed

    Rehder, B; Ross, B H

    2001-09-01

    Many studies have demonstrated the importance of the knowledge that interrelates features in people's mental representation of categories and that makes our conception of categories coherent. This article focuses on abstract coherent categories, coherent categories that are also abstract because they are defined by relations independently of any features. Four experiments demonstrate that abstract coherent categories are learned more easily than control categories with identical features and statistical structure, and also that participants induced an abstract representation of the category by granting category membership to exemplars with completely novel features. The authors argue that the human conceptual system is heavily populated with abstract coherent concepts, including conceptions of social groups, societal institutions, legal, political, and military scenarios, and many superordinate categories, such as classes of natural kinds. PMID:11550753

  9. hFits: From Storing Metadata to Publishing ESO Data

    NASA Astrophysics Data System (ADS)

    Vera, I.; Dobrzycki, A.; Vuong, M.; Da Rocha, C.

    2012-09-01

    The ESO Archive holds ca. 20 million FITS files: raw observations taken at the La Silla Paranal Observatory in Chile, data from APEX and UKIRT(WFCAM) telescopes, pipeline-processed data generated by the Quality Control and Data Processing Group in ESO Garching and, since recently, reduced data delivered by the PIs through the ESO Phase 3 infrastructure. A metadata repository has been developed at the ESO Archive (Dobrzycki et al. 2007), (Vera et al. 2011), to hold all the FITS file headers with up-to-date information using data warehouse technology. Presently, the repository contains more than 10 billion keywords from headers of all ESO FITS files. We have added to the repository a mechanism for keeping track of header ingestion and modification, allowing incremental applications to be built on top of it. The aim is to provide a framework for creating fast, good-quality metadata query services. We present hFits, a tool for data publishing that allows for metadata enhancement. The tool reads from the metadata repository and inserts the metadata into conventional relational database systems using a simple configuration framework. It utilises the metadata repository tracking mechanism to incrementally refresh the services and transparently propagate any metadata updates. It supports the use of user-defined functions where, for example, WCS coordinates can be calculated or related metadata can be extracted from other information systems, and for each new header it assigns access attributes to the archived file following the ESO data access policy, publishing the data to the community.
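
    The incremental-refresh idea can be sketched in a few lines (all names here are hypothetical, not the actual hFits configuration): headers ingested or modified since the last run are passed through user-defined functions and upserted into a relational table.

        # Sketch of hFits-style incremental publishing: pull headers changed
        # since the last sync, run user-defined functions, upsert the result.
        import sqlite3

        def wcs_udf(header):
            """Example user-defined function: derive extra columns from a header."""
            return {"ra": header.get("CRVAL1"), "dec": header.get("CRVAL2")}

        def refresh(repo_changes, udfs, db):
            db.execute("CREATE TABLE IF NOT EXISTS published "
                       "(file_id TEXT PRIMARY KEY, ra REAL, dec REAL)")
            for file_id, header in repo_changes:  # only headers new since last run
                row = {"file_id": file_id}
                for udf in udfs:
                    row.update(udf(header))
                db.execute("INSERT OR REPLACE INTO published (file_id, ra, dec) "
                           "VALUES (:file_id, :ra, :dec)", row)
            db.commit()

        changes = [("ADP.2012-01-01T00:00:00.000", {"CRVAL1": 83.6, "CRVAL2": -5.4})]
        with sqlite3.connect(":memory:") as db:
            refresh(changes, [wcs_udf], db)
            print(db.execute("SELECT * FROM published").fetchall())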

  10. Massive Meta-Data: A New Data Mining Resource

    NASA Astrophysics Data System (ADS)

    Hugo, W.

    2012-04-01

    Worldwide standardisation and interoperability initiatives such as GBIF, Open Access and GEOSS (to name but three of many) have led to the emergence of interlinked and overlapping meta-data repositories containing, potentially, tens of millions of entries collectively. This forms the backbone of an emerging global scientific data infrastructure that is driven by changes in the way we work and opens up new possibilities in management, research, and collaboration. Several initiatives are concentrated on building a generalised, shared, easily available, scalable, and indefinitely preserved scientific data infrastructure to aid future scientific work. This paper deals with the parallel aspect of the meta-data that will be used to support the global scientific data infrastructure. There are obvious practical issues (semantic interoperability and speed of discovery being the most important), but we are here more concerned with some of the less obvious conceptual questions and opportunities: 1. Can we use meta-data to assess, pinpoint, and reduce duplication of meta-data? 2. Can we use it to reduce overlaps of mandates in data portals, research collaborations, and research networks? 3. What possibilities exist for mining the relationships that exist implicitly in very large meta-data collections? 4. Is it possible to define an explicit 'scientific data infrastructure' as a complex, multi-relational network database that can become self-maintaining and self-organising in true Web 2.0 and 'social networking' fashion? The paper provides a blueprint for a new approach to massive meta-data collections, and how these can be processed using established analysis techniques to answer the questions posed. It assesses the practical implications of working with standard meta-data definitions (such as ISO 19115, Dublin Core, and EML) in a meta-data mining context, and makes recommendations in respect of extension to support self-organising, semantically oriented 'networks of
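
    Question 3 above can be made concrete with a toy example: records become linked whenever their keyword sets intersect, yielding an implicit relationship network (the records and terms below are invented for illustration).

        # Toy illustration of mining implicit relationships in a metadata
        # collection: link records that share keyword terms.
        from itertools import combinations

        records = {  # hypothetical subject keywords per record
            "r1": {"sea surface temperature", "satellite"},
            "r2": {"satellite", "aerosols"},
            "r3": {"sea surface temperature", "in situ"},
        }

        edges = {}
        for a, b in combinations(records, 2):
            shared = records[a] & records[b]
            if shared:
                edges[(a, b)] = shared

        for pair, terms in edges.items():
            print(pair, "linked via", sorted(terms))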

  11. The importance of metrological metadata in the environmental monitoring

    NASA Astrophysics Data System (ADS)

    Santana, Márcio A. A.; Guimarães, Patrícia L. O.; Almêida, Eugênio S.; Eklin, Tero

    2016-07-01

    The propagation of metrological metadata contributes significantly to improving the data analysis of meteorological observation systems. An overview of data and metadata treatment scenarios in environmental monitoring is presented in this article. We also discuss ways of using calibration results in meteorological measurement systems, as well as the convergence of the methods used in correction treatment and in the estimation of measurement uncertainty in the metrological and meteorological areas.

  12. The notes from nature tool for unlocking biodiversity records from museum records through citizen science

    PubMed Central

    Hill, Andrew; Guralnick, Robert; Smith, Arfon; Sallans, Andrew; Gillespie, Rosemary; Denslow, Michael; Gross, Joyce; Murrell, Zack; Conyers, Tim; Oboyski, Peter; Ball, Joan; Thomer, Andrea; Prys-Jones, Robert; de la Torre, Javier; Kociolek, Patrick; Fortson, Lucy

    2012-01-01

    Legacy data from natural history collections contain invaluable and irreplaceable information about biodiversity in the recent past, providing a baseline for detecting change and forecasting the future of biodiversity on a human-dominated planet. However, these data are often not available in formats that facilitate use and synthesis. New approaches are needed to enhance the rates of digitization and data quality improvement. Notes from Nature provides one such novel approach by asking citizen scientists to help with transcription tasks. The initial web-based prototype of Notes from Nature will soon be widely available and was developed collaboratively by biodiversity scientists, natural history collections staff, and experts in citizen science project development, programming and visualization. This project brings together digital images representing different types of biodiversity records, including ledgers, herbarium sheets and pinned insects from multiple projects and natural history collections. Experts in developing web-based citizen science applications then designed and built a platform for transcribing textual data and metadata from these images. The end product is a fully open source web transcription tool built using the latest web technologies. The platform keeps volunteers engaged by initially explaining the scientific importance of the work via a short orientation, and then providing transcription “missions” of well-defined scope, along with dynamic feedback, interactivity and rewards. Transcribed records, along with record-level and process metadata, are provided back to the institutions. While the tool is being developed with new users in mind, it can serve a broad range of needs from novice to trained museum specialist. Notes from Nature has the potential to speed the rate of biodiversity data being made available to a broad community of users. PMID:22859890

  13. Evolving Metadata in NASA Earth Science Data Systems

    NASA Astrophysics Data System (ADS)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 3500 data products across a range of science disciplines. EOSDIS is currently comprised of 12 discipline-specific data centers that are collocated with centers of science discipline expertise. Metadata is used in all aspects of NASA's Earth Science data lifecycle, from the initial measurement gathering to the accessing of data products. Missions use metadata in their science data products when describing information such as the instrument/sensor, operational plan, and geographic region. Acting as the curator of the data products, data centers employ metadata for preservation, access and manipulation of data. EOSDIS provides a centralized metadata repository called the Earth Observing System (EOS) ClearingHouse (ECHO) for data discovery and access via a service-oriented architecture (SOA) between data centers and science data users. ECHO receives inventory metadata from data centers, which generate metadata files that comply with the ECHO Metadata Model. NASA's Earth Science Data and Information System (ESDIS) Project established a Tiger Team to study and make recommendations regarding the adoption of the international metadata standard ISO 19115 in EOSDIS. The result was a technical report recommending an evolution of NASA data systems towards a consistent application of ISO 19115 and related standards, including the creation of a NASA-specific convention for core ISO 19115 elements. Part of

  14. 2016 ACPA MEETING ABSTRACTS.

    PubMed

    2016-07-01

    The peer-reviewed abstracts presented at the 73rd Annual Meeting of the ACPA are published as submitted by the authors. For financial conflict of interest disclosure, please visit http://meeting.acpa-cpf.org/disclosures.html. PMID:27447885

  15. Abstracts--Citations

    ERIC Educational Resources Information Center

    Occupational Mental Health, 1971

    1971-01-01

    Provides abstracts and citations of journal articles and reports dealing with aspects of mental health. Topics include alcoholism, drug abuse, disadvantaged, mental health programs, rehabilitation, student mental health, and others. (SB)

  16. Automatic Abstraction in Planning

    NASA Technical Reports Server (NTRS)

    Christensen, J.

    1991-01-01

    Traditionally, abstraction in planning has been accomplished by either state abstraction or operator abstraction, neither of which has been fully automatic. We present a new method, predicate relaxation, for automatically performing state abstraction. PABLO, a nonlinear hierarchical planner, implements predicate relaxation. Theoretical, as well as empirical results are presented which demonstrate the potential advantages of using predicate relaxation in planning. We also present a new definition of hierarchical operators that allows us to guarantee a limited form of completeness. This new definition is shown to be, in some ways, more flexible than previous definitions of hierarchical operators. Finally, a Classical Truth Criterion is presented that is proven to be sound and complete for a planning formalism that is general enough to include most classical planning formalisms that are based on the STRIPS assumption.

  17. Introducing Abstract Design

    ERIC Educational Resources Information Center

    Ciscell, Bob

    1973-01-01

    A functional approach involving collage, two-dimensional design, three-dimensional construction, and elements of Cubism, is used to teach abstract design in elementary and junior high school art classes. (DS)

  18. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1991

    1991-01-01

    Presents abstracts of 36 special interest group (SIG) sessions. Highlights include the Chemistry Online Retrieval Experiment; organizing and retrieving images; intelligent information retrieval using natural language processing; interdisciplinarity; libraries as publishers; indexing hypermedia; cognitive aspects of classification; computer-aided…

  19. 1971 Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Journal of Engineering Education, 1971

    1971-01-01

    Included are 112 abstracts listed under headings such as: acoustics, continuing engineering studies, educational research and methods, engineering design, libraries, liberal studies, and materials. Other areas include agricultural, electrical, mechanical, mineral, and ocean engineering. (TS)

  20. Paradigms for Abstracting Systems.

    ERIC Educational Resources Information Center

    Pinto, Maria; Galvez, Carmen

    1999-01-01

    Discussion of abstracting systems focuses on the paradigm concept and identifies and explains four paradigms: communicational, or information theory; physical, including information retrieval; cognitive, including information processing and artificial intelligence; and systemic, including quality management. Emphasizes multidimensionality and…

  1. Abstracts of contributed papers

    SciTech Connect

    Not Available

    1994-08-01

    This volume contains 571 abstracts of contributed papers to be presented during the Twelfth US National Congress of Applied Mechanics. Abstracts are arranged in the order in which they fall in the program -- the main sessions are listed chronologically in the Table of Contents. The Author Index is in alphabetical order and lists each paper number (matching the schedule in the Final Program) with its corresponding page number in the book.

  2. A Metadata Management Framework for Collaborative Review of Science Data Products

    NASA Astrophysics Data System (ADS)

    Hart, A. F.; Cinquini, L.; Mattmann, C. A.; Thompson, D. R.; Wagstaff, K.; Zimdars, P. A.; Jones, D. L.; Lazio, J.; Preston, R. A.

    2012-12-01

    Data volumes generated by modern scientific instruments often preclude archiving the complete observational record. To compensate, science teams have developed a variety of "triage" techniques for identifying data of potential scientific interest and marking it for prioritized processing or permanent storage. This may involve multiple stages of filtering with both automated and manual components operating at different timescales. A promising approach exploits a fast, fully automated first stage followed by a more reliable offline manual review of candidate events. This hybrid approach permits a 24-hour rapid real-time response while also preserving the high accuracy of manual review. To support this type of second-level validation effort, we have developed a metadata-driven framework for the collaborative review of candidate data products. The framework consists of a metadata processing pipeline and a browser-based user interface that together provide a configurable mechanism for reviewing data products via the web, and capturing the full stack of associated metadata in a robust, searchable archive. Our system heavily leverages software from the Apache Object Oriented Data Technology (OODT) project, an open source data integration framework that facilitates the construction of scalable data systems and places a heavy emphasis on the utilization of metadata to coordinate processing activities. OODT provides a suite of core data management components for file management and metadata cataloging that form the foundation for this effort. The system has been deployed at JPL in support of the V-FASTR experiment [1], a software-based radio transient detection experiment that operates commensally at the Very Long Baseline Array (VLBA), and has a science team that is geographically distributed across several countries. Daily review of automatically flagged data is a shared responsibility for the team, and is essential to keep the project within its resource constraints. We
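
    The review flow described above can be sketched as follows (a simplified stand-in, not the actual OODT file manager API): each candidate product is cataloged with its extracted metadata, and a reviewer's disposition is captured as additional, searchable metadata.

        import datetime

        catalog = []  # stand-in for an OODT-style metadata catalog

        def ingest(product_id, metadata):
            catalog.append({"id": product_id, "meta": dict(metadata), "review": None})

        def review(product_id, reviewer, keep):
            for item in catalog:
                if item["id"] == product_id:
                    item["review"] = {"by": reviewer, "keep": keep,
                                      "at": datetime.datetime.utcnow().isoformat()}

        ingest("cand-0001", {"telescope": "VLBA", "snr": 8.2})  # hypothetical fields
        review("cand-0001", "alice", keep=True)
        print([i for i in catalog if i["review"] and i["review"]["keep"]])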

  3. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in an HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, key contacts and responsibilities. The DMP has fields that can be exported to the ISO 19115 schema and to the collection-level catalogue of GeoNetwork. The subset- or file-level metadata catalogues are linked with the collection level through a parent-child relationship defined using UUIDs. A number of tools have been developed that support interactive metadata management, bulk loading of data, and computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data
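
    The parent-child linkage can be illustrated with a minimal sketch (field names and paths are illustrative, not the NCI catalogue schema): file-level records carry the UUID of their collection-level parent, so collection context is resolvable from any file record.

        import uuid

        collection = {"uuid": str(uuid.uuid4()),
                      "title": "Climate and weather model data assets"}

        files = [{"uuid": str(uuid.uuid4()),
                  "parent_uuid": collection["uuid"],
                  "path": f"/data/collection/file_{i}.nc"} for i in range(3)]

        # Resolving a file's collection-level context via the parent UUID:
        by_uuid = {collection["uuid"]: collection}
        for f in files:
            assert by_uuid[f["parent_uuid"]]["title"] == collection["title"]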

  4. Metacognition and abstract reasoning.

    PubMed

    Markovits, Henry; Thompson, Valerie A; Brisson, Janie

    2015-05-01

    The nature of people's meta-representations of deductive reasoning is critical to understanding how people control their own reasoning processes. We conducted two studies to examine whether people have a metacognitive representation of abstract validity and whether familiarity alone acts as a separate metacognitive cue. In Study 1, participants were asked to make a series of (1) abstract conditional inferences, (2) concrete conditional inferences with premises having many potential alternative antecedents and thus specifically conducive to the production of responses consistent with conditional logic, or (3) concrete problems with premises having relatively few potential alternative antecedents. Participants gave confidence ratings after each inference. Results show that confidence ratings were positively correlated with logical performance on abstract problems and concrete problems with many potential alternatives, but not with concrete problems with content less conducive to normative responses. Confidence ratings were higher with few alternatives than for abstract content. Study 2 used a generation of contrary-to-fact alternatives task to improve levels of abstract logical performance. The resulting increase in logical performance was mirrored by increases in mean confidence ratings. Results provide evidence for a metacognitive representation based on logical validity, and show that familiarity acts as a separate metacognitive cue.

  5. Abstracting and indexing guide

    USGS Publications Warehouse

    U.S. Department of the Interior; Office of Water Resources Research

    1974-01-01

    These instructions have been prepared for those who abstract and index scientific and technical documents for the Water Resources Scientific Information Center (WRSIC). With the recent publication growth in all fields, information centers have undertaken the task of keeping the various scientific communities aware of current and past developments. An abstract with carefully selected index terms offers the user of WRSIC services a more rapid means for deciding whether a document is pertinent to his needs and professional interests, thus saving him the time necessary to scan the complete work. These means also provide WRSIC with a document representation or surrogate which is more easily stored and manipulated to produce various services. Authors are asked to accept the responsibility for preparing abstracts of their own papers to facilitate quick evaluation, announcement, and dissemination to the scientific community.

  6. Thyra Abstract Interface Package

    2005-09-01

    Thyra primarily defines a set of abstract C++ class interfaces needed for the development of abstract numerical algorithms (ANAs) such as iterative linear solvers and transient solvers, all the way up to optimization. At the foundation of these interfaces are abstract C++ classes for vectors, vector spaces, linear operators and multi-vectors. Also included in the Thyra package is C++ code for creating concrete vector, vector space, linear operator, and multi-vector subclasses, as well as other utilities to aid in the development of ANAs. Currently, very general and efficient concrete subclass implementations exist for serial and SPMD in-core vectors and multi-vectors. Code also currently exists for testing objects and providing composite objects such as product vectors.
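
    Thyra itself is C++, but the shape of the abstraction is easy to convey in a loose Python analogue: numerical algorithms are written only against abstract vector and operator interfaces, with concrete serial (or SPMD) subclasses supplied separately.

        from abc import ABC, abstractmethod

        class Vector(ABC):
            @abstractmethod
            def axpy(self, alpha: float, x: "Vector") -> None: ...  # self += alpha*x
            @abstractmethod
            def dot(self, x: "Vector") -> float: ...

        class LinearOp(ABC):
            @abstractmethod
            def apply(self, x: Vector) -> Vector: ...

        class SerialVector(Vector):      # one concrete subclass; an SPMD one
            def __init__(self, data):    # would implement the same interface
                self.data = list(data)
            def axpy(self, alpha, x):
                self.data = [a + alpha * b for a, b in zip(self.data, x.data)]
            def dot(self, x):
                return sum(a * b for a, b in zip(self.data, x.data))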

  7. CyberSKA Radio Imaging Metadata and VO Compliance Engineering

    NASA Astrophysics Data System (ADS)

    Anderson, K. R.; Rosolowsky, E.; Dowler, P.

    2013-10-01

    The CyberSKA project has written a specification for the metadata encapsulation of radio astronomy data products pursuant to insertion into the VO-compliant Common Archive Observation Model (CAOM) database hosted by the Canadian Astronomy Data Centre (CADC). This specification accommodates radio FITS Image and UV Visibility data, as well as pure CASA Tables Imaging and Visibility Measurement Sets. To extract and engineer radio metadata, we have authored two software packages: metaData (v0.5.0) and mddb (v1.3). Together, these Python packages can convert all the above stated data format types into concise FITS-like header files, engineer the metadata to conform to the CAOM data model, and then insert these engineered data into the CADC database, which subsequently becomes published through the Canadian Virtual Observatory. The metaData and mddb packages have, for the first time, published ALMA imaging data on VO services. Our ongoing work aims to integrate visibility data from ALMA and the SKA into VO services and to enable user-submitted radio data to move seamlessly into the Virtual Observatory.
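
    The header-engineering step can be pictured with a much-simplified sketch (the field names below are abbreviations, not the real CAOM model): a concise FITS-like header is mapped onto data-model attributes before database insertion.

        def to_caom_like(header):
            """Map a concise FITS-like header onto simplified model fields."""
            return {
                "observation_id": header["OBSID"],
                "telescope": header.get("TELESCOP"),
                "position": (header.get("CRVAL1"), header.get("CRVAL2")),
                "energy_band": "Radio",
            }

        hdr = {"OBSID": "uid-0001", "TELESCOP": "ALMA",   # illustrative values
               "CRVAL1": 201.37, "CRVAL2": -43.02}
        print(to_caom_like(hdr))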

  8. Mercury- Distributed Metadata Management, Data Discovery and Access System

    NASA Astrophysics Data System (ADS)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: Filtering and dynamic sorting of search results, book-markable search results, save, retrieve, and modify search criteria.
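
    The harvest-then-index pattern at Mercury's core can be reduced to a few lines (fields and records are illustrative): metadata is pulled from distributed providers into one central index that supports fielded search.

        # Sketch: metadata harvested from distributed providers into one
        # searchable central index.
        index = []

        def harvest(provider, records):
            for rec in records:
                index.append({"provider": provider, **rec})

        def fielded_search(**criteria):
            return [r for r in index
                    if all(r.get(k) == v for k, v in criteria.items())]

        harvest("ORNL DAAC", [{"title": "NPP data", "keyword": "carbon"}])
        harvest("CDIAC", [{"title": "CO2 record", "keyword": "carbon"}])
        print(fielded_search(keyword="carbon"))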

  9. Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Engineering Education, 1976

    1976-01-01

    Presents the abstracts of 158 papers presented at the American Society for Engineering Education's annual conference at Knoxville, Tennessee, June 14-17, 1976. Included are engineering topics covering education, aerospace, agriculture, biomedicine, chemistry, computers, electricity, acoustics, environment, mechanics, and women. (SL)

  10. Seismic Consequence Abstraction

    SciTech Connect

    M. Gross

    2004-10-25

    The primary purpose of this model report is to develop abstractions for the response of engineered barrier system (EBS) components to seismic hazards at a geologic repository at Yucca Mountain, Nevada, and to define the methodology for using these abstractions in a seismic scenario class for the Total System Performance Assessment - License Application (TSPA-LA). A secondary purpose of this model report is to provide information for criticality studies related to seismic hazards. The seismic hazards addressed herein are vibratory ground motion, fault displacement, and rockfall due to ground motion. The EBS components are the drip shield, the waste package, and the fuel cladding. The requirements for development of the abstractions and the associated algorithms for the seismic scenario class are defined in ''Technical Work Plan For: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 171520]). The development of these abstractions will provide a more complete representation of flow into and transport from the EBS under disruptive events. The results from this development will also address portions of integrated subissue ENG2, Mechanical Disruption of Engineered Barriers, including the acceptance criteria for this subissue defined in Section 2.2.1.3.2.3 of the ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]).

  11. Abstraction through Game Play

    ERIC Educational Resources Information Center

    Avraamidou, Antri; Monaghan, John; Walker, Aisha

    2012-01-01

    This paper examines the computer game play of an 11-year-old boy. In the course of building a virtual house he developed and used, without assistance, an artefact and an accompanying strategy to ensure that his house was symmetric. We argue that the creation and use of this artefact-strategy is a mathematical abstraction. The discussion…

  12. Making the Abstract Concrete

    ERIC Educational Resources Information Center

    Potter, Lee Ann

    2005-01-01

    President Ronald Reagan nominated a woman to serve on the United States Supreme Court. He did so through a single-page form letter, completed in part by hand and in part by typewriter, announcing Sandra Day O'Connor as his nominee. While the document serves as evidence of a historic event, it is also a tangible illustration of abstract concepts…

  13. Annual Conference Abstracts

    ERIC Educational Resources Information Center

    Journal of Engineering Education, 1972

    1972-01-01

    Includes abstracts of papers presented at the 80th Annual Conference of the American Society for Engineering Education. The broad areas include aerospace, affiliate and associate member council, agricultural engineering, biomedical engineering, continuing engineering studies, chemical engineering, civil engineering, computers, cooperative…

  14. Computers in Abstract Algebra

    ERIC Educational Resources Information Center

    Nwabueze, Kenneth K.

    2004-01-01

    The current emphasis on flexible modes of mathematics delivery involving new information and communication technology (ICT) at the university level is perhaps a reaction to the recent change in the objectives of education. Abstract algebra seems to be one area of mathematics virtually crying out for computer instructional support because of the…

  15. 2002 NASPSA Conference Abstracts.

    ERIC Educational Resources Information Center

    Journal of Sport & Exercise Psychology, 2002

    2002-01-01

    Contains abstracts from the 2002 conference of the North American Society for the Psychology of Sport and Physical Activity. The publication is divided into three sections: the preconference workshop, "Effective Teaching Methods in the Classroom;" symposia (motor development, motor learning and control, and sport psychology); and free…

  16. Abstracts of SIG Sessions.

    ERIC Educational Resources Information Center

    Proceedings of the ASIS Annual Meeting, 1995

    1995-01-01

    Presents abstracts of 15 special interest group (SIG) sessions. Topics include navigation and information utilization in the Internet, natural language processing, automatic indexing, image indexing, classification, users' models of database searching, online public access catalogs, education for information professions, information services,…

  17. Abstraction and art.

    PubMed Central

    Gortais, Bernard

    2003-01-01

    In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music. PMID:12903659

  18. Leadership Abstracts, 2002.

    ERIC Educational Resources Information Center

    Wilson, Cynthia, Ed.; Milliron, Mark David, Ed.

    2002-01-01

    This 2002 volume of Leadership Abstracts contains issue numbers 1-12. Articles include: (1) "Skills Certification and Workforce Development: Partnering with Industry and Ourselves," by Jeffrey A. Cantor; (2) "Starting Again: The Brookhaven Success College," by Alice W. Villadsen; (3) "From Digital Divide to Digital Democracy," by Gerardo E. de los…

  19. Water reuse. [Lead abstract

    SciTech Connect

    Middlebrooks, E.J.

    1982-01-01

    Separate abstracts were prepared for the 31 chapters of this book which deals with all aspects of wastewater reuse. Design data, case histories, performance data, monitoring information, health information, social implications, legal and organizational structures, and background information needed to analyze the desirability of water reuse are presented. (KRM)

  20. Abstract Film and Beyond.

    ERIC Educational Resources Information Center

    Le Grice, Malcolm

    A theoretical and historical account of the main preoccupations of makers of abstract films is presented in this book. The book's scope includes discussion of nonrepresentational forms as well as examination of experiments in the manipulation of time in films. The ten chapters discuss the following topics: art and cinematography, the first…

  1. Java-Library for the Access, Storage and Editing of Calibration Metadata of Optical Sensors

    NASA Astrophysics Data System (ADS)

    Firlej, M.; Kresse, W.

    2016-06-01

    The standardization of the calibration of optical sensors in photogrammetry and remote sensing has been discussed for more than a decade. Projects of the German DGPF and the European EuroSDR led to the abstract International Technical Specification ISO/TS 19159-1:2014 "Calibration and validation of remote sensing imagery sensors and data - Part 1: Optical sensors". This article presents the first software interface for read- and write-access to all metadata elements standardized in the ISO/TS 19159-1. This interface is based on an XML schema that was automatically derived by ShapeChange from the UML model of the Specification. The software interface serves two cases. First, the more than 300 standardized metadata elements are stored individually according to the XML schema. Second, camera manufacturers use many administrative data that are not part of the ISO/TS 19159-1. The new software interface provides a mechanism for input, storage, editing, and output of both types of data. Finally, an output channel towards a conventional calibration protocol is provided. The interface is written in Java. The article also addresses observations made when analysing the ISO/TS 19159-1 and compiles a list of proposals for maturing the document, i.e. for an updated version of the Specification.
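
    The two storage cases can be sketched as follows (the actual library is Java; class and field names here are invented for illustration): standardized ISO/TS 19159-1 elements are kept apart from vendor-specific administrative data while sharing one record.

        class CalibrationRecord:
            def __init__(self):
                self.standard = {}  # the ~300 elements governed by the XML schema
                self.vendor = {}    # administrative data outside the Specification

            def set_standard(self, element, value):
                self.standard[element] = value

            def set_vendor(self, key, value):
                self.vendor[key] = value

        rec = CalibrationRecord()
        rec.set_standard("focalLength", 120.0)        # standardized element
        rec.set_vendor("serviceTechnician", "K. M.")  # manufacturer bookkeeping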

  2. MATCH: Metadata Access Tool for Climate and Health Datasets

    DOE Data Explorer

    MATCH is a searchable clearinghouse of publicly available Federal metadata (i.e. data about data) and links to datasets. Most metadata on MATCH pertain to geospatial data sets ranging from local to global scales. The goals of MATCH are to: 1) Provide an easily accessible clearinghouse of relevant Federal metadata on climate and health that will increase efficiency in solving research problems; 2) Promote application of research and information to understand, mitigate, and adapt to the health effects of climate change; 3) Facilitate multidirectional communication among interested stakeholders to inform and shape Federal research directions; 4) Encourage collaboration among traditional and non-traditional partners in development of new initiatives to address emerging climate and health issues. [copied from http://match.globalchange.gov/geoportal/catalog/content/about.page]

  3. Principles of metadata organization at the ENCODE data coordination center.

    PubMed

    Hong, Eurie L; Sloan, Cricket A; Chan, Esther T; Davidson, Jean M; Malladi, Venkat S; Strattan, J Seth; Hitz, Benjamin C; Gabdank, Idan; Narayanan, Aditi K; Ho, Marcus; Lee, Brian T; Rowe, Laurence D; Dreszer, Timothy R; Roe, Greg R; Podduturi, Nikhil R; Tanaka, Forrest; Hilton, Jason A; Cherry, J Michael

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center (DCC) is responsible for organizing, describing and providing access to the diverse data generated by the ENCODE project. The description of these data, known as metadata, includes the biological sample used as input, the protocols and assays performed on these samples, the data files generated from the results and the computational methods used to analyze the data. Here, we outline the principles and philosophy used to define the ENCODE metadata in order to create a metadata standard that can be applied to diverse assays and multiple genomic projects. In addition, we present how the data are validated and used by the ENCODE DCC in creating the ENCODE Portal (https://www.encodeproject.org/). Database URL: www.encodeproject.org.

  4. The Challenges in Metadata Management: 20+ Years of ESO Data

    NASA Astrophysics Data System (ADS)

    Vera, I.; Da Rocha, C.; Dobrzycki, A.; Micol, A.; Vuong, M.

    2015-09-01

    The European Southern Observatory Science Archive Facility has been in operation for more than 20 years. It contains data produced by ESO telescopes as well as the metadata needed for characterizing and distributing those data. This metadata is used to build the different archive services provided by the Archive. Over these years, services have been added, modified or even decommissioned, creating a cocktail of new, evolved and legacy data systems. The challenge for the Archive is to harmonize the differences of those data systems to provide the community with a homogeneous experience when using ESO data. In this paper, we present ESO's experience in three particularly challenging areas. The first discussion is dedicated to the problem of metadata quality over time; the second discusses how to integrate obsolete data models into the current services; and finally we present the challenges of ever-growing databases. We describe our experience dealing with those issues and the solutions adopted to mitigate them.

  5. Data warehousing, metadata, and the World Wide Web

    SciTech Connect

    Yow, T.G.; Smith, A.W.; Daugherty, P.F.

    1997-04-16

    The connection between data warehousing and the metadata used to catalog and locate warehouse data is obvious, but what is the connection between data warehousing, metadata, and the World Wide Web (WWW)? Specifically, the WWW can be used to allow users to search metadata (data about the data) and retrieve data from a warehouse database. In addition, the Internet/Intranet can be used to manage the metadata in archive databases and to streamline the database administration functions of a large archive center. The Oak Ridge National Laboratory's (ORNL's) Distributed Active Archive Center (DAAC) is a data archive and distribution center for the National Aeronautics and Space Administration's (NASA's) Earth Observing System Data and Information System (EOSDIS); the ORNL DAAC provides access to tabular and imagery datasets used in ecological and environmental research. To support this effort, we have taken advantage of the rather unique and user-friendly features of the WWW to (1) allow users to search for and download the data we archive and (2) provide DAAC developers with effective metadata and data management tools. In particular, the ORNL DAAC has developed the Biogeochemical Information Ordering Management Environment (BIOME), a WWW search-and-order system, as well as a WWW-based database administrator's (DBA's) tool suite designed to assist the site's DBA in the management of archive metadata and databases and several other DBA functions that are essential to site management. This paper is a case study of how the ORNL DAAC uses the WWW to both manage data and allow access to its data warehouse.

  6. An Interactive, Web-Based Approach to Metadata Authoring

    NASA Technical Reports Server (NTRS)

    Pollack, Janine; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    NASA's Global Change Master Directory (GCMD) serves a growing number of users by assisting the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 8000 data set descriptions in Directory Interchange Format (DIF) and 200 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information, thus allowing researchers to discover data pertaining to a particular geographic location, as well as subject of interest. The GCMD strives to be the preeminent data locator for world-wide directory level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are not currently attracting widespread usage. With usage being the prime indicator of utility, it has become apparent that current tools must be improved. As a result, the GCMD has released a new suite of web-based authoring tools that enable a user to create new data and service entries, as well as modify existing data entries. With these tools, a more interactive approach to metadata authoring is taken, as they feature a visual "checklist" of data/service fields that automatically update when a field is completed. In this way, the user can quickly gauge which of the required and optional fields have not been populated. With the release of these tools, the Earth science community will be further assisted in efficiently creating quality data and services metadata. Keywords: metadata, Earth science, metadata authoring tools
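
    The checklist mechanic is straightforward to sketch (the field names below are abbreviated DIF-style examples, not the full GCMD field list): given a partially completed entry, report which required and optional fields remain unpopulated.

        # Sketch of the "visual checklist": report unpopulated fields so a
        # user can gauge completion at a glance.
        REQUIRED = ("Entry_Title", "Parameters", "Summary")
        OPTIONAL = ("Spatial_Coverage", "Temporal_Coverage")

        def checklist(entry):
            def missing(fields):
                return [f for f in fields if not entry.get(f)]
            return {"required_missing": missing(REQUIRED),
                    "optional_missing": missing(OPTIONAL)}

        print(checklist({"Entry_Title": "AVHRR SST", "Summary": ""}))
        # required_missing: ['Parameters', 'Summary']; optional_missing: both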

  7. An Abstract Data Interface

    NASA Astrophysics Data System (ADS)

    Allan, D. J.

    The Abstract Data Interface (ADI) is a system within which both abstract data models and their mappings onto file formats can be defined. The data model system is object-oriented and closely follows the Common Lisp Object System (CLOS) object model. Programming interfaces in both C and Fortran are supplied, and are designed to be simple enough for use by users with limited software skills. The prototype system supports access to those FITS formats most commonly used in the X-ray community, as well as the Starlink NDF data format. New interfaces can be rapidly added to the system; these may communicate directly with the file system, other ADI objects or elsewhere (e.g., a network connection).
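
    The separation between an abstract data model and its file-format mappings can be sketched with a simple registry (the format names come from the abstract; the reader internals are invented): new interfaces are added by registering another reader.

        readers = {}

        def register(fmt):
            def wrap(fn):
                readers[fmt] = fn
                return fn
            return wrap

        @register("FITS")
        def read_fits(path):  # a real mapping would parse the FITS file here
            return {"format": "FITS", "path": path}

        @register("NDF")
        def read_ndf(path):   # Starlink NDF mapping
            return {"format": "NDF", "path": path}

        def open_dataset(path, fmt):
            return readers[fmt](path)  # dispatch on the mapped format

        print(open_dataset("image.fit", "FITS"))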

  8. Meeting Abstracts - Nexus 2015.

    PubMed

    2015-10-01

    The AMCP Abstracts program provides a forum through which authors can share their insights and outcomes of advanced managed care practice through publication in AMCP's Journal of Managed Care Specialty Pharmacy (JMCP). Of the abstracts accepted for publication, most are presented as posters, so interested AMCP meeting attendees can review findings and query authors. The main poster presentation is Tuesday, October 27, 2015; posters are also displayed on Wednesday, October 28, 2015. The AMCP Nexus 2015 in Orlando, Florida, is expected to attract more than 3,500 managed care pharmacists and other health care professionals who manage and evaluate drug therapies, develop and manage networks, and work with medical managers and information specialists to improve the care of all individuals enrolled in managed care programs.  Abstracts were submitted in the following categories:  Research Report: describe completed original research on managed care pharmacy services or health care interventions. Examples include (but are not limited to) observational studies using administrative claims, reports of the impact of unique benefit design strategies, and analyses of the effects of innovative administrative or clinical programs.Economic Model: describe models that predict the effect of various benefit design or clinical decisions on a population. For example, an economic model could be used to predict the budget impact of a new pharmaceutical product on a health care system. Solving Problems in Managed Care: describe the specific steps taken to introduce a needed change, develop and implement a new system or program, plan and organize an administrative function, or solve other types of problems in managed care settings. These abstracts describe a course of events; they do not test a hypothesis, but they may include data.

  9. Generalized Abstract Symbolic Summaries

    NASA Technical Reports Server (NTRS)

    Person, Suzette; Dwyer, Matthew B.

    2009-01-01

    Current techniques for validating and verifying program changes often consider the entire program, even for small changes, leading to enormous V&V costs over a program's lifetime. This is due, in large part, to the use of syntactic program techniques, which are necessarily imprecise. Building on recent advances in symbolic execution of heap-manipulating programs, in this paper, we develop techniques for performing abstract semantic differencing of program behaviors that offer the potential for improved precision.

  10. A novel framework for assessing metadata quality in epidemiological and public health research settings

    PubMed Central

    McMahon, Christiana; Denaxas, Spiros

    2016-01-01

    Metadata are critical in epidemiological and public health research. However, a lack of biomedical metadata quality frameworks and limited awareness of the implications of poor quality metadata renders data analyses problematic. In this study, we created and evaluated a novel framework to assess metadata quality of epidemiological and public health research datasets. We performed a literature review and surveyed stakeholders to enhance our understanding of biomedical metadata quality assessment. The review identified 11 studies and nine quality dimensions; none of which were specifically aimed at biomedical metadata. 96 individuals completed the survey; of those who submitted data, most only assessed metadata quality sometimes, and eight did not at all. Our framework has four sections: a) general information; b) tools and technologies; c) usability; and d) management and curation. We evaluated the framework using three test cases and sought expert feedback. The framework can assess biomedical metadata quality systematically and robustly. PMID:27570670
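
    A minimal scoring sketch over the framework's four sections follows (the real criteria and any weighting are defined in the paper, not reproduced here).

        SECTIONS = ("general information", "tools and technologies",
                    "usability", "management and curation")

        def section_scores(answers):
            """answers maps section -> list of booleans (criterion met or not)."""
            return {s: sum(answers[s]) / len(answers[s]) for s in SECTIONS}

        demo = {s: [True, True, False] for s in SECTIONS}
        print(section_scores(demo))  # 2/3 of criteria met in each section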

  11. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J. Prouty

    2006-07-14

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment (TSPA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers advective transport and diffusive transport
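
    Stripped of all the validated detail, the flux-splitting structure reduces to repeated application of one split (the numbers below are arbitrary stand-ins, not TSPA parameters or the validated algorithms).

        def split(influx, fraction_through):
            """Return (flux through breaches, flux diverted)."""
            through = influx * fraction_through
            return through, influx - through

        seepage = 1.0                                  # arbitrary units
        through_ds, diverted_ds = split(seepage, 0.2)  # hypothetical drip shield breach fraction
        through_wp, diverted_wp = split(through_ds, 0.1)
        print(through_wp, diverted_ds + diverted_wp)   # flux reaching the EBS transport model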

  12. Metadata squared: enhancing its usability for volunteered geographic information and the GeoWeb

    USGS Publications Warehouse

    Poore, Barbara S.; Wolf, Eric B.; Sui, Daniel Z.; Elwood, Sarah; Goodchild, Michael F.

    2013-01-01

    The Internet has brought many changes to the way geographic information is created and shared. One aspect that has not changed is metadata. Static spatial data quality descriptions were standardized in the mid-1990s and cannot accommodate the current climate of data creation where nonexperts are using mobile phones and other location-based devices on a continuous basis to contribute data to Internet mapping platforms. The usability of standard geospatial metadata is being questioned by academics and neogeographers alike. This chapter analyzes current discussions of metadata to demonstrate how the media shift that is occurring has affected requirements for metadata. Two case studies of metadata use are presented—online sharing of environmental information through a regional spatial data infrastructure in the early 2000s, and new types of metadata that are being used today in OpenStreetMap, a map of the world created entirely by volunteers. Changes in metadata requirements are examined for usability, the ease with which metadata supports coproduction of data by communities of users, how metadata enhances findability, and how the relationship between metadata and data has changed. We argue that traditional metadata associated with spatial data infrastructures is inadequate and suggest several research avenues to make this type of metadata more interactive and effective in the GeoWeb.

  13. Turning Data into Information: Assessing and Reporting GIS Metadata Integrity Using Integrated Computing Technologies

    ERIC Educational Resources Information Center

    Mulrooney, Timothy J.

    2009-01-01

    A Geographic Information System (GIS) serves as the tangible and intangible means by which spatially related phenomena can be created, analyzed and rendered. GIS metadata serves as the formal framework to catalog information about a GIS data set. Metadata is independent of the encoded spatial and attribute information. GIS metadata is a subset of…

  14. A Model for the Creation of Human-Generated Metadata within Communities

    ERIC Educational Resources Information Center

    Brasher, Andrew; McAndrew, Patrick

    2005-01-01

    This paper considers situations for which detailed metadata descriptions of learning resources are necessary, and focuses on human generation of such metadata. It describes a model which facilitates human production of good quality metadata by the development and use of structured vocabularies. Using examples, this model is applied to single and…

  15. Map Metadata: Essential Elements for Search and Storage

    ERIC Educational Resources Information Center

    Beamer, Ashley

    2009-01-01

    Purpose: The purpose of this paper is to develop an understanding of the issues surrounding the cataloguing of maps in archives and libraries. An investigation into appropriate metadata formats, such as MARC21, EAD and Dublin Core with RDF, shows how particular map data can be stored. Mathematical map elements, specifically co-ordinates, are…

  16. Big Earth Data Initiative: Metadata Improvement: Case Studies

    NASA Technical Reports Server (NTRS)

    Kozimor, John; Habermann, Ted; Farley, John

    2016-01-01

    Big Earth Data Initiative (BEDI) The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management and delivery of U.S. Government's civil Earth observation data to improve discovery, access use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata helps address all three goals.

  17. Scalable PGAS Metadata Management on Extreme Scale Systems

    SciTech Connect

    Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP

    2013-05-16

    Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
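
    The space/time tradeoff has a simple extreme case worth keeping in mind: for a regular block distribution, the metadata needed to map a global array index to its owning process can be a constant-size closed-form formula rather than a per-element directory. A sketch:

        def owner(global_index, n_elems, n_procs):
            """Map a global index to its owning process under a block distribution."""
            block = -(-n_elems // n_procs)  # ceil division: elements per process
            return global_index // block

        # Constant-size metadata (two integers) replaces a linear-size directory.
        assert owner(1023, 1024, 4) == 3  # last element lives on the last process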

  18. Syndicating Rich Bibliographic Metadata Using MODS and RSS

    ERIC Educational Resources Information Center

    Ashton, Andrew

    2008-01-01

    Many libraries use RSS to syndicate information about their collections to users. A survey of 65 academic libraries revealed their most common use for RSS is to disseminate information about library holdings, such as lists of new acquisitions. Even though typical RSS feeds are ill suited to the task of carrying rich bibliographic metadata, great…
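
    The embedding the article describes can be shown in miniature (one MODS field only; the item values are invented): a namespaced MODS element rides inside an ordinary RSS item.

        import xml.etree.ElementTree as ET

        MODS = "http://www.loc.gov/mods/v3"
        ET.register_namespace("mods", MODS)

        item = ET.Element("item")
        ET.SubElement(item, "title").text = "New acquisitions, May"
        mods = ET.SubElement(item, f"{{{MODS}}}mods")
        name = ET.SubElement(mods, f"{{{MODS}}}name")
        ET.SubElement(name, f"{{{MODS}}}namePart").text = "Ashton, Andrew"

        print(ET.tostring(item, encoding="unicode"))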

  19. Training and Best Practice Guidelines: Implications for Metadata Creation

    ERIC Educational Resources Information Center

    Chuttur, Mohammad Y.

    2012-01-01

    In response to the rapid development of digital libraries over the past decade, researchers have focused on the use of metadata as an effective means to support resource discovery within online repositories. With the increasing involvement of libraries in digitization projects and the growing number of institutional repositories, it is anticipated…

  20. Metadata Harvesting in Regional Digital Libraries in the PIONIER Network

    ERIC Educational Resources Information Center

    Mazurek, Cezary; Stroinski, Maciej; Werla, Marcin; Weglarz, Jan

    2006-01-01

    Purpose: The paper aims to present the concept of the functionality of metadata harvesting for regional digital libraries, based on the OAI-PMH protocol. This functionality is a part of regional digital libraries platform created in Poland. The platform was required to reach one of main objectives of the Polish PIONIER Programme--to enrich the…
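
    The OAI-PMH ListRecords loop that underlies this functionality is compact enough to sketch in full (the endpoint URL below is a placeholder; error handling is omitted).

        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        OAI = "{http://www.openarchives.org/OAI/2.0/}"

        def harvest(base_url, metadata_prefix="oai_dc"):
            params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
            while True:
                url = base_url + "?" + urllib.parse.urlencode(params)
                with urllib.request.urlopen(url) as resp:
                    tree = ET.fromstring(resp.read())
                yield from tree.iter(OAI + "record")
                # Page through the result set via the resumption token.
                token = tree.find(f".//{OAI}resumptionToken")
                if token is None or not (token.text or "").strip():
                    break
                params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

        # for rec in harvest("http://example.org/oai-pmh"):  # placeholder endpoint
        #     process(rec)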

  1. Metadata and Annotations for Multi-scale Electrophysiological Data

    PubMed Central

    Bower, Mark R.; Stead, Matt; Brinkmann, Benjamin H.; Dufendach, Kevin; Worrell, Gregory A.

    2010-01-01

    The increasing use of high-frequency (kHz), long-duration (days) intracranial monitoring from multiple electrodes during pre-surgical evaluation for epilepsy produces large amounts of data that are challenging to store and maintain. Descriptive metadata and clinical annotations of these large data sets also pose challenges to simple, often manual, methods of data analysis. The problems of reliable communication of metadata and annotations between programs, the maintenance of the meanings within that information over long time periods, and the flexibility to re-sort data for analysis place differing demands on data structures and algorithms. Solutions to these individual problem domains (communication, storage and analysis) can be configured to provide easy translation and clarity across the domains. The Multi-scale Annotation Format (MAF) provides an integrated metadata and annotation environment that maximizes code reuse, minimizes error probability and encourages future changes by reducing the tendency to over-fit information technology solutions to current problems. An example of a graphical utility for generating and evaluating metadata and annotations for “big data” files is presented. PMID:19964266

  2. Automatic Extraction of Metadata from Scientific Publications for CRIS Systems

    ERIC Educational Resources Information Center

    Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan

    2011-01-01

    Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…

  3. MMI's Metadata and Vocabulary Solutions: 10 Years and Growing

    NASA Astrophysics Data System (ADS)

    Graybeal, J.; Gayanilo, F.; Rueda-Velasquez, C. A.

    2014-12-01

    The Marine Metadata Interoperability project (http://marinemetadata.org) held its public opening at AGU's 2004 Fall Meeting. For 10 years since that debut, the MMI guidance and vocabulary sites have served over 100,000 visitors, with 525 community members and continuous Steering Committee leadership. Originally funded by the National Science Foundation, over the years multiple organizations have supported the MMI mission: "Our goal is to support collaborative research in the marine science domain, by simplifying the incredibly complex world of metadata into specific, straightforward guidance. MMI encourages scientists and data managers at all levels to apply good metadata practices from the start of a project, by providing the best guidance and resources for data management, and developing advanced metadata tools and services needed by the community." Now hosted by the Harte Research Institute at Texas A&M University at Corpus Christi, MMI continues to provide guidance and services to the community, and is planning for marine science and technology needs for the next 10 years. In this presentation we will highlight our major accomplishments, describe our recent achievements and imminent goals, and propose a vision for improving marine data interoperability for the next 10 years, including Ontology Registry and Repository (http://mmisw.org/orr) advancements and applications (http://mmisw.org/cfsn).

  4. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    NASA Astrophysics Data System (ADS)

    van Gemmeren, P.; Cranshaw, J.; Malon, D.; Vaniachine, A.

    2015-12-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework's state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires further enhancement of metadata infrastructure in order to ensure semantic coherence and robust bookkeeping. This paper describes the evolution of ATLAS metadata infrastructure for Run 2 and beyond, including the transition to dual-use tools—tools that can operate inside or outside the ATLAS control framework—and the implications thereof. It further examines how the design of this infrastructure is changing to accommodate the requirements of future frameworks and emerging event processing architectures.

  5. A federated semantic metadata registry framework for enabling interoperability across clinical research and care domains.

    PubMed

    Sinaci, A Anil; Laleci Erturkmen, Gokce B

    2013-10-01

    In order to enable secondary use of Electronic Health Records (EHRs) by bridging the interoperability gap between clinical care and research domains, this paper introduces a unified methodology and supporting framework that bring together the power of metadata registries (MDR) and semantic web technologies. We introduce a federated semantic metadata registry framework by extending the ISO/IEC 11179 standard, and enable integration of data element registries through Linked Open Data (LOD) principles, whereby each Common Data Element (CDE) can be uniquely referenced, queried, and processed to enable syntactic and semantic interoperability. Each CDE and its components are maintained as LOD resources enabling semantic links with other CDEs, terminology systems, and implementation-dependent content models, hence facilitating semantic search, more effective reuse, and semantic interoperability across different application domains. There are several important efforts addressing semantic interoperability in the healthcare domain, such as the IHE DEX profile proposal, CDISC SHARE, and CDISC2RDF. Our architecture complements these by providing a framework to interlink existing data element registries and repositories, multiplying their potential for semantic interoperability to a greater extent. The open source implementation of the federated semantic MDR framework presented in this paper is the core of the semantic interoperability layer of the SALUS project, which enables the execution of post-marketing safety analysis studies on top of existing EHR systems. PMID:23751263
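
    The core mechanism, publishing each CDE as a dereferenceable Linked Open Data resource with semantic links into terminology systems, can be sketched with the Python rdflib library (the registry namespace and element below are invented for illustration; only the SNOMED CT concept for systolic blood pressure is a real code):

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF, RDFS, SKOS

        MDR = Namespace("http://example.org/mdr/")    # hypothetical registry base URI
        SCT = Namespace("http://purl.bioontology.org/ontology/SNOMEDCT/")

        g = Graph()
        cde = MDR["cde/systolic-blood-pressure"]      # each CDE is a dereferenceable LOD resource
        g.add((cde, RDF.type, MDR.DataElement))
        g.add((cde, RDFS.label, Literal("Systolic blood pressure", lang="en")))
        g.add((cde, SKOS.exactMatch, SCT["271649006"]))   # semantic link to a terminology system
        print(g.serialize(format="turtle"))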

  7. Representation, Visualization, and Metadata for Surface Deformation ESDRs derived from GPS Time Series

    NASA Astrophysics Data System (ADS)

    Squibb, M. B.; Bock, Y.; Webb, F.; Moore, A. W.; Fang, P.; Kedar, S.; Owen, S. E.; Liu, Z.; Prawirodirdjo, L. M.

    2012-12-01

    As part of a NASA MEaSUREs project and its contribution to EarthScope, we are producing a combined 24-hour position time series for more than 1000 stations in Western North America based on independent analyses of continuous GPS data at JPL (using GIPSY software) and at SIO (using GAMIT software), using the SOPAC archive as a common source of metadata. Included are all EarthScope/PBO stations, stations from other networks still active (SCIGN, BARD, and PANGA), and pre-PBO-era data, some already two decades old. The time series are appended weekly, and the entire data set is filtered once a week using a modified principal component analysis (PCA) algorithm. Both the unfiltered and filtered data undergo a time series analysis with QOCA software. All relevant time series are available through NASA's GPS Explorer data portal and its interactive Java-based time series utility. We present an overview of our workflow for generating SESES ESDRs, quality checking, and uncertainty estimates for positions and time series model parameters such as velocities and offsets. We will also present how the data record uncertainties are stored in metadata and presented through GPS Explorer. The importance of understanding these uncertainties will be demonstrated through examples of transient deformation signals, that is, time series that deviate from linear behavior due to coseismic and postseismic deformation, slow slip events, volcanic events, and strain anomalies.

  8. EBS Radionuclide Transport Abstraction

    SciTech Connect

    J.D. Schreiber

    2005-08-25

    The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in ''Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration'' (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment for the license application (TSPA-LA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA-LA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers

  9. A Solr Powered Architecture for Scientific Metadata Search Applications

    NASA Astrophysics Data System (ADS)

    Reed, S. A.; Billingsley, B. W.; Harper, D.; Kovarik, J.; Brandt, M.

    2014-12-01

    Discovering and obtaining resources for scientific research is increasingly difficult, but Open Source tools have been implemented to provide inexpensive solutions for scientific metadata search applications. Common practices used in modern web applications can improve the quality of scientific data as well as increase availability to a wider audience while reducing costs of maintenance. Motivated to improve discovery and access of scientific metadata hosted at NSIDC and by the need to aggregate many areas of arctic research, the National Snow and Ice Data Center (NSIDC) and the Advanced Cooperative Arctic Data and Information Service (ACADIS) contributed to a shared codebase used by the NSIDC Search and Arctic Data Explorer (ADE) portals. We implemented the NSIDC Search and ADE to improve search and discovery of scientific metadata in many areas of cryospheric research. All parts of the applications are free and open for reuse in other applications and portals. We have applied common techniques that are widely used by search applications around the web, with the goal of providing quick and easy access to scientific metadata. We adopted keyword search auto-suggest, which provides a dynamic list of terms and phrases that closely match characters as the user types. Facet queries are another technique we have implemented, filtering results based on aspects of the data such as the instrument used or the temporal duration of the data set. Service APIs provide a layer between the interface and the database and are shared between the NSIDC Search and ACADIS ADE interfaces. We also implemented a shared data store between both portals using Apache Solr (an Open Source search engine platform that stores and indexes XML documents) and leverage many powerful features including geospatial search and faceting. This presentation will discuss the application architecture as well as tools and techniques used to enhance search and discovery of scientific metadata.
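
    A hedged sketch of the kind of faceted query such a shared Solr data store supports (the core name and field names are invented for illustration, not NSIDC's actual schema):

        import requests

        # Hypothetical Solr core and field names.
        params = {
            "q": "sea ice",                        # keyword query
            "facet": "true",                       # ask Solr for facet counts
            "facet.field": ["instrument", "data_format"],
            "fq": "temporal_start:[2010-01-01T00:00:00Z TO *]",   # filter query
            "wt": "json",
            "rows": 10,
        }
        resp = requests.get("http://localhost:8983/solr/metadata/select", params=params)
        body = resp.json()
        docs = body["response"]["docs"]                 # matching metadata records
        facets = body["facet_counts"]["facet_fields"]   # counts that drive filter widgets

    The facet counts returned alongside the documents are what drive the filter checkboxes in a portal UI, so one round trip serves both the result list and the refinement options.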

  10. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    ERIC Educational Resources Information Center

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  11. A LARI Experience (Abstract)

    NASA Astrophysics Data System (ADS)

    Cook, M.

    2015-12-01

    (Abstract only) In 2012, Lowell Observatory launched the Lowell Amateur Research Initiative (LARI) to formally involve amateur astronomers in scientific research, pairing them with professional astronomers to assist with astronomical research projects. One of the LARI projects is the BVRI photometric monitoring of Young Stellar Objects (YSOs), wherein amateurs obtain observations to search for new outburst events and characterize the colour evolution of previously identified outbursters. A summary of the scientific and organizational aspects of this LARI project is presented, including its goals and science motivation, the process for getting involved with the project, a description of the team members, their equipment and methods of collaboration, an overview of the programme stars, preliminary findings, and lessons learned.

  12. The National Digital Information Infrastructure Preservation Program; Metadata Principles and Practicalities; Challenges for Service Providers when Importing Metadata in Digital Libraries; Integrated and Aggregated Reference Services.

    ERIC Educational Resources Information Center

    Friedlander, Amy; Duval, Erik; Hodgins, Wayne; Sutton, Stuart; Weibel, Stuart L.; McClelland, Marilyn; McArthur, David; Giersch, Sarah; Geisler, Gary; Hodgkin, Adam

    2002-01-01

    Includes 6 articles that discuss the National Digital Information Infrastructure Preservation Program at the Library of Congress; metadata in digital libraries; integrated reference services on the Web. (LRW)

  13. Writing a successful research abstract.

    PubMed

    Bliss, Donna Z

    2012-01-01

    Writing and submitting a research abstract provides timely dissemination of the findings of a study and offers peer input for the subsequent development of a quality manuscript. Acceptance of abstracts is competitive. Understanding the expected content of an abstract, the abstract review process and tips for skillful writing will improve the chance of acceptance.

  14. Stellar Presentations (Abstract)

    NASA Astrophysics Data System (ADS)

    Young, D.

    2015-12-01

    (Abstract only) The AAVSO is in the process of expanding its education, outreach, and speakers bureau program. PowerPoint presentations prepared for specific target audiences, such as AAVSO members, educators, students, the general public, and Science Olympiad teams, coaches, event supervisors, and state directors, will be available online for members to use. The presentations range from specific and general content relating to stellar evolution and variable stars to specific activities for a workshop environment. A presentation, even on a general topic, that works for high school students will not work for educators, Science Olympiad teams, or the general public. Each audience is unique and requires a different approach. The current environment necessitates presentations that are captivating for a younger generation embedded in the highly visual, sound-bite world of social media, Twitter, YouTube, and mobile devices. For educators, presentations and workshops for themselves and their students must support the Next Generation Science Standards (NGSS), the Common Core Content Standards, and the Science, Technology, Engineering, and Mathematics (STEM) initiative. Current best practices for developing relevant and engaging PowerPoint presentations to deliver information to a variety of targeted audiences will be presented, along with several examples.

  15. Automated Supernova Discovery (Abstract)

    NASA Astrophysics Data System (ADS)

    Post, R. S.

    2015-12-01

    (Abstract only) We are developing a system of robotic telescopes for automatic recognition of supernovae as well as other transient events, in collaboration with the Puckett Supernova Search Team. At the SAS2014 meeting, the discovery program, SNARE, was first described. Since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSNe with a partial library. Since data are taken every cloudless night, we must deal with varying atmospheric conditions and high background illumination from the moon. The software is configured to identify a PSN and reshoot for verification, with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24s with Alta U230 cameras, one in California and one in New Mexico. Images and run plans are sent between sites so the California telescope can search while photometry is done in New Mexico. Our goal is to find bright PSNe of magnitude 17.5 or brighter, which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.

  16. EXTRACT: Interactive extraction of environment metadata and term suggestion for metagenomic sample annotation

    SciTech Connect

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual annotation of samples is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue, and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented, and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. A comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15–25% and helps curators to detect terms that would otherwise have been missed.
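
    The dictionary-based flavor of named entity recognition that such tools build on can be reduced to a toy matcher (the term dictionary below is illustrative; real systems use full ontologies such as ENVO and far more robust matching):

        # Toy dictionary matcher for environment terms; identifiers illustrative.
        ENVO_TERMS = {
            "hydrothermal vent": "ENVO:00000215",
            "sea water": "ENVO:00002149",
        }

        def tag_environment_terms(text):
            """Return (term, identifier, offset) for each dictionary hit."""
            hits, lower = [], text.lower()
            for term, ident in ENVO_TERMS.items():
                start = lower.find(term)
                while start != -1:
                    hits.append((term, ident, start))
                    start = lower.find(term, start + 1)
            return hits

        print(tag_environment_terms("Samples were collected near a hydrothermal vent."))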

  17. Availability of Previously Unprocessed ALSEP Raw Instrument Data, Derivative Data, and Metadata Products

    NASA Technical Reports Server (NTRS)

    Nagihara, S.; Nakamura, Y.; Williams, D. R.; Taylor, P. T.; Kiefer, W. S.; Hager, M. A.; Hills, H. K.

    2016-01-01

    In 2010, 440 original data archival tapes for the Apollo Lunar Science Experiment Package (ALSEP) experiments were found at the Washington National Records Center. These tapes hold raw instrument data received from the Moon for all the ALSEP instruments for the period of April through June 1975. We have recently completed extraction of binary files from these tapes, and we have delivered them to the NASA Space Science Data Coordinated Archive (NSSDCA). We are currently processing the raw data into higher-order data products in file formats more readily usable by contemporary researchers. These data products will fill a number of gaps in the current ALSEP data collection at NSSDCA. In addition, we have established a digital, searchable archive of ALSEP documents and metadata as part of the web portal of the Lunar and Planetary Institute. It currently holds approx. 700 documents totaling approx. 40,000 pages.

  18. Quality in Learning Objects: Evaluating Compliance with Metadata Standards

    NASA Astrophysics Data System (ADS)

    Vidal, C. Christian; Segura, N. Alejandra; Campos, S. Pedro; Sánchez-Alonso, Salvador

    Ensuring a certain level of quality in the learning objects used in e-learning is crucial to increasing the chances of success of automated systems in recommending or finding these resources. This paper presents a proposal for the implementation of a quality model for learning objects based on the ISO 9126 international standard for the evaluation of software quality. Feature indicators associated with the conformance sub-characteristic are defined. Instruments for feature evaluation are proposed, which allow collecting expert opinion on evaluation items. Other quality model features are evaluated using only the information from the objects' metadata, using semantic web technologies. Finally, we propose an ontology-based application that allows automatic evaluation of a quality feature. The IEEE LOM metadata standard was used in the experimentation, and the results show that most of the learning objects analyzed do not comply with the standard.

  19. OntoSoft: An Ontology for Capturing Scientific Software Metadata

    NASA Astrophysics Data System (ADS)

    Gil, Y.

    2015-12-01

    We have developed OntoSoft, an ontology to describe metadata for scientific software. The ontology is designed considering how scientists would approach the reuse and sharing of software. This includes supporting a scientist to: 1) identify software, 2) understand and assess software, 3) execute software, 4) get support for the software, 5) do research with the software, and 6) update the software. The ontology is available in OWL and contains more than fifty terms. We have used OntoSoft to structure the OntoSoft software registry for geosciences, and to develop user interfaces to capture its metadata. OntoSoft is part of the NSF EarthCube initiative and contributes to its vision of scientific knowledge sharing, in this case about scientific software.

  20. A case for user-generated sensor metadata

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel

    2015-04-01

    Cheap and easy-to-use sensing technology and new developments in ICT towards a global network of sensors and actuators promise previously unthought-of changes for our understanding of the environment. Large professional as well as amateur sensor networks exist, and they are used for specific yet diverse applications across domains such as hydrology, meteorology, and early warning systems. However, the impact this "abundance of sensors" has had so far is somewhat disappointing. There is a gap between (community-driven) sensor networks that could provide very useful data and the users of the data. In our presentation, we argue this is due to a lack of metadata that would allow determining the fitness for use of a dataset. Work on syntactic and semantic interoperability for sensor webs has made great progress and continues to be an active field of research, yet the resulting specifications are often quite complex, which is of course due to the complexity of the problem at hand. Still, we see the most generic information for determining fitness for use as a dataset's provenance, because it allows users to make up their own minds independently of existing classification schemes for data quality. In this work we make the case that curated, user-contributed metadata has the potential to improve this situation. This especially applies to scenarios in which an observed property is applicable in different domains, and to set-ups where the understanding of metadata concepts and (meta-)data quality differs between data provider and user. On the one hand, a citizen does not understand ISO provenance metadata. On the other hand, a researcher might find issues in publicly accessible time series published by citizens, which the latter might not be aware of or care about. Because users will have to determine fitness for use for each application on their own anyway, we suggest an online collaboration platform for user-generated metadata based on an extremely simplified data model. In the most basic fashion
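
    The "extremely simplified data model" argued for might look something like this sketch (the fields are our guess for illustration, not the author's actual model):

        from dataclasses import dataclass, field

        @dataclass
        class SensorNote:
            """One user-contributed provenance/quality note about a time series."""
            dataset_id: str    # link to the published time series
            author: str        # citizen or researcher who wrote the note
            created: str       # ISO 8601 timestamp
            text: str          # free-text observation, e.g. a suspected sensor fault
            tags: list = field(default_factory=list)   # e.g. ["calibration", "gap"]

        note = SensorNote("station-42/temperature", "jdoe",
                          "2015-04-01T12:00:00Z",
                          "Sensor relocated from roof to garden; step change expected.",
                          ["relocation"])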

  1. CellML metadata standards, associated tools and repositories.

    PubMed

    Beard, Daniel A; Britten, Randall; Cooling, Mike T; Garny, Alan; Halstead, Matt D B; Hunter, Peter J; Lawson, James; Lloyd, Catherine M; Marsh, Justin; Miller, Andrew; Nickerson, David P; Nielsen, Poul M F; Nomura, Taishin; Subramanium, Shankar; Wimalaratne, Sarala M; Yu, Tommy

    2009-05-28

    The development of standards for encoding mathematical models is an important component of model building and model sharing among scientists interested in understanding multi-scale physiological processes. CellML provides such a standard, particularly for models based on biophysical mechanisms, and a substantial number of models are now available in the CellML Model Repository. However, there is an urgent need to extend the current CellML metadata standard to provide biological and biophysical annotation of the models in order to facilitate model sharing, automated model reduction and connection to biological databases. This paper gives a broad overview of a number of new developments on CellML metadata and provides links to further methodological details available from the CellML website.

  2. Content-aware network storage system supporting metadata retrieval

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Qin, Leihua; Zhou, Jingli; Nie, Xuejun

    2008-12-01

    Nowadays, content-based network storage has become a hot research topic in academia and industry [1]. In order to solve the problem of hit-rate decline caused by migration and to achieve content-based query, we developed a new content-aware storage system that supports metadata retrieval to improve query performance. Firstly, we extend the SCSI command descriptor block to enable the system to understand self-defined query requests. Secondly, the extracted metadata is encoded in Extensible Markup Language to improve universality. Thirdly, according to the demands of information lifecycle management (ILM), we store data at different storage levels and use corresponding query strategies to retrieve them. Fourthly, as the file content identifier plays an important role in locating data and calculating block correlation, we use it to fetch files and sort query results through a friendly user interface. Finally, experiments indicate that the retrieval strategy and sort algorithm enhance retrieval efficiency and precision.

  3. Solar Dynamics Observatory Data Search using Metadata in the KDC

    NASA Astrophysics Data System (ADS)

    Hwang, E.; Choi, S.; Baek, J.-H.; Park, J.; Lee, J.; Cho, K.

    2015-09-01

    We have constructed the Korean Data Center (KDC) for the Solar Dynamics Observatory (SDO) at the Korea Astronomy and Space Science Institute (KASI). The SDO comprises three instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). We archive AIA and HMI FITS data, which amount to about 1 TB per day. The goal of the KDC for SDO is to provide easy and fast access to the data for researchers in Asia. In order to improve the data search rate, we designed the system to search data without going through a database query. The fields of instrument, wavelength, data path, date, and time are saved as a text file. This metadata file and the SDO FITS data can be simply accessed via HTTP and are open to the public. We present the process of creating the metadata and a way to access the SDO FITS data in detail.
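
    Because the index is a plain text file rather than a database, a client can filter it in a few lines; this sketch assumes a hypothetical pipe-delimited layout and a placeholder URL:

        import urllib.request

        # Hypothetical layout: instrument|wavelength|path|date|time, one record per line.
        INDEX_URL = "http://example.org/sdo/metadata.txt"   # placeholder URL

        with urllib.request.urlopen(INDEX_URL) as f:
            lines = f.read().decode().splitlines()

        matches = [
            dict(zip(("instrument", "wavelength", "path", "date", "time"),
                     line.split("|")))
            for line in lines
            if line.startswith("AIA|171|")      # e.g. AIA 171-angstrom images
        ]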

  4. CAMELOT: Cloud Archive for MEtadata, Library and Online Toolkit

    NASA Astrophysics Data System (ADS)

    Ginsburg, Adam; Kruijssen, J. M. Diederik; Longmore, Steven N.; Koch, Eric; Glover, Simon C. O.; Dale, James E.; Commerçon, Benoît; Giannetti, Andrea; McLeod, Anna F.; Testi, Leonardo; Zahorecz, Sarolta; Rathborne, Jill M.; Zhang, Qizhou; Fontani, Francesco; Beltrán, Maite T.; Rivilla, Victor M.

    2016-05-01

    CAMELOT facilitates the comparison of observational data and simulations of molecular clouds and/or star-forming regions. The central component of CAMELOT is a database summarizing the properties of observational data and simulations in the literature through pertinent metadata. The core functionality allows users to upload metadata, search and visualize the contents of the database to find and match observations/simulations over any range of parameter space. To bridge the fundamental disconnect between inherently 2D observational data and 3D simulations, the code uses key physical properties that, in principle, are straightforward for both observers and simulators to measure — the surface density (Sigma), velocity dispersion (sigma) and radius (R). By determining these in a self-consistent way for all entries in the database, it should be possible to make robust comparisons.

  5. CellML metadata standards, associated tools and repositories

    PubMed Central

    Beard, Daniel A.; Britten, Randall; Cooling, Mike T.; Garny, Alan; Halstead, Matt D.B.; Hunter, Peter J.; Lawson, James; Lloyd, Catherine M.; Marsh, Justin; Miller, Andrew; Nickerson, David P.; Nielsen, Poul M.F.; Nomura, Taishin; Subramanium, Shankar; Wimalaratne, Sarala M.; Yu, Tommy

    2009-01-01

    The development of standards for encoding mathematical models is an important component of model building and model sharing among scientists interested in understanding multi-scale physiological processes. CellML provides such a standard, particularly for models based on biophysical mechanisms, and a substantial number of models are now available in the CellML Model Repository. However, there is an urgent need to extend the current CellML metadata standard to provide biological and biophysical annotation of the models in order to facilitate model sharing, automated model reduction and connection to biological databases. This paper gives a broad overview of a number of new developments on CellML metadata and provides links to further methodological details available from the CellML website. PMID:19380315

  6. Abstraction of Drift Seepage

    SciTech Connect

    J.T. Birkholzer

    2004-11-01

    This model report documents the abstraction of drift seepage, conducted to provide seepage-relevant parameters and their probability distributions for use in Total System Performance Assessment for License Application (TSPA-LA). Drift seepage refers to the flow of liquid water into waste emplacement drifts. Water that seeps into drifts may contact waste packages and potentially mobilize radionuclides, and may result in advective transport of radionuclides through breached waste packages [''Risk Information to Support Prioritization of Performance Assessment Models'' (BSC 2003 [DIRS 168796], Section 3.3.2)]. The unsaturated rock layers overlying and hosting the repository form a natural barrier that reduces the amount of water entering emplacement drifts by natural subsurface processes. For example, drift seepage is limited by the capillary barrier forming at the drift crown, which decreases or even eliminates water flow from the unsaturated fractured rock into the drift. During the first few hundred years after waste emplacement, when above-boiling rock temperatures will develop as a result of heat generated by the decay of the radioactive waste, vaporization of percolation water is an additional factor limiting seepage. Estimating the effectiveness of these natural barrier capabilities and predicting the amount of seepage into drifts is an important aspect of assessing the performance of the repository. The TSPA-LA therefore includes a seepage component that calculates the amount of seepage into drifts [''Total System Performance Assessment (TSPA) Model/Analysis for the License Application'' (BSC 2004 [DIRS 168504], Section 6.3.3.1)]. The TSPA-LA calculation is performed with a probabilistic approach that accounts for the spatial and temporal variability and inherent uncertainty of seepage-relevant properties and processes. Results are used for subsequent TSPA-LA components that may handle, for example, waste package corrosion or radionuclide transport.

  7. Abstracts and reviews.

    PubMed

    Liebmann, G H; Wollman, L; Woltmann, A G

    1966-09-01

    Abstract Eric Berne, M.D.: Games People Play. Grove Press, New York, 1964. 192 pages. Price $5.00. Reviewed by Hugo G. Beigel. Finkle, Alex M., Ph.D., M.D. and Prian, Dimitry F.: Sexual Potency in Elderly Men before and after Prostatectomy. J.A.M.A., 196: 2, April, 1966. Reviewed by H. George Liebman. Calvin C. Hernton: Sex and Racism in America. Grove Press, Inc. Black Cat Edition No. 113 (Paperback), 1966, 180 pp. Price $0.95. Reviewed by Gus Woltmann. Hans Lehfeldt, M.D., Ernest W. Kulka, M.D., H. George Liebman, M.D.: Comparative Study of Uterine Contraceptive Devices. Obstetrics and Gynecology, 26: 5, 1965, pp. 679-688. Lawrence Lipton: The Erotic Revolution. Sherbourne Press, Los Angeles, 1965. 322 pp. Price $7.50. Masters, William H., M.D. and Johnson, Virginia E.: Human Sexual Response. Boston: Little, Brown and Co., 1966. 366 pages. Price $10.00. Reviewed by Hans Lehfeldt. Douglas P. Murphy, M.D. and Editha F. Torrano, M.D.: Male Fertility in 3620 Childless Couples. Fertility and Sterility, 16: 3, May-June, 1965. Reviewed by Leo Wollman, M.D. Edwin M. Schur, Editor: The Family and the Sexual Revolution. Indiana University Press, Bloomington, Indiana, 1964. 427 pgs. Weldon, Virginia F., M.D., Blizzard, Robert M., M.D., and Migeon, Claude, M.D.: Newborn Girls Misdiagnosed as Bilaterally Cryptorchid Males. The New England Journal of Medicine, April 14, 1966. Reviewed by H. George Liebman.

  8. Discovery of Marine Datasets and Geospatial Metadata Visualization

    NASA Astrophysics Data System (ADS)

    Schwehr, K. D.; Brennan, R. T.; Sellars, J.; Smith, S.

    2009-12-01

    NOAA's National Geophysical Data Center (NGDC) provides the deep archive of US multibeam sonar hydrographic surveys. NOAA stores the data as Bathymetric Attributed Grids (BAG; http://www.opennavsurf.org/), which are HDF5-formatted files containing gridded bathymetry, gridded uncertainty, and XML metadata. While NGDC provides the deep store and a basic ESRI ArcIMS interface to the data, additional tools need to be created to increase the frequency with which researchers discover hydrographic surveys that might be beneficial for their research. Using Open Source tools, we have created a draft of a Google Earth visualization of NOAA's complete collection of BAG files as of March 2009. Each survey is represented as a bounding box, an optional preview image of the survey data, and a pop-up placemark. The placemark contains a brief summary of the metadata and links to directly download the BAG survey files and the complete metadata file. Each survey is time-tagged so that users can search both in space and time for surveys that meet their needs. By creating this visualization, we aim to make the entire process of data discovery, validation of relevance, and download much more efficient for research scientists who may not be familiar with NOAA's hydrographic survey efforts or the BAG format. In the process of creating this demonstration, we have identified a number of improvements that can be made to the hydrographic survey process in order to make the results easier to use, especially with respect to metadata generation. With the combination of the NGDC deep archiving infrastructure, a Google Earth virtual globe visualization, and GeoRSS feeds of updates, we hope to increase the utilization of these high-quality gridded bathymetry data. This workflow applies equally well to LIDAR topography and bathymetry. Additionally, with proper referencing and geotagging in journal publications, we hope to close the loop and help the community create a true “Geospatial Scholar
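
    Emitting such a bounding-box placemark needs nothing beyond the standard library; this sketch (survey name, coordinates, and URL invented) produces a minimal KML fragment of the kind described:

        from xml.sax.saxutils import escape

        def survey_placemark(name, west, south, east, north, bag_url):
            """Return a KML Placemark outlining one survey's bounding box."""
            ring = (f"{west},{south},0 {east},{south},0 {east},{north},0 "
                    f"{west},{north},0 {west},{south},0")
            return f"""<Placemark>
          <name>{escape(name)}</name>
          <description>BAG download: {escape(bag_url)}</description>
          <Polygon><outerBoundaryIs><LinearRing>
            <coordinates>{ring}</coordinates>
          </LinearRing></outerBoundaryIs></Polygon>
        </Placemark>"""

        print(survey_placemark("H11077", -70.9, 41.5, -70.7, 41.6,
                               "http://example.org/bags/H11077.bag"))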

  9. The Planetary Data System Information Model for Geometry Metadata

    NASA Astrophysics Data System (ADS)

    Guinness, E. A.; Gordon, M. K.

    2014-12-01

    The NASA Planetary Data System (PDS) has recently developed a new set of archiving standards based on a rigorously defined information model. An important part of the new PDS information model is the model for geometry metadata, which includes, for example, attributes of the lighting and viewing angles of observations, position and velocity vectors of a spacecraft relative to Sun and observing body at the time of observation and the location and orientation of an observation on the target. The PDS geometry model is based on requirements gathered from the planetary research community, data producers, and software engineers who build search tools. A key requirement for the model is that it fully supports the breadth of PDS archives that include a wide range of data types from missions and instruments observing many types of solar system bodies such as planets, ring systems, and smaller bodies (moons, comets, and asteroids). Thus, important design aspects of the geometry model are that it standardizes the definition of the geometry attributes and provides consistency of geometry metadata across planetary science disciplines. The model specification also includes parameters so that the context of values can be unambiguously interpreted. For example, the reference frame used for specifying geographic locations on a planetary body is explicitly included with the other geometry metadata parameters. The structure and content of the new PDS geometry model is designed to enable both science analysis and efficient development of search tools. The geometry model is implemented in XML, as is the main PDS information model, and uses XML schema for validation. The initial version of the geometry model is focused on geometry for remote sensing observations conducted by flyby and orbiting spacecraft. Future releases of the PDS geometry model will be expanded to include metadata for landed and rover spacecraft.

  10. Standardizing metadata and taxonomic identification in metabarcoding studies.

    PubMed

    Tedersoo, Leho; Ramirez, Kelly S; Nilsson, R Henrik; Kaljuvee, Aivi; Kõljalg, Urmas; Abarenkov, Kessy

    2015-01-01

    High-throughput sequencing-based metabarcoding studies produce vast amounts of ecological data, but a lack of consensus on standardization of metadata and how to refer to the species recovered severely hampers reanalysis and comparisons among studies. Here we propose an automated workflow covering data submission, compression, storage and public access to allow easy data retrieval and inter-study communication. Such standardized and readily accessible datasets facilitate data management, taxonomic comparisons and compilation of global metastudies. PMID:26236474

  11. The CellML Metadata Framework 2.0 Specification.

    PubMed

    Cooling, Michael T; Hunter, Peter

    2015-01-01

    The CellML Metadata Framework 2.0 is a modular framework that describes how semantic annotations should be made about mathematical models encoded in the CellML (www.cellml.org) format, and their elements. In addition to the Core specification, there are several satellite specifications, each designed to cater for model annotation in a different context. Basic Model Information, Citation, License and Biological Annotation specifications are presented. PMID:26528558

  12. Observations Metadata Database at the European Southern Observatory

    NASA Astrophysics Data System (ADS)

    Dobrzycki, A.; Brandt, D.; Giot, D.; Lockhart, J.; Rodriguez, J.; Rossat, N.; Vuong, M. H.

    2007-10-01

    We describe the design of the new system for handling observations metadata at the Science Archive Facility of the European Southern Observatory using Sybase IQ. The current system, based primarily on Sybase ASE, allows direct access to a subset of observation parameters. The new system will allow for the browsing of Archive contents using searches on any parameter, for off-line updates on all parameters and for the on-the-fly introduction of those updates on files retrieved from the Archive.

  13. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    NASA Astrophysics Data System (ADS)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2, and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies (OpenSearch, Web Services). We also manually investigated the plethora of CEOS data provider portals and other catalogs where the data might be aggregated. This poster presents our experience of the excellence, variety, and challenges we encountered. Conclusions: (1) The significant benefit the major catalogs provide is their machine-to-machine tools, such as OpenSearch and Web Services, rather than any GUI usability improvements, given the large amount of data in their catalogs. (2) There is a trend at the large catalogs towards simulating small data provider portals through advanced services. (3) Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. (4) The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. (5) Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication (this is currently being addressed). (6) Most (if not
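
    Machine-to-machine harvesting of this kind typically amounts to paging through an OpenSearch endpoint and parsing the Atom feed it returns; a sketch under assumed parameter names (real endpoints publish their own URL templates, and this URL is a placeholder):

        import requests
        import xml.etree.ElementTree as ET

        ATOM = "{http://www.w3.org/2005/Atom}"
        URL = "http://example.org/opensearch"       # placeholder endpoint

        records, start = [], 1
        while True:
            r = requests.get(URL, params={"q": "NO2", "startIndex": start, "count": 100})
            root = ET.fromstring(r.content)
            entries = root.findall(ATOM + "entry")
            if not entries:                          # stop when a page comes back empty
                break
            for e in entries:
                records.append(e.findtext(ATOM + "title"))
            start += len(entries)                    # advance to the next page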

  14. ClipCard: Sharable, Searchable Visual Metadata Summaries on the Cloud to Render Big Data Actionable

    NASA Astrophysics Data System (ADS)

    Saripalli, P.; Davis, D.; Cunningham, R.

    2013-12-01

    Research firm IDC estimates that approximately 90 percent of enterprise Big Data goes un-analyzed, as 'dark data': an enormous corpus of undiscovered, untagged information residing on data warehouses, servers, and Storage Area Networks (SAN). In the geosciences, these data range from unpublished model runs to vast survey data assets to raw sensor data. Many of these are now being collected instantaneously, at greater volume and in new data formats. Not all of these data can be analyzed or processed in real time, and their features may not be well described at the time of collection. These dark data are a serious data management problem for science organizations of all types, especially ones with mandated or required data reporting and compliance requirements. Additionally, data curators and scientists are encouraged to quantify the impact of their data holdings as a way to measure research success. Deriving actionable insights is the foremost goal of Big Data Analytics (BDA), which is especially true in the geosciences, given their direct impact on most of the pressing global issues. Clearly, there is a pressing need for innovative approaches to making dark data discoverable, measurable, and actionable. We report on ClipCard, a cloud-based SaaS analytic platform for instant summarization, quick search, visualization, and easy sharing of metadata summaries from dark data at hierarchical levels of detail, thus rendering it 'white', i.e., actionable. We present a use case of the ClipCard platform, a cloud-based application which helps generate (abstracted) visual metadata summaries and meta-analytics for environmental data at hierarchical scales within and across big data containers. These summaries and analyses provide important new tools for managing big data and simplifying collaboration through easy-to-deploy sharing APIs. The ClipCard application solves a growing data management bottleneck by helping enterprises and large organizations to summarize, search

  15. A Common Model To Support Interoperable Metadata: Progress Report on Reconciling Metadata Requirements from the Dublin Core and INDECS/DOI Communities.

    ERIC Educational Resources Information Center

    Bearman, David; Rust, Godfrey; Weibel, Stuart; Miller, Eric; Trant, Jennifer

    1999-01-01

    The Dublin Core metadata community and the INDECS/DOI community of authors, rights holders, and publishers are seeking common ground in the expression of metadata for information resources. An open "Schema Harmonization" working group has been established to identify a common framework to support interoperability among these communities.…

  16. Precision Pointing Reconstruction and Geometric Metadata Generation for Cassini Images

    NASA Astrophysics Data System (ADS)

    French, Robert S.; Showalter, Mark R.; Gordon, Mitchell K.

    2014-11-01

    Analysis of optical remote sensing (ORS) data from the Cassini spacecraft is a complicated and labor-intensive process. First, small errors in Cassini's pointing information (up to ~40 pixels for the Imaging Science Subsystem Narrow Angle Camera) must be corrected so that the line-of-sight vector for each pixel is known. This process involves matching the image contents with known features such as stars, ring edges, or moon limbs. Second, metadata for each pixel must be computed. Depending on the object under observation, this metadata may include lighting geometry, moon or planet latitude and longitude, and/or ring radius and longitude. Both steps require mastering the SPICE toolkit, a highly capable piece of software with a steep learning curve. Only after these steps are completed can the actual scientific investigation begin. We are embarking on a three-year project to perform these steps for all 300,000+ Cassini ISS images as well as images taken by the VIMS, UVIS, and CIRS instruments. The result will be a series of SPICE kernels that include accurate pointing information and a series of backplanes that include precomputed metadata for each pixel. All data will be made public through the PDS Rings Node (http://www.pds-rings.seti.org). We expect this project to dramatically decrease the time required for scientists to analyze Cassini data. In this poster we discuss the project, our current status, and our plans for the next three years.
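
    Once corrected kernels exist, geometry metadata of the kind described comes from SPICE calls; a minimal sketch using the SpiceyPy wrapper (the meta-kernel name is a placeholder, and this computes a single body-center vector, whereas real backplane generation repeats such computations per pixel):

        import spiceypy as spice

        spice.furnsh("cassini_meta.tm")              # placeholder meta-kernel listing kernels
        et = spice.str2et("2010-07-15T12:00:00")     # UTC -> ephemeris time

        # Position of Enceladus relative to Cassini, light-time corrected,
        # in the J2000 frame.
        pos, lt = spice.spkpos("ENCELADUS", et, "J2000", "LT+S", "CASSINI")
        print(pos, lt)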

  17. DicomBrowser: software for viewing and modifying DICOM metadata.

    PubMed

    Archie, Kevin A; Marcus, Daniel S

    2012-10-01

    Digital Imaging and Communications in Medicine (DICOM) is the dominant standard for medical imaging data. DICOM-compliant devices and the data they produce are generally designed for clinical use and often do not match the needs of users in research or clinical trial settings. DicomBrowser is software designed to ease the transition between clinically oriented DICOM tools and the specialized workflows of research imaging. It supports interactive loading and viewing of DICOM images and metadata across multiple studies and provides a rich and flexible system for modifying DICOM metadata. Users can make ad hoc changes in a graphical user interface, write metadata modification scripts for batch operations, use partly automated methods that guide users to modify specific attributes, or combine any of these approaches. DicomBrowser can save modified objects as local files or send them to a DICOM storage service using the C-STORE network protocol. DicomBrowser is open-source software, available for download at http://nrg.wustl.edu/software/dicom-browser. PMID:22349992
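
    DicomBrowser itself is a GUI and scripting tool, but the same style of batch metadata modification can be sketched with the separate pydicom library (file names are placeholders):

        import pydicom

        ds = pydicom.dcmread("image.dcm")       # load one DICOM object
        ds.PatientName = "ANON^RESEARCH"        # overwrite identifying attributes
        ds.PatientID = "SUBJ001"
        if "InstitutionName" in ds:
            del ds.InstitutionName              # drop an attribute entirely
        ds.save_as("image_modified.dcm")        # write the modified object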

  18. Metadata from data: identifying holidays from anesthesia data.

    PubMed

    Starnes, Joseph R; Wanderer, Jonathan P; Ehrenfeld, Jesse M

    2015-05-01

    The increasingly large databases available to researchers necessitate high-quality metadata that is not always available. We describe a method for generating this metadata independently. Cluster analysis and expectation-maximization were used to separate days into holidays/weekends and regular workdays using anesthesia data from Vanderbilt University Medical Center from 2004 to 2014. This classification was then used to describe differences between the two sets of days over time. We evaluated 3802 days and correctly categorized 3797 based on anesthesia case time (representing an error rate of 0.13%). Use of other metrics for categorization, such as billed anesthesia hours and number of anesthesia cases per day, led to similar results. Analysis of the two categories showed that surgical volume increased more quickly with time for non-holidays than holidays (p < 0.001). We were able to successfully generate metadata from data by distinguishing holidays based on anesthesia data. This data can then be used for economic analysis and scheduling purposes. It is possible that the method can be expanded to similar bimodal and multimodal variables.
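
    A hedged re-creation of the approach with scikit-learn on synthetic numbers (not the Vanderbilt data): fit a two-component Gaussian mixture to daily case-time totals and label the low-volume component as holidays/weekends.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        workdays = rng.normal(300, 40, 260)     # synthetic daily anesthesia hours
        holidays = rng.normal(60, 20, 105)
        hours = np.concatenate([workdays, holidays]).reshape(-1, 1)

        gm = GaussianMixture(n_components=2, random_state=0).fit(hours)
        labels = gm.predict(hours)
        low = np.argmin(gm.means_.ravel())      # component with the smaller mean volume
        is_holiday = labels == low              # boolean mask over all days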

  19. Accepted scientific research works (abstracts).

    PubMed

    2014-01-01

    These are the 39 accepted abstracts for IAYT's Symposium on Yoga Research (SYR) September 24-24, 2014 at the Kripalu Center for Yoga & Health and published in the Final Program Guide and Abstracts. PMID:25645134

  20. ISO, FGDC, DIF and Dublin Core - Making Sense of Metadata Standards for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Jones, P. R.; Ritchey, N. A.; Peng, G.; Toner, V. A.; Brown, H.

    2014-12-01

    Metadata standards provide common definitions of metadata fields for information exchange across user communities. Despite the broad adoption of metadata standards for Earth science data, there are still heterogeneous and incompatible representations of information due to differences between the many standards in use and how each standard is applied. Federal agencies are required to manage and publish metadata in different metadata standards and formats for various data catalogs. In 2014, the NOAA National Climatic Data Center (NCDC) managed metadata for its scientific datasets in ISO 19115-2 in XML, GCMD Directory Interchange Format (DIF) in XML, DataCite Schema in XML, Dublin Core in XML, and Data Catalog Vocabulary (DCAT) in JSON, with more standards and profiles of standards planned. Of these standards, the ISO 19115-series metadata is the most complete and feature-rich, and for this reason it is used by NCDC as the source for the other metadata standards. We will discuss the capabilities of metadata standards and how these standards are being implemented to document datasets. Successful implementations include developing translations and displays using XSLTs, creating links to related data and resources, documenting dataset lineage, and establishing best practices. Benefits, gaps, and challenges will be highlighted with suggestions for improved approaches to metadata storage and maintenance.
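
    Using the ISO 19115-2 record as the single source and generating the other representations via XSLT, as described, can be sketched with lxml (both file names and the stylesheet are placeholders):

        from lxml import etree

        source = etree.parse("dataset_iso19115.xml")   # authoritative ISO record
        xslt = etree.parse("iso2dif.xsl")              # hypothetical ISO -> DIF stylesheet
        transform = etree.XSLT(xslt)

        dif_record = transform(source)                 # derived DIF representation
        print(etree.tostring(dif_record, pretty_print=True).decode())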

  1. Pragmatic Metadata Management for Integration into Multiple Spatial Data Infrastructure Systems and Platforms

    NASA Astrophysics Data System (ADS)

    Benedict, K. K.; Scott, S.

    2013-12-01

    While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers that aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed in support of the storage, discovery, and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases are used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format, and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into

  2. Using abstract language signals power.

    PubMed

    Wakslak, Cheryl J; Smith, Pamela K; Han, Albert

    2014-07-01

    Power can be gained through appearances: People who exhibit behavioral signals of power are often treated in a way that allows them to actually achieve such power (Ridgeway, Berger, & Smith, 1985; Smith & Galinsky, 2010). In the current article, we examine power signals within interpersonal communication, exploring whether use of concrete versus abstract language is seen as a signal of power. Because power activates abstraction (e.g., Smith & Trope, 2006), perceivers may expect higher power individuals to speak more abstractly and therefore will infer that speakers who use more abstract language have a higher degree of power. Across a variety of contexts and conversational subjects in 7 experiments, participants perceived respondents as more powerful when they used more abstract language (vs. more concrete language). Abstract language use appears to affect perceived power because it seems to reflect both a willingness to judge and a general style of abstract thinking.

  3. Metadata research and design of ocean color remote sensing data based on web service

    NASA Astrophysics Data System (ADS)

    Kang, Yan; Pan, Delu; He, Xianqiang; Wang, Difeng; Chen, Jianyu

    2010-10-01

    Ocean color remote sensing metadata describe the content, quality, condition, and other characteristics of ocean color remote sensing data. This paper presents a draft metadata standard based on XML and gives the details of the main ocean color remote sensing metadata XML elements. The ocean color remote sensing data-sharing platform is under development as part of the digital ocean system; on this basis, an ocean color remote sensing metadata directory service system based on web services is proposed, which aims to store and manage ocean color remote sensing metadata effectively. Such metadata are essential for making ocean color remote sensing information easier to retrieve and use.

  4. The Role of Metadata Standards in EOSDIS Search and Retrieval Applications

    NASA Technical Reports Server (NTRS)

    Pfister, Robin

    1999-01-01

    Metadata standards play a critical role in data search and retrieval systems. Metadata tie software to data so the data can be processed, stored, searched, retrieved and distributed. Without metadata these actions are not possible. The process of populating metadata to describe science data is an important service to the end user community, so that a user who is unfamiliar with the data can easily find and learn about a particular dataset before an order decision is made. Once a good set of standards is in place, the accuracy with which data search can be performed depends on the degree to which metadata standards are adhered to during product definition. NASA's Earth Observing System Data and Information System (EOSDIS) provides examples of how metadata standards are used in data search and retrieval.

  5. Metadata distribution algorithm based on directory hash in mass storage system

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Luo, Dong-jian; Pei, Can-hao

    2008-12-01

    The distribution of metadata is very important in mass storage systems. Many storage systems use subtree partitioning or hash algorithms to distribute metadata among a metadata server cluster. Although these algorithms improve system access performance, most of them have notable scalability problems. This paper proposes a new directory hash (DH) algorithm. It treats the directory as the hash key, implements concentrated storage of metadata, and takes a dynamic load-balancing strategy. It improves the efficiency of metadata distribution and access in mass storage systems by hashing on directories and placing metadata together at directory granularity. The DH algorithm solves the scalability problems of file hash algorithms, such as handling changes to a directory's name or permissions and adding or removing an MDS from the cluster. It reduces the number of additional requests and the scale of each data migration in scaling operations, and thus enhances the scalability of mass storage systems remarkably.
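
    A minimal sketch of the placement rule the abstract implies: hash the parent directory rather than the file path, so that all metadata under one directory lands on the same server. The modulo mapping is a simplifying assumption; the paper's dynamic load-balancing strategy is more elaborate.

        # Directory-hash placement: files in the same directory map to the same
        # metadata server (MDS), preserving directory granularity.
        import hashlib
        import posixpath

        def mds_for(path, n_servers):
            directory = posixpath.dirname(path)
            digest = hashlib.md5(directory.encode("utf-8")).hexdigest()
            return int(digest, 16) % n_servers

        print(mds_for("/home/alice/data/a.dat", 8))
        print(mds_for("/home/alice/data/b.dat", 8))  # same directory -> same MDS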

  6. Development of Automated Signal and Meta-data Quality Assessment at the USGS ANSS NOC

    NASA Astrophysics Data System (ADS)

    McNamara, D.; Buland, R.; Boaz, R.; Benz, H.; Gee, L.; Leith, W.

    2007-05-01

    the long-term station noise envelopes. Operationally, the software is useful for characterizing the current and past performance of existing broadband stations, for conducting tests on potential new seismic station locations, for detecting problems with the recording system or sensors, and for evaluating the overall quality of data and meta-data. Currently, PQLX is operational at the USGS for station performance monitoring and metadata quality control.

  7. Experiences with making diffraction image data available: what metadata do we need to archive?

    PubMed Central

    Kroon-Batenburg, Loes M. J.; Helliwell, John R.

    2014-01-01

    Recently, the IUCr (International Union of Crystallography) initiated the formation of a Diffraction Data Deposition Working Group with the aim of developing standards for the representation of raw diffraction data associated with the publication of structural papers. Archiving of raw data serves several goals: to improve the record of science, to verify the reproducibility and to allow detailed checks of scientific data, safeguarding against fraud and to allow reanalysis with future improved techniques. A means of studying this issue is to submit exemplar publications with associated raw data and metadata. In a recent study of the binding of cisplatin and carboplatin to histidine in lysozyme crystals under several conditions, the possible effects of the equipment and X-ray diffraction data-processing software on the occupancies and B factors of the bound Pt compounds were compared. Initially, 35.3 GB of data were transferred from Manchester to Utrecht to be processed with EVAL. A detailed description and discussion of the availability of metadata was published in a paper that was linked to a local raw data archive at Utrecht University and also mirrored at the TARDIS raw diffraction data archive in Australia. By making these raw diffraction data sets available with the article, it is possible for the diffraction community to make their own evaluation. This led one of the authors of XDS (K. Diederichs) to re-integrate the data from crystals that supposedly solely contained bound carboplatin, resulting in the analysis of partially occupied chlorine anomalous electron densities near the Pt-binding sites and the use of several criteria to more carefully assess the diffraction resolution limit. General arguments for archiving raw data, the possibilities of doing so and the requirement of resources are discussed. The problems associated with a partially unknown experimental setup, which preferably should be available as metadata, are discussed. Current thoughts on

  8. Experiences with making diffraction image data available: what metadata do we need to archive?

    SciTech Connect

    Kroon-Batenburg, Loes M. J.; Helliwell, John R.

    2014-10-01

    A local raw ‘diffraction data images’ archive was made available and some data sets were retrieved and reprocessed, which led to analysis of the anomalous difference densities of two partially occupied Cl atoms in cisplatin as well as a re-evaluation of the resolution cutoff in these diffraction data. General questions on storing raw data are discussed. It is also demonstrated that often one needs unambiguous prior knowledge to read the (binary) detector format and the setup of goniometer geometries. Recently, the IUCr (International Union of Crystallography) initiated the formation of a Diffraction Data Deposition Working Group with the aim of developing standards for the representation of raw diffraction data associated with the publication of structural papers. Archiving of raw data serves several goals: to improve the record of science, to verify the reproducibility and to allow detailed checks of scientific data, safeguarding against fraud and to allow reanalysis with future improved techniques. A means of studying this issue is to submit exemplar publications with associated raw data and metadata. In a recent study of the binding of cisplatin and carboplatin to histidine in lysozyme crystals under several conditions, the possible effects of the equipment and X-ray diffraction data-processing software on the occupancies and B factors of the bound Pt compounds were compared. Initially, 35.3 GB of data were transferred from Manchester to Utrecht to be processed with EVAL. A detailed description and discussion of the availability of metadata was published in a paper that was linked to a local raw data archive at Utrecht University and also mirrored at the TARDIS raw diffraction data archive in Australia. By making these raw diffraction data sets available with the article, it is possible for the diffraction community to make their own evaluation. This led one of the authors of XDS (K. Diederichs) to re-integrate the data from crystals that supposedly

  9. Experiences with making diffraction image data available: what metadata do we need to archive?

    PubMed

    Kroon-Batenburg, Loes M J; Helliwell, John R

    2014-10-01

    Recently, the IUCr (International Union of Crystallography) initiated the formation of a Diffraction Data Deposition Working Group with the aim of developing standards for the representation of raw diffraction data associated with the publication of structural papers. Archiving of raw data serves several goals: to improve the record of science, to verify the reproducibility and to allow detailed checks of scientific data, safeguarding against fraud and to allow reanalysis with future improved techniques. A means of studying this issue is to submit exemplar publications with associated raw data and metadata. In a recent study of the binding of cisplatin and carboplatin to histidine in lysozyme crystals under several conditions, the possible effects of the equipment and X-ray diffraction data-processing software on the occupancies and B factors of the bound Pt compounds were compared. Initially, 35.3 GB of data were transferred from Manchester to Utrecht to be processed with EVAL. A detailed description and discussion of the availability of metadata was published in a paper that was linked to a local raw data archive at Utrecht University and also mirrored at the TARDIS raw diffraction data archive in Australia. By making these raw diffraction data sets available with the article, it is possible for the diffraction community to make their own evaluation. This led one of the authors of XDS (K. Diederichs) to re-integrate the data from crystals that supposedly solely contained bound carboplatin, resulting in the analysis of partially occupied chlorine anomalous electron densities near the Pt-binding sites and the use of several criteria to more carefully assess the diffraction resolution limit. General arguments for archiving raw data, the possibilities of doing so and the requirement of resources are discussed. The problems associated with a partially unknown experimental setup, which preferably should be available as metadata, are discussed. Current thoughts on

  10. Design and Practice on Metadata Service System of Surveying and Mapping Results Based on Geonetwork

    NASA Astrophysics Data System (ADS)

    Zha, Z.; Zhou, X.

    2011-08-01

    Based on analysis and research on current geographic information sharing and metadata services, we design, develop, and deploy a distributed metadata service system based on GeoNetwork covering more than 30 nodes in provincial units of China. By identifying the advantages of GeoNetwork, we design a distributed metadata service system for national surveying and mapping results. It consists of 31 network nodes, a central node, and a portal. Network nodes are the direct sources of system metadata and are distributed around the country. Each network node maintains a metadata service system responsible for metadata uploading and management. The central node harvests metadata from the network nodes through the OGC CSW 2.0.2 standard interface. The portal exposes all metadata held in the central node, provides users with a variety of methods and interfaces for metadata search and querying, and provides management capabilities for connecting the central node and the network nodes together. GeoNetwork has defects too. Accordingly, we made improvements and optimizations for large-volume metadata uploading, synchronization, and concurrent access. For metadata uploading and synchronization, careful analysis of the database and index operation logs allowed us to avoid the performance bottlenecks, and a batch-operation and dynamic memory management solution significantly improved data throughput and system performance. For concurrent access, a request-coding and result-caching solution greatly improved query performance, and a web cluster was deployed to respond smoothly to heavy concurrent request loads. This paper also gives an experimental analysis comparing system performance before and after the improvements and optimizations. The design and practical results have been applied in the national metadata service system of surveying and mapping results. This proved that the improved GeoNetwork service architecture can effectively adapt to distributed deployment
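
    For reference, a CSW 2.0.2 GetRecords request of the kind the central node would issue when harvesting can be expressed in key-value form; a sketch using the requests library (the endpoint URL is a placeholder):

        # Minimal CSW 2.0.2 GetRecords request in KVP encoding.
        import requests

        params = {
            "service": "CSW",
            "version": "2.0.2",
            "request": "GetRecords",
            "typeNames": "csw:Record",
            "elementSetName": "brief",
            "resultType": "results",
            "maxRecords": "10",
        }
        r = requests.get("http://example.org/geonetwork/srv/eng/csw", params=params)
        print(r.text[:500])  # first part of the csw:GetRecordsResponse document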

  11. Semantic technologies improving the recall and precision of the Mercury metadata search engine

    NASA Astrophysics Data System (ADS)

    Pouchard, L. C.; Cook, R. B.; Green, J.; Palanisamy, G.; Noy, N.

    2011-12-01

    The Mercury federated metadata system [1] was developed at the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC), a NASA-sponsored effort holding datasets about biogeochemical dynamics, ecological data, and environmental processes. Mercury currently indexes over 100,000 records from several data providers conforming to community standards, e.g. EML, FGDC, FGDC Biological Profile, ISO 19115 and DIF. With the breadth of sciences represented in Mercury, the potential exists to address some key interdisciplinary scientific challenges related to climate change, its environmental and ecological impacts, and mitigation of these impacts. However, this wealth of metadata also hinders pinpointing datasets relevant to a particular inquiry. We implemented a semantic solution after concluding that traditional search approaches cannot improve the accuracy of the search results in this domain because: a) unlike everyday queries, scientific queries seek to return specific datasets with numerous parameters that may or may not be exposed to search (Deep Web queries); b) the relevance of a dataset cannot be judged by its popularity, as each scientific inquiry tends to be unique; and c) each domain science has its own terminology, more or less curated, consensual, and standardized depending on the domain. The same terms may refer to different concepts across domains (homonyms), while different terms may mean the same thing (synonyms). Interdisciplinary research is arduous because an expert in a domain must become fluent in the language of another, just to find relevant datasets. Thus, we decided to use scientific ontologies because they can provide a context for a free-text search, in a way that string-based keywords never will. With added context, relevant datasets are more easily discoverable. To enable search and programmatic access to ontology entities in Mercury, we are using an instance of the BioPortal ontology repository. Mercury accesses ontology entities

  12. Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example

    SciTech Connect

    Devarakonda, Ranjeet; Hook, Leslie A; Killeffer, Terri S; Krassovski, Misha B; Boden, Thomas A; Wullschleger, Stan D

    2015-01-01

    The Online Metadata Editor (OME) is a web-based tool to help document scientific data in a well-structured, popular scientific metadata format. In this paper, we will discuss the newest tool that Oak Ridge National Laboratory (ORNL) has developed to generate, edit, and manage metadata, and how it is helping data-intensive science centers and projects, such as the U.S. Department of Energy's Next Generation Ecosystem Experiments (NGEE) in the Arctic, to prepare metadata and make their big data produce big science and lead to new discoveries.

  13. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but they can be slow and their comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means for exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD). This paper describes Mercury capabilities with multiple metadata formats, in general, and, more specifically, the results of our OAI-PMH implementations and
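
    A minimal OAI-PMH harvesting loop, following resumption tokens until the repository is exhausted; the endpoint URL is a placeholder, while the verb, metadata prefix, and namespace follow the OAI-PMH 2.0 specification.

        # Harvest Dublin Core record identifiers over OAI-PMH, page by page.
        import requests
        import xml.etree.ElementTree as ET

        OAI = "{http://www.openarchives.org/OAI/2.0/}"
        endpoint = "http://example.org/oai"  # hypothetical repository

        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
        while True:
            root = ET.fromstring(requests.get(endpoint, params=params).content)
            for header in root.iter(OAI + "header"):
                print(header.find(OAI + "identifier").text)
            token = root.find(OAI + "ListRecords/" + OAI + "resumptionToken")
            if token is None or not (token.text or "").strip():
                break
            params = {"verb": "ListRecords", "resumptionToken": token.text}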

  14. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    NASA Astrophysics Data System (ADS)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs, and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and ACADIS Arctic Data Explorer (ADE) use a common infrastructure which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers research costs. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and will identify creation and use of Open Source libraries.
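
    The OpenSearch-to-Solr pattern described here boils down to translating keyword, spatial, and temporal parameters into a Solr select query; a sketch with assumed field names (q, fq, wt, and rows are standard Solr parameters, but the schema is hypothetical):

        # Keyword search with a temporal filter against a Solr index of metadata.
        import requests

        params = {
            "q": "sea ice extent",
            "fq": "temporal_start:[2010-01-01T00:00:00Z TO *]",  # assumed field
            "wt": "json",
            "rows": 25,
        }
        resp = requests.get("http://example.org/solr/metadata/select", params=params)
        for doc in resp.json()["response"]["docs"]:
            print(doc.get("title"))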

  15. Empowering Earth Science Communities to Share Data Through Guided Metadata Improvement

    NASA Astrophysics Data System (ADS)

    Powers, L. A.; Habermann, T.; Jones, M. B.; Gordon, S.

    2015-12-01

    Earth Science communities can improve the discoverability, use and understanding of their data by improving the completeness and consistency of their metadata. Despite the potential for a great payoff, resources to invest in this work are often limited. We are working with diverse earth science communities to quantitatively evaluate their metadata and to identify specific strategies for improving their completeness and consistency. We have developed an iterative, guided process that helps communities efficiently improve their metadata to better serve their own members, as well as to share data across disciplines. The community-specific approach focuses on community metadata requirements, and also provides guidance on adding other metadata concepts to expand the effectiveness of metadata for multiple uses, including data discovery, data understanding, and data re-use. We will present the results of a baseline analysis of more than 25 diverse metadata collections from established data repositories representing communities across the earth and environmental sciences. The baseline analysis describes the current state of the metadata in these collections and highlights areas for improvement. We compare these collections to identify exemplar practitioners that can provide guidance to other communities.

  16. Studies of Big Data metadata segmentation between relational and non-relational databases

    NASA Astrophysics Data System (ADS)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years the concepts of Big Data have become well established in IT. Systems managing large data volumes produce metadata that describe the data and the workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.
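
    One way to picture such a hybrid architecture: fixed, heavily queried catalog fields stay in the RDBMS, while free-form, evolving payloads go to a document store. A toy sketch, with sqlite3 standing in for the relational side and a dict for the NoSQL side (all schema and field names are assumptions):

        # Toy hybrid metadata store: relational catalog + document payloads.
        import json
        import sqlite3

        rdbms = sqlite3.connect(":memory:")
        rdbms.execute("CREATE TABLE tasks (id TEXT PRIMARY KEY, state TEXT, ts TEXT)")
        documents = {}  # stand-in for a NoSQL collection

        def store(task):
            rdbms.execute("INSERT INTO tasks VALUES (?, ?, ?)",
                          (task["id"], task["state"], task["ts"]))
            documents[task["id"]] = json.dumps(task["payload"])

        store({"id": "t1", "state": "done", "ts": "2015-07-01",
               "payload": {"attempts": 3, "site": "CERN"}})
        print(rdbms.execute("SELECT state FROM tasks WHERE id = 't1'").fetchone())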

  17. Metadata: Standards for Retrieving WWW Documents (and Other Digitized and Non-Digitized Resources)

    NASA Astrophysics Data System (ADS)

    Rusch-Feja, Diann

    The use of metadata for indexing digitized and non-digitized resources for resource discovery in a networked environment is being increasingly implemented all over the world. Greater precision is achieved using metadata than by relying on universal search engines, and metadata can furthermore be used as a filtering mechanism for search results. An overview of various metadata sets is given, followed by a more focussed presentation of Dublin Core Metadata, including examples of sub-elements and qualifiers. In particular, the Dublin Core Relation element provides connections between the metadata of various related electronic resources, as well as the metadata for physical, non-digitized resources. This facilitates more comprehensive search results without losing precision and brings together different genres of information which would otherwise be searchable only in separate databases. Furthermore, the advantages of Dublin Core Metadata in comparison with library cataloging and the use of universal search engines are discussed briefly, followed by a listing of types of implementation of Dublin Core Metadata.
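
    As an illustration of the Relation element discussed above, a small Dublin Core record can link a digitized scan to its physical original; a sketch built with Python's standard library (the identifiers and the qualifier value are illustrative):

        # Build a Dublin Core record whose dc:relation points to a physical resource.
        import xml.etree.ElementTree as ET

        DC = "{http://purl.org/dc/elements/1.1/}"
        ET.register_namespace("dc", "http://purl.org/dc/elements/1.1/")

        record = ET.Element("record")
        ET.SubElement(record, DC + "title").text = "Field notebook, 1903 expedition (scan)"
        ET.SubElement(record, DC + "type").text = "Image"
        ET.SubElement(record, DC + "relation").text = "IsFormatOf: museum catalogue no. 1903-17"
        print(ET.tostring(record, encoding="unicode"))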

  18. Abstract shape analysis of RNA.

    PubMed

    Janssen, Stefan; Giegerich, Robert

    2014-01-01

    Abstract shape analysis is a method to learn more about the complete Boltzmann ensemble of the secondary structures of a single RNA molecule. Abstract shapes classify competing secondary structures into classes that are defined by their arrangement of helices. This allows us to compute, in addition to the structure of minimal free energy, a set of structures that represents relevant and interesting structural alternatives. Furthermore, it allows us to compute probabilities of all structures within a shape class, which ensures that our representative subset covers the complete Boltzmann ensemble, except for a portion of negligible probability. This chapter explains the main functions of abstract shape analysis, as implemented in the tool RNAshapes. It reports on some other types of analysis that are based on the abstract shapes idea and shows how you can solve novel problems by creating your own shape abstractions.

  19. A Common Metadata System for Marine Data Portals

    NASA Astrophysics Data System (ADS)

    Wosniok, C.; Breitbach, G.; Lehfeldt, R.

    2012-04-01

    ), Web Feature Service (WFS) and Sensor Observation Service (SOS), which ensures interoperability and extensibility. In addition, metadata, a crucial component for searching and finding information in large data infrastructures, are provided via the Catalogue Service for the Web (CSW). MDI-DE and COSYNA rely on NOKIS, a metadata information system for marine metadata that reflects a metadata profile tailored for marine data according to the specifications of German coastal authorities. In spite of this common software base, interoperability between the two data collections requires constant alignment of the diverse data processed by the two portals. While monitoring data in the MDI-DE are currently rather campaign-based, COSYNA has to fit constantly evolving time series into metadata sets. With all data following the same metadata profile, we now reach full interoperability between the different data collections. The distributed marine information system provides options to search, find and visualise the harmonised results from continuous monitoring, field campaigns, numerical modeling and other data in one web client.

  20. Mechanical Engineering Department technical abstracts

    SciTech Connect

    Denney, R.M.

    1982-07-01

    The Mechanical Engineering Department publishes listings of technical abstracts twice a year to inform readers of the broad range of technical activities in the Department, and to promote an exchange of ideas. Details of the work covered by an abstract may be obtained by contacting the author(s). Overall information about current activities of each of the Department's seven divisions precedes the technical abstracts.

  1. Data and Metadata Management at the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Holt, J. M.; Mader, J. A.; Tran, H. D.; Goodrich, R. W.; Gelino, C. R.; Laity, A. C.; Kong, M.; Swain, M. A.

    2015-09-01

    A collaboration between the W. M. Keck Observatory (WMKO) in Hawaii and the NASA Exoplanet Science Institute (NExScI) in California, the Keck Observatory Archive (KOA) was commissioned in 2004 to archive data from WMKO, which operates two classically scheduled 10 m ground-based telescopes. The data from Keck are not suitable for direct ingestion into the archive since the metadata contained in the original FITS headers lack the information necessary for proper archiving. The data pose a number of challenges for KOA: different instrument builders used different standards, and the nature of classical observing, where observers have complete control of the instruments and their observations, leads to heterogeneous data sets. For example, it is often difficult to determine if an observation is a science target, a sky frame, or a sky flat. It is also necessary to assign the data to the correct owners and observing programs, which can be a challenge for time-domain and target-of-opportunity observations, or on split nights, during which two or more principal investigators share a given night. In addition, having uniform and adequate calibrations is important for the proper reduction of data. Therefore, KOA needs to distinguish science files from calibration files, identify the type of calibrations available, and associate the appropriate calibration files with each science frame. We describe the methodologies and tools that we have developed to successfully address these difficulties, adding content to the FITS headers and "retrofitting" the metadata in order to support archiving Keck data, especially those obtained before the archive was designed. With the expertise gained from having successfully archived observations taken with all eight currently active instruments at WMKO, we have developed lessons learned from handling this complex array of heterogeneous metadata. These lessons help ensure a smooth ingestion of data not only for current but also future instruments.
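
    The "retrofitting" step can be pictured as opening each FITS file and writing archive-level keywords derived from existing header clues; a sketch using astropy (the keyword names, file name, and classification rule are illustrative, not KOA's actual ones):

        # Add archive keywords to a FITS header in place.
        from astropy.io import fits

        with fits.open("keck_frame.fits", mode="update") as hdul:  # hypothetical file
            hdr = hdul[0].header
            # Toy classification rule based on an existing header clue.
            imtype = "calib" if hdr.get("SHUTTER", "open") == "closed" else "science"
            hdr["IMTYPE"] = (imtype, "Archive-assigned frame type")
            hdr["PROGID"] = ("U123", "Observing program owning this frame")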

  2. Metadata and data management for the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Tran, H. D.; Holt, J.; Goodrich, R. W.; Mader, J. A.; Swain, M.; Laity, A. C.; Kong, M.; Gelino, C. R.; Berriman, G. B.

    2014-07-01

    A collaboration between the W. M. Keck Observatory (WMKO) in Hawaii and the NASA Exoplanet Science Institute (NExScI) in California, the Keck Observatory Archive (KOA) was commissioned in 2004 to archive observing data from WMKO, which operates two classically scheduled 10 m ground-based telescopes. The observing data from Keck are not suitable for direct ingestion into the archive since the metadata contained in the original FITS headers lack the information necessary for proper archiving. Coupled with different standards among instrument builders and the heterogeneous nature of the data inherent in classical observing, in which observers have complete control of the instruments and their observations, the data pose a number of technical challenges for KOA. For example, it is often difficult to determine if an observation is a science target, a sky frame, or a sky flat. It is also necessary to assign the data to the correct owners and observing programs, which can be a challenge for time-domain and target-of-opportunity observations, or on split nights, during which two or more principal investigators share a given night. In addition, having uniform and adequate calibrations is important for the proper reduction of data. Therefore, KOA needs to distinguish science files from calibration files, identify the type of calibrations available, and associate the appropriate calibration files with each science frame. We describe the methodologies and tools that we have developed to successfully address these difficulties, adding content to the FITS headers and "retrofitting" the metadata in order to support archiving Keck data, especially those obtained before the archive was designed. With the expertise gained from having successfully archived observations taken with all eight currently active instruments at WMKO, we have developed lessons learned from handling this complex array of heterogeneous metadata that help ensure a smooth ingestion of data not only for current

  3. Capturing Sensor Metadata for Cross-Domain Interoperability

    NASA Astrophysics Data System (ADS)

    Fredericks, J.

    2015-12-01

    Envision a world where a field operator turns on an instrument, and is queried for information needed to create standardized encoded descriptions that, together with the sensor manufacturer knowledge, fully describe the capabilities, limitations and provenance of observational data. The Cross-Domain Observational Metadata Environmental Sensing Network (X-DOMES) pilot project (with support from the NSF/EarthCube IA) is taking the first steps needed in realizing this vision. The knowledge of how an observable physical property becomes a measured observation must be captured at each stage of its creation. Each sensor-based observation is made through the use of applied technologies, each with specific limitations and capabilities. Environmental sensors typically provide a variety of options that can be configured differently for each unique deployment, affecting the observational results. By capturing the information (metadata) at each stage of its generation, a more complete and accurate description of data provenance can be communicated. By documenting the information in machine-harvestable, standards-based encodings, metadata can be shared across disciplinary and geopolitical boundaries. Using standards-based frameworks enables automated harvesting and translation to other community-adopted standards, which facilitates the use of shared tools and workflows. The establishment of a cross-domain network of stakeholders (sensor manufacturers, data providers, domain experts, data centers), called the X-DOMES Network, provides a unifying voice for the specification of content and implementation of standards, as well as a central repository for sensor profiles, vocabularies, guidance and product vetting. The ability to easily share fully described observational data provides a better understanding of data provenance and enables the use of common data processing and assessment workflows, fostering a greater trust in our shared global resources. The X-DOMES Network

  4. A Metadata based Knowledge Discovery Methodology for Seeding Translational Research.

    PubMed

    Kothari, Cartik R; Payne, Philip R O

    2015-01-01

    In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.

  5. Latest developments for the IAGOS database: Interoperability and metadata

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We are also implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, the Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface) which combines model outputs with in situ data for
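
    A minimal sketch of what a CF-compliant product file looks like when written with the netCDF4 library; the attribute values are illustrative, not IAGOS's actual conventions:

        # Write a small time series with CF-style metadata.
        from netCDF4 import Dataset

        ds = Dataset("profile.nc", "w")  # hypothetical output file
        ds.Conventions = "CF-1.6"
        ds.title = "Aircraft ozone time series (example)"
        ds.createDimension("time", None)
        t = ds.createVariable("time", "f8", ("time",))
        t.units = "seconds since 2014-01-01 00:00:00"
        o3 = ds.createVariable("ozone", "f4", ("time",))
        o3.standard_name = "mole_fraction_of_ozone_in_air"
        o3.units = "1e-9"  # parts per billion
        ds.close()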

  6. Observation metadata handling system at the European Southern Observatory

    NASA Astrophysics Data System (ADS)

    Dobrzycki, Adam; Brandt, Daniel; Giot, David; Lockhart, John; Rodriguez, Jesus; Rossat, Nathalie; Vuong, My Hà

    2006-06-01

    We present the design of the system for handling observation metadata at the Science Archive Facility of the European Southern Observatory using Sybase ASE, Replication Server and Sybase IQ. The system has been reengineered to enhance the browsing capabilities of Archive contents through searches on any observation parameter, to support on-line updates of all parameters, and to introduce those updates on the fly into files retrieved from the Archive. The system also reduces the replication of duplicate information and simplifies database maintenance.

  7. Evolution of the architecture of the ATLAS Metadata Interface (AMI)

    NASA Astrophysics Data System (ADS)

    Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions has dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI from its beginning, when it was served by a single MySQL backend database server, to the current state: a cluster of virtual machines at the French Tier-1, an Oracle database at Lyon with complementary replication to the Oracle DB at CERN, and an AMI back-up server.

  8. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    NASA Astrophysics Data System (ADS)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information differ greatly from one geo-portal to another, as well as for similar 3D resources within the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at the Canadian federal and municipal levels whose metadata did not refer to 3D models by any definition. Among the remaining 49%, which do refer to 3D models, different definitions of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on the knowledge and current practices in 3D modeling and 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are classified in three packages: General and Complementary, on contextual and structural information, and Availability, on the transition from storage to delivery format. The proposed metadata set is compared with Canadian Geospatial

  9. Inter-University Upper Atmosphere Global Observation Network (IUGONET) Metadata Database and Its Interoperability

    NASA Astrophysics Data System (ADS)

    Yatagai, A. I.; Iyemori, T.; Ritschel, B.; Koyama, Y.; Hori, T.; Abe, S.; Tanaka, Y.; Shinbori, A.; Umemura, N.; Sato, Y.; Yagi, M.; Ueno, S.; Hashiguchi, N. O.; Kaneda, N.; Belehaki, A.; Hapgood, M. A.

    2013-12-01

    The IUGONET is a Japanese program to build a metadata database for ground-based observations of the upper atmosphere [1]. The project began in 2009 with five Japanese institutions which archive data observed by radars, magnetometers, photometers, radio telescopes and helioscopes, and so on, at various altitudes from the Earth's surface to the Sun. Systems have been developed to allow searching of the above-described metadata. We have been updating the system and adding new and updated metadata. The IUGONET development team adopted the SPASE metadata model [2] to describe the upper atmosphere data. This model is used as the common metadata format by the virtual observatories for solar-terrestrial physics. It includes metadata referring to each data file (called a 'Granule'), which enables a search for data files as well as data sets. Further details are described in [2] and [3]. Currently, three additional Japanese institutions are being incorporated in IUGONET. Furthermore, metadata of observations of the troposphere, taken at the observatories of the middle and upper atmosphere radar at Shigaraki and the Meteor radar in Indonesia, have been incorporated. These additions will contribute to efficient interdisciplinary scientific research. In the beginning of 2013, the registration of the 'Observatory' and 'Instrument' metadata was completed, which makes it easy to get an overview of the metadata database. The number of registered metadata records as of the end of July totalled 8.8 million, including 793 observatories and 878 instruments. It is important to promote interoperability and/or metadata exchange between the database development groups. A memorandum of agreement has been signed with the European Near-Earth Space Data Infrastructure for e-Science (ESPAS) project, which has similar objectives to IUGONET with regard to a framework for formal collaboration. Furthermore, observations by satellites and the International Space Station are being incorporated with a view for

  10. openPDS: protecting the privacy of metadata through SafeAnswers.

    PubMed

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has, however, yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research. PMID:25007320
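
    The SafeAnswers idea reduces to a simple contract: the service submits an approved question, the personal data store evaluates it locally, and only the low-dimensional answer leaves. A toy illustration (the question vocabulary and data layout are invented for the example):

        # Only the answer crosses the boundary; the raw metadata never does.
        metadata = {"calls": [{"to": "alice", "minutes": 4},
                              {"to": "bob", "minutes": 11}]}

        def safe_answer(question):
            if question == "total_call_minutes_last_week":
                return sum(c["minutes"] for c in metadata["calls"])
            raise PermissionError("question not approved")

        print(safe_answer("total_call_minutes_last_week"))  # prints 15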

  11. Metadata Design in the New PDS4 Standards - Something for Everybody

    NASA Astrophysics Data System (ADS)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data

  12. openPDS: Protecting the Privacy of Metadata through SafeAnswers

    PubMed Central

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S.; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web searches is collected and used intensively by organizations and big data researchers. Metadata has, however, yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research. PMID:25007320

  13. Innovation Abstracts; Volume XIV, 1992.

    ERIC Educational Resources Information Center

    Roueche, Suanne D., Ed.

    1992-01-01

    This series of 30 one- to two-page abstracts covering 1992 highlights a variety of innovative approaches to teaching and learning in the community college. Topics covered in the abstracts include: (1) faculty recognition and orientation; (2) the Amado M. Pena, Jr., Scholarship Program; (3) innovative teaching techniques, with individual abstracts…

  14. Innovation Abstracts, Volume XV, 1993.

    ERIC Educational Resources Information Center

    Roueche, Suanne D., Ed.

    1993-01-01

    This volume of 30 one- to two-page abstracts from 1993 highlights a variety of innovative approaches to teaching and learning in the community college. Topics covered in the abstracts include: (1) role-playing to encourage critical thinking; (2) team learning techniques to cultivate business skills; (3) librarian-instructor partnerships to create…

  15. Leadership Abstracts; Volume 4, 1991.

    ERIC Educational Resources Information Center

    Doucette, Don, Ed.

    1991-01-01

    "Leadership Abstracts" is published bimonthly and distributed to the chief executive officer of every two-year college in the United States and Canada. This document consists of the 15 one-page abstracts published in 1991. Addressing a variety of topics of interest to the community college administrators, this volume includes: (1) "Delivering the…

  16. Student Success with Abstract Art

    ERIC Educational Resources Information Center

    Hamidou, Kristine

    2009-01-01

    An abstract art project can be challenging or not, depending on the objectives the teacher sets up. In this article, the author describes an abstract papier-mache project that is a success for all students, and is a versatile project easily manipulated to suit the classroom of any art teacher.

  17. Abstraction in perceptual symbol systems.

    PubMed Central

    Barsalou, Lawrence W

    2003-01-01

    After reviewing six senses of abstraction, this article focuses on abstractions that take the form of summary representations. Three central properties of these abstractions are established: (i) type-token interpretation; (ii) structured representation; and (iii) dynamic realization. Traditional theories of representation handle interpretation and structure well but are not sufficiently dynamical. Conversely, connectionist theories are exquisitely dynamic but have problems with structure. Perceptual symbol systems offer an approach that implements all three properties naturally. Within this framework, a loose collection of property and relation simulators develops to represent abstractions. Type-token interpretation results from binding a property simulator to a region of a perceived or simulated category member. Structured representation results from binding a configuration of property and relation simulators to multiple regions in an integrated manner. Dynamic realization results from applying different subsets of property and relation simulators to category members on different occasions. From this standpoint, there are no permanent or complete abstractions of a category in memory. Instead, abstraction is the skill to construct temporary online interpretations of a category's members. Although an infinite number of abstractions are possible, attractors develop for habitual approaches to interpretation. This approach provides new ways of thinking about abstraction phenomena in categorization, inference, background knowledge and learning. PMID:12903648

  18. Food Science and Technology Abstracts.

    ERIC Educational Resources Information Center

    Cohen, Elinor; Federman, Joan

    1979-01-01

    Introduces the reader to the Food Science and Technology Abstracts, a data file that covers worldwide literature on human food commodities and aspects of food processing. Topics include scope, subject index, thesaurus, searching online, and abstracts; tables provide a comparison of ORBIT and DIALOG versions of the file. (JD)

  19. Technical abstracts: Mechanical engineering, 1990

    SciTech Connect

    Broesius, J.Y.

    1991-03-01

    This document is a compilation of the published, unclassified abstracts produced by mechanical engineers at Lawrence Livermore National Laboratory (LLNL) during the calendar year 1990. Many abstracts summarize work completed and published in report form. These are UCRL-JC series documents, which include the full text of articles to be published in journals and of papers to be presented at meetings, and UCID reports, which are informal documents. Not all UCIDs contain abstracts: short summaries were generated when abstracts were not included. Technical Abstracts also provides descriptions of those documents assigned to the UCRL-MI (miscellaneous) category. These are generally viewgraphs or photographs presented at meetings. An author index is provided at the back of this volume for cross referencing.

  20. Metaphor: Bridging embodiment to abstraction.

    PubMed

    Jamrozik, Anja; McQuire, Marguerite; Cardillo, Eileen R; Chatterjee, Anjan

    2016-08-01

    Embodied cognition accounts posit that concepts are grounded in our sensory and motor systems. An important challenge for these accounts is explaining how abstract concepts, which do not directly call upon sensory or motor information, can be informed by experience. We propose that metaphor is one important vehicle guiding the development and use of abstract concepts. Metaphors allow us to draw on concrete, familiar domains to acquire and reason about abstract concepts. Additionally, repeated metaphoric use drawing on particular aspects of concrete experience can result in the development of new abstract representations. These abstractions, which are derived from embodied experience but lack much of the sensorimotor information associated with it, can then be flexibly applied to understand new situations. PMID:27294425

  1. Abstracts.

    PubMed

    Gandelman, Kuan; Lamson, Michael; Bramson, Candace; Matschke, Kyle; Salageanu, Joanne; Malhotra, Bimal

    2015-09-01

    ALO-02 capsules (ALO-02) contain pellets that consist of extended-release oxycodone that surrounds sequestered naltrexone. The primary objective was to characterize the pharmacokinetics (PK) of oxycodone following single- and multiple-dose oral administration of ALO-02 40 mg BID in healthy volunteers. Secondary objectives were to characterize (1) the PK of oxycodone following single- and multiple-dose administration of a comparator OxyContin (OXY-ER) 40 mg BID as well as an alternate regimen of ALO-02 80 mg QD, and (2) the safety and tolerability assessments. Healthy volunteers received three treatments on a background of oral naltrexone (50 mg). Noncompartmental PK parameters were calculated for oxycodone. All 12 subjects were male with a mean age (SD, range) of 44.6 years (7.6, 25-55). Single-dose PK results for ALO-02 indicate that median peak plasma oxycodone concentrations were reached by 12 hours compared to 4 hours for OXY-ER. Compared to OXY-ER, mean dose-normalized, single-dose Cmax values were approximately 27% and 23% lower for ALO-02 40 mg BID and ALO-02 80 mg QD treatments, respectively. Following multiple doses all treatments reached steady state by 3 days. At steady state, oxycodone peak-to-trough fluctuation was significantly lower for ALO-02 BID versus OXY-ER. Adverse events were consistent with opioid therapy. ALO-02 40 mg BID treatment provided a PK profile appropriate for around-the-clock treatment of chronic pain. PMID:27137145

  2. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    NASA Astrophysics Data System (ADS)

    Shapoval, I.; Clemencic, M.; Cattaneo, M.

    2014-06-01

The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and database states to architecture specifications and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time-dependent geometry and conditions data, and the LHCb software, i.e. the data processing applications used for simulation, high-level triggering, reconstruction and analysis of physics data. The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process: relationships between a CondDB state and an LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in LHCb production, varying from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne - a generic metadata relationships tracking system based on the NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, the infrastructure needed for running it, security issues, first experience of usage in LHCb production, and the potential of the system to be applied to a wider set of LHCb tasks.
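
    As an editorial illustration of the relationship tracking described above, the sketch below records a compatibility link between an application state and a CondDB state using the official Neo4j Python driver (v5 API). The node labels, property names, and version strings are invented for illustration and are not Ariadne's actual schema.

        from neo4j import GraphDatabase  # official Neo4j Python driver

        # Illustrative only: labels and properties are hypothetical,
        # not Ariadne's actual data model.
        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "password"))

        def link_app_to_conddb(tx, app_version, conddb_tag):
            # MERGE creates each node and the relationship only if absent.
            tx.run(
                "MERGE (a:Application {version: $app}) "
                "MERGE (c:CondDB {tag: $tag}) "
                "MERGE (a)-[:COMPATIBLE_WITH]->(c)",
                app=app_version, tag=conddb_tag)

        with driver.session() as session:
            session.execute_write(link_app_to_conddb,
                                  "Brunel v44r1", "cond-20120831")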

  3. Sharing Images Intelligently: The Astronomical Visualization Metadata Standard

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Christensen, L.; Gauthier, A.

    2006-12-01

The astronomical education and public outreach (EPO) community plays a key role in conveying the results of scientific research to the general public. A key product of EPO development is a variety of non-scientific public image resources, both derived from scientific observations and created as artistic visualizations of scientific results. These are general image formats such as JPEG, TIFF, PNG, and GIF, not scientific FITS datasets. Such resources are currently scattered across the internet in a variety of galleries and archives, but are not searchable in any coherent or unified way. Just as Virtual Observatory standards open up all data archives to a common query engine, the EPO community will benefit greatly from a similar mechanism for image search and retrieval. A new standard has been developed for astronomical imagery defining a common set of content fields suited to the needs of astronomical visualizations. It encompasses images derived from data, artists' conceptions, simulations, and photography, and is ultimately extensible to video products. The first generation of tools is now available to tag images with this metadata, which can be embedded in the image file using an XML-based format that functions similarly to a FITS header. As image collections are processed to include astronomy visualization metadata tags, extensive information providing educational context, credits, data sources, and even coordinate information will be readily accessible for uses spanning casual browsing, publication, and interactive media systems.
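
    A minimal sketch of building such an XML metadata packet with the Python standard library; the namespace URI and field names are loosely modeled on the standard's concepts and should be read as illustrative, not normative.

        import xml.etree.ElementTree as ET

        # Field names are loosely modeled on AVM concepts (illustrative).
        ET.register_namespace("avm", "http://avm.example.org/1.0/")
        A = "{http://avm.example.org/1.0/}"
        meta = ET.Element(A + "Metadata")
        for tag, value in [("Title", "Spiral Galaxy M81"),
                           ("Credit", "Example Observatory"),
                           ("Subject.Category", "Galaxy.Spiral"),
                           ("Spatial.Equinox", "J2000")]:
            ET.SubElement(meta, A + tag).text = value

        # The serialized packet could be embedded in the image file
        # (e.g. as an XMP segment), much like a FITS header for a JPEG.
        print(ET.tostring(meta, encoding="unicode"))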

  4. Networking environmental metadata: a pilot project for the Mediterranean Region

    NASA Astrophysics Data System (ADS)

    Bonora, N.; Benito, M.; Abou El-Magd, I.; Mazzetti, P.; Ndong, C.

    2012-04-01

To better exploit any environmental dataset it is necessary to provide detailed information (metadata) capable of furnishing the best description of the data. Operating environmental data and information networking requires long-term investment of financial and human resources. As these resources are scarce, ensuring sustainability can be a struggle. To use human and economic resources more effectively and to avoid duplication, it is essential to test existing models and, where appropriate, replicate strategies and experiences. For the above reasons, a pilot project has been planned to implement and test the networking of metadata catalogues, involving countries of the Mediterranean Region, to demonstrate that the adoption of free and open source software and international interoperability standards can contribute to the alignment of ICT resources toward environmental information sharing. This pilot, planned in the frame of the EGIDA FP7 European Project, aims to support the implementation of a replication methodology for the establishment of national/regional environmental information nodes on the basis of the System of Systems architecture concept, to support the exchange of environmental information in the frame of the Barcelona Convention, and to initiate a Mediterranean-scale joint contribution to GEOSS focusing on partnerships, infrastructures and products. To establish the partnership and to conduct interoperability tests, this pilot project builds on the Info-RAC (Information and Communication Activity Centre of the United Nations Environment Programme - Mediterranean Action Plan) and GEO (Group on Earth Observations) networks.

  5. Aggregating Metadata from Multiple Archives: a Non-VO Approach

    NASA Astrophysics Data System (ADS)

    Gwyn, S. D. J.

    2015-09-01

The Solar System Object Image Search (SSOIS) tool at the Canadian Astronomy Data Centre allows users to search for images of moving objects taken with a large number of ground-based and space-based telescopes (Gwyn et al. 2012). The ever-growing list of telescopes includes HST, Subaru, CFHT, Gemini, SDSS, AAT, NEAT, NOAO, WISE and the ING and ESO telescopes. The first step in constructing SSOIS is to aggregate the metadata from the various archives. An effective search requires the RA, Dec, and time of the exposure, and the field of view of the instrument. The archives are extremely heterogeneous; in some cases the interface dates back to the 1990s. After scraping these archives, four lessons have been learned: 1) The more primitive the archive, the easier it is to scrape. 2) Simple Image Access Protocol (SIAP) is not an effective means of scraping archives. 3) When scraping an archive with multiple queries, the queries should be done by time rather than by sky position. 4) Retrieving the metadata is relatively easy; the hard work is in the quality control and in understanding each telescope/instrument.
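
    Lesson 3 can be made concrete with a small harvesting sketch that walks a time range in fixed slices; the endpoint URL, parameter names, and JSON response shape are hypothetical stand-ins for a real archive interface.

        from datetime import datetime, timedelta
        import requests

        ARCHIVE_URL = "https://archive.example.org/query"  # hypothetical

        def scrape_by_time(start, end, chunk_days=30):
            """Harvest exposure metadata in time slices (lesson 3)."""
            rows = []
            t = start
            while t < end:
                t_next = min(t + timedelta(days=chunk_days), end)
                resp = requests.get(ARCHIVE_URL, params={
                    "date_min": t.isoformat(),
                    "date_max": t_next.isoformat()})
                resp.raise_for_status()
                rows.extend(resp.json())  # assumes JSON rows per exposure
                t = t_next
            return rows

        exposures = scrape_by_time(datetime(1999, 1, 1),
                                   datetime(2000, 1, 1))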

  6. Automatic computed tomography patient dose calculation using DICOM header metadata.

    PubMed

    Jahnen, A; Kohler, S; Hermen, J; Tack, D; Back, C

    2011-09-01

The present work describes a method that calculates patient dose values in computed tomography (CT) based on metadata contained in DICOM images, in support of patient dose studies. The DICOM metadata is preprocessed to extract the necessary calculation parameters. Vendor-specific DICOM header information is harmonized using vendor translation tables, and unavailable DICOM tags can be completed through a graphical user interface. CT-Expo, an MS Excel application for calculating the radiation dose, is used to calculate the patient doses. All relevant data and calculation results are stored for further analysis in a relational database. Final results are compiled using data mining tools. This solution was successfully used for the 2009 CT dose study in Luxembourg. National diagnostic reference levels for standard examinations were calculated based on data from each of the country's hospitals. Compared with earlier questionnaire-based surveys, this new automatic system saved time as well as resources during both data acquisition and evaluation. PMID:21831868
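
    A rough sketch of the preprocessing step, reading header fields with the pydicom library and harmonizing one vendor-specific parameter through a translation table; the tag choices and table contents are illustrative rather than those of the actual study.

        import pydicom

        # Vendor translation table (illustrative): maps each vendor's
        # header conventions onto one harmonized parameter keyword.
        VENDOR_TABLE = {
            "SIEMENS": {"ctdi": "CTDIvol"},
            "GE MEDICAL SYSTEMS": {"ctdi": "CTDIvol"},
        }

        def extract_ct_parameters(path):
            ds = pydicom.dcmread(path, stop_before_pixels=True)
            vendor = str(ds.get("Manufacturer", "")).upper()
            params = {
                "kvp": ds.get("KVP"),
                "slice_thickness": ds.get("SliceThickness"),
                "exposure_mAs": ds.get("Exposure"),
            }
            keyword = VENDOR_TABLE.get(vendor, {}).get("ctdi", "CTDIvol")
            params["ctdi_vol"] = ds.get(keyword)  # None if tag absent
            return params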

  7. Metadata in Multiple Dialects and the Rosetta Stone

    NASA Astrophysics Data System (ADS)

    Habermann, T.; Monteleone, K.; Armstrong, E. M.; White, B.

    2012-12-01

    As data are shared across multiple communities and re-used in unexpected ways, it is critical to be able to share metadata about who collected and stewarded the data; where the data are available; how the data were collected and processed; and, how they were used in the past. It is even more critical that the new tools can access this information and present it in ways that new users can understand and, if necessary, integrate into their analyses. Unfortunately, as communities develop and use conventions for these metadata, it becomes more and more difficult to share them across community boundaries. This is true even though these conventions are really dialects of a common documentation language that share many important concepts. Breaking down these barriers requires developing community consensus about these concepts and tools for translating between common representations. Ontologies and connections between them have been used to address this problem across datasets from multiple disciplines. Can these tools help solve similar problems with documentation?

  8. The ISO/IEC 11179 norm for metadata registries: does it cover healthcare standards in empirical research?

    PubMed

    Ngouongo, Sylvie M N; Löbe, Matthias; Stausberg, Jürgen

    2013-04-01

In order to support empirical medical research concerning reuse and improvement of the expressiveness of study data, and hence promote syntactic as well as semantic interoperability, services are required for the maintenance of data element collections. As part of the project for the implementation of a German metadata repository for empirical research, we assessed the ability of ISO/IEC 11179 "Information technology - Metadata registries (MDR)" part 3 edition 3 Final Committee Draft "Registry metamodel and basic attributes" to represent healthcare standards. The first step of the evaluation was a reformulation of ISO's metamodel with the terms and structures of the different healthcare standards. In a second step, we imported instances of the healthcare standards into a prototypical database implementation representing ISO's metamodel. Whereas the flat structure of disease registries as well as some controlled vocabularies could easily be mapped to ISO's metamodel, complex structures such as those used in reference models of electronic health records or in classifications could not be exhaustively represented. A logical reconstruction of an application will be needed in order to represent them adequately. Moreover, the correct linkage between elements from ISO/IEC 11179 edition 3 and concepts of classifications remains unclear. We also observed some restrictions of ISO/IEC 11179 edition 3 concerning the representation of items of the Operational Data Model from the Clinical Data Interchange Standards Consortium, which might be outside the scope of an MDR. Thus, despite the obvious strength of ISO/IEC 11179 edition 3 for metadata registries, some issues should be considered in its further development. PMID:23246614
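
    For orientation, a deliberately simplified rendering of the part 3 core in plain Python: a data element pairs a data element concept (object class plus property) with a value domain. The attribute selection is a small illustrative subset of the metamodel, not a faithful implementation.

        from dataclasses import dataclass, field

        @dataclass
        class ValueDomain:
            datatype: str
            permissible_values: list = field(default_factory=list)

        @dataclass
        class DataElementConcept:
            object_class: str   # e.g. "Patient"
            property: str       # e.g. "administrative sex"

        @dataclass
        class DataElement:
            identifier: str
            concept: DataElementConcept
            domain: ValueDomain

        sex = DataElement(
            "DE-0001",
            DataElementConcept("Patient", "administrative sex"),
            ValueDomain("code", ["F", "M", "U"]))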

  9. iLOG: A Framework for Automatic Annotation of Learning Objects with Empirical Usage Metadata

    ERIC Educational Resources Information Center

    Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen

    2012-01-01

    Learning objects (LOs) are digital or non-digital entities used for learning, education or training commonly stored in repositories searchable by their associated metadata. Unfortunately, based on the current standards, such metadata is often missing or incorrectly entered making search difficult or impossible. In this paper, we investigate…

  10. Characterization of Educational Resources in e-Learning Systems Using an Educational Metadata Profile

    ERIC Educational Resources Information Center

    Solomou, Georgia; Pierrakeas, Christos; Kameas, Achilles

    2015-01-01

    The ability to effectively administrate educational resources in terms of accessibility, reusability and interoperability lies in the adoption of an appropriate metadata schema, able of adequately describing them. A considerable number of different educational metadata schemas can be found in literature, with the IEEE LOM being the most widely…

  11. An Assistant for Loading Learning Object Metadata: An Ontology Based Approach

    ERIC Educational Resources Information Center

    Casali, Ana; Deco, Claudia; Romano, Agustín; Tomé, Guillermo

    2013-01-01

    In the last years, the development of different Repositories of Learning Objects has been increased. Users can retrieve these resources for reuse and personalization through searches in web repositories. The importance of high quality metadata is key for a successful retrieval. Learning Objects are described with metadata usually in the standard…

  12. Manifestations of Metadata: From Alexandria to the Web--Old is New Again

    ERIC Educational Resources Information Center

    Kennedy, Patricia

    2008-01-01

    This paper is a discussion of the use of metadata, in its various manifestations, to access information. Information management standards are discussed. The connection between the ancient world and the modern world is highlighted. Individual perspectives are paramount in fulfilling information seeking. Metadata is interpreted and reflected upon in…

  13. Document Classification in Support of Automated Metadata Extraction Form Heterogeneous Collections

    ERIC Educational Resources Information Center

    Flynn, Paul K.

    2014-01-01

    A number of federal agencies, universities, laboratories, and companies are placing their documents online and making them searchable via metadata fields such as author, title, and publishing organization. To enable this, every document in the collection must be catalogued using the metadata fields. Though time consuming, the task of identifying…

  14. NSIDC Metadata Improvements: Building the foundation for interoperability, discovery, and services

    NASA Astrophysics Data System (ADS)

    Leon, A.; Collins, J. A.; Billingsley, B. W.; Jasiak, E.

    2011-12-01

The National Snow and Ice Data Center (NSIDC) is actively engaged in efforts to improve metadata acquisition, creation, management, and dissemination. We are replacing a collection of historical databases with an enterprise database (EDB) to manage file and service metadata critical to NSIDC's continued advancement in the areas of data management, stewardship, discovery, analysis and dissemination. Leveraging PostGIS and the ISO 19115 metadata standard, the database will serve as the authoritative, consistent, and extensible representation of NSIDC data holdings. The EDB will support multiple applications, and these applications may present interfaces designed for either human or machine interaction. To serve in this critical role, the content of the EDB must be valid and reliable. Current efforts are focused on developing a user interface to support the input and maintenance of metadata content. Future efforts will include the addition of automated (batch) metadata ingest. Ultimately, the EDB content and the services built to interface with it will be leveraged to automate and improve our existing metadata workflows. A solid metadata foundation is critical to the advancement of discovery and services. Building upon a well-established standard like ISO 19115 enables efficient translation to various metadata schemas, supports rich user interfaces, and promotes interoperability with external data services.

  15. Inferring Metadata for a Semantic Web Peer-to-Peer Environment

    ERIC Educational Resources Information Center

    Brase, Jan; Painter, Mark

    2004-01-01

    Learning Objects Metadata (LOM) aims at describing educational resources in order to allow better reusability and retrieval. In this article we show how additional inference rules allows us to derive additional metadata from existing ones. Additionally, using these rules as integrity constraints helps us to define the constraints on LOM elements,…

  16. Contextual Classification in the Metadata Object Manager (M.O.M.).

    ERIC Educational Resources Information Center

    Pole, Thomas

    1999-01-01

    Defines the contextual classification model, comparing it to the traditional metadata models from which it evolved. Using the MetaData Object Manager (M.O.M) as an example, discusses the use of Contextual Classification in developing this system, and the organizational, performance and reliability advantages of using an external (to the data…

  17. Application of Dublin Core Metadata in the Description of Digital Primary Sources in Elementary School Classrooms.

    ERIC Educational Resources Information Center

    Gilliland-Swetland, Anne J.; Kafai, Yasmin B.; Landis, William E.

    2000-01-01

    Reports on the results of research examining the ability of fourth and fifth grade science and social science students to use Dublin Core metadata elements to describe image resources for inclusion in a digital archive. Describes networked learning environments called Digital Portfolio Archives and discusses metadata for historical primary…

  18. Organizing Scientific Data Sets: Studying Similarities and Differences in Metadata and Subject Term Creation

    ERIC Educational Resources Information Center

    White, Hollie C.

    2012-01-01

    Background: According to Salo (2010), the metadata entered into repositories are "disorganized" and metadata schemes underlying repositories are "arcane". This creates a challenging repository environment in regards to personal information management (PIM) and knowledge organization systems (KOSs). This dissertation research is…

  19. NASA Patent Abstracts bibliography: A continuing bibliography. Section 1: Abstracts (supplement 21) Abstracts

    NASA Technical Reports Server (NTRS)

    1982-01-01

Abstracts are cited for 87 patents and applications introduced into the NASA scientific and technical information system during the period of January 1982 through June 1982. Each entry consists of a citation, an abstract, and in most cases, a key illustration selected from the patent or patent application.

  20. SCEC Community Modeling Environment (SCEC/CME) - Data and Metadata Management Issues

    NASA Astrophysics Data System (ADS)

    Minster, J.; Faerman, M.; Ely, G.; Maechling, P.; Gupta, A.; Xin, Q.; Kremenek, G.; Shkoller, B.; Olsen, K.; Day, S.; Moore, R.

    2003-12-01

One of the goals of the SCEC Community Modeling Environment is to facilitate the execution of substantial collections of large numerical simulations. Since such simulations are resource-intensive and can generate extremely large outputs, implementing this concept raises a host of data and metadata management challenges. Due to the high computational cost involved in running these simulations, one must balance the cost of repeating such simulations against the burden of archiving the produced datasets, making them accessible for future use such as post-processing or visualization without the need for re-computation. Further, a carefully selected collection of such data sets might be used as benchmarks for assessing accuracy and performance of future simulations, developing post-processing software such as visualization tools, and testing data and metadata management strategies. The problem is rapidly compounded if one contemplates the possibility of computing ensemble averages for simulations of complex nonlinear systems. The definition and organization of a complete set of metadata to fully describe any given simulation is a surprisingly complex task, which we approach from the point of view of developing a community digital library that provides the means to organize the material, as well as standard metadata attributes. Web-based discovery mechanisms are then used to support browsing and retrieval of data. A key component is the selection of appropriate descriptive metadata. We compare existing metadata standards from the digital library community, federal standards, and discipline-specific metadata attributes. The digital library community has developed a standard for organizing metadata, called the Metadata Encoding and Transmission Standard (METS). This schema supports descriptive (provenance), administrative (location), structural (component relationships), and behavioral (display and manipulation applications) metadata. The organization can be augmented with
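
    For readers unfamiliar with METS, the four categories named above map onto top-level sections of a METS document. The skeleton below uses real METS section names but is illustrative and not schema-complete.

        import xml.etree.ElementTree as ET

        ET.register_namespace("mets", "http://www.loc.gov/METS/")
        M = "{http://www.loc.gov/METS/}"
        mets = ET.Element(M + "mets")
        ET.SubElement(mets, M + "dmdSec", ID="DMD1")   # descriptive
        ET.SubElement(mets, M + "amdSec", ID="AMD1")   # administrative
        smap = ET.SubElement(mets, M + "structMap")    # structural
        ET.SubElement(smap, M + "div", LABEL="simulation-run-001")
        ET.SubElement(mets, M + "behaviorSec", ID="BEH1")  # behavioral
        print(ET.tostring(mets, encoding="unicode"))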

  1. A method for automatically abstracting visual documents

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1993-01-01

Visual documents - motion sequences on film, video-tape, and digital recordings - constitute a major source of information for the Space Agency, as well as all other government and private sector entities. This article describes a method for automatically selecting key frames from visual documents. These frames may in turn be used to represent the total image sequence of visual documents in visual libraries, hypermedia systems, and training guides. The abstracting algorithm reduces 51 minutes of video sequences to 134 frames, an information reduction on the order of 700:1.
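
    The paper's specific algorithm is not reproduced here, but one common key-frame heuristic - keep a frame only when it differs sufficiently from the last kept frame - can be sketched with OpenCV; the threshold is an arbitrary placeholder.

        import cv2  # OpenCV

        def key_frames(video_path, threshold=30.0):
            """Keep frames whose mean gray-level difference from the
            last kept frame exceeds a threshold (simple heuristic)."""
            cap = cv2.VideoCapture(video_path)
            kept, last = [], None
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                if last is None or cv2.absdiff(gray, last).mean() > threshold:
                    kept.append(frame)
                    last = gray
            cap.release()
            return kept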

  2. Integrating XQuery-Enabled SCORM XML Metadata Repositories into an RDF-Based E-Learning P2P Network

    ERIC Educational Resources Information Center

    Qu, Changtao; Nejdl, Wolfgang

    2004-01-01

    Edutella is an RDF-based E-Learning P2P network that is aimed to accommodate heterogeneous learning resource metadata repositories in a P2P manner and further facilitate the exchange of metadata between these repositories based on RDF. Whereas Edutella provides RDF metadata repositories with a quite natural integration approach, XML metadata…

  3. Developmental milestones record

    MedlinePlus

Excerpt from a consumer-health developmental milestones checklist: rides tricycle well; starts school; understands size and time concepts (preschool); school-age child (6 to 12 years) understands abstract concepts. Related topics include: Developmental milestones record - 2 months.

  4. Generation of a Solar Cycle of Sunspot Metadata Using the AIA Event Detection Framework - A Test of the System

    NASA Astrophysics Data System (ADS)

    Slater, G. L.; Zharkov, S.

    2008-12-01

The soon-to-be-launched Solar Dynamics Observatory (SDO) will generate roughly 2 TB of image data per day, far more than previous solar missions. Because of the difficulty of widely distributing this enormous volume of data, and in order to maximize discovery and scientific return, a sophisticated automated metadata extraction system is being developed at Stanford University and the Lockheed Martin Solar and Astrophysics Laboratory in Palo Alto, CA. A key component of this system is the Event Detection System, which will supervise the execution of a set of feature and event extraction algorithms running in parallel, in real time, on all images recorded by the four telescopes of the key imaging instrument, the Atmospheric Imaging Assembly (AIA). The system will run on a Beowulf cluster of 160 processors. As a test of the new system, we will run feature extraction software developed under the European Grid of Solar Observatories (EGSO) program to extract sunspot metadata from the 12-year SOHO MDI mission archive of full-disk continuum and magnetogram images, and also from the TRACE high-resolution image archive. Although the main goal will be to test the performance of the production-line framework, the resulting database will have applications for both research and space weather prediction. We examine some of these applications and compare the databases generated with others currently available.

  5. Metadata behind the Interoperability of Wireless Sensor Networks.

    PubMed

    Ballari, Daniela; Wachowicz, Monica; Callejo, Miguel Angel Manso

    2009-01-01

Wireless Sensor Networks (WSNs) produce changes of status that are frequent, dynamic and unpredictable, and cannot be represented using a linear cause-effect approach. Consequently, a new approach is needed to handle these changes in order to support dynamic interoperability. Our approach is to introduce the notion of context as an explicit representation of changes of a WSN status inferred from metadata elements, which in turn, leads towards a decision-making process about how to maintain dynamic interoperability. This paper describes the developed context model to represent and reason over different WSN statuses based on four types of contexts, which have been identified as sensing, node, network and organisational contexts. The reasoning has been addressed by developing contextualising and bridging rules. As a result, we were able to demonstrate how contextualising rules have been used to reason on changes of WSN status as a first step towards maintaining dynamic interoperability.
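
    A toy sketch of what contextualising and bridging rules might look like; the metadata element names, thresholds, and context vocabularies are invented for illustration and do not come from the paper.

        # Contextualising rule: infer a node-context status from raw
        # metadata elements (names and thresholds are hypothetical).
        def node_context(metadata):
            if metadata["battery_v"] < 3.0:
                return "node:low-power"
            if metadata["last_seen_s"] > 600:
                return "node:unreachable"
            return "node:operational"

        # Bridging rule: map one network's context vocabulary onto
        # another's so the two WSNs remain interoperable.
        BRIDGE = {"node:low-power": "status/degraded",
                  "node:unreachable": "status/offline",
                  "node:operational": "status/ok"}

        print(BRIDGE[node_context({"battery_v": 2.8, "last_seen_s": 42})])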

  6. Metadata behind the Interoperability of Wireless Sensor Networks

    PubMed Central

    Ballari, Daniela; Wachowicz, Monica; Callejo, Miguel Angel Manso

    2009-01-01

Wireless Sensor Networks (WSNs) produce changes of status that are frequent, dynamic and unpredictable, and cannot be represented using a linear cause-effect approach. Consequently, a new approach is needed to handle these changes in order to support dynamic interoperability. Our approach is to introduce the notion of context as an explicit representation of changes of a WSN status inferred from metadata elements, which in turn, leads towards a decision-making process about how to maintain dynamic interoperability. This paper describes the developed context model to represent and reason over different WSN statuses based on four types of contexts, which have been identified as sensing, node, network and organisational contexts. The reasoning has been addressed by developing contextualising and bridging rules. As a result, we were able to demonstrate how contextualising rules have been used to reason on changes of WSN status as a first step towards maintaining dynamic interoperability. PMID:22412330

  7. An integrated content and metadata based retrieval system for art.

    PubMed

    Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James

    2004-03-01

    A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.

  8. Arctic Data Explorer: A Rich Solr Powered Metadata Search Portal

    NASA Astrophysics Data System (ADS)

    Liu, M.; Truslove, I.; Yarmey, L.; Lopez, L.; Reed, S. A.; Brandt, M.

    2013-12-01

The Advanced Cooperative Arctic Data and Information Service (ACADIS) manages data and is the gateway for all relevant Arctic physical, life, and social science data for the Arctic Sciences (ARC) research community. Arctic Data Explorer (ADE), developed by the National Snow and Ice Data Center (NSIDC) under the ACADIS umbrella, is a data portal that provides users the ability to search across multiple Arctic data catalogs rapidly and precisely. In order to help users quickly find the data they are interested in, we provide a simple search interface: a search box with spatial and temporal options. The core of the interface is a 'Google-like' single search box with logic to handle complex queries behind the scenes. ACADIS collects all metadata through the GI-Cat metadata broker service and indexes it in Solr. The single search box is implemented as a text-based search utilizing the powerful tools provided by Solr. In this poster, we briefly explain Solr's indexing and searching capabilities. Several examples are presented to illustrate the rich search functionality the simple search box supports. We then dive into implementation details, such as how phrase queries, wildcard queries, range queries, fuzzy queries, and special search-term handling were integrated into ADE search. To provide our users the most relevant answers to their queries as quickly as possible, we worked with the Advisory Committee and the expanding Arctic User Community (scientists and data experts) to collect feedback, improve the search results, and adjust the relevance/ranking logic to return more precise search results. The poster gives specific examples of how we tuned the relevance ranking to achieve higher-quality search results. A planned feature is to provide dataset recommendations based on a user's current search history. Both collaborative filtering and content-based approaches were considered and researched. A feasible solution is proposed based on the content-based approach.
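
    A hedged sketch of how the single search box might be translated into Solr queries over HTTP; the core name, field names, and endpoint are hypothetical, and only standard Solr query syntax (phrase, fuzzy, range) is assumed.

        import requests

        SOLR = "http://localhost:8983/solr/ade/select"  # hypothetical core

        def search(user_text, start_date=None, end_date=None):
            # Phrase, wildcard, and fuzzy syntax pass through q unchanged.
            params = {"q": user_text, "wt": "json", "rows": 25}
            if start_date and end_date:  # temporal option as a range query
                params["fq"] = f"start_date:[{start_date} TO {end_date}]"
            resp = requests.get(SOLR, params=params)
            resp.raise_for_status()
            return resp.json()

        results = search('"sea ice" thick~')  # phrase plus fuzzy term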

  9. Provenance Tracking for Earth Science Data and Its Metadata Representation

    NASA Astrophysics Data System (ADS)

    Barkstrom, B. R.

    2007-12-01

In many cases Earth science data production involves long and complex chains of processes that accept files as input and create new files. These chains form a mathematical graph in which the files and processes are the vertices and the relations between the files and processes form the edges. There are four types of edges, which can be represented by the relations "this file was produced by that process," "this file is used by those processes," "this process needs these files for input," and "this process produces those files as output." Because Earth science data production often involves using previous data for statistical quality control, provenance graphs can be very large. For example, if previous data are used to develop statistics of clear-sky radiances, a particular file may depend on statistics collected from many months of data. For EOS data or for the upcoming NPP and NPOESS missions, the number of files being ingested per day can be in the range of 10,000 to 100,000. As a result, the number of vertices and links can easily be in the millions to hundreds of millions of objects. This suggests that routine inclusion of complete provenance graphs in single files may be prohibitively voluminous, although increasingly stringent requirements for provenance tracking require maintenance of the information from which the graph can be reliably constructed. The fact that provenance tracking requires traversal of the vertices and edges of the graph makes it difficult to fit uniquely into eXtensible Markup Language (XML). It also makes the construction of the graph difficult to do in standard Structured Query Language (SQL), because the tables representing the graph require recursive queries. Both of these difficulties require care in constructing the data structures and the software that stores and makes the metadata accessible. This paper will then discuss the representation of this structure in metadata, including the possibilities
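
    The four edge relations reduce to two adjacency maps (file-to-producing-process and process-to-input-files), from which a file's ancestry is recovered by recursive traversal - exactly the operation that is awkward in plain SQL. A minimal sketch with invented file and process names:

        # Bipartite provenance graph: files and processes are vertices.
        produced_by = {"L2_granule.nc": "calibrate_v3"}    # file -> process
        inputs_of = {"calibrate_v3": ["L1_granule.nc",     # process -> files
                                      "clear_sky_stats.nc"]}

        def ancestors(filename, seen=None):
            """Recursively walk produced-by / needs-as-input edges."""
            seen = seen if seen is not None else set()
            process = produced_by.get(filename)
            for parent in inputs_of.get(process, []):
                if parent not in seen:
                    seen.add(parent)
                    ancestors(parent, seen)
            return seen

        print(ancestors("L2_granule.nc"))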

  10. The key to enduring access: Cross-complex Metadata collaboration. Revision 1

    SciTech Connect

    Lownsbery, B.; Newton, H.; Ringe, A.

    1996-08-01

The Nuclear Weapons Information Group (NWIG) is a voluntary collaborative effort of government organizations involved in nuclear weapons research, development, production, and testing. Standardized metadata is seen as critical to locating, accessing, and effectively using the data, information, and knowledge of both past and future weapons activities. This paper describes the activities of the NWIG Metadata Working Group in developing the metadata elements and authorities which will be used to share information about data stored in computers and vaults across the complex. With the current lack of secure network connectivity, it is impossible to have distributed access. Therefore we have focused on standardizing the form and content of shared metadata. We have adopted an SGML-based neutral exchange form that is completely independent of how the metadata is created and how it will be used. Our efforts have included the definition of a set of metadata elements that can be applied to all data types, plus additional attributes specific to each data type, such as documents, drawings, radiographs, photos, movies, etc. We have developed a common subject categorization taxonomy and identified several subsets of a standard glossary and thesaurus for inclusion in the metadata, to provide consistency of terminology and the capability to link back to the full thesaurus. 2 refs., 2 figs., 2 tabs.

  11. Foundations of a metadata repository for databases of registers and trials.

    PubMed

    Stausberg, Jürgen; Löbe, Matthias; Verplancke, Philippe; Drepper, Johannes; Herre, Heinrich; Löffler, Markus

    2009-01-01

    The planning of case report forms (CRFs) in clinical trials or databases in registers is mostly an informal process starting from scratch involving domain experts, biometricians, and documentation specialists. The Telematikplattform für Medizinische Forschungsnetze, an umbrella organization for medical research in Germany, aims at supporting and improving this process with a metadata repository, covering the variables and value lists used in databases of registers and trials. The use cases for the metadata repository range from a specification of case report forms to the harmonization of variable collections, variables, and value lists through a formal review. The warehouse used for the storage of the metadata should at least fulfill the definition of part 3 "Registry metamodel and basic attributes" of ISO/IEC 11179 Information technology - Metadata registries. An implementation of the metadata repository should offer an import and export of metadata in the Operational Data Model standard of the Clinical Data Interchange Standards Consortium. It will facilitate the creation of CRFs and data models, improve the quality of CRFs and data models, support the harmonization of variables and value lists, and support the mapping of metadata and data. PMID:19745342

  12. Modelling Metamorphism by Abstract Interpretation

    NASA Astrophysics Data System (ADS)

    Dalla Preda, Mila; Giacobazzi, Roberto; Debray, Saumya; Coogan, Kevin; Townsend, Gregg M.

Metamorphic malware apply semantics-preserving transformations to their own code in order to foil detection systems based on signature matching. In this paper we consider the problem of automatically extracting metamorphic signatures from such malware. We introduce a semantics for self-modifying code, called phase semantics, and prove its correctness by showing that it is an abstract interpretation of the standard trace semantics. Phase semantics precisely models metamorphic code behavior by providing a set of traces of programs which correspond to the possible evolutions of the metamorphic code during execution. We show that metamorphic signatures can be automatically extracted by abstract interpretation of the phase semantics, and that regular metamorphism can be modelled as a finite-state-automata abstraction of the phase semantics.

  13. Managing biomedical image metadata for search and retrieval of similar images.

    PubMed

    Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris

    2011-08-01

Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM), to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files via a Web service, then parses and stores the metadata in a relational database, allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics such as disease prevalence. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies. PMID:20844917

  14. Coordinated Earth Science Data Replication Enabled Through ISO Metadata, Version Control, and Web Services

    NASA Astrophysics Data System (ADS)

    Benedict, K. K.; Gollberg, G.; Sheneman, L.; Dascalu, S.

    2011-12-01

The richness and flexibility of the ISO 19115 metadata standard for documenting Earth Science data has the potential to support numerous applications beyond the traditional discovery and use scenarios commonly associated with metadata. The Tri-State (Nevada, New Mexico, Idaho) NSF EPSCoR project is pursuing such an alternative application of the ISO metadata content model - one in which targeted data replication between individual data repositories in the three states is enabled through a specifically defined collection and granule metadata content model. The developed metadata model includes specific ISO 19115 elements that enable:
    - "flagging" of specific collections or granules for replication
    - documenting lineage (the relationship between "authoritative" source data and data replicas)
    - verification of data fidelity through standard cryptographic methods
    - extension of collection and granule metadata to reflect additional data download and services provided by distributed data replicas
    While the mechanics of the replication model within each state are dependent upon the specific systems, software, and storage capabilities within the individual repositories, the adoption of a common XML metadata model (ISO 19139) and the use of a broadly supported version control system (Subversion) as the core storage system for the shared metadata provides a long-term platform upon which each state in the consortium can build. This paper presents the preliminary results of the implementation of the system across all three states, including a discussion of the specific ISO 19115 elements that contribute to the system, experience in using Subversion as a metadata versioning system, and lessons learned in the development of this loosely-coupled data replication system.
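
    The data-fidelity element can be illustrated with a short checksum sketch: a replica is accepted only if its digest matches the checksum recorded in the authoritative granule's metadata. SHA-256 is an assumption here; the abstract does not name the cryptographic method.

        import hashlib

        def sha256(path, chunk=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                while block := f.read(chunk):
                    h.update(block)
            return h.hexdigest()

        def verify_replica(replica_path, checksum_from_metadata):
            # True if the replica matches the source granule's digest.
            return sha256(replica_path) == checksum_from_metadata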

  15. Abstraction and natural language semantics.

    PubMed Central

    Kayser, Daniel

    2003-01-01

    According to the traditional view, a word prototypically denotes a class of objects sharing similar features, i.e. it results from an abstraction based on the detection of common properties in perceived entities. I explore here another idea: words result from abstraction of common premises in the rules governing our actions. I first argue that taking 'inference', instead of 'reference', as the basic issue in semantics does matter. I then discuss two phenomena that are, in my opinion, particularly difficult to analyse within the scope of traditional semantic theories: systematic polysemy and plurals. I conclude by a discussion of my approach, and by a summary of its main features. PMID:12903662

  16. Abstract communication for coordinated planning

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Durfee, Edmund H.

    2003-01-01

Previous work offers evidence that distributed planning agents can greatly reduce communication costs by reasoning at abstract levels. While it is intuitive that improved search can reduce communication in such cases, there are other decisions about how to communicate plan information that greatly affect communication costs. This paper identifies cases independent of search where communicating at multiple levels of abstraction can exponentially decrease costs and where it can exponentially add costs. We conclude with a process for determining appropriate levels of communication based on characteristics of the domain.

  17. The ANSS Station Information System: A Centralized Station Metadata Repository for Populating, Managing and Distributing Seismic Station Metadata

    NASA Astrophysics Data System (ADS)

    Thomas, V. I.; Yu, E.; Acharya, P.; Jaramillo, J.; Chowdhury, F.

    2015-12-01

Maintaining and archiving accurate site metadata is critical for seismic network operations. The Advanced National Seismic System (ANSS) Station Information System (SIS) is a repository of seismic network field equipment, equipment response, and other site information. Currently, there are 187 different sensor models and 114 data-logger models in SIS. SIS has a web-based user interface that allows network operators to enter information about seismic equipment and assign response parameters to it. It allows users to log entries for sites, equipment, and data streams. Users can also track when equipment is installed, updated, and/or removed from sites. When seismic equipment configurations change for a site, SIS computes the overall gain of a data channel by combining the response parameters of the underlying hardware components. Users can then distribute this metadata in standardized formats such as FDSN StationXML or dataless SEED. One powerful advantage of SIS is that existing data in the repository can be leveraged: e.g., new instruments can be assigned response parameters from the Incorporated Research Institutions for Seismology (IRIS) Nominal Response Library (NRL), or from a similar instrument already in the inventory, thereby reducing the amount of time needed to determine parameters when new equipment (or models) is introduced into a network. SIS is also useful for managing field equipment that does not produce seismic data (e.g., power systems, telemetry devices or GPS receivers) and gives the network operator a comprehensive view of site field work. SIS allows users to generate field logs to document activities and inventory at sites. Thus, operators can also use SIS reporting capabilities to improve planning and maintenance of the network. Queries such as how many sensors of a certain model are installed or what pieces of equipment have active problem reports are just a few examples of the type of information that is available to SIS users.
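
    The gain computation described above amounts to multiplying the gains of the hardware stages in a channel's response chain. A minimal sketch with made-up stage values rather than real SIS inventory data:

        from math import prod

        # Illustrative response chain (numbers are invented).
        stages = {
            "sensor (V per m/s)": 1500.0,
            "preamplifier": 32.0,
            "digitizer (counts per V)": 419430.4,
        }
        overall_gain = prod(stages.values())
        print(f"channel gain: {overall_gain:.6g} counts per m/s")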

  18. Metadata registry and management system based on ISO 11179 for cancer clinical trials information system

    PubMed Central

Park, Yu Rang; Kim, Ju Han

    2006-01-01

Standardized management of data elements (DEs) for Case Report Forms (CRFs) is crucial in a Clinical Trials Information System (CTIS). Traditional CTISs utilize organization-specific definitions and storage methods for DEs and CRFs. We developed a metadata-based DE management system for clinical trials, the Clinical and Histopathological Metadata Registry (CHMR), using the international standard for metadata registries (ISO 11179) for the management of cancer clinical trials information. CHMR was evaluated in cancer clinical trials with 1625 DEs extracted from the College of American Pathologists Cancer Protocols for 20 major cancers. PMID:17238675

  19. EXTRACT: Interactive extraction of environment metadata and term suggestion for metagenomic sample annotation

    DOE PAGES

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl

    2016-01-01

The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual sample annotation is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15–25% and helps curators to detect terms that would otherwise have been missed.

  20. Using grid-enabled distributed metadata database to index DICOM-SR.

    PubMed

    Blanquer, Ignacio; Hernandez, Vicente; Salavert, José; Segrelles, Damià

    2009-01-01

Integrating medical data at the inter-centre level implies many challenges that are being tackled by many disciplines and technologies. Medical informatics has devoted substantial effort to describing and standardizing Electronic Health Records, and standardisation has progressed especially far in Medical Imaging. Grid technologies have been extensively used to deal with multi-domain authorisation issues and to provide single access points for accessing DICOM Medical Images, enabling the access to and processing of large repositories of data. However, this approach introduces the challenge of efficiently organising data according to their relevance and interest, in which the medical report is a key factor. The present work shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources. This approach follows the tree-like structure of DICOM-SR reports in a self-organising metadata catalogue based on AMGA. It enables federating different but compatible distributed repositories, automatically reconfiguring the database structure, and preserving the autonomy of each centre in defining the template. Tools developed so far and some performance results are provided to prove the effectiveness of the approach.
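
    One way to picture the tree-like coding is to flatten a nested report into path/value rows, the shape a hierarchical catalogue such as AMGA can index; the report content below is invented for illustration.

        # Invented DICOM-SR-like report, nested as a Python dict.
        report = {"Findings": {"Lesion": {"Diameter_mm": 12,
                                          "Location": "liver segment IV"}},
                  "Conclusion": "follow-up in 6 months"}

        def flatten(node, prefix=""):
            """Turn nested content into (path, value) rows."""
            rows = []
            for key, value in node.items():
                path = f"{prefix}/{key}"
                if isinstance(value, dict):
                    rows.extend(flatten(value, path))
                else:
                    rows.append((path, value))
            return rows

        for path, value in flatten(report):
            print(path, "=", value)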

  1. EXTRACT: interactive extraction of environment metadata and term suggestion for metagenomic sample annotation

    PubMed Central

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, sample manual annotation is a highly labor intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15–25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/ PMID:26896844

  2. Using Information Visualization to Support Access to Archival Records

    ERIC Educational Resources Information Center

    Allen, Robert B.

    2005-01-01

    As more archival metadata and archival records become available online, providing effective interfaces to those materials is increasingly important to give users access. This article describes five approaches from the hypertext and visualization research communities which can be used to improve such access: (1) navigating hierarchical structures,…

  3. Handedness Shapes Children's Abstract Concepts

    ERIC Educational Resources Information Center

    Casasanto, Daniel; Henetz, Tania

    2012-01-01

    Can children's handedness influence how they represent abstract concepts like "kindness" and "intelligence"? Here we show that from an early age, right-handers associate rightward space more strongly with positive ideas and leftward space with negative ideas, but the opposite is true for left-handers. In one experiment, children indicated where on…

  4. Rolloff Roof Observatory Construction (Abstract)

    NASA Astrophysics Data System (ADS)

    Ulowetz, J. H.

    2015-12-01

    (Abstract only) Lessons learned about building an observatory by someone with limited construction experience, and the advantages of having one for imaging and variable star studies. Sample results shown of composite light curves for cataclysmic variables UX UMa and V1101 Aql with data from my observatory combined with data from others around the world.

  5. Innovation Abstracts, Volume XX, 1998.

    ERIC Educational Resources Information Center

    Roueche, Suanne D., Ed.

    1998-01-01

    The 52 abstracts in these 29 serial issues describe innovative approaches to teaching and learning in the community college. Sample topics include reading motivation, barriers to academic success, the learning environment, writing skills, leadership in the criminal justice profession, role-playing strategies, cooperative education, distance…

  6. ERGONOMICS ABSTRACTS 48347-48982.

    ERIC Educational Resources Information Center

    Ministry of Technology, London (England). Warren Spring Lab.

In this collection of ergonomics abstracts and annotations the following areas of concern are represented: general references; methods, facilities, and equipment relating to ergonomics; systems of man and machines; visual, auditory, and other sensory inputs and processes (including speech and intelligibility); input channels; body measurements,…

  7. Does "Social Work Abstracts" Work?

    ERIC Educational Resources Information Center

    Holden, Gary; Barker, Kathleen; Covert-Vail, Lucinda; Rosenberg, Gary; Cohen, Stephanie A.

    2008-01-01

    Objective: The current study seeks to provide estimates of the adequacy of journal coverage in the Social Work Abstracts (SWA) database. Method: A total of 23 journals listed in the Journal Citation Reports social work category during the 1997 to 2005 period were selected for study. Issue-level coverage estimates were obtained for SWA and…

  8. Abstract Expressionism. Clip and Save.

    ERIC Educational Resources Information Center

    Hubbard, Guy

    2002-01-01

    Provides information on the art movement, Abstract Expressionism, and includes learning activities. Focuses on the artist Jackson Pollock, offering a reproduction of his artwork, "Convergence: Number 10." Includes background information on the life and career of Pollock and a description of the included artwork. (CMK)

  9. Conference Abstracts: Microcomputers in Education.

    ERIC Educational Resources Information Center

    Baird, William E.

    1985-01-01

    Provides abstracts of five papers presented at the Fourth Annual Microcomputers in Education Conference. Papers considered microcomputers in science laboratories, Apple II Plus/e computer-assisted instruction in chemistry, computer solutions for space mechanics concerns, computer applications to problem solving and hypothesis testing, and…

  10. Metaphoric Images from Abstract Concepts.

    ERIC Educational Resources Information Center

    Vizmuller-Zocco, Jana

    1992-01-01

    Discusses children's use of metaphors to create meaning, using as an example the pragmatic and "scientific" ways in which preschool children explain thunder and lightning to themselves. Argues that children are being shortchanged by modern scientific notions of abstractness and that they should be encouraged to create their own explanations of…

  11. What Is It? Elementary Abstraction

    ERIC Educational Resources Information Center

    Von Sossan, Joanne

    2010-01-01

    Abstraction can be hard for older students to understand, and it usually involves simplifying or rearranging natural objects to meet the needs of the artist, whether it be for organization or expression. But, in reality, that is what young artists do when they draw from life. They do not have enough experience--and sometimes the patience--to see…

  12. Linked Metadata - lightweight semantics for data integration (Invited)

    NASA Astrophysics Data System (ADS)

    Hendler, J. A.

    2013-12-01

    The "Linked Open Data" cloud (http://linkeddata.org) is currently used to show how the linking of datasets, supported by SPARQL endpoints, is creating a growing set of linked data assets. This linked data space has been growing rapidly, and the last version collected is estimated to have had over 35 billion 'triples.' As impressive as this may sound, there is an inherent flaw in the way the linked data story is conceived. The idea is that all of the data is represented in a linked format (generally RDF) and applications will essentially query this cloud and provide mashup capabilities between the various kinds of data that are found. The view of linking in the cloud is fairly simple -links are provided by either shared URIs or by URIs that are asserted to be owl:sameAs. This view of the linking, which primarily focuses on shared objects and subjects in RDF's subject-predicate-object representation, misses a critical aspect of Semantic Web technology. Given triples such as * A:person1 foaf:knows A:person2 * B:person3 foaf:knows B:person4 * C:person5 foaf:name 'John Doe' this view would not consider them linked (barring other assertions) even though they share a common vocabulary. In fact, we get significant clues that there are commonalities in these data items from the shared namespaces and predicates, even if the traditional 'graph' view of RDF doesn't appear to join on these. Thus, it is the linking of the data descriptions, whether as metadata or other vocabularies, that provides the linking in these cases. This observation is crucial to scientific data integration where the size of the datasets, or even the individual relationships within them, can be quite large. (Note that this is not restricted to scientific data - search engines, social networks, and massive multiuser games also create huge amounts of data.) To convert all the triples into RDF and provide individual links is often unnecessary, and is both time and space intensive. Those looking to do on the

  13. Preserving Geological Samples and Metadata from Polar Regions

    NASA Astrophysics Data System (ADS)

    Grunow, A.; Sjunneskog, C. M.

    2011-12-01

The Office of Polar Programs at the National Science Foundation (NSF-OPP) has long recognized the value of preserving earth science collections due to the inherent logistical challenges and financial costs of collecting geological samples from Polar Regions. NSF-OPP established two national facilities to make Antarctic geological samples and drill cores openly and freely available for research. The Antarctic Marine Geology Research Facility (AMGRF) at Florida State University was established in 1963 and archives Antarctic marine sediment cores, dredge samples and smear slides along with ship logs. The United States Polar Rock Repository (USPRR) at Ohio State University was established in 2003 and archives polar rock samples, marine dredges, unconsolidated materials and terrestrial cores, along with associated materials such as field notes, maps, raw analytical data, paleomagnetic cores, thin sections, microfossil mounts, microslides and residues. The existence of the AMGRF and USPRR helps to minimize redundant sample collecting, lessens the environmental impact of doing polar field work, facilitates field logistics planning and complies with the data sharing requirement of the Antarctic Treaty. USPRR acquires collections through donations from institutions and scientists and then makes these samples available as no-cost loans for research, education and museum exhibits. The AMGRF acquires sediment cores from US-based and international collaborative drilling projects in Antarctica. Destructive research techniques are allowed on the loaned samples, and loan requests are accepted from any accredited scientific institution in the world. Currently, the USPRR has more than 22,000 cataloged rock samples available to scientists from around the world. All cataloged samples are relabeled with a USPRR number, weighed, photographed and measured for magnetic susceptibility. Many aspects of the sample metadata are included in the database, e.g. geographical location, sample

  14. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    NASA Astrophysics Data System (ADS)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the usage of Web Services technology. On the other hand, metadata is also commonly cited as a potential integration tool, since standardized metadata objects can provide useful information about the specifics of unfamiliar information systems with which one is interested in communicating, an approach commonly called "model-based integration". This paper presents the results of research into the possible synergy between those two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.
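
    As a rough illustration of the "model-based integration" idea (everything below is hypothetical and much simpler than the paper's Web Services process), an ETL transformation step can be driven entirely by a metadata document obtained from a service, rather than by hard-coded field mappings:

    ```python
    # Hypothetical sketch: a transform step driven by externally supplied
    # metadata instead of hard-coded mappings. In an SOA this document
    # would be fetched from a metadata service; here it is inlined.
    from datetime import datetime

    source_metadata = {
        "fields": {"cust_nm": "customer_name", "ord_ts": "order_timestamp"},
        "types":  {"ord_ts": "datetime"},
    }

    def transform(record: dict, metadata: dict) -> dict:
        """Rename and type-convert fields according to the metadata model."""
        out = {}
        for src, dst in metadata["fields"].items():
            value = record.get(src)
            if metadata["types"].get(src) == "datetime" and value is not None:
                value = datetime.fromisoformat(value)
            out[dst] = value
        return out

    print(transform({"cust_nm": "Ada", "ord_ts": "2024-01-01T12:00:00"},
                    source_metadata))
    ```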

  15. Object Classification via Planar Abstraction

    NASA Astrophysics Data System (ADS)

    Oesau, Sven; Lafarge, Florent; Alliez, Pierre

    2016-06-01

    We present a supervised machine learning approach for classification of objects from sampled point data. The main idea consists in first abstracting the input object into planar parts at several scales, then discriminating between the different classes of objects solely through features derived from these planar shapes. Abstracting into planar shapes provides a means to both reduce the computational complexity and improve robustness to defects inherent in the acquisition process. Measuring statistical properties and relationships between planar shapes offers invariance to scale and orientation. A random forest is then used for solving the multiclass classification problem. We demonstrate the potential of our approach on a set of indoor objects from the Princeton shape benchmark and on objects acquired from indoor scenes, and compare the performance of our method with other point-based shape descriptors.
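
    A schematic of the final classification stage, assuming the planar abstraction step has already produced per-object plane sets (the feature choices below are illustrative stand-ins for the statistical properties the paper derives, and the data is synthetic):

    ```python
    # Illustrative sketch of the classification stage with scikit-learn.
    # Each object is summarized by scale/orientation-invariant statistics
    # of its detected planar shapes; features here are stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def planar_features(planes):
        """planes: list of (area, normal) pairs for one object."""
        areas = np.array([a for a, _ in planes], dtype=float)
        normals = np.array([n for _, n in planes], dtype=float)
        areas /= areas.sum()                    # scale invariance
        pairwise = np.abs(normals @ normals.T)  # relations between normals
        return [len(planes), areas.max(), areas.std(), pairwise.mean()]

    rng = np.random.default_rng(0)
    def fake_object(n):  # synthetic stand-in for detected planes
        return [(rng.uniform(0.1, 1.0), rng.normal(size=3)) for _ in range(n)]

    X = [planar_features(fake_object(rng.integers(3, 12))) for _ in range(100)]
    y = rng.integers(0, 4, size=100)            # four object classes

    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print(clf.predict([X[0]]))
    ```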

  16. Abstraction of Seepage into Drifts

    SciTech Connect

    WILSON,MICHAEL L.; HO,CLIFFORD K.

    2000-10-16

    The abstraction model used for seepage into emplacement drifts in recent TSPA simulations is presented. This model contributes to the calculation of the quantity of water that might contact waste if it is emplaced at Yucca Mountain. Other important components of that calculation not discussed here include models for climate, infiltration, unsaturated-zone flow, and thermohydrology; drip-shield and waste-package degradation; and flow around and through the drip shield and waste package. The seepage abstraction model is stochastic, because predictions of seepage are necessarily quite uncertain. The model provides uncertainty distributions for seepage fraction (the fraction of waste-package locations with seepage) and seep flow rate as functions of percolation flux. In addition, effects of intermediate-scale flow focusing and seep channeling are included by means of a flow-focusing factor, which is also represented by an uncertainty distribution.
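
    The stochastic treatment can be pictured with a toy Monte Carlo loop. The distribution shapes and parameters below are invented purely for illustration; the actual abstraction uses distributions derived from the underlying seepage process models.

    ```python
    # Toy Monte Carlo sketch of a stochastic seepage abstraction.
    # Distribution shapes and parameters are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    n_realizations = 10_000
    percolation_flux = 5.0  # mm/yr, example value

    # Flow-focusing factor: intermediate-scale channeling of percolation.
    focus = rng.lognormal(mean=0.0, sigma=0.5, size=n_realizations)
    local_flux = percolation_flux * focus

    # Seepage fraction and seep flow rate, both increasing with local flux.
    seep_fraction = 1.0 - np.exp(-local_flux / 20.0)
    seeps = rng.random(n_realizations) < seep_fraction
    seep_rate = np.where(seeps, 0.1 * local_flux, 0.0)  # per-location rate

    print(f"fraction of locations with seepage: {seeps.mean():.3f}")
    print(f"mean rate over seeping locations: {seep_rate[seeps].mean():.3f}")
    ```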

  17. An Abstract Plan Preparation Language

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Cesar A.

    2006-01-01

    This paper presents a new planning language that is more abstract than most existing planning languages such as the Planning Domain Definition Language (PDDL) or the New Domain Description Language (NDDL). The goal of this language is to simplify the formal analysis and specification of planning problems that are intended for safety-critical applications such as power management or automated rendezvous in future manned spacecraft. The new language has been named the Abstract Plan Preparation Language (APPL). A translator from APPL to NDDL has been developed in support of the Spacecraft Autonomy for Vehicles and Habitats Project (SAVH) sponsored by the Exploration Technology Development Program, which is seeking to mature autonomy technology for application to the new Crew Exploration Vehicle (CEV) that will replace the Space Shuttle.

  18. Cryogenic foam insulation: Abstracted publications

    NASA Technical Reports Server (NTRS)

    Williamson, F. R.

    1977-01-01

    A group of documents was chosen and abstracted that contains information on the properties of foam materials and on the use of foams as thermal insulation at cryogenic temperatures. The properties include thermal properties, mechanical properties, and compatibility with oxygen and other cryogenic fluids. Uses of foams include applications as thermal insulation for spacecraft propellant tanks, and for liquefied natural gas storage tanks and pipelines.

  19. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  20. Recipes for Semantic Web Dog Food — The ESWC and ISWC Metadata Projects

    NASA Astrophysics Data System (ADS)

    Möller, Knud; Heath, Tom; Handschuh, Siegfried; Domingue, John

    Semantic Web conferences such as ESWC and ISWC offer prime opportunities to test and showcase semantic technologies. Conference metadata about people, papers and talks is diverse in nature and neither too small to be uninteresting nor too big to be unmanageable. Many metadata-related challenges that may arise in the Semantic Web at large are also present here. Metadata must be generated from sources which are often unstructured and hard to process, and may originate from many different players; therefore suitable workflows must be established. Moreover, the generated metadata must use appropriate formats and vocabularies, and be served in a way that is consistent with the principles of linked data. This paper reports on the metadata efforts from ESWC and ISWC, identifies specific issues and barriers encountered during the projects, and discusses how these were approached. Recommendations are made as to how these may be addressed in the future, and we discuss how these solutions may generalize to metadata production for the Semantic Web at large.

  1. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  2. Metadata based management and sharing of distributed biomedical data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Liu, Peiya

    2014-01-01

    Biomedical research data sharing is becoming increasingly important for researchers to reuse experiments, pool expertise and validate approaches. However, there are many hurdles to data sharing, including the unwillingness to share, the lack of a flexible data model for providing context information, the difficulty of sharing syntactically and semantically consistent data across distributed institutions, and the high cost of providing tools to share the data. SciPort is a web-based collaborative biomedical data sharing platform that supports data sharing across distributed organisations. SciPort provides a generic metadata model to flexibly customise and organise the data. To enable convenient data sharing, SciPort provides a central-server-based data sharing architecture with one-click data sharing from a local server. To enable consistency, SciPort provides collaborative distributed schema management across distributed sites. To enable semantic consistency, SciPort provides semantic tagging through controlled vocabularies. SciPort is lightweight and can be easily deployed for building data sharing communities. PMID:24834105

  3. Establishment of Kawasaki disease database based on metadata standard

    PubMed Central

    Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk

    2016-01-01

    Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients’ clinical and genomic data is one of the major issues. For this purpose, the Kawasaki Disease Database (KDD) was developed based on the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 sets of clinical data and genomic samples from 1283 patients from 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients’ blood, omics data and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Two genome-wide association study datasets totaling 482 samples and whole-exome sequencing data from 12 samples were also collected. In addition, KDD includes the rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr PMID:27630202

  4. Data to Pictures to Data: Outreach Imaging Software and Metadata

    NASA Astrophysics Data System (ADS)

    Levay, Z.

    2011-07-01

    A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities and among the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and the application of photo/image-processing techniques. Some additional effort is needed to close the loop and make this imagery conveniently available for various purposes beyond web and print publication. Metadata paradigms in digital photography now interoperate with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be usable in more sophisticated, imaginative ways exemplified by Sky in Google Earth and World Wide Telescope.
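
    One early step in that workflow, reading FITS data and applying a display stretch before compositing, can be sketched in a few lines. This is a minimal illustration using astropy and matplotlib; FITS Liberator and Photoshop, the tools named above, perform these steps interactively, and the file names here are placeholders.

    ```python
    # Minimal sketch: read a FITS image and apply an asinh display
    # stretch, the kind of step FITS Liberator performs interactively.
    import numpy as np
    from astropy.io import fits
    import matplotlib.pyplot as plt

    with fits.open("image.fits") as hdul:     # path is a placeholder
        data = hdul[0].data.astype(float)

    # Percentile-based black/white points, then an asinh stretch.
    lo, hi = np.percentile(data, [0.5, 99.5])
    scaled = np.clip((data - lo) / (hi - lo), 0, 1)
    stretched = np.arcsinh(10 * scaled) / np.arcsinh(10)

    plt.imsave("image_gray.png", stretched, cmap="gray")
    ```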

  5. Establishment of Kawasaki disease database based on metadata standard

    PubMed Central

    Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk

    2016-01-01

    Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients’ clinical and genomic data is one of the major issues. For this purpose, the Kawasaki Disease Database (KDD) was developed based on the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 sets of clinical data and genomic samples from 1283 patients from 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients’ blood, omics data and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Two genome-wide association study datasets totaling 482 samples and whole-exome sequencing data from 12 samples were also collected. In addition, KDD includes the rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr

  6. Updated population metadata for United States historical climatology network stations

    USGS Publications Warehouse

    Owen, T.W.; Gallo, K.P.

    2000-01-01

    The United States Historical Climatology Network (HCN) serial temperature dataset comprises 1221 high-quality, long-term climate observing stations. The HCN dataset is available in several versions, one of which includes population-based temperature modifications to adjust urban temperatures for the "heat-island" effect. Unfortunately, the decennial population metadata file is not complete, as missing values are present for 17.6% of the 12,210 population values associated with the 1221 individual stations during the 1900-90 interval. Retrospective grid-based populations, within a fixed distance of an HCN station, were estimated through the use of a gridded population density dataset and historically available U.S. Census county data. The grid-based populations for the HCN stations provide values derived from a consistent methodology, in contrast to the current HCN populations, which can vary as definitions of the area associated with a city change over time. The use of grid-based populations may, at a minimum, be appropriate to augment populations for HCN climate stations that lack any population data, and is recommended when consistent and complete population data are required. The recommended urban temperature adjustments based on the HCN and grid-based methods of estimating station population can be significantly different for individual stations within the HCN dataset.

  7. Establishment of Kawasaki disease database based on metadata standard.

    PubMed

    Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk

    2016-07-01

    Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients' clinical and genomic data is one of the major issues. For this purpose, the Kawasaki Disease Database (KDD) was developed based on the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 sets of clinical data and genomic samples from 1283 patients from 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients' blood, omics data and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Two genome-wide association study datasets totaling 482 samples and whole-exome sequencing data from 12 samples were also collected. In addition, KDD includes the rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr. PMID:27630202

  8. Augmenting Traditional Static Analysis With Commonly Available Metadata

    SciTech Connect

    Cook, Devin

    2015-05-10

    Developers and security analysts have been using static analysis for a long time to analyze programs for defects and vulnerabilities, with some success. Generally a static analysis tool is run on the source code for a given program, flagging areas of code that need to be further inspected by a human analyst. These areas may be obvious bugs like potential buffer overflows, information leakage flaws, or the use of uninitialized variables. These tools tend to work fairly well - every year they find many important bugs. These tools are all the more impressive considering that they only examine the source code, which may be very complex. Now consider the amount of data available that these tools do not analyze. There are many pieces of information that would prove invaluable for finding bugs in code, such as a history of bug reports, a history of all changes to the code, information about committers, etc. By leveraging all this additional data, it is possible to find more bugs with less user interaction, as well as track useful metrics such as the number and type of defects injected by each committer. This dissertation provides a method for leveraging development metadata to find bugs that would otherwise be difficult to find using standard static analysis tools. We showcase two case studies that demonstrate the ability to find 0-day vulnerabilities in large and small software projects by finding new vulnerabilities in the cpython and Roundup open source projects.

  9. New Features in the ADS Abstract Service

    NASA Astrophysics Data System (ADS)

    Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Kurtz, M. J.; ReyBacaicoa, V.; Murray, S. S.

    2001-11-01

    The ADS Abstract Service contains over 2.3 million references in four databases: Astronomy/Astrophysics/Planetary Sciences, Instrumentation, Physics/Geophysics, and Preprints. We provide abstracts and articles free to the astronomical community for all major and many smaller astronomy journals, PhD theses, conference proceedings, and technical reports. These four databases can be queried either separately or jointly. The ADS has also scanned 1.3 million pages in 180,000 articles in the ADS Article Service. This literature archive contains all major astronomy journals and many smaller journals, as well as conference proceedings, including the abstract books from all the LPSCs back to volume 2. A new feature gives our users the ability to see a list of articles that were also read by the readers of a given article. This is a powerful tool for finding out which current articles are relevant in a particular field of study. We have recently expanded the citation and reference query capabilities, allowing our users to select papers for which they want to see references or citations and then retrieve those citations/references. Another new capability is the ability to sort a list of articles by their citation count. As usual, users should be reminded that the citations in ADS are incomplete, because we do not obtain reference lists from all publishers. In addition, we cannot match all references (e.g. in press, private communications, author errors, some conference papers, etc.). Anyone using the citations for analysis of publishing records should keep this in mind. More work on expanding the citation and reference features is planned over the next year. ADS Home Page: http://ads.harvard.edu/

  10. Operating System Abstraction Layer (OSAL)

    NASA Technical Reports Server (NTRS)

    Yanchik, Nicholas J.

    2007-01-01

    This viewgraph presentation reviews the concept of the Operating System Abstraction Layer (OSAL) and its benefits. The OSAL is a small layer of software that allows programs to run on many different operating systems and hardware platforms; it runs independently of the underlying OS and hardware, and it is self-contained. The benefits of the OSAL are that it removes dependencies on any one operating system and promotes portable, reusable flight software, allowing core Flight Software (FSW) to be built for multiple processors and operating systems. The presentation discusses the functionality and the various OSAL releases, and describes the specifications.

  11. Social tagging in the life sciences: characterizing a new metadata resource for bioinformatics

    PubMed Central

    Good, Benjamin M; Tennis, Joseph T; Wilkinson, Mark D

    2009-01-01

    Background Academic social tagging systems, such as Connotea and CiteULike, provide researchers with a means to organize personal collections of online references with keywords (tags) and to share these collections with others. One of the side-effects of the operation of these systems is the generation of large, publicly accessible metadata repositories describing the resources in the collections. In light of the well-known expansion of information in the life sciences and the need for metadata to enhance its value, these repositories present a potentially valuable new resource for application developers. Here we characterize the current contents of two scientifically relevant metadata repositories created through social tagging. This investigation helps to establish how such socially constructed metadata might be used as it stands currently and to suggest ways that new social tagging systems might be designed that would yield better aggregate products. Results We assessed the metadata that users of CiteULike and Connotea associated with citations in PubMed with the following metrics: coverage of the document space, density of metadata (tags) per document, rates of inter-annotator agreement, and rates of agreement with MeSH indexing. CiteULike and Connotea were very similar on all of the measurements. In comparison to PubMed, document coverage and per-document metadata density were much lower for the social tagging systems. Inter-annotator agreement within the social tagging systems and the agreement between the aggregated social tagging metadata and MeSH indexing was low though the latter could be increased through voting. Conclusion The most promising uses of metadata from current academic social tagging repositories will be those that find ways to utilize the novel relationships between users, tags, and documents exposed through these systems. For more traditional kinds of indexing-based applications (such as keyword-based search) to benefit substantially from
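
    The density and agreement measurements described above are simple to reproduce. The sketch below, on invented tag sets, computes per-document tag density and pairwise inter-annotator agreement as Jaccard overlap, one plausible reading of the agreement metric.

    ```python
    # Sketch of two of the metrics on made-up data: per-document tag
    # density and inter-annotator (Jaccard) agreement.
    from itertools import combinations

    # document -> {annotator -> set of tags}; values are invented.
    tags = {
        "pmid:1": {"u1": {"genomics", "cancer"}, "u2": {"genomics"}},
        "pmid:2": {"u1": {"proteomics"}, "u2": {"proteomics", "ms"}},
    }

    for doc, by_user in tags.items():
        density = sum(len(t) for t in by_user.values()) / len(by_user)
        agreements = [
            len(a & b) / len(a | b)
            for a, b in combinations(by_user.values(), 2)
        ]
        print(doc, f"density={density:.1f}",
              f"agreement={sum(agreements) / len(agreements):.2f}")
    ```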

  12. Effective use of metadata in the integration and analysis of multi-dimensional optical data

    NASA Astrophysics Data System (ADS)

    Pastorello, G. Z.; Gamon, J. A.

    2012-12-01

    Data discovery and integration relies on adequate metadata. However, creating and maintaining metadata is time consuming and often poorly addressed or avoided altogether, leading to problems in later data analysis and exchange. This is particularly true for research fields in which metadata standards do not yet exist or are under development, or within smaller research groups without enough resources. Vegetation monitoring using in-situ and remote optical sensing is an example of such a domain. In this area, data are inherently multi-dimensional, with spatial, temporal and spectral dimensions usually being well characterized. Other equally important aspects, however, might be inadequately translated into metadata. Examples include equipment specifications and calibrations, field/lab notes and field/lab protocols (e.g., sampling regimen, spectral calibration, atmospheric correction, sensor view angle, illumination angle), data processing choices (e.g., methods for gap filling, filtering and aggregation of data), quality assurance, and documentation of data sources, ownership and licensing. Each of these aspects can be important as metadata for search and discovery, but they can also be used as key data fields in their own right. If each of these aspects is also understood as an "extra dimension," it is possible to take advantage of them to simplify the data acquisition, integration, analysis, visualization and exchange cycle. Simple examples include selecting data sets of interest early in the integration process (e.g., only data collected according to a specific field sampling protocol) or applying appropriate data processing operations to different parts of a data set (e.g., adaptive processing for data collected under different sky conditions). More interesting scenarios involve guided navigation and visualization of data sets based on these extra dimensions, as well as partitioning data sets to highlight relevant subsets to be made available for exchange. The
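
    Treating such a metadata item as an extra dimension is easy to picture in code. In this sketch (column names, values and correction factors are all invented), a field-protocol column selects the subset of interest early, and a sky-condition column drives per-subset processing.

    ```python
    # Sketch: metadata fields acting as extra dimensions of an optical
    # dataset. Column names and values are invented for illustration.
    import pandas as pd

    df = pd.DataFrame({
        "reflectance":   [0.31, 0.28, 0.44, 0.39],
        "wavelength_nm": [680, 680, 800, 800],
        "protocol":      ["transect-A", "grid-B", "transect-A", "transect-A"],
        "sky":           ["clear", "overcast", "clear", "overcast"],
    })

    # Select early: keep only data from one field sampling protocol.
    subset = df[df["protocol"] == "transect-A"]

    # Apply different processing per sky condition ("adaptive processing");
    # the correction factors are invented.
    factors = {"clear": 1.00, "overcast": 1.10}
    corrected = subset["reflectance"] * subset["sky"].map(factors)
    print(corrected)
    ```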

  13. Re-using the DataCite Metadata Store as DOI registration proxy and IGSN registry

    NASA Astrophysics Data System (ADS)

    Klump, J.; Ulbricht, D.

    2012-12-01

    Currently a lot of work is being done to stimulate the reuse of data. In joint efforts, research institutions establish infrastructure to facilitate the publication of scientific datasets. To create a citable reference, these datasets must be tagged with persistent identifiers (DOIs) and described with metadata. As most data in the geosciences are derived from samples, it is crucial to be able to uniquely identify the samples from which a set of data were derived. Incomplete documentation of samples in publications and the use of ambiguous sample names are major obstacles for synthesis studies and re-use of data. Access to samples for re-analysis and re-appraisal is limited due to the lack of a central catalogue that allows finding a sample's archiving location. The International Geo Sample Number (IGSN) [1] provides solutions to the questions of unique sample identification and discovery. Use of the IGSN in digital data systems allows building linkages between the digital representation of samples in sample registries, e.g. SESAR [2], and their related data in the literature and in web-accessible digital data repositories. DataCite recently decided to publish their metadata store (DataCite MDS) and accompanying software online [3]. The DataCite software allows registration of handles and deposition of metadata in an XML format; it offers a search interface and is able to disseminate metadata via OAI-PMH. Its REST interface allows easy integration into institutional data workflows. For our applications at GFZ Potsdam we modified the DataCite MDS software to reuse it in two different contexts: as the DOIDB web service for data publications and as the IGSN registry web service for the registration of geological samples. The DOIDB acts as a proxy service to the DataCite Metadata Store and uses its REST interface for registration of DataCite DOIs and associated DOI metadata. Metadata can be deposited in the DataCite or NASA DIF schema. Both schemata can be disseminated via OAI
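
    The registration workflow against such a proxy amounts to a pair of REST calls. A hedged sketch with the requests library follows; the endpoint paths mirror the DataCite MDS convention of posting metadata XML and then a doi/url pair, but the DOIDB base URL, credentials, DOI and file name here are all placeholders.

    ```python
    # Hedged sketch of DOI registration against an MDS-style proxy such
    # as the DOIDB. Endpoint, credentials, DOI and paths are placeholders.
    import requests

    BASE = "https://doidb.example.org/mds"   # placeholder proxy URL
    AUTH = ("datacentre", "password")        # placeholder credentials
    doi = "10.5880/GFZ.EXAMPLE.1"            # placeholder DOI

    # Step 1: deposit DataCite-schema XML metadata for the dataset.
    with open("dataset_metadata.xml", "rb") as f:
        r = requests.post(f"{BASE}/metadata", data=f.read(), auth=AUTH,
                          headers={"Content-Type": "application/xml"})
        r.raise_for_status()

    # Step 2: mint the DOI by associating it with a landing-page URL.
    body = f"doi={doi}\nurl=https://dataservices.example.org/{doi}"
    r = requests.put(f"{BASE}/doi/{doi}", data=body.encode(), auth=AUTH,
                     headers={"Content-Type": "text/plain;charset=UTF-8"})
    r.raise_for_status()
    ```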

  14. Abstraction of Seepage into Drifts

    SciTech Connect

    M.L. Wilson; C.K. Ho

    2000-09-26

    A total-system performance assessment (TSPA) for a potential nuclear-waste repository requires an estimate of the amount of water that might contact waste. This paper describes the model used for part of that estimation in a recent TSPA for the Yucca Mountain site. The discussion is limited to estimation of how much water might enter emplacement drifts; additional considerations related to flow within the drifts, and how much water might actually contact waste, are not addressed here. The unsaturated zone at Yucca Mountain is being considered for the potential repository, and a drift opening in unsaturated rock tends to act as a capillary barrier and divert much of the percolating water around it. For TSPA, the important questions regarding seepage are how many waste packages might be subjected to water flow and how much flow those packages might see. Because of heterogeneity of the rock and uncertainty about the future (how the climate will evolve, etc.), it is not possible to predict seepage amounts or locations with certainty. Thus, seepage is treated as a stochastic quantity in TSPA simulations, with the magnitude and spatial distribution of seepage sampled from uncertainty distributions. The distillation of the essential components of process modeling into a form suitable for use in TSPA simulations is referred to as abstraction. In the following sections, seepage process models and abstractions will be summarized and then some illustrative results are presented.

  15. SPASE: Metadata Interoperability in the Great Observatory Environment

    NASA Astrophysics Data System (ADS)

    Thieman, J. R.; King, T. A.; Roberts, D. A.; King, J. H.

    2006-05-01

    SPASE is the Space Physics Archive Search and Extract project. This project is funded by NASA to provide a data model for the Great Observatory data environment that can be used as a common basis for locating and retrieving data of interest for the space and solar physics community. Common terminology that maps to much of the disparate metadata being used by these systems enables unified searches across the archives and ready comparison of the results to determine time overlaps, data commonalities, applicability for research purposes, etc. The SPASE Data Model Version 1.0 has been created and will be described. The model now needs to be tested by the community by describing a wide variety of data holdings and providing feedback for further improvement of the model. The latest version of the Data Model can be obtained by clicking on the Link marked "Current Draft" at the following web site: http://www.igpp.ucla.edu/spase/. Hundreds of data descriptions created with this model in mind have been entered into the Virtual Space Physics Observatory search engine called the "Space and Solar Physics Product Finder". We recommend exercising this system to find data of interest and to see the value of a common data model for search and retrieval. The VSPO Product Finder is available at the following URL: http://vspo.gsfc.nasa.gov/websearch/dispatcher. Future enhancements of the SPASE Data Model will include more depth of description to more fully describe the science content of a data set as well as automated tools to ease the task of describing data according to the Data Model. As always, the value of the SPASE effort will depend on the use and feedback by the space physics community.

  16. Solutions for extracting file level spatial metadata from airborne mission data

    NASA Astrophysics Data System (ADS)

    Schwab, M. J.; Stanley, M.; Pals, J.; Brodzik, M.; Fowler, C.; Icebridge Engineering/Spatial Metadata

    2011-12-01

    Data sets acquired from satellites and aircraft may differ in many ways. We will focus on the differences in spatial coverage between the two platforms. Satellite data sets over a given period typically cover large geographic regions. These data are collected in a consistent, predictable and well understood manner due to the uniformity of satellite orbits. Since satellite data collection paths are typically smooth and uniform, the data from satellite instruments can usually be described with simple spatial metadata. Subsequently, these spatial metadata can be stored and searched easily and efficiently. Conversely, aircraft have significantly more freedom to change paths, circle, overlap, and vary altitude, all of which add complexity to the spatial metadata. Aircraft are also subject to wind and other elements that result in even more complicated and unpredictable spatial coverage areas. This unpredictability and complexity make it more difficult to extract usable spatial metadata from data sets collected on aircraft missions. It is not feasible to use all of the location data from aircraft mission data sets as spatial metadata: the number of data points in typical data sets poses serious performance problems for spatial searching. In order to provide efficient spatial searching of the large number of files cataloged in our systems, we need to extract approximate spatial descriptions as geo-polygons with a small number of vertices (fewer than two hundred). We present some of the challenges and solutions for creating airborne mission-derived spatial metadata. We are implementing these methods to create the spatial metadata for insertion of IceBridge mission data into ECS for public access through NSIDC and ECHO, but they are
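
    One workable approach to the fewer-than-two-hundred-vertices constraint is to buffer the flight track into a footprint polygon and then simplify with an increasing tolerance until the vertex budget is met. A sketch with shapely follows; the synthetic track, buffer width and tolerances are invented for illustration.

    ```python
    # Sketch: derive a compact spatial-metadata polygon from a dense
    # aircraft track. Buffer width and tolerances are illustrative.
    import numpy as np
    from shapely.geometry import LineString

    # Dense synthetic flight track (x, y pairs); real tracks are larger.
    t = np.linspace(0, 4 * np.pi, 5000)
    track = LineString(np.column_stack([t, np.sin(t) + 0.1 * np.sin(9 * t)]))

    footprint = track.buffer(0.2)  # swath half-width, in track units

    tolerance, max_vertices = 1e-4, 200
    poly = footprint.simplify(tolerance)
    while len(poly.exterior.coords) > max_vertices:
        tolerance *= 2                     # coarsen until under budget
        poly = footprint.simplify(tolerance)

    print(len(footprint.exterior.coords), "->", len(poly.exterior.coords))
    ```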

  17. Sensor metadata blueprints and computer-aided editing for disciplined SensorML

    NASA Astrophysics Data System (ADS)

    Tagliolato, Paolo; Oggioni, Alessandro; Fugazza, Cristiano; Pepe, Monica; Carrara, Paola

    2016-04-01

    The need for continuous, accurate, and comprehensive environmental knowledge has led to an increase in sensor observation systems and networks. The Sensor Web Enablement (SWE) initiative has been promoted by the Open Geospatial Consortium (OGC) to foster interoperability among sensor systems. The provision of metadata according to the prescribed SensorML schema is a key component for achieving this; nevertheless, the availability of correct and exhaustive metadata cannot be taken for granted. On the one hand, it is awkward for users to provide sensor metadata because of the lack of user-oriented, dedicated tools. On the other, the specification of invariant information for a given sensor category or model (e.g., observed properties and units of measurement, manufacturer information, etc.) can be labor- and time-consuming. Moreover, the provision of these details is error prone and subjective, i.e., may differ greatly across distinct descriptions of the same system. We provide a user-friendly, template-driven metadata authoring tool composed of a backend web service and an HTML5/JavaScript client. This results in a form-based user interface that conceals the high complexity of the underlying format. This tool also allows for plugging in external data sources providing authoritative definitions for the aforementioned invariant information. Leveraging these functionalities, we compiled a set of SensorML profiles, that is, sensor metadata blueprints allowing end users to focus only on the metadata items that are related to their specific deployment. The natural extension of this scenario is the involvement of end users and sensor manufacturers in the crowd-sourced evolution of this collection of prototypes. We describe the components and workflow of our framework for computer-aided management of sensor metadata.
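
    The blueprint idea can be sketched simply: the template carries the invariant, per-model information, and the user supplies only the deployment-specific fields. The element names below are a drastically simplified stand-in for real SensorML, which is far more verbose.

    ```python
    # Simplified sketch of blueprint-driven metadata authoring. The XML
    # here is a stand-in; real SensorML descriptions are more detailed.
    from string import Template

    BLUEPRINT = Template("""\
    <PhysicalComponent>
      <identifier>$uid</identifier>
      <manufacturer>$manufacturer</manufacturer>  <!-- invariant per model -->
      <observedProperty>$property</observedProperty>
      <uom>$uom</uom>
      <position>$lat $lon</position>              <!-- deployment-specific -->
    </PhysicalComponent>""")

    # Invariant part: fixed for a sensor model, ideally crowd-sourced.
    thermometer_model = {"manufacturer": "ACME",
                         "property": "AirTemperature", "uom": "Cel"}

    # The end user supplies only the deployment-specific fields.
    print(BLUEPRINT.substitute(uid="sensor-042", lat="45.80", lon="8.63",
                               **thermometer_model))
    ```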

  18. Experience with abstract notation one

    NASA Technical Reports Server (NTRS)

    Harvey, James D.; Weaver, Alfred C.

    1990-01-01

    The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the Abstract Syntax Notation One standard (ASN.1) and the Basic Encoding Rules standard (BER), which collectively address this problem. When used within the presentation layer of the Open Systems Interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.
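
    The commonality that ASN.1 and BER provide can be shown in a few lines. Below is a minimal sketch using the pyasn1 library (the Reading type is an invented example, and the paper's own compiler of course predates this library): the abstract type is machine-neutral, and BER fixes its transfer encoding so any host decodes the same octets.

    ```python
    # Minimal sketch of ASN.1 + BER with pyasn1: the abstract type is
    # machine-neutral; BER fixes the transfer encoding. "Reading" is an
    # invented example type.
    from pyasn1.type import univ, namedtype
    from pyasn1.codec.ber import encoder, decoder

    class Reading(univ.Sequence):
        componentType = namedtype.NamedTypes(
            namedtype.NamedType("sensorId", univ.Integer()),
            namedtype.NamedType("celsius", univ.Real()),
        )

    msg = Reading()
    msg["sensorId"] = 17
    msg["celsius"] = 21.5

    wire = encoder.encode(msg)   # BER octets, identical on any host
    decoded, _ = decoder.decode(wire, asn1Spec=Reading())
    print(decoded["sensorId"], float(decoded["celsius"]))
    ```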

  19. Abstraction Planning in Real Time

    NASA Technical Reports Server (NTRS)

    Washington, R.

    1994-01-01

    When a planning agent works in a complex, real-world domain, it is unable to plan for and store all possible contingencies and problem situations ahead of time. This thesis presents a method for planning at run time that incrementally builds up plans at multiple levels of abstraction. The plans are continually updated by information from the world, allowing the planner to adjust its plan to a changing world during the planning process. All the information is represented over intervals of time, allowing the planner to reason about durations, deadlines, and delays within its plan. In addition to the method, the thesis presents a formal model of the planning process and uses the model to investigate planning strategies.

  20. Abstraction Planning in Real Time

    NASA Technical Reports Server (NTRS)

    Washington, Richard

    1994-01-01

    When a planning agent works in a complex, real-world domain, it is unable to plan for and store all possible contingencies and problem situations ahead of time. The agent needs to be able to fall back on an ability to construct plans at run time under time constraints. This thesis presents a method for planning at run time that incrementally builds up plans at multiple levels of abstraction. The plans are continually updated by information from the world, allowing the planner to adjust its plan to a changing world during the planning process. All the information is represented over intervals of time, allowing the planner to reason about durations, deadlines, and delays within its plan. In addition to the method, the thesis presents a formal model of the planning process and uses the model to investigate planning strategies. The method has been implemented, and experiments have been run to validate the overall approach and the theoretical model.

  1. Toward Millimagnitude Photometric Calibration (Abstract)

    NASA Astrophysics Data System (ADS)

    Dose, E.

    2014-12-01

    (Abstract only) Asteroid rotation, exoplanet transits, and similar measurements will increasingly call for photometric precisions better than about 10 millimagnitudes, often between nights and ideally between distant observers. The present work applies detailed spectral simulations to test popular photometric calibration practices, and to test new extensions of these practices. Using 107 synthetic spectra of stars of diverse colors, detailed atmospheric transmission spectra computed by solar-energy software, realistic spectra of popular astronomy gear, and the option of three sources of noise added at realistic millimagnitude levels, we find that certain adjustments to current calibration practices can help remove small systematic errors, especially for imperfect filters, high airmasses, and possibly passing thin cirrus clouds.

  2. Geospatial data infrastructure: The development of metadata for geo-information in China

    NASA Astrophysics Data System (ADS)

    Xu, Baiquan; Yan, Shiqiang; Wang, Qianju; Lian, Jian; Wu, Xiaoping; Ding, Keyong

    2014-03-01

    Stores of geoscience records are in constant flux. These stores are continually added to by new information, ideas and data, which are frequently revised. The geoscience record is constrained by human thought and by the technology for handling information. Conventional methods strive, with limited success, to maintain geoscience records that are readily accessible and renewable. The information system must adapt to the diversity of ideas and data in geoscience and their changes through time. In China, more than 400,000 types of important geological data have been collected and produced in geological work during the last two decades, including oil, natural gas and marine data, mine exploration, geophysical, geochemical, remote sensing and important local geological survey and research reports. Numerous geospatial databases have been formed and stored in the National Geological Archives (NGA) in formats including MapGIS, ArcGIS, ArcINFO, Metafile, Raster, SQL Server, Access and JPEG. But there is no effective way to guarantee that the quality of information is adequate in theory and practice for decision making. The need to provide the Geographic Information System (GIS) communities with fast, reliable, accurate and up-to-date information is becoming insistent for all geoinformation producers and users in China. Since 2010, a series of geoinformation projects have been carried out under the leadership of the Ministry of Land and Resources (MLR), including (1) integration, update and maintenance of geoinformation databases; (2) standards research on clusterization and industrialization of information services; (3) platform construction of geological data sharing; (4) construction of key borehole databases; and (5) product development of information services. A "Nine-System" basic framework has been proposed for the development and improvement of the geospatial data infrastructure, focused on the construction of the cluster organization, cluster service, convergence

  3. Strategies for writing a competitive research abstract.

    PubMed

    Lindquist, R A

    1993-01-01

    This article focuses on the process of preparing research abstracts for submission to scientific meetings of professional organizations. Perspectives on the process of specifying an abstract's focus, choosing a scientific meeting, selecting the type of presentation, developing an abstract, and writing the abstract in its final form are presented.

  4. Separation of metadata and pixel data to speed DICOM tag morphing.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object. It is not possible to access the metadata separately from the pixel data. There are use cases where only metadata is accessed. The current DICOM object format increases the running time of those use cases. Tag morphing is one such use case. Tag morphing includes the deletion, insertion or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition, or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing. PMID:23920917
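
    The metadata-only access pattern that motivates MSD can be approximated with pydicom, which can at least skip parsing pixel data even though a standard DICOM object still stores both together. File paths and the new attribute values below are placeholders.

    ```python
    # Sketch of a tag-morphing pass in pydicom. stop_before_pixels avoids
    # decoding pixel data, approximating metadata/pixel separation; the
    # file path and attribute values are placeholders.
    import pydicom

    ds = pydicom.dcmread("study/img001.dcm", stop_before_pixels=True)

    # Typical morphs: localize the patient ID and its issuer when data
    # moves to a new domain.
    ds.PatientID = "DOM2-000123"
    ds.IssuerOfPatientID = "DOMAIN2"
    if "OtherPatientIDs" in ds:      # example deletion of an attribute
        del ds.OtherPatientIDs

    print(ds.PatientID, ds.IssuerOfPatientID)
    ```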

  5. Metadata Repository for Improved Data Sharing and Reuse Based on HL7 FHIR.

    PubMed

    Ulrich, Hannes; Kock, Ann-Kristin; Duhm-Harbeck, Petra; Habermann, Jens K; Ingenerf, Josef

    2016-01-01

    Unreconciled data structures and formats are a common obstacle to the urgently required sharing and reuse of data within healthcare and medical research. Within the North German Tumor Bank of Colorectal Cancer, clinical and sample data, based on a harmonized data set, are collected and can be pooled by using a hospital-integrated Research Data Management System supporting biobank and study management. Adding further partners who are not using the core data set requires manual adaptations and mapping of data elements. To address this manual intervention, and focusing on the reuse of heterogeneous healthcare instance data (value level) and data elements (metadata level), a metadata repository has been developed. The metadata repository is an ISO 11179-3 conformant server application built for annotating and mediating data elements. The implemented architecture includes the translation of metadata information about data elements into the FHIR standard, using the FHIR DataElement resource with the ISO 11179 Data Element Extensions. The FHIR-based processing allows exchange of data elements with clinical and research IT systems as well as with other metadata systems. With increasingly annotated and harmonized data elements, data quality and integration can be improved, successfully enabling data analytics and decision support. PMID:27577363
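
    The exchange format the authors describe can be pictured as a DataElement resource sent over a REST interface. A hedged sketch follows; the server URL is a placeholder, and the resource body is a minimal illustration of the STU3 DataElement structure rather than the project's actual elements.

    ```python
    # Hedged sketch: registering a data element as a FHIR (STU3)
    # DataElement resource. Server URL and content are placeholders.
    import requests

    data_element = {
        "resourceType": "DataElement",
        "status": "active",
        "name": "TumorSize",
        "element": [{
            "path": "TumorSize",
            "definition": "Largest tumor diameter at diagnosis",
            "type": [{"code": "decimal"}],
        }],
    }

    r = requests.post("https://mdr.example.org/fhir/DataElement",
                      json=data_element,
                      headers={"Accept": "application/fhir+json"})
    print(r.status_code)
    ```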

  6. Symmetric Active/Active Metadata Service for High Availability Parallel File Systems

    SciTech Connect

    He, X.; Ou, Li; Engelmann, Christian; Chen, Xin; Scott, Stephen L

    2009-01-01

    High availability data storage systems are critical for many applications as research and business become more data-driven. Since metadata management is essential to system availability, multiple metadata services are used to improve the availability of distributed storage systems. Past research focused on the active/standby model, where each active service has at least one redundant idle backup. However, interruption of service, and even some loss of service state, may occur during a fail-over, depending on the replication technique used. In addition, the replication overhead for multiple metadata services can be very high. The research in this paper targets the symmetric active/active replication model, which uses multiple redundant service nodes running in virtual synchrony. In this model, service node failures do not cause a fail-over to a backup, and there is no disruption of service or loss of service state. We further discuss a fast delivery protocol to reduce the latency of the needed total order broadcast. Our prototype implementation shows that metadata service high availability can be achieved with an acceptable performance trade-off using our symmetric active/active metadata service solution.

  7. Metadata Repository for Improved Data Sharing and Reuse Based on HL7 FHIR.

    PubMed

    Ulrich, Hannes; Kock, Ann-Kristin; Duhm-Harbeck, Petra; Habermann, Jens K; Ingenerf, Josef

    2016-01-01

    Unreconciled data structures and formats are a common obstacle to the urgently required sharing and reuse of data within healthcare and medical research. Within the North German Tumor Bank of Colorectal Cancer, clinical and sample data, based on a harmonized data set, are collected and can be pooled by using a hospital-integrated Research Data Management System supporting biobank and study management. Adding further partners who are not using the core data set requires manual adaptations and mapping of data elements. To address this manual intervention, and focusing on the reuse of heterogeneous healthcare instance data (value level) and data elements (metadata level), a metadata repository has been developed. The metadata repository is an ISO 11179-3 conformant server application built for annotating and mediating data elements. The implemented architecture includes the translation of metadata information about data elements into the FHIR standard, using the FHIR DataElement resource with the ISO 11179 Data Element Extensions. The FHIR-based processing allows exchange of data elements with clinical and research IT systems as well as with other metadata systems. With increasingly annotated and harmonized data elements, data quality and integration can be improved, successfully enabling data analytics and decision support.

  8. A Critical Review of Sentinel-3 Metadata for Scientific and Operational Applications

    NASA Astrophysics Data System (ADS)

    Pons Fernandez, Xavier; Zabala Torres, Alaitz; Domingo Marimon, Cristina

    2015-12-01

    Sentinel-3 is a mission designed for Copernicus/GMES to ensure long-term collection of data of uniform quality, generated and delivered in an operational manner for several sea and land applications. This paper makes a critical review of the data and metadata which will be distributed as Sentinel-3 OLCI, SLSTR and SYN products, evaluating this information according to the specifications, guidelines and characteristics described by the International Organization for Standardization (ISO). The paper reviews the data and metadata currently included in the Test Data Set provided by ESA and offers recommendations both to increase metadata usability and to avoid metadata misunderstanding. Moreover, some recommendations on how these data and metadata should be encoded are included in the paper, with special emphasis on “ISO 19115-1: Fundamentals”, “ISO 19115-2: Extensions for imagery and gridded data”, “ISO 19139: XML schema implementation” and “ISO 19157: Data quality” (quality elements). Proposals related to quality derived from the GeoViQua FP7 project are also indicated.

  9. Separation of metadata and pixel data to speed DICOM tag morphing.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object. It is not possible to access the metadata separately from the pixel data. There are use cases where only metadata is accessed. The current DICOM object format increases the running time of those use cases. Tag morphing is one such use case. Tag morphing includes the deletion, insertion or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition, or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain is transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.

  10. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine and typically operates efficiently up to a different level of scaling on each machine. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high-end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods used by each grouping of data in an application, so that end users can use those IO methods that exhibit the best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate alternative methods for high performance IO on petascale machines. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small
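
    The per-group method selection in point (1) can be mimicked with a toy dispatcher. This is a conceptual sketch only, not the real ADIOS API; ADIOS itself reads such choices from an external XML configuration so that codes switch methods without source changes.

    ```python
    # Conceptual sketch (not the ADIOS API): IO methods chosen per data
    # group from a configuration, so codes switch methods without edits.
    import numpy as np

    def write_posix(name, array):
        array.tofile(f"{name}.posix.bin")   # independent POSIX-style IO

    def write_collective(name, array):
        # Stand-in for a collective MPI-IO path; serialized here.
        array.tofile(f"{name}.mpiio.bin")

    METHODS = {"posix": write_posix, "mpiio": write_collective}

    # External configuration: one method per output group.
    config = {"restart": "mpiio", "diagnostics": "posix"}

    def write_group(group, arrays):
        writer = METHODS[config[group]]
        for name, arr in arrays.items():
            writer(f"{group}.{name}", arr)

    write_group("diagnostics", {"temperature": np.zeros(1024)})
    ```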

  11. Curation and integration of observational metadata in ADS

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto

    2015-08-01

    This presentation discusses the current curation of archive bibliographies and their indexing in the NASA Astrophysics Data System (ADS). Integration of these bibliographies provides convenient cross-linking of resources between ADS and the data archives, affording greater visibility to both data products and the literature associated with them. There are practical incentives behind this effort: it has been shown that astronomy articles which provide links to on-line datasets have a citation advantage over similar articles which don’t link to data. Additionally, the availability of paper-based metrics makes it possible for archivists and program managers to use them to assess the impact of an instrument, facility, funding or observing program. The primary data curated by ADS is bibliographic information provided by publishers or harvested by ADS from conference proceeding sites and repositories. This core bibliographic information is then further enriched by ADS via the generation of citations and usage data, and through the aggregation of external bibliographic information. Important sources of such additional information are the metadata describing observing proposals from the major missions and archives, the curated bibliographies for data centers, and the sets of links between archival observations and published papers. While ADS solicits and welcomes the inclusion of this data from US and foreign data centers, the curation of bibliographies, observing proposals and links to data products is left to the archives which host the data and which have the expertise and resources to properly maintain them. In this regard, the role of ADS is one of resource aggregation through crowdsourced curation, providing a lightweight discovery mechanism through its search capabilities. While limited in scope, this level of aggregation can still be quite useful in supporting the discovery and selection of data products associated with publications. For instance, a user can

  12. Attracting Girls into Physics (abstract)

    NASA Astrophysics Data System (ADS)

    Gadalla, Afaf

    2009-04-01

    A recent international study of women in physics showed that enrollment in physics and science is declining for both males and females and that women are severely underrepresented in careers requiring a strong physics background. The gender gap begins early in the pipeline, from the first grade. Girls are treated differently than boys at home and in society in ways that often hinder their chances for success. They have fewer freedoms, are discouraged from accessing resources or being adventurous, have far less exposure to problem solving, and are not encouraged to make their own life choices. In order to motivate more girl students to study physics in the Assiut governorate of Egypt, the Assiut Alliance for the Women and the Assiut Education District collaborated to renovate physics education in middle and secondary school classrooms. A program that helps increase the number of girls in science and physics has been designed, in which informal groupings are organized at middle and secondary schools to give girls the training and experiences needed to attract and encourage them to learn physics. During implementation of the program at some schools, girls, because they had not been trained in problem solving as boys had been, appeared less facile in abstracting the ideas of physics, and this was the primary reason for girls dropping out of science and physics. This could be overcome by holding a topical physics and technology summer school under the supervision of the Assiut Alliance for the Women.

  13. 1986 annual information meeting. Abstracts

    SciTech Connect

    Not Available

    1986-01-01

    Abstracts are presented for the following papers: Geohydrological Research at the Y-12 Plant (C.S. Haase); Ecological Impacts of Waste Disposal Operations in Bear Creek Valley Near the Y-12 Plant (J.M. Loar); Finite Element Simulation of Subsurface Contaminant Transport: Logistic Difficulties in Handling Large Field Problems (G.T. Yeh); Dynamic Compaction of a Radioactive Waste Burial Trench (B.P. Spalding); Comparative Evaluation of Potential Sites for a High-Level Radioactive Waste Repository (E.D. Smith); Changing Priorities in Environmental Assessment and Environmental Compliance (R.M. Reed); Ecology, Ecotoxicology, and Ecological Risk Assessment (L.W. Barnthouse); Theory and Practice in Uncertainty Analysis from Ten Years of Practice (R.H. Gardner); Modeling Landscape Effects of Forest Decline (V.H. Dale); Soil Nitrogen and the Global Carbon Cycle (W.M. Post); Maximizing Wood Energy Production in Short-Rotation Plantations: Effect of Initial Spacing and Rotation Length (L.L. Wright); and Ecological Communities and Processes in Woodland Streams Exhibit Both Direct and Indirect Effects of Acidification (J.W. Elwood).

  14. Ozone Conference II: Abstract Proceedings

    SciTech Connect

    1999-11-01

    Ozone Conference II: Pre- and Post-Harvest Applications Two Years After GRAS was held September 27-28, 1999 in Tulare, California. This conference, sponsored by EPRI's Agricultural Technology Alliance and Southern California Edison's AgTAC facility, was coordinated and organized by the on-site ATA-AgTAC Regional Center. Approximately 175 people attended the day-and-a-half conference at AgTAC. During the conference, twenty-two presentations were given on ozone food processing and agricultural applications. Included in the presentations were topics on: (1) Ozone fumigation; (2) Ozone generation techniques; (3) System and design applications; (4) Prewater treatment requirements; (5) Poultry water reuse; (6) Soil treatments with ozone gas; and (7) Post-harvest aqueous and gaseous ozone research results. A live videoconference between Tulare and Washington, D.C. was held to discuss the regulators' view from inside the beltway. Attendees participated in two Roundtable Question and Answer sessions and visited fifteen exhibits and demonstrations. The attendees included university and governmental researchers, regulators, consultants and industry experts, technology developers and providers, and corporate and individual end-users. This report comprises the abstracts of each presentation, biographical sketches for each speaker, and a registration/attendees list.

  15. Handedness shapes children's abstract concepts.

    PubMed

    Casasanto, Daniel; Henetz, Tania

    2012-03-01

    Can children's handedness influence how they represent abstract concepts like kindness and intelligence? Here we show that from an early age, right-handers associate rightward space more strongly with positive ideas and leftward space with negative ideas, but the opposite is true for left-handers. In one experiment, children indicated where on a diagram a preferred toy and a dispreferred toy should go. Right-handers tended to assign the preferred toy to a box on the right and the dispreferred toy to a box on the left. Left-handers showed the opposite pattern. In a second experiment, children judged which of two cartoon animals looked smarter (or dumber) or nicer (or meaner). Right-handers attributed more positive qualities to animals on the right, but left-handers to animals on the left. These contrasting associations between space and valence cannot be explained by exposure to language or cultural conventions, which consistently link right with good. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they can act more fluently with their dominant hands. Results support the body-specificity hypothesis (Casasanto, 2009), showing that children with different kinds of bodies think differently in corresponding ways. PMID:21916951

  16. An abstract approach to music.

    SciTech Connect

    Kaper, H. G.; Tipei, S.

    1999-04-19

    In this article we have outlined a formal framework for an abstract approach to music and music composition. The model is formulated in terms of objects that have attributes, obey relationships, and are subject to certain well-defined operations. The motivation for this approach uses traditional terms and concepts of music theory, but the approach itself is formal and uses the language of mathematics. The universal object is an audio wave; partials, sounds, and compositions are special objects, which are placed in a hierarchical order based on time scales. The objects have both static and dynamic attributes. When we realize a composition, we assign values to each of its attributes: a (scalar) value to a static attribute, an envelope and a size to a dynamic attribute. A composition is then a trajectory in the space of aural events, and the complex audio wave is its formal representation. Sounds are fibers in the space of aural events, from which the composer weaves the trajectory of a composition. Each sound object in turn is made up of partials, which are the elementary building blocks of any music composition. The partials evolve on the fastest time scale in the hierarchy of partials, sounds, and compositions. The ideas outlined in this article are being implemented in a digital instrument for additive sound synthesis and in software for music composition. A demonstration of some preliminary results has been submitted by the authors for presentation at the conference.

  17. Managing data warehouse metadata using the Web: A Web-based DBA maintenance tool suite

    SciTech Connect

    Yow, T.; Grubb, J.; Jennings, S.

    1998-12-31

    The Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC), which is associated with NASA's Earth Observing System Data and Information System (EOSDIS), provides access to datasets used in environmental research. As a data warehouse for NASA, the ORNL DAAC archives and distributes data from NASA's ground-based field experiments. In order to manage its large and diverse data holdings, the DAAC has mined metadata that is stored in several Sybase databases. However, the task of managing the metadata itself has become so complicated that the DAAC has developed a Web-based Graphical User Interface (GUI) called the DBA Maintenance Tool Suite. This Web-based tool allows the DBA to maintain the DAAC's metadata databases with the click of a mouse button. This tool greatly reduces the complexities of database maintenance and facilitates the task of data delivery to the DAAC's user community.

  18. High-performance metadata indexing and search in petascale data storage systems

    NASA Astrophysics Data System (ADS)

    Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.

    2008-07-01

    Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
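
    The sketch below is a toy version of the general idea (harvest file metadata once into an index, then answer management queries from the index instead of re-crawling the file system); it is not the Spyglass design itself, and the path is a placeholder.

      import os
      import time

      # Harvest per-file metadata into a simple in-memory index (toy example).
      index = []
      for root, _, files in os.walk("/data/experiments"):   # placeholder path
          for name in files:
              path = os.path.join(root, name)
              st = os.stat(path)
              index.append({"path": path, "size": st.st_size, "mtime": st.st_mtime})

      # Example management query: files over 1 GiB untouched for 90 days.
      cutoff = time.time() - 90 * 86400
      stale = [e["path"] for e in index
               if e["size"] > 2**30 and e["mtime"] < cutoff]
      print(f"{len(stale)} candidate files for migration or cleanup")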

  19. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    NASA Astrophysics Data System (ADS)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number (IGSN), a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage the metadata of their samples in their private workspace, MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, and registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible

  20. LDEF: A bibliography with abstracts

    NASA Technical Reports Server (NTRS)

    Gouger, H. Garland (Editor)

    1992-01-01

    The Long Duration Exposure Facility (LDEF) was a free-flying cylindrical structure that housed self-contained experiments in trays mounted on the exterior of the structure. Launched into orbit from the Space Shuttle Challenger in 1984, the LDEF spent almost six years in space before being recovered in 1990. The 57 experiments investigated the effects of the low earth orbit environment on materials, coatings, electronics, thermal systems, seeds, and optics. It also carried experiments that measured crystal growth, cosmic radiation, and micrometeoroids. This bibliography contains 435 selected records from the NASA aerospace database covering the years 1973 through June of 1992. The citations are arranged within subject categories by author and date of publication.

  1. NCPP's Use of Standard Metadata to Promote Open and Transparent Climate Modeling

    NASA Astrophysics Data System (ADS)

    Treshansky, A.; Barsugli, J. J.; Guentchev, G.; Rood, R. B.; DeLuca, C.

    2012-12-01

    The National Climate Predictions and Projections (NCPP) Platform is developing comprehensive regional and local information about the evolving climate to inform decision making and adaptation planning. This includes both creating metadata about the models and processes used to create its derived data products, and providing tools for others to create such metadata. NCPP is using the Common Information Model (CIM), an ontology developed by a broad set of international partners in climate research, as its metadata language. This use of a standard ensures interoperability within the climate community as well as permitting access to the ecosystem of tools and services emerging alongside the CIM. The CIM itself is divided into a general-purpose (UML & XML) schema which structures metadata documents, and a project- or community-specific (XML) Controlled Vocabulary (CV) which constrains the content of metadata documents. NCPP has already modified the CIM Schema to accommodate downscaling models, simulations, and experiments. NCPP is currently developing a CV for use by the downscaling community. Incorporating downscaling into the CIM will lead to several benefits: easy access to the existing CIM Documents describing CMIP5 models and simulations that are being downscaled, access to software tools that have been developed in order to search, manipulate, and visualize CIM metadata, and coordination with national and international efforts such as ES-DOC that are working to make climate model descriptions and datasets interoperable. Providing detailed metadata descriptions which include the full provenance of derived data products will contribute to making that data (and the models and processes which generated it) more open and transparent to the user community.

  2. Experiments with Metadata-Derived Initial Values and Linesearch Bundle Adjustment in Architectural Photogrammetry

    NASA Astrophysics Data System (ADS)

    Börlin, N.; Grussenmeyer, P.

    2013-07-01

    According to the Waldhäusl and Ogleby (1994) "3 x 3 rules", a well-designed close-range architectural photogrammetric project should include a sketch of the project site with the approximate position and viewing direction of each image. This orientation metadata is important to determine which part of the object each image covers. In principle, the metadata could be used as initial values for the camera external orientation (EO) parameters. However, this has rarely been done, partly due to convergence problems in the bundle adjustment procedure. In this paper we present a photogrammetric reconstruction pipeline based on classical methods and investigate if and how the linesearch bundle algorithm of Börlin et al. (2004) and/or metadata can be used to aid the reconstruction process in architectural photogrammetry when the classical methods fail. The primary initial values for the bundle are calculated by the five-point algorithm of Nistér (Stewénius et al., 2006). Should the bundle fail, initial values derived from metadata are calculated and used for a second bundle attempt. The pipeline was evaluated on an image set of the INSA building in Strasbourg. The data set includes mixed convex and non-convex subnetworks and a combination of manual and automatic measurements. The results show that, in general, the classical bundle algorithm with five-point initial values worked well. However, in cases where it did fail, the linesearch bundle and/or metadata initial values did help. The presented approach is interesting for solving EO problems when the automatic orientation processes fail, as well as for maintaining a link between the metadata (the plan of what the project was meant to be) and the actual reconstructed network (what it turned out to be).
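
    For readers unfamiliar with the "linesearch bundle" idea, the sketch below shows the generic pattern: a Gauss-Newton direction safeguarded by backtracking so the adjustment still converges from poor (e.g., metadata-derived) initial values. This is an illustration of the principle under that assumption, not Börlin et al.'s implementation.

      import numpy as np

      def gauss_newton_linesearch(residual, jacobian, x0, iters=50, tol=1e-10):
          """Minimize ||residual(x)||^2 with a backtracking line search."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              r = residual(x)
              J = jacobian(x)
              dx = np.linalg.lstsq(J, -r, rcond=None)[0]  # Gauss-Newton direction
              f0, t = r @ r, 1.0
              # Halve the step until the squared residual norm actually decreases.
              while t > 1e-8:
                  rt = residual(x + t * dx)
                  if rt @ rt < f0:
                      break
                  t *= 0.5
              x = x + t * dx
              if np.linalg.norm(t * dx) < tol:
                  break
          return x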

  3. Annotating user-defined abstractions for optimization

    SciTech Connect

    Quinlan, D; Schordan, M; Vuduc, R; Yi, Q

    2005-12-05

    This paper discusses the features of an annotation language that we believe to be essential for optimizing user-defined abstractions. These features should capture semantics of function, data, and object-oriented abstractions, express abstraction equivalence (e.g., a class represents an array abstraction), and permit extension of traditional compiler optimizations to user-defined abstractions. Our future work will include developing a comprehensive annotation language for describing the semantics of general object-oriented abstractions, as well as automatically verifying and inferring the annotated semantics.

  4. Abstraction and reformulation in artificial intelligence.

    PubMed Central

    Holte, Robert C.; Choueiry, Berthe Y.

    2003-01-01

    This paper contributes in two ways to the aims of this special issue on abstraction. The first is to show that there are compelling reasons motivating the use of abstraction in the purely computational realm of artificial intelligence. The second is to contribute to the overall discussion of the nature of abstraction by providing examples of the abstraction processes currently used in artificial intelligence. Although each type of abstraction is specific to a somewhat narrow context, it is hoped that collectively they illustrate the richness and variety of abstraction in its fullest sense. PMID:12903653

  5. OSCAR/Surface: Metadata for the WMO Integrated Observing System WIGOS

    NASA Astrophysics Data System (ADS)

    Klausen, Jörg; Pröscholdt, Timo; Mannes, Jürg; Cappelletti, Lucia; Grüter, Estelle; Calpini, Bertrand; Zhang, Wenjian

    2016-04-01

    The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority underpinning all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). It does this by better integrating WMO and co-sponsored observing systems, as well as partner networks. For this, an important aspect is the description of observational capabilities by way of structured metadata. The 17th Congress of the World Meteorological Organization (Cg-17) has endorsed the semantic WIGOS metadata standard (WMDS) developed by the Task Team on WIGOS Metadata (TT-WMD). The standard comprises a set of metadata classes that are considered to be of critical importance for the interpretation of observations and the evolution of observing systems relevant to WIGOS. The WMDS serves all recognized WMO Application Areas, and its use for all internationally exchanged observational data generated by WMO Members is mandatory. The standard will be introduced in three phases between 2016 and 2020. The Observing Systems Capability Analysis and Review (OSCAR) platform operated by MeteoSwiss on behalf of WMO is the official repository of WIGOS metadata and an implementation of the WMDS. OSCAR/Surface deals with all surface-based observations from land, air and oceans, combining metadata managed by a number of complementary, more domain-specific systems (e.g., GAWSIS for the Global Atmosphere Watch, JCOMMOPS for the marine domain, the WMO Radar database). It is a modern, web-based client-server application with extended information search, filtering and mapping capabilities, including a fully developed management console to add and edit observational metadata. In addition, a powerful application programming interface (API) is being developed to allow machine-to-machine metadata exchange. The API is based on an ISO/OGC-compliant XML schema for the WMDS using the Observations and Measurements (ISO 19156) conceptual model. The purpose of the

  6. OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Gleason, J. L.

    2015-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers, or for NASA Advanced Supercomputing (NAS) facility registered users, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored, create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans. However, as metadata requirements increase and competing standards emerge

  7. A reference data model of a metadata registry preserving semantics and representations of data elements.

    PubMed

    Löpprich, Martin; Jones, Jennifer; Meinecke, Marie-Claire; Goldschmidt, Hartmut; Knaup, Petra

    2014-01-01

    Integration and analysis of clinical data collected in multiple data sources over a long period of time is a major challenge, even when data warehouses and metadata registries are used. Since most metadata registries focus on describing data elements to establish domain-consistent data definitions and to provide item libraries, hierarchical and temporal dependencies cannot be mapped. Therefore we developed and validated a reference data model, based on ISO/IEC 11179, which allows revision and branching control of conceptually similar data elements with heterogeneous definitions and representations.

  8. Metadata and Buckets in the Smart Object, Dumb Archive (SODA) Model

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maly, Kurt; Croom, Delwin R., Jr.; Robbins, Steven W.

    2004-01-01

    We present the Smart Object, Dumb Archive (SODA) model for digital libraries (DLs), and discuss the role of metadata in SODA. The premise of the SODA model is to "push down" many of the functionalities generally associated with archives into the data objects themselves. Thus the data objects become "smarter", and the archives "dumber". In the SODA model, archives become primarily set managers, and the objects themselves negotiate and handle presentation, enforce terms and conditions, and perform data content management. Buckets are our implementation of smart objects, and da is our reference implementation for dumb archives. We also present our approach to metadata translation for buckets.

  9. Scientific meeting abstracts: significance, access, and trends.

    PubMed Central

    Kelly, J A

    1998-01-01

    Abstracts of scientific papers and posters that are presented at annual scientific meetings of professional societies are part of the broader category of conference literature. They are an important avenue for the dissemination of current data. While timely and succinct, these abstracts present problems such as an abbreviated peer review and incomplete bibliographic access. METHODS: Seventy societies of health sciences professionals were surveyed about the publication of abstracts from their annual meetings. Nineteen frequently cited journals also were contacted about their policies on the citation of meeting abstracts. Ten databases were searched for the presence of meetings abstracts. RESULTS: Ninety percent of the seventy societies publish their abstracts, with nearly half appearing in the society's journal. Seventy-seven percent of the societies supply meeting attendees with a copy of each abstract, and 43% make their abstracts available in an electronic format. Most of the journals surveyed allow meeting abstracts to be cited. Bibliographic access to these abstracts does not appear to be widespread. CONCLUSIONS: Meeting abstracts play an important role in the dissemination of scientific knowledge. Bibliographic access to meeting abstracts is very limited. The trend toward making meeting abstracts available via the Internet has the potential to give a broader audience access to the information they contain. PMID:9549015

  10. An Examination of the Adoption of Preservation Metadata in Cultural Heritage Institutions: An Exploratory Study Using Diffusion of Innovations Theory

    ERIC Educational Resources Information Center

    Alemneh, Daniel Gelaw

    2009-01-01

    Digital preservation is a significant challenge for cultural heritage institutions and other repositories of digital information resources. Recognizing the critical role of metadata in any successful digital preservation strategy, the Preservation Metadata Implementation Strategies (PREMIS) has been extremely influential on providing a "core" set…

  11. An algorithm for generating abstract syntax trees

    NASA Technical Reports Server (NTRS)

    Noonan, R. E.

    1985-01-01

    The notion of an abstract syntax is discussed. An algorithm is presented for automatically deriving an abstract syntax directly from a BNF grammar. The implementation of this algorithm and its application to the grammar for Modula are discussed.
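
    A toy rendering of the idea (the paper's actual algorithm and its Modula application are not reproduced here): treat each non-chain production as an abstract-syntax node type, keeping non-terminals and token classes as children and discarding literal punctuation.

      grammar = {
          "expr":   [["expr", "+", "term"], ["term"]],
          "term":   [["term", "*", "factor"], ["factor"]],
          "factor": [["(", "expr", ")"], ["NUMBER"]],
      }

      def abstract_syntax(grammar):
          """Map each non-chain production to a node type with its child symbols."""
          nodes = {}
          for lhs, productions in grammar.items():
              for i, rhs in enumerate(productions):
                  if len(rhs) == 1 and rhs[0] in grammar:
                      continue  # chain rule (e.g. expr -> term): no AST node needed
                  # Non-terminals and uppercase token classes become children.
                  nodes[f"{lhs}_{i}"] = [s for s in rhs
                                         if s in grammar or s.isupper()]
          return nodes

      print(abstract_syntax(grammar))
      # {'expr_0': ['expr', 'term'], 'term_0': ['term', 'factor'],
      #  'factor_0': ['expr'], 'factor_1': ['NUMBER']}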

  12. Abstracted model for ceramic coating

    SciTech Connect

    Farmer, J C; Stockman, C

    1998-11-14

    Engineers are exploring several mechanisms to delay corrosive attack of the CAM (corrosion allowance material) by dripping water, including drip shields and ceramic coatings. Ceramic coatings deposited with high-velocity oxyfuels (HVOFs) have exhibited a porosity of only 2% at a thickness of 0.15 cm. The primary goal of this document is to provide a detailed description of an abstracted process-level model for Total System Performance Assessment (TSPA) that has been developed to account for the inhibition of corrosion by protective ceramic coatings. A second goal was to address as many of the issues raised during a recent peer review as possible (direct reaction of liquid water with carbon steel, stress corrosion cracking of the ceramic coating, bending stresses in coatings of finite thickness, limitations of simple correction factors, etc.). During the periods of dry oxidation (T ≥ 100°C) and humid-air corrosion (T ≤ 100°C & RH < 80%), it is assumed that the growth rate of oxide on the surface is diminished in proportion to the surface covered by solid ceramic. The mass transfer impedance imposed by a ceramic coating with gas-filled pores is assumed to be negligible. During the period of aqueous-phase corrosion (T ≤ 100°C & RH ≥ 80%), it is assumed that the overall mass transfer resistance governing the corrosion rate is due to the combined resistance of the ceramic coating & interfacial corrosion products. Two porosity models (simple cylinder & cylinder-sphere chain) are considered in estimating the mass transfer resistance of the ceramic coating. It is evident that substantial impedance to O₂ transport is encountered if pores are filled with liquid water. It may be possible to use a sealant to eliminate porosity. Spallation (rupture) of the ceramic coating is assumed to occur if the stress introduced by the expanding corrosion products at the ceramic-CAM interface exceeds the fracture stress. Since this model does not account for the possibility of
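
    Read as resistances in series, the aqueous-phase assumption can be written schematically as below; the symbols here are illustrative assumptions, not notation taken from the report:

      J_{O_2} = \frac{\Delta C_{O_2}}{R_{coating} + R_{corr}}, \qquad
      R_{coating} \sim \frac{\delta \, \tau}{D_{O_2} \, \varepsilon}

    where J is the oxygen flux that limits the corrosion rate, ΔC the concentration difference across the layers, δ the coating thickness, D the diffusivity of oxygen in the pore fluid, ε the porosity, and τ the tortuosity. Liquid-filled pores raise the coating resistance sharply because the diffusivity of O₂ in water is orders of magnitude below its value in gas, consistent with the abstract's remark on water-filled pores.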

  13. Temporal abstraction-based clinical phenotyping with Eureka!

    PubMed

    Post, Andrew R; Kurc, Tahsin; Willard, Richie; Rathod, Himanshu; Mansour, Michel; Pai, Akshatha Kalsanka; Torian, William M; Agravat, Sanjay; Sturm, Suzanne; Saltz, Joel H

    2013-01-01

    Temporal abstraction, a method for specifying and detecting temporal patterns in clinical databases, is very expressive and performs well, but it is difficult for clinical investigators and data analysts to understand. Such patterns are critical in phenotyping patients using their medical records in research and quality improvement. We have previously developed the Analytic Information Warehouse (AIW), which computes such phenotypes using temporal abstraction but requires software engineers to use. We have extended the AIW's web user interface, Eureka! Clinical Analytics, to support specifying phenotypes using an alternative model that we developed with clinical stakeholders. The software converts phenotypes from this model to that of temporal abstraction prior to data processing. The model can represent all phenotypes in a quality improvement project and a growing set of phenotypes in a multi-site research study. Phenotyping that is accessible to investigators and IT personnel may enable its broader adoption. PMID:24551400

  14. Temporal Abstraction-based Clinical Phenotyping with Eureka!

    PubMed Central

    Post, Andrew R.; Kurc, Tahsin; Willard, Richie; Rathod, Himanshu; Mansour, Michel; Pai, Akshatha Kalsanka; Torian, William M.; Agravat, Sanjay; Sturm, Suzanne; Saltz, Joel H.

    2013-01-01

    Temporal abstraction, a method for specifying and detecting temporal patterns in clinical databases, is very expressive and performs well, but it is difficult for clinical investigators and data analysts to understand. Such patterns are critical in phenotyping patients using their medical records in research and quality improvement. We have previously developed the Analytic Information Warehouse (AIW), which computes such phenotypes using temporal abstraction but requires software engineers to use. We have extended the AIW’s web user interface, Eureka! Clinical Analytics, to support specifying phenotypes using an alternative model that we developed with clinical stakeholders. The software converts phenotypes from this model to that of temporal abstraction prior to data processing. The model can represent all phenotypes in a quality improvement project and a growing set of phenotypes in a multi-site research study. Phenotyping that is accessible to investigators and IT personnel may enable its broader adoption. PMID:24551400

  15. Temporal abstraction-based clinical phenotyping with Eureka!

    PubMed

    Post, Andrew R; Kurc, Tahsin; Willard, Richie; Rathod, Himanshu; Mansour, Michel; Pai, Akshatha Kalsanka; Torian, William M; Agravat, Sanjay; Sturm, Suzanne; Saltz, Joel H

    2013-01-01

    Temporal abstraction, a method for specifying and detecting temporal patterns in clinical databases, is very expressive and performs well, but it is difficult for clinical investigators and data analysts to understand. Such patterns are critical in phenotyping patients using their medical records in research and quality improvement. We have previously developed the Analytic Information Warehouse (AIW), which computes such phenotypes using temporal abstraction but requires software engineers to use. We have extended the AIW's web user interface, Eureka! Clinical Analytics, to support specifying phenotypes using an alternative model that we developed with clinical stakeholders. The software converts phenotypes from this model to that of temporal abstraction prior to data processing. The model can represent all phenotypes in a quality improvement project and a growing set of phenotypes in a multi-site research study. Phenotyping that is accessible to investigators and IT personnel may enable its broader adoption.

  16. Writing a Structured Abstract for the Thesis

    ERIC Educational Resources Information Center

    Hartley, James

    2010-01-01

    This article presents the author's suggestions on how to improve thesis abstracts. The author describes two books on writing abstracts: (1) "Creating Effective Conference Abstracts and Posters in Biomedicine: 500 tips for Success" (Fraser, Fuller & Hutber, 2009), a compendium of clear advice--a must book to have in one's hand as one prepares a…

  17. 37 CFR 1.438 - The abstract.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2011-07-01 2011-07-01 false The abstract. 1.438 Section 1... COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES International Processing Provisions The International Application § 1.438 The abstract. (a) Requirements as to the content and form of the abstract are set forth...

  18. 37 CFR 1.438 - The abstract.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false The abstract. 1.438 Section 1... COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES International Processing Provisions The International Application § 1.438 The abstract. (a) Requirements as to the content and form of the abstract are set forth...

  19. On abstract degenerate neutral differential equations

    NASA Astrophysics Data System (ADS)

    Hernández, Eduardo; O'Regan, Donal

    2016-10-01

    We introduce a new abstract model of functional differential equations, which we call abstract degenerate neutral differential equations, and we study the existence of strict solutions. The class of problems and the technical approach introduced in this paper allow us to generalize and extend recent results on abstract neutral differential equations. Some examples on nonlinear partial neutral differential equations are presented.

  20. At the HeART of Abstraction

    ERIC Educational Resources Information Center

    Berdit, Nancy

    2006-01-01

    Abstraction has long been a concept difficult to define for students. Students often feel the pressure of making their artwork "look real" and frustration can often lead to burnout in the classroom. In this article, the author describes how her lesson on abstraction has alleviated much of that pressure as students created an abstract acrylic…

  1. Functional Gene Group Summarization by Clustering MEDLINE Abstract Sentences

    PubMed Central

    Yang, Jianji; Cohen, Aaron M.; Hersh, William R.

    2006-01-01

    Tools to automatically summarize functional gene group information from the biomedical literature will help genomics researchers both better interpret gene expression data and understand biological pathways. In this study, we built a system that takes in a set of genes and MEDLINE records and outputs clusters of genes along with summaries of each cluster by sentence extraction from MEDLINE abstracts. Our preliminary use-case evaluation shows that this approach can identify gene clusters similar to manually generated groupings. PMID:17238770
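
    A compact sketch of the sentence-extraction idea under stated assumptions (toy sentences, TF-IDF features, k-means; the authors' actual pipeline and parameters are not specified here):

      from sklearn.cluster import KMeans
      from sklearn.feature_extraction.text import TfidfVectorizer

      # Toy abstract sentences standing in for MEDLINE records.
      sentences = [
          "BRCA1 mutations impair DNA double-strand break repair.",
          "TP53 loss promotes genomic instability in tumours.",
          "BRCA1 interacts with RAD51 during homologous recombination.",
          "Mutant TP53 accumulates in many cancer types.",
      ]
      X = TfidfVectorizer(stop_words="english").fit_transform(sentences)
      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

      # Summarize each cluster by the sentence closest to its centroid.
      for k in range(2):
          members = [i for i, lab in enumerate(km.labels_) if lab == k]
          sims = X[members] @ km.cluster_centers_[k]
          print(f"cluster {k}: {sentences[members[int(sims.argmax())]]}")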

  2. Abstract models of molecular walkers

    NASA Astrophysics Data System (ADS)

    Semenov, Oleg

    Recent advances in single-molecule chemistry have led to designs for artificial multi-pedal walkers that follow tracks of chemicals. The walkers, called molecular spiders, consist of a rigid chemically inert body and several flexible enzymatic legs. The legs can reversibly bind to chemical substrates on a surface, and through their enzymatic action convert them to products. We study abstract models of molecular spiders to evaluate how efficiently they can perform two tasks: molecular transport of cargo over tracks and search for targets on finite surfaces. For the single-spider model our simulations show a transient behavior wherein certain spiders move superdiffusively over significant distances and times. This gives the spiders potential as a faster-than-diffusion transport mechanism. However, analysis shows that single-spider motion eventually decays into ordinary diffusive motion, owing to the ever-increasing size of the region of products. Inspired by the cooperative behavior of natural molecular walkers, we propose a symmetric exclusion process (SEP) model for multiple walkers interacting as they move over a one-dimensional lattice. We show that when walkers are sequentially released from the origin, the collective effect is to prevent the leading walkers from moving too far backwards. Hence, there is an effective outward pressure on the leading walkers that keeps them moving superdiffusively for longer times. Despite this improvement, the leading spider eventually slows down and moves diffusively, similarly to a single spider. The slowdown happens because the spiders behind the leading spiders never encounter substrates, and thus are never biased. They cannot keep up with the leading spiders, and cannot put enough pressure on them. Next, we investigate the search properties of single and multiple spiders moving over one- and two-dimensional surfaces with various absorbing and reflecting boundaries. For the single-spider model we evaluate by how much the
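
    The SEP part of the model is easy to caricature in a few lines. The sketch below is a plain symmetric exclusion process on a finite 1-D lattice; it deliberately omits the enzymatic substrate/product bias that gives the leading spiders their transient superdiffusion, so it illustrates only the exclusion rule.

      import random

      L, N, STEPS = 200, 5, 100_000
      positions = set(range(N))          # walkers released next to the origin
      for _ in range(STEPS):
          w = random.choice(tuple(positions))
          target = w + random.choice((-1, 1))
          # Exclusion rule: a hop is rejected if the target site is occupied.
          if 0 <= target < L and target not in positions:
              positions.remove(w)
              positions.add(target)
      print("leading walker at site", max(positions))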

  3. Research & writing basics: elements of the abstract.

    PubMed

    Krasner, D; Van Rijswijk, L

    1995-04-01

    Writing an abstract is a challenging skill that requires precision and care. Criteria for well-formulated abstracts and abstract guidelines for 2 types of articles (empirical studies and reviews or theoretical articles) as well as a description of the content of a structured abstract are presented. Details were gleaned from a review of the literature including the American Medical Association Manual of Style, Eighth Edition and the Publication Manual of the American Psychological Association, Fourth Edition. A good abstract is like a crystal: it is a clear, sharp synthesis that elucidates meaning for the reader.

  4. Research & writing basics: elements of the abstract.

    PubMed

    Krasner, D; Van Rijswijk, L

    1995-04-01

    Writing an abstract is a challenging skill that requires precision and care. Criteria for well-formulated abstracts and abstract guidelines for 2 types of articles (empirical studies and reviews or theoretical articles) as well as a description of the content of a structured abstract are presented. Details were gleaned from a review of the literature including the American Medical Association Manual of Style, Eighth Edition and the Publication Manual of the American Psychological Association, Fourth Edition. A good abstract is like a crystal: it is a clear, sharp synthesis that elucidates meaning for the reader. PMID:7546111

  5. Scaling the walls of discovery: using semantic metadata for integrative problem solving.

    PubMed

    Manning, Maurice; Aggarwal, Amit; Gao, Kevin; Tucker-Kellogg, Greg

    2009-03-01

    Current data integration approaches by bioinformaticians frequently involve extracting data from a wide variety of public and private data repositories, each with a unique vocabulary and schema, via scripts. These separate data sets must then be normalized through the tedious and lengthy process of resolving naming differences and collecting information into a single view. Attempts to consolidate such diverse data using data warehouses or federated queries add significant complexity and have shown limitations in flexibility. The alternative of complete semantic integration of data requires a massive, sustained effort in mapping data types and maintaining ontologies. We focused instead on creating a data architecture that leverages semantic mapping of experimental metadata, to support the rapid prototyping of scientific discovery applications with the twin goals of reducing architectural complexity while still leveraging semantic technologies to provide flexibility, efficiency and more fully characterized data relationships. A metadata ontology was developed to describe our discovery process. A metadata repository was then created by mapping metadata from existing data sources into this ontology, generating RDF triples to describe the entities. Finally an interface to the repository was designed which provided not only search and browse capabilities but complex query templates that aggregate data from both RDF and RDBMS sources. We describe how this approach (i) allows scientists to discover and link relevant data across diverse data sources and (ii) provides a platform for development of integrative informatics applications.
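
    A minimal sketch of the pattern described, using rdflib; the namespace and property names are hypothetical placeholders, not the authors' actual discovery ontology.

      from rdflib import Graph, Literal, Namespace, RDF, URIRef

      DISC = Namespace("http://example.org/discovery#")   # hypothetical ontology
      g = Graph()

      # Map one experiment's metadata into RDF triples.
      expt = URIRef("http://example.org/experiment/42")
      g.add((expt, RDF.type, DISC.Experiment))
      g.add((expt, DISC.assayType, Literal("gene expression")))
      g.add((expt, DISC.cellLine, Literal("HepG2")))

      # The repository can then be queried (here with SPARQL) and the results
      # aggregated with data pulled from RDBMS sources.
      q = """
      PREFIX disc: <http://example.org/discovery#>
      SELECT ?e WHERE { ?e disc:cellLine "HepG2" }
      """
      for row in g.query(q):
          print(row.e)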

  6. There's Trouble in Paradise: Problems with Educational Metadata Encountered during the MALTED Project.

    ERIC Educational Resources Information Center

    Monthienvichienchai, Rachada; Sasse, M. Angela; Wheeldon, Richard

    This paper investigates the usability of educational metadata schemas with respect to the case of the MALTED (Multimedia Authoring Language Teachers and Educational Developers) project at University College London (UCL). The project aims to facilitate authoring of multimedia materials for language learning by allowing teachers to share multimedia…

  7. A Meta-Data Driven Approach to Searching for Educational Resources in a Global Context.

    ERIC Educational Resources Information Center

    Wade, Vincent P.; Doherty, Paul

    This paper presents the design of an Internet-enabled search service that supports educational resource discovery within an educational brokerage service. More specifically, it presents the design and implementation of a metadata-driven approach to implementing the distributed search and retrieval of Internet-based educational resources and…

  8. The Semantic Mapping of Archival Metadata to the CIDOC CRM Ontology

    ERIC Educational Resources Information Center

    Bountouri, Lina; Gergatsoulis, Manolis

    2011-01-01

    In this article we analyze the main semantics of archival description, expressed through Encoded Archival Description (EAD). Our main target is to map the semantics of EAD to the CIDOC Conceptual Reference Model (CIDOC CRM) ontology as part of a wider integration architecture of cultural heritage metadata. Through this analysis, it is concluded…

  9. Semantics for E-Learning: An Advanced Knowledge Management Oriented Metadata Schema for Learning Purposes.

    ERIC Educational Resources Information Center

    Lytras, Miltiadis D.

    The research described in this paper is concentrated on the demand for high quality interchangeable knowledge objects capable of supporting dynamic learning initiatives. The general metadata models (Dublin Core, IMS, LOM, SCORM) for knowledge objects enrichment are reviewed and a critique is provided in order to claim the importance of the…

  10. The Making of the Open Archives Initiative Protocol for Metadata Harvesting.

    ERIC Educational Resources Information Center

    Lagoze, Carl; Van de Sompel, Herbert

    2003-01-01

    Explores factors in the history of the Open Archives Initiative (OAI) that have contributed to the positive response of the digital library and information community toward version 2 of the OAI Protocol for Metadata Harvesting. Factors include focus on a defined problem statement, an operational model in which strong leadership is balanced with…
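
    Part of the protocol's appeal is how little a harvester needs: OAI-PMH version 2 is six verbs over plain HTTP GET returning XML. A minimal ListRecords harvest is sketched below; the base URL is a placeholder for any compliant repository.

      import urllib.request
      import xml.etree.ElementTree as ET

      base = "https://example.org/oai"                    # placeholder repository
      url = base + "?verb=ListRecords&metadataPrefix=oai_dc"
      with urllib.request.urlopen(url) as resp:
          tree = ET.parse(resp)

      ns = {"oai": "http://www.openarchives.org/OAI/2.0/",
            "dc": "http://purl.org/dc/elements/1.1/"}
      for rec in tree.iterfind(".//oai:record", ns):
          title = rec.find(".//dc:title", ns)
          print(title.text if title is not None else "(no title)")
      # Continue with ?verb=ListRecords&resumptionToken=... until the
      # repository stops returning a token.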

  11. 77 FR 33739 - Announcement of Requirements and Registration for “Health Data Platform Metadata Challenge”

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-07

    ... HUMAN SERVICES Announcement of Requirements and Registration for ``Health Data Platform Metadata... the utility and usability of a broad range of health and human service data. HDP will deliver greater potential for new data driven insights into complex interactions of health and health care services....

  12. A Metadata Model for E-Learning Coordination through Semantic Web Languages

    ERIC Educational Resources Information Center

    Elci, Atilla

    2005-01-01

    This paper reports on a study aiming to develop a metadata model for e-learning coordination based on semantic web languages. A survey of e-learning modes are done initially in order to identify content such as phases, activities, data schema, rules and relations, etc. relevant for a coordination model. In this respect, the study looks into the…

  13. Sentence-Based Metadata: An Approach and Tool for Viewing Database Designs.

    ERIC Educational Resources Information Center

    Boyle, John M.; Gunge, Jakob; Bryden, John; Librowski, Kaz; Hanna, Hsin-Yi

    2002-01-01

    Describes MARS (Museum Archive Retrieval System), a research tool which enables organizations to exchange digital images and documents by means of a common thesaurus structure, and merge the descriptive data and metadata of their collections. Highlights include theoretical basis; searching the MARS database; and examples in European museums.…

  14. Towards more transparent and reproducible omics studies through a common metadata checklist and data publications

    SciTech Connect

    Kolker, Eugene; Ozdemir, Vural; Martens, Lennart; Hancock, William S.; Anderson, Gordon A.; Anderson, Nathaniel; Aynacioglu, Sukru; Baranova, Ancha; Campagna, Shawn R.; Chen, Rui; Choiniere, John; Dearth, Stephen P.; Feng, Wu-Chun; Ferguson, Lynnette; Fox, Geoffrey; Frishman, Dmitrij; Grossman, Robert; Heath, Allison; Higdon, Roger; Hutz, Mara; Janko, Imre; Jiang, Lihua; Joshi, Sanjay; Kel, Alexander; Kemnitz, Joseph W.; Kohane, Isaac; Kolker, Natali; Lancet, Doron; Lee, Elaine; Li, Weizhong; Lisitsa, Andrey; Llerena, Adrian; MacNealy-Koch, Courtney; Marshall, Jean-Claude; Masuzzo, Paola; May, Amanda; Mias, George; Monroe, Matthew E.; Montague, Elizabeth; Mooney, Sean; Nesvizhskii, Alexey; Noronha, Santosh; Omenn, Gilbert; Rajasimha, Harsha; Ramamoorthy, Preveen; Sheehan, Jerry; Smarr, Larry; Smith, Charles V.; Smith, Todd; Snyder, Michael; Rapole, Srikanth; Srivastava, Sanjeeva; Stanberry, Larissa; Stewart, Elizabeth; Toppo, Stefano; Uetz, Peter; Verheggen, Kenneth; Voy, Brynn H.; Warnich, Louise; Wilhelm, Steven W.; Yandl, Gregory

    2014-01-01

    Biological processes are fundamentally driven by complex interactions between biomolecules. Integrated high-throughput omics studies enable multifaceted views of cells, organisms, or their communities. With the advent of new post-genomics technologies, omics studies are becoming increasingly prevalent; yet the full impact of these studies can only be realized through data harmonization, sharing, meta-analysis, and integrated research. These essential steps require consistent generation, capture, and distribution of the metadata. To ensure transparency, facilitate data harmonization, and maximize reproducibility and usability of life sciences studies, we propose a simple common omics metadata checklist. The proposed checklist is built on the rich ontologies and standards already in use by the life sciences community. The checklist will serve as a common denominator to guide experimental design, capture important parameters, and be used as a standard format for stand-alone data publications. This omics metadata checklist and data publications will create efficient linkages between omics data and knowledge-based life sciences innovation and, importantly, allow for appropriate attribution to data generators and infrastructure science builders in the post-genomics era. We ask that the life sciences community test the proposed omics metadata checklist and data publications and provide feedback for their use and improvement.
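
    In practice such a checklist reduces to a machine-checkable schema. The toy validator below makes the point; its field names are illustrative placeholders, not the published checklist items.

      # Placeholder checklist fields, for illustration only.
      REQUIRED = ["study_id", "organism", "assay_type", "instrument",
                  "sample_prep", "data_processing", "contact"]

      def check_metadata(record: dict) -> list:
          """Return the checklist fields that are missing or empty."""
          return [f for f in REQUIRED if not record.get(f)]

      record = {"study_id": "PXD000001", "organism": "Homo sapiens",
                "assay_type": "proteomics"}
      print("missing:", check_metadata(record))
      # missing: ['instrument', 'sample_prep', 'data_processing', 'contact']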

  15. 76 FR 48769 - Metadata Standards To Support Nationwide Electronic Health Information Exchange

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-09

    ... HUMAN SERVICES Office of the Secretary 45 CFR Part 170 RIN 0991-AB78 Metadata Standards To Support... Information Technology, Department of Health and Human Services. ACTION: Advance notice of proposed rulemaking... received by Midnight Eastern Time on September 23, 2011 as the Federal Docket Management System will...

  16. The Role of Evaluative Metadata in an Online Teacher Resource Exchange

    ERIC Educational Resources Information Center

    Abramovich, Samuel; Schunn, Christian D.; Correnti, Richard J.

    2013-01-01

    A large-scale online teacher resource exchange is studied to examine the ways in which metadata influence teachers' selection of resources. A hierarchical linear modeling approach was used to tease apart the simultaneous effects of resource features and author features. From a decision heuristics theoretical perspective, teachers appear to…

  17. Operational Support for Instrument Stability through ODI-PPA Metadata Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.; Kotulla, R.; Harbeck, D.; Liu, W.

    2015-09-01

    Over long time scales, quality assurance metrics taken from calibration and calibrated data products can aid observatory operations in quantifying the performance and stability of the instrument, and identify potential areas of concern or guide troubleshooting and engineering efforts. Such methods traditionally require manual SQL entries, assuming the requisite metadata has even been ingested into a database. With the ODI-PPA system, QA metadata has been harvested and indexed for all data products produced over the life of the instrument. In this paper we will describe how, utilizing the industry standard Highcharts Javascript charting package with a customized AngularJS-driven user interface, we have made the process of visualizing the long-term behavior of these QA metadata simple and easily replicated. Operators can easily craft a custom query using the powerful and flexible ODI-PPA search interface and visualize the associated metadata in a variety of ways. These customized visualizations can be bookmarked, shared, or embedded externally, and will be dynamically updated as new data products enter the system, enabling operators to monitor the long-term health of their instrument with ease.

  18. Toward More Transparent and Reproducible Omics Studies Through a Common Metadata Checklist and Data Publications.

    PubMed

    Kolker, Eugene; Özdemir, Vural; Martens, Lennart; Hancock, William; Anderson, Gordon; Anderson, Nathaniel; Aynacioglu, Sukru; Baranova, Ancha; Campagna, Shawn R; Chen, Rui; Choiniere, John; Dearth, Stephen P; Feng, Wu-Chun; Ferguson, Lynnette; Fox, Geoffrey; Frishman, Dmitrij; Grossman, Robert; Heath, Allison; Higdon, Roger; Hutz, Mara H; Janko, Imre; Jiang, Lihua; Joshi, Sanjay; Kel, Alexander; Kemnitz, Joseph W; Kohane, Isaac S; Kolker, Natali; Lancet, Doron; Lee, Elaine; Li, Weizhong; Lisitsa, Andrey; Llerena, Adrian; MacNealy-Koch, Courtney; Marshall, Jean-Claude; Masuzzo, Paola; May, Amanda; Mias, George; Monroe, Matthew; Montague, Elizabeth; Mooney, Sean; Nesvizhskii, Alexey; Noronha, Santosh; Omenn, Gilbert; Rajasimha, Harsha; Ramamoorthy, Preveen; Sheehan, Jerry; Smarr, Larry; Smith, Charles V; Smith, Todd; Snyder, Michael; Rapole, Srikanth; Srivastava, Sanjeeva; Stanberry, Larissa; Stewart, Elizabeth; Toppo, Stefano; Uetz, Peter; Verheggen, Kenneth; Voy, Brynn H; Warnich, Louise; Wilhelm, Steven W; Yandl, Gregory

    2013-12-01

    Biological processes are fundamentally driven by complex interactions between biomolecules. Integrated high-throughput omics studies enable multifaceted views of cells, organisms, or their communities. With the advent of new post-genomics technologies, omics studies are becoming increasingly prevalent; yet the full impact of these studies can only be realized through data harmonization, sharing, meta-analysis, and integrated research. These essential steps require consistent generation, capture, and distribution of metadata. To ensure transparency, facilitate data harmonization, and maximize reproducibility and usability of life sciences studies, we propose a simple common omics metadata checklist. The proposed checklist is built on the rich ontologies and standards already in use by the life sciences community. The checklist will serve as a common denominator to guide experimental design, capture important parameters, and be used as a standard format for stand-alone data publications. The omics metadata checklist and data publications will create efficient linkages between omics data and knowledge-based life sciences innovation and, importantly, allow for appropriate attribution to data generators and infrastructure science builders in the post-genomics era. We ask that the life sciences community test the proposed omics metadata checklist and data publications and provide feedback for their use and improvement.

  19. CD Recorders.

    ERIC Educational Resources Information Center

    Falk, Howard

    1998-01-01

    Discussion of CD (compact disc) recorders describes recording applications, including storing large graphic files, creating audio CDs, and storing material downloaded from the Internet; backing up files; lifespan; CD recording formats; continuous recording; recording software; recorder media; vulnerability of CDs; basic computer requirements; and…

  20. Automatic publishing ISO 19115 metadata with PanMetaDocs using SensorML information

    NASA Astrophysics Data System (ADS)

    Stender, Vivien; Ulbricht, Damian; Schroeder, Matthias; Klump, Jens

    2014-05-01

    Terrestrial Environmental Observatories (TERENO) is an interdisciplinary and long-term research project spanning an Earth observation network across Germany. It includes four test sites within Germany from the North German lowlands to the Bavarian Alps and is operated by six research centers of the Helmholtz Association. The contribution by the participating research centers is organized as regional observatories. A challenge for TERENO and its observatories is to integrate all aspects of data management, data workflows, data modeling and visualizations into the design of a monitoring infrastructure. TERENO Northeast is one of the sub-observatories of TERENO and is operated by the German Research Centre for Geosciences (GFZ) in Potsdam. This observatory investigates geoecological processes in the northeastern lowland of Germany by collecting large amounts of environmentally relevant data. The success of long-term projects like TERENO depends on well-organized data management, data exchange between the partners involved and on the availability of the captured data. Data discovery and dissemination are facilitated not only through data portals of the regional TERENO observatories but also through a common spatial data infrastructure TEODOOR (TEreno Online Data repOsitORry). TEODOOR bundles the data, provided by the different web services of the single observatories, and provides tools for data discovery, visualization and data access. The TERENO Northeast data infrastructure integrates data from more than 200 instruments and makes data available through standard web services. Geographic sensor information and services are described using the ISO 19115 metadata schema. TEODOOR accesses the OGC Sensor Web Enablement (SWE) interfaces offered by the regional observatories. In addition to the SWE interface, TERENO Northeast also published data through DataCite. The necessary metadata are created in an automated process by extracting information from the SWE SensorML to
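
    The automated step described, extracting information from SensorML and re-emitting it as ISO 19115 elements, can be sketched with standard XML tooling. The element paths below are simplified placeholders; real SensorML and ISO 19115 documents are considerably richer.

      import xml.etree.ElementTree as ET

      ns = {"sml": "http://www.opengis.net/sensorML/1.0.1",
            "gmd": "http://www.isotc211.org/2005/gmd",
            "gco": "http://www.isotc211.org/2005/gco"}

      # Pull an identifier out of the sensor description...
      sensor = ET.parse("station_sensor.xml")      # placeholder file name
      station_id = sensor.findtext(".//sml:value", namespaces=ns)

      # ...and re-emit it inside an ISO 19115 metadata skeleton.
      iso = ET.Element(ET.QName(ns["gmd"], "MD_Metadata"))
      fid = ET.SubElement(iso, ET.QName(ns["gmd"], "fileIdentifier"))
      ET.SubElement(fid, ET.QName(ns["gco"], "CharacterString")).text = station_id
      ET.ElementTree(iso).write("station_iso19115.xml",
                                encoding="utf-8", xml_declaration=True)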

  1. Evolution of Web Services in EOSDIS: Search and Order Metadata Registry (ECHO)

    NASA Technical Reports Server (NTRS)

    Mitchell, Andrew; Ramapriyan, Hampapuram; Lowe, Dawn

    2009-01-01

    During 2005 through 2008, NASA defined and implemented a major evolutionary change in its Earth Observing System Data and Information System (EOSDIS) to modernize its capabilities. This implementation was based on a vision for 2015 developed during 2005. The EOSDIS 2015 Vision emphasizes increased end-to-end data system efficiency and operability; increased data usability; improved support for end users; and decreased operations costs. One key feature of the Evolution plan was achieving higher operational maturity (ingest, reconciliation, search and order, performance, error handling) for NASA's Earth Observing System Clearinghouse (ECHO). The ECHO system is an operational metadata registry through which the scientific community can easily discover and exchange NASA's Earth science data and services. ECHO contains metadata for 2,726 data collections comprising over 87 million individual data granules and 34 million browse images from NASA's EOSDIS Data Centers and the United States Geological Survey's Landsat Project holdings. ECHO is a middleware component based on a Service Oriented Architecture (SOA). The system comprises a set of infrastructure services that enable the fundamental SOA functions: publish, discover, and access Earth science resources. It also provides additional services such as user management, data access control, and order management. The ECHO system has a data registry and a services registry. The data registry enables organizations to publish EOS and other Earth-science related data holdings to a common metadata model. These holdings are described through metadata in terms of datasets (types of data) and granules (specific data items of those types). ECHO also supports browse images, which provide a visual representation of the data. The published metadata can be mapped to and from existing standards (e.g., FGDC, ISO 19115). With ECHO, users can find the metadata stored in the data registry and then access the data either

  2. CHARMe Commentary metadata for Climate Science: collecting, linking and sharing user feedback on climate datasets

    NASA Astrophysics Data System (ADS)

    Blower, Jon; Lawrence, Bryan; Kershaw, Philip; Nagni, Maurizio

    2014-05-01

    The research process can be thought of as an iterative activity, initiated on the basis of prior domain knowledge as well as a number of external inputs, and producing a range of outputs including datasets, studies and peer-reviewed publications. These outputs may describe the problem under study, the methodology used, the results obtained, etc. In any new publication, the author may cite or comment on other papers or datasets in order to support their research hypothesis. However, as the work progresses, the researcher may draw from many other latent channels of information. These could include, for example, a private conversation following a lecture or during a dinner, or an opinion expressed about some significant event such as an earthquake or a satellite failure. In addition, grey literature is an important public source of information, including informal papers such as arXiv deposits, reports and studies. The climate science community is no exception to this pattern; the CHARMe project, funded under the European FP7 framework, is developing an online system for collecting and sharing user feedback on climate datasets. This is to help users judge how suitable such climate data are for an intended application. The user feedback could be comments about assessments, citations, or provenance of the dataset, or other information such as descriptions of uncertainty or data quality. We define this as a distinct category of metadata called Commentary or C-metadata. We link C-metadata with target climate datasets using a Linked Data approach via the Open Annotation data model. In the context of Linked Data, C-metadata plays the role of a resource which, depending on its nature, may be accessed as simple text or as more structured content. The project is implementing a range of software tools to create, search or visualize C-metadata, including a JavaScript plugin enabling this functionality to be integrated in situ with data provider portals
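
    As a sketch of what a C-metadata record could look like under the Open Annotation model, the JSON-LD document below links a free-text comment to a dataset URI; the dataset identifier, body type and comment text are invented for illustration and do not come from the CHARMe project itself.

      # Hypothetical C-metadata record: an Open Annotation (oa:) document
      # attaching a user comment to a climate dataset resource.
      import json

      annotation = {
          "@context": "http://www.w3.org/ns/oa.jsonld",
          "@type": "oa:Annotation",
          "oa:hasTarget": {"@id": "http://example.org/datasets/sst-cdr-v2"},
          "oa:hasBody": {
              "@type": "oa:TextualBody",  # assumed body type for a comment
              "value": "Known cold bias in high-latitude cells before 1995.",
          },
          "oa:motivatedBy": "oa:commenting",
      }
      print(json.dumps(annotation, indent=2))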

  3. 2013 SYR Accepted Poster Abstracts.

    PubMed

    2013-01-01

    SYR 2013 Accepted Poster abstracts: 1. Benefits of Yoga as a Wellness Practice in a Veterans Affairs (VA) Health Care Setting: If You Build It, Will They Come? 2. Yoga-based Psychotherapy Group With Urban Youth Exposed to Trauma. 3. Embodied Health: The Effects of a Mind-Body Course for Medical Students. 4. Interoceptive Awareness and Vegetable Intake After a Yoga and Stress Management Intervention. 5. Yoga Reduces Performance Anxiety in Adolescent Musicians. 6. Designing and Implementing a Therapeutic Yoga Program for Older Women With Knee Osteoarthritis. 7. Yoga and Life Skills Eating Disorder Prevention Among 5th Grade Females: A Controlled Trial. 8. A Randomized, Controlled Trial Comparing the Impact of Yoga and Physical Education on the Emotional and Behavioral Functioning of Middle School Children. 9. Feasibility of a Multisite, Community based Randomized Study of Yoga and Wellness Education for Women With Breast Cancer Undergoing Chemotherapy. 10. A Delphi Study for the Development of Protocol Guidelines for Yoga Interventions in Mental Health. 11. Impact Investigation of Breathwalk Daily Practice: Canada-India Collaborative Study. 12. Yoga Improves Distress, Fatigue, and Insomnia in Older Veteran Cancer Survivors: Results of a Pilot Study. 13. Assessment of Kundalini Mantra and Meditation as an Adjunctive Treatment With Mental Health Consumers. 14. Kundalini Yoga Therapy Versus Cognitive Behavior Therapy for Generalized Anxiety Disorder and Co-Occurring Mood Disorder. 15. Baseline Differences in Women Versus Men Initiating Yoga Programs to Aid Smoking Cessation: Quitting in Balance Versus QuitStrong. 16. Pranayam Practice: Impact on Focus and Everyday Life of Work and Relationships. 17. Participation in a Tailored Yoga Program is Associated With Improved Physical Health in Persons With Arthritis. 18. Effects of Yoga on Blood Pressure: Systematic Review and Meta-analysis. 19. A Quasi-experimental Trial of a Yoga based Intervention to Reduce Stress and

  4. Revision and documentation of the CDI metadata model as an ISO 19115 profile

    NASA Astrophysics Data System (ADS)

    Boldrini, E.; Broeren, B.; Schaap, D. M. A.; Nativi, S.; Manzella, G. M. R.

    2012-04-01

    SeaDataNet 2 is an FP7 project (grant agreement 283607), started on October 1st, 2011 for a duration of four years, which aims to upgrade the present SeaDataNet infrastructure into an operationally robust and state-of-the-art Pan-European infrastructure for ocean and marine data and metadata products. The Common Data Index (CDI) constitutes an online metadata service providing discovery and access capabilities to final users. Professional data centres, active in data collection, constitute a Pan-European network providing online integrated databases of standardized quality through CDI. The roadmap for the second phase of SeaDataNet includes: realization of technical and semantic interoperability with other relevant data management systems and initiatives on behalf of science, environmental management, policy making, and economy; and definition, adoption and promotion of common data management standards to ensure the platforms' interoperability. The CDI metadata model and its XML schema have in particular been the object of standardization efforts through the SeaDataNet project; the second phase will continue this work. CDI metadata model development started in 2005; the CDI schema was based upon the then-available ISO 19115 DTD. Several releases of both the schema and the data model documentation have followed, arriving at version 1.6 in June 2010. This version contains important modifications and updates (e.g. GML and Service Bindings extensions, instrument multiplicity, spatial resolution and frequency, …), reflecting updated requirements and needs of the marine data community; at this time CDI should be considered a de facto standard for marine metadata in the European region. Compliance with ISO standards has always been sought (e.g. past work targeted the use of the ISO 19139 XML schema as a CDI metadata encoding). The aim of the present work has been to revise and formally document the existing CDI metadata model as an ISO 19115 metadata standard profile: this step was needed
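
    Such a profile lends itself to mechanical checking; the sketch below tests an ISO 19139-encoded record for the presence of a few core ISO 19115 elements. The element set and paths are chosen for illustration and are not taken from the actual CDI profile definition.

      # Illustrative profile check: does an ISO 19139 record contain a
      # handful of core ISO 19115 elements? Paths use the gmd namespace.
      import xml.etree.ElementTree as ET

      GMD = "{http://www.isotc211.org/2005/gmd}"
      REQUIRED = {
          "title": ".//" + GMD + "title",        # assumed mandatory here
          "abstract": ".//" + GMD + "abstract",
          "contact": ".//" + GMD + "contact",
      }

      def check_profile(path):
          root = ET.parse(path).getroot()
          return {name: root.find(xpath) is not None
                  for name, xpath in REQUIRED.items()}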

  5. Intern Abstract for Spring 2016

    NASA Technical Reports Server (NTRS)

    Gibson, William

    2016-01-01

    and Warning system to acquire messages. b) Configure audio boxes. c) Grab pre-recorded audio files. d) Packetize the audio stream. A third assigned project was to implement LED indicator modules for an Omnibus project. The Omnibus project is investigating better ways of designing lighting for the interior of spacecraft, both spacecraft lighting and avionics box status indication. The current scheme contains too much of the blue light spectrum, which disrupts the sleep cycle. The LED indicator modules are to simulate the indicators running on a spacecraft. Lighting data will be gathered by human factors personnel and used in a model under development to model spacecraft lighting. Significant progress was made on this project: a) designed the circuit layout; b) tested LEDs at the LETF; c) created a GUI for the indicators; d) created code for the Arduino that will illuminate the indicator modules.

  6. Interoperability Using Lightweight Metadata Standards: Service & Data Casting, OpenSearch, OPM Provenance, and Shared SciFlo Workflows

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2011-12-01

    Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEaSUREs grant we are producing a water vapor climatology from the A-Train instruments, stratified by the CloudSat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDRs) requires a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, the Open Provenance Model, service & data casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDRs with full traceability. The capabilities and services provided include: - Discovery of the collections by keyword search, exposed using the OpenSearch protocol; - Space/time query across the CDRs' granules and all of the input datasets via OpenSearch; - User-level configuration of the production workflows so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets; - Efficient data merging using on-the-fly OPeNDAP variable slicing & spatial subsetting of data out of input netCDF and HDF files (without moving the entire files); - Self-documenting CDRs published in a highly usable netCDF4 format with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, & links to OPM provenance; - Recording of processing provenance and data lineage into a queryable provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine
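
    The on-the-fly OPeNDAP slicing mentioned above can be sketched with xarray, which opens a remote dataset lazily and pulls only the requested slab across the network; the URL and variable name below are placeholders, not the project's actual endpoints.

      # OPeNDAP subsetting sketch: only the selected variable slab is
      # transferred over the network, not the whole granule file.
      import xarray as xr

      url = "http://example.org/opendap/airs/granule_2010_001.nc"  # placeholder
      ds = xr.open_dataset(url)            # lazy: only metadata is fetched here
      wv = ds["h2o_vapor_mmr"]             # hypothetical variable name
      subset = wv.sel(lat=slice(-30, 30), lon=slice(100, 160)).load()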

  7. Cross-modal integration between odors and abstract symbols.

    PubMed

    Seo, Han-Seok; Arshamian, Artin; Schemmer, Kerstin; Scheer, Ingeborg; Sander, Thorsten; Ritter, Guido; Hummel, Thomas

    2010-07-12

    This study aimed to investigate the cross-modal association of an "abstract symbol," designed to represent an odor, with its corresponding odor. First, to explore the associations of abstract symbols with odors, participants were asked to match 8 odors with 19 different abstract symbols (Experiment 1). Next, we determined whether congruent symbols could modulate olfactory perception and olfactory event-related potentials (ERPs) (Experiment 2). One of two odors (phenylethanol (PEA) or 1-butanol) was presented under one of three conditions (congruent symbol, incongruent symbol, or no symbol), and participants were asked to rate odor intensity and pleasantness during olfactory ERP recordings. Experiment 1 demonstrated that certain abstract symbols could be paired with specific odors. In Experiment 2, the congruent symbol enhanced the perceived intensity of PEA compared to the no-symbol presentation. In addition, the respective congruent symbol increased the pleasantness of PEA and the unpleasantness of 1-butanol. Finally, compared to the incongruent symbol, the congruent symbol produced significantly higher amplitudes and shorter latencies in the N1 peak of olfactory ERPs. In conclusion, our findings demonstrate that abstract symbols may be associated with specific odors.

  8. NeuroTransDB: highly curated and structured transcriptomic metadata for neurodegenerative diseases.

    PubMed

    Bagewadi, Shweta; Adhikari, Subash; Dhrangadhariya, Anjani; Irin, Afroza Khanam; Ebeling, Christian; Namasivayam, Aishwarya Alex; Page, Matthew; Hofmann-Apitius, Martin; Senger, Philipp

    2015-01-01

    Neurodegenerative diseases are chronic debilitating conditions, characterized by progressive loss of neurons, that represent a significant health care burden as the global elderly population continues to grow. Over the past decade, high-throughput technologies such as the Affymetrix GeneChip microarrays have provided new perspectives on the pathomechanisms underlying neurodegeneration. Public transcriptomic data repositories, namely Gene Expression Omnibus and the curated ArrayExpress, enable researchers to conduct integrative meta-analyses, increasing the power to detect differentially regulated genes in disease and to explore patterns of gene dysregulation across biologically related studies. The reliability of retrospective, large-scale integrative analyses depends on an appropriate combination of related datasets, which in turn requires detailed meta-annotations capturing the experimental setup. In most cases, we observe huge variation in compliance with defined standards for submitted metadata in public databases. Much of the information needed to complete or refine meta-annotations is distributed across the associated publications. For example, tissue preparation or comorbidity information is frequently described in an article's supplementary tables. Several value-added databases have employed additional manual effort to overcome this limitation. However, none of these databases explicates annotations that distinguish human and animal models in the context of neurodegeneration. Therefore, adopting a more specific disease focus, in combination with dedicated disease ontologies, will better empower the selection of comparable studies with refined annotations to address the research question at hand. In this article, we describe the detailed development of NeuroTransDB, a manually curated database containing metadata annotations for neurodegenerative studies. The database contains more than 20 dimensions of metadata annotations within 31 mouse, 5 rat and 45 human studies, defined in

  9. Metadata for numerical models of deep Earth and Earth surface processes

    NASA Astrophysics Data System (ADS)

    Kelbert, A.; Peckham, S. D.

    2014-12-01

    Model metadata aims to provide an unambiguous and complete description of a numerical model that would allow an end user scientist an immediate snapshot of the pertinent physical laws, assumptions, and numerical approximations. A rigorous metadata format that allows machine parsing of this information also makes it possible for model coupling frameworks to provide automatic and reliable semantic matching of input and output variables when models are coupled. Model metadata hinges in part on a controlled vocabulary that consists of human- and machine-readable terms that are unambiguously defined across modeling domains. The Community Surface Dynamics Modeling System (CSDMS) Standard Names are a set of generic naming conventions that have been used to generate a self-consistent controlled vocabulary for surface dynamics processes. As part of the NSF's EarthCube "Earth System Bridge" project, we extend the rich controlled vocabulary of CSDMS standard names to solid Earth modeling domains, including geodynamics, seismology, magnetotellurics, and petrology. We proceed to create a standard for Model Coupling Metadata (MCM) that is flexible enough to serve both the surface dynamics modeling community, and the deep Earth process modelers, thus bridging CSDMS and the Computational Infrastructure for Geodynamics (CIG) communities with a common semantic network. Here, we focus on our progress towards establishing an MCM standard for numerical models of solid Earth and Earth surface processes, and on the tools that facilitate creation and maintenance of such metadata. In development of the MCM standard, we leverage the Common Information Model (CIM) of the climate modeling community, as well as the NSF-funded EarthCube GeoSoft project.
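
    A toy sketch of the semantic matching such a controlled vocabulary enables: two models are coupled by pairing outputs and inputs that share a CSDMS-style standard name (the object__quantity pattern). The specific names and internal variable names below are illustrative, not drawn from the CSDMS registry.

      # Pair model outputs with model inputs that share a standard name,
      # the kind of automatic variable matching a coupling framework does.
      MODEL_A_OUTPUTS = {
          "land_surface__elevation": "z",          # standard name -> internal name
          "land_surface_water__depth": "h",
      }
      MODEL_B_INPUTS = {
          "land_surface__elevation": "topo_elev",
          "atmosphere_water__precipitation_rate": "precip",
      }

      def match_variables(outputs, inputs):
          """Return (standard_name, source_var, target_var) for shared names."""
          shared = outputs.keys() & inputs.keys()
          return [(n, outputs[n], inputs[n]) for n in sorted(shared)]

      print(match_variables(MODEL_A_OUTPUTS, MODEL_B_INPUTS))
      # -> [('land_surface__elevation', 'z', 'topo_elev')]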

  10. An Analysis of World-Wide Contributions to "Nuclear Science Abstracts," Volume 22 (1968).

    ERIC Educational Resources Information Center

    Vaden, William M.

    Beginning with volume 20, "Nuclear Science Abstracts" (NSA) citations, exclusive of abstracts, have been recorded on magnetic tape. The articles have been categorized by 34 elements of the citations such as title, author, source, journal, report number, etc. At the time of this report more than 130,000 citations had been stored for purposes of…

  11. Distributed system for strong motion data retrieval and archiving: metadata, databases and data exchange within the NA5 framework

    NASA Astrophysics Data System (ADS)

    Pequegnat, C.; Gueguen, P.; Jacquot, R.

    2009-04-01

    The goal of the NERIES NA5 activity (http://www.neries-eu.org, Improving Accelerometric Data Access) is the development of common access to uniformly formatted, event-based accelerometric data and to the corresponding sheets of strong motion parameters. The core of NA5 is made up of 5 European institutes, and the final protocol should permit other European institutes to join the NA5 portal. More precisely, the aims of the NA5 distributed data system are (1) to make the data available in a specific format for the engineering community (i.e., ASCII) and in standard formats for the seismological community (i.e., full SEED, SAC), and (2) to retrieve data at a unique portal on seismological and accelerometric criteria, using relations between seismic sources and recordings and using specific parameters for the engineering community, i.e. site conditions and parameter thresholds (e.g., PGA, Ia, Duration, Sa(T), Sv(T)…). Parametric data, as well as the procedures to compute them, have been defined, implemented and made available to all the NA5 partners. The final product will be a system based on a distributed three-tier architecture, the three main nodes of which are: (1) the primary data servers of the NA5 data providers, who make available waveforms (in ASCII format) and the associated parameters and events-records tables, via ftp or http protocols; (2) the NA5 portal, which supports metadata databases (event and station metadata) and the associated user interfaces and web services; (3) the NA5 dataserver, the main function of which is the evaluation of end-users' requests, involving data retrieval, data conversion (SAC, ASCII and miniSEED) and metadata formatting (SAC, ASCII and SEED headers). Both the NA5 portal and the NA5 dataserver are presently under development, the former at EMSC, the latter at LGIT. Our presentation will point out the main features and resources of the NA5 dataserver: - a database of the instrument response files for the accelerometric channels
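
    Two of the engineering parameters named above have simple standard definitions, sketched here on a synthetic record: PGA is the peak absolute acceleration, and the Arias intensity is Ia = (pi / 2g) * integral of a(t)^2 dt. The accelerogram below is invented for illustration.

      # Standard strong-motion parameters on a synthetic accelerogram.
      import numpy as np

      def pga(acc):
          """Peak ground acceleration: max absolute value of the record."""
          return np.max(np.abs(acc))

      def arias_intensity(acc, dt, g=9.81):
          """Ia = (pi / (2 g)) * integral of a(t)^2 dt."""
          return np.pi / (2.0 * g) * np.trapz(acc ** 2, dx=dt)

      t = np.arange(0.0, 20.0, 0.01)                  # 20 s record at 100 Hz
      acc = 0.5 * np.exp(-0.2 * t) * np.sin(8.0 * t)  # synthetic record, m/s^2
      print(pga(acc), arias_intensity(acc, dt=0.01))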

  12. Abstract Object Creation in Dynamic Logic

    NASA Astrophysics Data System (ADS)

    Ahrendt, Wolfgang; de Boer, Frank S.; Grabe, Immo

    In this paper we give a representation of a weakest precondition calculus for abstract object creation in dynamic logic, the logic underlying the KeY theorem prover. This representation allows one both to specify and to verify properties of objects at the abstraction level of the (object-oriented) programming language. Objects which are not (yet) created never play any role, neither in the specification nor in the verification of properties. Further, we show how to symbolically execute abstract object creation.
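
    For context, the classical weakest-precondition rule for assignment, together with a schematic rule for object creation that quantifies over a fresh, not-yet-created object, can be written as below; the second rule is a sketch of the general idea only, not the paper's actual KeY formulation.

      \[ wp(x := e,\ \phi) \;=\; \phi[e/x] \]
      \[ wp(x := \mathsf{new}\,C,\ \phi) \;=\; \forall o\,.\,\bigl(\neg\,\mathit{created}(o) \rightarrow \phi[o/x]\bigr) \quad \text{(schematic)} \]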

  13. Abstract and concrete sentences, embodiment, and languages.

    PubMed

    Scorolli, Claudia; Binkofski, Ferdinand; Buccino, Giovanni; Nicoletti, Roberto; Riggio, Lucia; Borghi, Anna Maria

    2011-01-01

    One of the main challenges of embodied theories is accounting for meanings of abstract words. The most common explanation is that abstract words, like concrete ones, are grounded in perception and action systems. According to other explanations, abstract words, differently from concrete ones, would activate situations and introspection; alternatively, they would be represented through metaphoric mapping. However, evidence provided so far pertains to specific domains. To be able to account for abstract words in their variety we argue it is necessary to take into account not only the fact that language is grounded in the sensorimotor system, but also that language represents a linguistic-social experience. To study abstractness as a continuum we combined a concrete (C) verb with both a concrete and an abstract (A) noun; and an abstract verb with the same nouns previously used (grasp vs. describe a flower vs. a concept). To disambiguate between the semantic meaning and the grammatical class of the words, we focused on two syntactically different languages: German and Italian. Compatible combinations (CC, AA) were processed faster than mixed ones (CA, AC). This is in line with the idea that abstract and concrete words are processed preferentially in parallel systems - abstract in the language system and concrete more in the motor system, thus costs of processing within one system are the lowest. This parallel processing takes place most probably within different anatomically predefined routes. With mixed combinations, when the concrete word preceded the abstract one (CA), participants were faster, regardless of the grammatical class and the spoken language. This is probably due to the peculiar mode of acquisition of abstract words, as they are acquired more linguistically than perceptually. Results confirm embodied theories which assign a crucial role to both perception-action and linguistic experience for abstract words. PMID:21954387

  14. Writing an abstract for a scientific conference.

    PubMed

    Simkhada, P; van Teijlingen, E; Hundley, V; Simkhada, B D

    2013-01-01

    For most students and junior researchers, an abstract for a poster or oral presentation at a conference is the first piece they may write for an audience other than their university tutors or examiners. Since some researchers struggle with this process, we have put together advice on the issues to bear in mind when constructing a conference abstract.

  15. Semantic metadata application for information resources systematization in water spectroscopy

    NASA Astrophysics Data System (ADS)

    Fazliev, A.

    2009-04-01

    system this module also deals with configuration files of the software core and its database. Such organization of work provides closer integration with the software core and a deeper, more adequate connection to operating system support. 4 CONCLUSION In this work the problems of semantic metadata creation in an information system oriented toward representing information in the area of molecular spectroscopy have been discussed. The method of forming semantic metadata and functions, as well as the realization and structure of the META+ module, has been described. The architecture of the META+ module is closely related to the existing software of the "Molecular spectroscopy" scientific information system. Realization of the module is performed with the use of modern approaches to the development of Web-oriented applications, and it uses the existing applied interfaces. The developed software allows us to: - perform automatic metadata annotation of calculated task solutions directly in the information system; - perform automatic metadata annotation of task solutions computed outside the information system when their results are uploaded, forming an instance of the solved task on the basis of its input data; - use ontological instances of task solutions to identify data in the viewing, comparison and search tasks handled by the information system; - export applied task ontologies for use by external tools; - perform semantic search by pattern and through a question-answer interface. 5 ACKNOWLEDGEMENT The authors are grateful to RFBR for financial support of the development of the distributed information system for molecular spectroscopy.

  16. A brief on writing a successful abstract.

    PubMed

    Gambescia, Stephen F

    2013-01-01

    The abstract for an article submitted to a clinical or academic journal often gets little attention in the manuscript preparation process. Yet the abstract serves multiple purposes in the dissemination of scholarly work, including serving as the one piece of information reviewers use to decide whether to invite presenters to professional conferences. The abstract can therefore be the most important, and should be the most powerful, 150-250 words written by the authors of a scholarly work. This brief for healthcare practitioners, junior faculty, and students provides general comments, details, nuances and tips, and explains the various uses of the abstract for publications and presentations in the healthcare field.

  17. Neural correlates of abstract verb processing.

    PubMed

    Rodríguez-Ferreiro, Javier; Gennari, Silvia P; Davies, Robert; Cuetos, Fernando

    2011-01-01

    The present study investigated the neural correlates of the processing of abstract (low imageability) verbs. An extensive body of literature has investigated concrete versus abstract nouns but little is known about how abstract verbs are processed. Spanish abstract verbs including emotion verbs (e.g., amar, "to love"; molestar, "to annoy") were compared to concrete verbs (e.g., llevar, "to carry"; arrastrar, "to drag"). Results indicated that abstract verbs elicited stronger activity in regions previously associated with semantic retrieval such as inferior frontal, anterior temporal, and posterior temporal regions, and that concrete and abstract activation networks (compared to that of pseudoverbs) were partially distinct, with concrete verbs eliciting more posterior activity in these regions. In contrast to previous studies investigating nouns, verbs strongly engage both left and right inferior frontal gyri, suggesting, as previously found, that right prefrontal cortex aids difficult semantic retrieval. Together with previous evidence demonstrating nonverbal conceptual roles for the active regions as well as experiential content for abstract word meanings, our results suggest that abstract verbs impose greater demands on semantic retrieval or property integration, and are less consistent with the view that abstract words recruit left-lateralized regions because they activate verbal codes or context, as claimed by proponents of the dual-code theory. Moreover, our results are consistent with distributed accounts of semantic memory because distributed networks may coexist with varying retrieval demands.
