Science.gov

Sample records for replicated metadata catalogue

  1. Coordinated Earth Science Data Replication Enabled Through ISO Metadata, Version Control, and Web Services

    NASA Astrophysics Data System (ADS)

    Benedict, K. K.; Gollberg, G.; Sheneman, L.; Dascalu, S.

    2011-12-01

    The richness and flexibility of the ISO 19115 metadata standard for documenting Earth Science data has the potential to support numerous applications beyond the traditional discovery and use scenarios commonly associated with metadata. The Tri-State (Nevada, New Mexico, Idaho) NSF EPSCoR project is pursuing such an alternative application of the ISO metadata content model - one in which targeted data replication between individual data repositories in the three states is enabled through a specifically defined collection and granule metadata content model. The developed metadata model includes specific ISO 19115 elements that enable: "flagging" of specific collections or granules for replication; documenting lineage (the relationship between "authoritative" source data and data replicas); verification of data fidelity through standard cryptographic methods; and extension of collection and granule metadata to reflect additional data download and services provided by distributed data replicas. While the mechanics of the replication model within each state depend upon the specific systems, software, and storage capabilities of the individual repositories, the adoption of a common XML metadata model (ISO 19139) and the use of a broadly supported version control system (Subversion) as the core storage system for the shared metadata provide a long-term platform upon which each state in the consortium can build. This paper presents the preliminary results of the implementation of the system across all three states, including a discussion of the specific ISO 19115 elements that contribute to the system, experience in using Subversion as a metadata versioning system, and lessons learned in the development of this loosely coupled data replication system.
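
    A minimal sketch (not from the paper) of how the replication flag, lineage pointer and cryptographic fidelity check described above could look if the granule metadata were held in a simple Python dictionary; SHA-256 is assumed as the digest, and all field names, URLs and file names are hypothetical.

        import hashlib

        def sha256_of_file(path, chunk_size=1 << 20):
            """Return the hex SHA-256 digest of a file, read in chunks."""
            digest = hashlib.sha256()
            with open(path, "rb") as handle:
                for chunk in iter(lambda: handle.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def verify_replica(granule_metadata, local_path):
            """True if the local replica matches the digest recorded in the shared metadata."""
            return granule_metadata["checksum_sha256"] == sha256_of_file(local_path)

        # Toy granule standing in for a replicated file and its shared metadata record.
        with open("granule-0001.nc", "wb") as handle:
            handle.write(b"toy granule payload")

        granule_metadata = {
            "id": "granule-0001",
            "replicate": True,                                      # replication "flag"
            "source": "https://example.org/auth/granule-0001.nc",   # lineage pointer (hypothetical)
            "checksum_sha256": sha256_of_file("granule-0001.nc"),
        }
        print(verify_replica(granule_metadata, "granule-0001.nc"))  # True while the replica is intact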

  2. The eGenVar data management system--cataloguing and sharing sensitive data and metadata for the life sciences.

    PubMed

    Razick, Sabry; Močnik, Rok; Thomas, Laurent F; Ryeng, Einar; Drabløs, Finn; Sætrom, Pål

    2014-01-01

    Systematic data management and controlled data sharing aim at increasing reproducibility, reducing redundancy in work, and providing a way to efficiently locate complementing or contradicting information. One method of achieving this is collecting data in a central repository or in a location that is part of a federated system and providing interfaces to the data. However, certain data, such as data from biobanks or clinical studies, may, for legal and privacy reasons, often not be stored in public repositories. Instead, we describe a metadata cataloguing system and a software suite for reporting the presence of data from the life sciences domain. The system stores three types of metadata: file information, file provenance and data lineage, and content descriptions. Our software suite includes both graphical and command line interfaces that allow users to report and tag files with these different metadata types. Importantly, the files remain in their original locations with their existing access-control mechanisms in place, while our system provides descriptions of their contents and relationships. Our system and software suite thereby provide a common framework for cataloguing and sharing both public and private data. Database URL: http://bigr.medisin.ntnu.no/data/eGenVar/.

  3. The eGenVar data management system—cataloguing and sharing sensitive data and metadata for the life sciences

    PubMed Central

    Razick, Sabry; Močnik, Rok; Thomas, Laurent F.; Ryeng, Einar; Drabløs, Finn; Sætrom, Pål

    2014-01-01

    Systematic data management and controlled data sharing aim at increasing reproducibility, reducing redundancy in work, and providing a way to efficiently locate complementing or contradicting information. One method of achieving this is collecting data in a central repository or in a location that is part of a federated system and providing interfaces to the data. However, certain data, such as data from biobanks or clinical studies, may, for legal and privacy reasons, often not be stored in public repositories. Instead, we describe a metadata cataloguing system and a software suite for reporting the presence of data from the life sciences domain. The system stores three types of metadata: file information, file provenance and data lineage, and content descriptions. Our software suite includes both graphical and command line interfaces that allow users to report and tag files with these different metadata types. Importantly, the files remain in their original locations with their existing access-control mechanisms in place, while our system provides descriptions of their contents and relationships. Our system and software suite thereby provide a common framework for cataloguing and sharing both public and private data. Database URL: http://bigr.medisin.ntnu.no/data/eGenVar/ PMID:24682735

  4. The RBV metadata catalog

    NASA Astrophysics Data System (ADS)

    André, François; Brissebrat, Guillaume; Fleury, Laurence; Gaillardet, Jérôme; Nord, Guillaume

    2014-05-01

    RBV (Réseau des Bassins Versants) is an initiative to consolidate the national efforts made by more than 15 elementary observatories belonging to various French research institutions (CNRS, Universities, INRA, IRSTEA, IRD) that study river and drainage basins. RBV is a part of a global initiative to create a network of observatories for investigating Earth's surface processes. The RBV Metadata Catalogue aims to give a unified vision of the work produced by every observatory to both the members of the RBV network and any external person involved in this domain of research. Another goal is to share this information with other catalogues through compliance with the ISO 19115 standard and the INSPIRE directive and the ability to be harvested (globally or partially). Metadata management is heterogeneous among observatories. The catalogue is designed to face this situation with the following main features: - Multiple input methods: Metadata records in the catalogue can either be entered with the graphical user interface, harvested from an existing catalogue or imported from an information system through simplified web services. - Three hierarchical levels: Metadata records may describe either an observatory in general, one of its experimental sites or a dataset produced by instruments. - Multilingualism: Metadata can be entered in several configurable languages. The catalogue provides many other features such as search and browse mechanisms to find or discover records. The RBV metadata catalogue associates a CSW metadata server (Geosource) and a JEE application. The CSW server is in charge of the persistence of the metadata while the JEE application both wraps CSW calls and defines the user interface. The latter is built with the GWT Framework to offer a rich client application with fully Ajax-based navigation. The catalogue is accessible at the following address: http://portailrbv.sedoo.fr/ Next steps will target the following points: -Description of sensors in accordance

  5. The RBV metadata catalog

    NASA Astrophysics Data System (ADS)

    Andre, Francois; Fleury, Laurence; Gaillardet, Jerome; Nord, Guillaume

    2015-04-01

    RBV (Réseau des Bassins Versants) is a French initiative to consolidate the national efforts made by more than 15 elementary observatories funded by various research institutions (CNRS, INRA, IRD, IRSTEA, Universities) that study river and drainage basins. The RBV Metadata Catalogue aims at giving a unified vision of the work produced by every observatory to both the members of the RBV network and any external person interested in this domain of research. Another goal is to share this information with other existing metadata portals. Metadata management is heterogeneous among observatories, ranging from absence to mature harvestable catalogues. Here, we would like to explain the strategy used to design a state-of-the-art catalogue facing this situation. Main features are as follows: - Multiple input methods: Metadata records in the catalogue can either be entered with the graphical user interface, harvested from an existing catalogue or imported from an information system through simplified web services. - Hierarchical levels: Metadata records may describe either an observatory, one of its experimental sites or a single dataset produced by one instrument. - Multilingualism: Metadata can be easily entered in several configurable languages. - Compliance with standards: the back-office part of the catalogue is based on a CSW metadata server (Geosource) which ensures ISO 19115 compatibility and the ability to be harvested (globally or partially). Ongoing tasks focus on the use of SKOS thesauri and SensorML descriptions of the sensors. - Ergonomics: The user interface is built with the GWT Framework to offer a rich client application with fully Ajax-based navigation. - Source code sharing: The work has led to the development of reusable components which can be used to quickly create new metadata forms in other GWT applications. You can visit the catalogue (http://portailrbv.sedoo.fr/) or contact us by email at rbv@sedoo.fr.
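
    As a hedged illustration of how such a CSW back-end can be harvested (globally or partially), the sketch below issues a standard OGC CSW 2.0.2 GetRecords request and lists the returned records; the endpoint path is an assumption made for this example, not the documented service address.

        import requests
        import xml.etree.ElementTree as ET

        CSW_ENDPOINT = "http://portailrbv.sedoo.fr/geosource/srv/eng/csw"  # assumed path, for illustration only

        # Standard CSW 2.0.2 key-value parameters for a small GetRecords request.
        params = {
            "service": "CSW",
            "version": "2.0.2",
            "request": "GetRecords",
            "typeNames": "csw:Record",
            "resultType": "results",
            "elementSetName": "brief",
            "maxRecords": "10",
        }
        response = requests.get(CSW_ENDPOINT, params=params, timeout=30)
        response.raise_for_status()

        ns = {
            "csw": "http://www.opengis.net/cat/csw/2.0.2",
            "dc": "http://purl.org/dc/elements/1.1/",
        }
        root = ET.fromstring(response.content)
        for record in root.iter("{http://www.opengis.net/cat/csw/2.0.2}BriefRecord"):
            identifier = record.findtext("dc:identifier", default="", namespaces=ns)
            title = record.findtext("dc:title", default="", namespaces=ns)
            print(identifier, title)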

  6. Digital Initiatives and Metadata Use in Thailand

    ERIC Educational Resources Information Center

    SuKantarat, Wichada

    2008-01-01

    Purpose: This paper aims to provide information about various digital initiatives in libraries in Thailand and especially use of Dublin Core metadata in cataloguing digitized objects in academic and government digital databases. Design/methodology/approach: The author began researching metadata use in Thailand in 2003 and 2004 while on sabbatical…

  7. Developing the CUAHSI Metadata Profile

    NASA Astrophysics Data System (ADS)

    Piasecki, M.; Bermudez, L.; Islam, S.; Beran, B.

    2004-12-01

    The Hydrologic Information System (HIS) of the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has as one of its goals to improve access to large-volume, high-quality, and heterogeneous hydrologic data sets. This will be attained in part by adopting a community metadata profile to achieve consistent descriptions that will facilitate data discovery. However, common standards are quite general in nature and typically lack domain-specific vocabularies, complicating the adoption of standards for specific communities. We will show and demonstrate the problems encountered in the process of adopting ISO standards to create a CUAHSI metadata profile. The final schema is expressed in a simple metadata format, the Metadata Template File (MTF), to leverage metadata annotation/viewer tools already developed by the San Diego Supercomputer Center. The steps performed to create an MTF starting from ISO 19115:2003 are the following: 1) creation of ontologies using the Web Ontology Language (OWL) for ISO 19115:2003 and related ISO/TC 211 documents; 2) conceptualization in OWL of related hydrologic vocabularies such as NASA's Global Change Master Directory and units from the Hydrologic Handbook; 3) definition of the CUAHSI profile by importing and extending the previous ontologies; 4) explicit creation of the CUAHSI core set; 5) export of the core set to MTF; 6) definition of metadata blocks for arbitrary digital objects (e.g. time series vs. static spatial data) using ISO's methodology for feature cataloguing; and 7) export of metadata blocks to MTF.
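
    Steps 1 and 3 above boil down to expressing ISO elements as OWL classes and then extending them in a profile ontology. The fragment below sketches that pattern with the rdflib library, which is an assumption of this illustration (the authors do not name their tooling), and uses placeholder namespace URIs.

        from rdflib import Graph, Namespace
        from rdflib.namespace import OWL, RDF, RDFS

        # Placeholder namespaces; the project's actual ontology URIs are not given here.
        ISO = Namespace("http://example.org/iso19115#")
        CUAHSI = Namespace("http://example.org/cuahsi-profile#")

        g = Graph()
        g.bind("owl", OWL)
        g.bind("iso", ISO)
        g.bind("cuahsi", CUAHSI)

        # Step 1/3 in miniature: model an ISO 19115 element as an OWL class, then
        # declare a profile class that extends (specialises) it.
        g.add((ISO.MD_Metadata, RDF.type, OWL.Class))
        g.add((CUAHSI.HydrologicMetadata, RDF.type, OWL.Class))
        g.add((CUAHSI.HydrologicMetadata, RDFS.subClassOf, ISO.MD_Metadata))

        print(g.serialize(format="turtle"))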

  8. An implementation of geospatial semantic catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Xu; Zhu, Xinyan; Du, Daosheng; Liu, Tingting

    2008-12-01

    Along with the development of earth observation technology, large amounts of geospatial information have become accessible. A lot of geospatial data and services are also shared on the Internet. However, they vary in format and are stored at various organizations, leading to problems of data discovery, data interoperability and usability. The Open Geospatial Consortium (OGC) has developed a standard service, the catalogue service, to overcome this problem. The goal of a geospatial catalogue is to support a wide range of users in discovering relevant geographic data and services from heterogeneous and distributed repositories. However, in most geospatial catalogue services the search functionality is limited to direct matching of keywords against metadata, so OGC catalogues may not return useful results when the keywords used do not match the meta-information stored in the catalogues. In this paper, we propose a geospatial semantic catalogue service that aims at overcoming this limitation.
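
    A toy illustration of the limitation and the proposed remedy: plain keyword matching misses records whose metadata use synonyms, while a small thesaurus-based expansion (a very rough stand-in for the semantic reasoning proposed in the paper) recovers them. All records and terms below are invented.

        records = [
            {"id": 1, "title": "Landsat scenes of river discharge"},
            {"id": 2, "title": "Streamflow measurements, Yangtze basin"},
            {"id": 3, "title": "Road network of Wuhan"},
        ]

        # Tiny thesaurus mapping a query term to its synonyms.
        thesaurus = {"streamflow": {"streamflow", "river discharge", "runoff"}}

        def keyword_search(query, recs):
            """Direct keyword match against the metadata title."""
            return [r for r in recs if query.lower() in r["title"].lower()]

        def semantic_search(query, recs):
            """Expand the query with synonyms before matching."""
            terms = thesaurus.get(query.lower(), {query.lower()})
            return [r for r in recs if any(t in r["title"].lower() for t in terms)]

        print(keyword_search("streamflow", records))   # finds only record 2
        print(semantic_search("streamflow", records))  # finds records 1 and 2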

  9. Networking environmental metadata: a pilot project for the Mediterranean Region

    NASA Astrophysics Data System (ADS)

    Bonora, N.; Benito, M.; Abou El-Magd, I.; Mazzetti, P.; Ndong, C.

    2012-04-01

    To better exploit any environmental dataset it is necessary to provide detailed information (metadata) capable of furnishing the best data description. Operating environmental data and information networking requires the long-term investment of financial and human resources. As these resources are scarce, ensuring sustainability can be a struggle. To use human and economic resources more effectively and to avoid duplication, it is therefore essential to test existing models and, where appropriate, replicate strategies and experiences. For the above reasons, a pilot project has been planned to implement and test the networking of metadata catalogues, involving countries of the Mediterranean Region, to demonstrate that the adoption of open source and free software and international interoperability standards can contribute to the alignment of ICT resources to achieve environmental information sharing. This pilot, planned in the frame of the EGIDA FP7 European Project, aims to support the implementation of a replication methodology for the establishment of national/regional environmental information nodes on the basis of the System of Systems architecture concept, to support the exchange of environmental information in the frame of the Barcelona Convention and to initiate a Mediterranean-scale joint contribution to GEOSS focusing on partnership, infrastructures and products. To establish the partnership and to conduct interoperability tests, this pilot project builds on the Info-RAC (Information and Communication Activity Centre of the United Nations Environment Programme - Mediterranean Action Plan) and GEO (Group on Earth Observations) networks.

  10. Metadata Leadership

    ERIC Educational Resources Information Center

    Tennant, Roy

    2004-01-01

    Libraries must increasingly accommodate bibliographic records encoded with a variety of standards and emerging standards, including Dublin Core, MODS, and VRA Core. The problem is that many libraries still rely solely on MARC and AACR2. The best-trained professionals to lead librarians through the metadata maze are catalogers. Catalogers…

  11. ESO Catalogue Facility Design and Performance

    NASA Astrophysics Data System (ADS)

    Moins, C.; Retzlaff, J.; Arnaboldi, M.; Zampieri, S.; Delmotte, N.; Forchí, V.; Klein Gebbinck, M.; Lockhart, J.; Micol, A.; Vera Sequeiros, I.; Bierwirth, T.; Peron, M.; Romaniello, M.; Suchar, D.

    2013-10-01

    The ESO Phase 3 Catalogue Facility provides investigators with the possibility to ingest catalogues resulting from ESO public surveys and large programs and to query and download their content according to positional and non-positional criteria. It relies on a chain of tools that covers the complete workflow from submission to validation and ingestion into the ESO archive and catalogue repository, and on a web application to browse and query catalogues. This repository consists of two components. One is a Sybase ASE relational database where catalogue metadata are stored. The second one is a Sybase IQ data warehouse where the content of each catalogue is ingested into a specific table that returns all records matching a user's query. Spatial indexing has been implemented in Sybase IQ to speed up positional queries and relies on the Spherical Geometry Toolkit from Johns Hopkins University, which implements the Hierarchical Triangular Mesh (HTM) algorithm. It is based on a recursive decomposition of the celestial sphere into spherical triangles and the assignment of an index to each of them. It has been complemented with the use of optimized indexes on the non-positional columns that are likely to be frequently used as query constraints. First tests performed on catalogues such as 2MASS have confirmed that this approach provides a very good level of performance and a smooth user experience that are likely to facilitate the scientific exploitation of catalogues.
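
    For intuition about the HTM scheme mentioned above: the celestial sphere is first split into 8 spherical triangles and each recursion level divides every triangle into 4, so depth d yields 8 * 4^d trixels. The sketch below merely tabulates that growth; the depths chosen and the helper names are illustrative.

        def trixel_count(depth):
            """Number of HTM trixels at a given subdivision depth."""
            return 8 * 4 ** depth

        def mean_trixel_area_deg2(depth):
            """Average trixel area in square degrees (whole sky is ~41253 deg^2)."""
            return 41252.96 / trixel_count(depth)

        for depth in (0, 6, 12, 20):
            print(f"depth {depth:2d}: {trixel_count(depth):16d} trixels, "
                  f"~{mean_trixel_area_deg2(depth):.3e} deg^2 each")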

  12. Catalogues of planetary nebulae.

    NASA Astrophysics Data System (ADS)

    Acker, A.

    Firstly, the general requirements concerning catalogues are studied for planetary nebulae (PN), in particular the objects to be included in a catalogue of PN and their designations, followed by reflections on the afterlife and computerized versions of a catalogue. Then, the basic elements constituting a catalogue of PN are analyzed, and the available data are examined in each case.

  13. Mining the Metadata Quarries.

    ERIC Educational Resources Information Center

    Sutton, Stuart A., Ed.; Guenther, Rebecca; McCallum, Sally; Greenberg, Jane; Tennis, Joseph T.; Jun, Wang

    2003-01-01

    This special section of the "Bulletin" includes an introduction and the following articles: "New Metadata Standards for Digital Resources: MODS (Metadata Object and Description Schema) and METS (Metadata Encoding and Transmission Standard)"; "Metadata Generation: Processes, People and Tools"; "Data Collection for Controlled Vocabulary…

  14. Document Classification in Support of Automated Metadata Extraction from Heterogeneous Collections

    ERIC Educational Resources Information Center

    Flynn, Paul K.

    2014-01-01

    A number of federal agencies, universities, laboratories, and companies are placing their documents online and making them searchable via metadata fields such as author, title, and publishing organization. To enable this, every document in the collection must be catalogued using the metadata fields. Though time consuming, the task of identifying…

  15. Interpreting the ASTM 'content standard for digital geospatial metadata'

    USGS Publications Warehouse

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  16. Users and Union Catalogues

    ERIC Educational Resources Information Center

    Hartley, R. J.; Booth, Helen

    2006-01-01

    Union catalogues have had an important place in libraries for many years. Their use has been little investigated. Recent interest in the relative merits of physical and virtual union catalogues and a recent collaborative project between a physical and several virtual union catalogues in the United Kingdom led to the opportunity to study how users…

  17. Predicting structured metadata from unstructured metadata

    PubMed Central

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data or data about the data—defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving the quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation (LDA) model to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms can be most accurately predicted using the TF-IDF approach, followed by LDA, with both outperforming the majority-vote baseline. While some accuracy is lost through the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. Database URL: http://www.yeastgenome.org/
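
    A hedged scikit-learn sketch of the two feature pipelines described above (TF-IDF features feeding a linear classifier, and LDA topics as a lower-dimensional alternative); the toy GEO-like records, the choice of LogisticRegression and all parameter values are assumptions made for this example, not the authors' exact setup.

        from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Invented free-text sample descriptions and one structured field to predict.
        free_text = [
            "total RNA extracted from mouse liver, Affymetrix array",
            "RNA from human blood samples, expression profiling",
            "genomic DNA, human cell line, methylation study",
            "mouse liver tissue, RNA-seq expression profiling",
        ]
        structured_label = ["RNA", "RNA", "DNA", "RNA"]  # e.g. a "molecule" element

        # Pipeline 1: TF-IDF features -> logistic regression.
        tfidf_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        tfidf_clf.fit(free_text, structured_label)

        # Pipeline 2: LDA topics as reduced features -> logistic regression.
        lda_clf = make_pipeline(
            CountVectorizer(),
            LatentDirichletAllocation(n_components=2, random_state=0),
            LogisticRegression(max_iter=1000),
        )
        lda_clf.fit(free_text, structured_label)

        query = ["total RNA from human liver tissue"]
        print(tfidf_clf.predict(query), lda_clf.predict(query))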

  18. Map Metadata: Essential Elements for Search and Storage

    ERIC Educational Resources Information Center

    Beamer, Ashley

    2009-01-01

    Purpose: The purpose of this paper is to develop an understanding of the issues surrounding the cataloguing of maps in archives and libraries. An investigation into appropriate metadata formats, such as MARC21, EAD and Dublin Core with RDF, shows how particular map data can be stored. Mathematical map elements, specifically co-ordinates, are…

  19. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards—the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkits are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. This service makes it easy to share and

  20. A Pan-European and Cross-Discipline Metadata Portal

    NASA Astrophysics Data System (ADS)

    Widmann, Heinrich; Thiemann, Hannes; Lautenschlager, Michael

    2014-05-01

    In recent years, significant investments have been made to create a pan-European e-infrastructure supporting multiple and diverse research communities. This led to the establishment of the community-driven European Data Infrastructure (EUDAT) project that implements services to tackle the specific challenges of international and interdisciplinary research data management. The EUDAT metadata service B2FIND plays a central role in this context as a repository and a search portal for the diverse metadata collected from heterogeneous sources. For this we built up a comprehensive joint metadata catalogue and an open data portal and offer support for new communities interested in publishing their data within EUDAT. The implemented metadata ingestion workflow consists of three steps. First, the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards, the raw metadata records are converted and mapped to unified key-value dictionaries. The semantic mapping of the non-uniform, community-specific metadata to homogeneously structured datasets is the most subtle and challenging task here. Finally, the mapped records are uploaded as datasets to the catalogue and displayed in the portal. The homogenisation of the different community-specific data models and vocabularies enables not only the unique presentation of these datasets as tables of field-value pairs but also faceted, spatial and temporal search in the B2FIND metadata portal. Furthermore, the service provides transparent access to the scientific data objects through the given references in the metadata. We present here the functionality and the features of the B2FIND service and give an outlook of further developments.
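
    A schematic sketch (not the actual B2FIND code) of the middle step described above: mapping community-specific metadata records onto one unified key-value dictionary so that records from different sources can be faceted and searched together. Community names and field mappings are invented.

        # Per-community mapping from unified keys to the community's own field names.
        COMMUNITY_MAPPINGS = {
            "community_a": {"title": "docTitle", "creator": "authorName", "date": "created"},
            "community_b": {"title": "name", "creator": "PI", "date": "temporal_start"},
        }

        def map_record(raw_record, community):
            """Convert a harvested raw record into the unified key-value dictionary."""
            mapping = COMMUNITY_MAPPINGS[community]
            return {unified_key: raw_record.get(source_key)
                    for unified_key, source_key in mapping.items()}

        raw_a = {"docTitle": "Soil moisture 2010-2013", "authorName": "A. Smith", "created": "2014"}
        raw_b = {"name": "Ozone profiles", "PI": "B. Jones", "temporal_start": "2012-01-01"}

        print(map_record(raw_a, "community_a"))
        print(map_record(raw_b, "community_b"))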

  1. Grid Enabled Geospatial Catalogue Web Service

    NASA Technical Reports Server (NTRS)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC)'s Catalogue Service - Web Information Model, this paper proposes a new information model for the Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to the replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and refers to the geospatial data metadata standards from ISO 19115, FGDC and NASA EOS Core System and service metadata standards from ISO 19119 to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, especially query on-demand data in the virtual community and get it back through the data-related services which provide functions such as subsetting, reformatting, reprojection etc. This work facilitates geospatial resource sharing and interoperation under the Grid environment, and makes geospatial resources Grid enabled and Grid technologies geospatially enabled. It also allows researchers to focus on science, and not on issues with computing capability, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.

  2. A standard for measuring metadata quality in spectral libraries

    NASA Astrophysics Data System (ADS)

    Rasaiah, B.; Jones, S. D.; Bellman, C.

    2013-12-01

    There is an urgent need within the international remote sensing community to establish a metadata standard for field spectroscopy that ensures high-quality, interoperable metadata sets that can be archived and shared efficiently within Earth observation data sharing systems. Metadata are an important component in the cataloguing and analysis of in situ spectroscopy datasets because of their central role in identifying and quantifying the quality and reliability of spectral data and the products derived from them. This paper presents approaches to measuring metadata completeness and quality in spectral libraries to determine the reliability, interoperability, and re-usability of a dataset. Explored are quality parameters that meet the unique requirements of in situ spectroscopy datasets across many campaigns. Examined are the challenges presented by ensuring that data creators, owners, and data users maintain a high level of data integrity throughout the lifecycle of a dataset. Issues such as field measurement methods, instrument calibration, and data representativeness are investigated. The proposed metadata standard incorporates expert recommendations that include metadata protocols critical to all campaigns, and those that are restricted to campaigns for specific target measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. Approaches towards an operational and logistically viable implementation of a quality standard are discussed. This paper also proposes a way forward for adapting and enhancing current geospatial metadata standards to the unique requirements of field spectroscopy metadata quality.
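
    One of the simplest quality parameters discussed above is completeness, i.e. the fraction of required metadata fields that are actually populated in a record. The sketch below computes such a score; the required field names are hypothetical, not the proposed standard's actual element list.

        # Hypothetical required metadata fields for a field-spectroscopy record.
        REQUIRED_FIELDS = [
            "instrument", "calibration_date", "illumination_source",
            "target_description", "measurement_units", "operator",
        ]

        def completeness(record):
            """Fraction of required fields that are present and non-empty."""
            filled = sum(1 for field in REQUIRED_FIELDS if record.get(field))
            return filled / len(REQUIRED_FIELDS)

        record = {
            "instrument": "ASD FieldSpec 3",
            "calibration_date": "2013-05-02",
            "target_description": "dry grass canopy",
            "measurement_units": "reflectance",
        }
        print(f"completeness: {completeness(record):.0%}")  # 4 of 6 fields -> 67%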

  3. Metadata for Web Resources: How Metadata Works on the Web.

    ERIC Educational Resources Information Center

    Dillon, Martin

    This paper discusses bibliographic control of knowledge resources on the World Wide Web. The first section sets the context of the inquiry. The second section covers the following topics related to metadata: (1) definitions of metadata, including metadata as tags and as descriptors; (2) metadata on the Web, including general metadata systems,…

  4. Academic Libraries and the Semantic Web: What the Future May Hold for Research-Supporting Library Catalogues

    ERIC Educational Resources Information Center

    Campbell, D. Grant; Fast, Karl V.

    2004-01-01

    This paper examines how future metadata capabilities could enable academic libraries to exploit information on the emerging Semantic Web in their library catalogues. Whereas current metadata architectures treat the Web as a simple means of interchanging bibliographic data that have been created by libraries, this paper suggests that academic…

  5. Metadata Activities in Biology

    SciTech Connect

    Inigo, Gil San; HUTCHISON, VIVIAN; Frame, Mike; Palanisamy, Giri

    2010-01-01

    The National Biological Information Infrastructure program has advanced the biological sciences' ability to standardize, share, integrate and synthesize data by making the metadata program a core of its activities. Through strategic partnerships, a series of crosswalks for the main biological metadata specifications has enabled data providers and international clearinghouses to aggregate and disseminate tens of thousands of metadata sets describing petabytes of data records. New efforts at the National Biological Information Infrastructure are focusing on better metadata creation and curation tools, semantic mediation for data discovery and other curious initiatives.
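
    A minimal sketch of what a metadata crosswalk of the kind mentioned above does: a field mapping that rewrites a record from one specification into another. The Darwin Core and Dublin Core field names used here are real, but the mapping itself is illustrative rather than one of the NBII crosswalks.

        # Illustrative crosswalk: Darwin Core occurrence fields -> Dublin Core elements.
        DARWIN_TO_DUBLIN = {
            "scientificName": "subject",
            "recordedBy": "creator",
            "eventDate": "date",
            "locality": "coverage",
        }

        def crosswalk(record, mapping):
            """Rewrite a record's field names according to the mapping, keeping values."""
            return {target: record[source]
                    for source, target in mapping.items() if source in record}

        darwin_record = {
            "scientificName": "Salmo trutta",
            "recordedBy": "J. Doe",
            "eventDate": "2009-06-15",
            "locality": "Lake Superior",
        }
        print(crosswalk(darwin_record, DARWIN_TO_DUBLIN))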

  6. USGIN ISO metadata profile

    NASA Astrophysics Data System (ADS)

    Richard, S. M.

    2011-12-01

    The USGIN project has drafted and is using a specification for the use of ISO 19115/19119/19139 metadata, recommendations for simple metadata content, and a proposal for a URI scheme to identify resources using resolvable HTTP URIs (see http://lab.usgin.org/usgin-profiles). The principal target use case is a catalog in which resources can be registered and described by data providers for discovery by users. We are currently using the ESRI Geoportal (Open Source), with configuration files for the USGIN profile. The metadata offered by the catalog must provide sufficient content to guide search engines to locate requested resources, to describe the resource content, provenance, and quality so users can determine if the resource will serve for the intended usage, and finally to enable human users and software clients to obtain or access the resource. In order to achieve an operational federated catalog system, provisions in the ISO specification must be restricted and usage clarified to reduce the heterogeneity of 'standard' metadata and service implementations such that a single client can search against different catalogs, and the metadata returned by catalogs can be parsed reliably to locate required information. Usage of the complex ISO 19139 XML schema allows for a great deal of structured metadata content, but the heterogeneity in approaches to content encoding has hampered development of sophisticated client software that can take advantage of the rich metadata; the lack of such clients in turn reduces motivation for metadata producers to produce content-rich metadata. If the only significant use of the detailed, structured metadata is to format into text for people to read, then the detailed information could be put in free text elements and be just as useful. In order for complex metadata encoding and content to be useful, there must be clear and unambiguous conventions on the encoding that are utilized by the community that wishes to take advantage of advanced metadata

  7. Metadata management staging system

    SciTech Connect

    2013-08-01

    Django application providing a user interface for building a file and metadata management system. An evolution of our Node.js and CouchDB metadata management system. This one focuses on server functionality and uses a well-documented, rational and RESTful API for data access.

  8. Visualization of JPEG Metadata

    NASA Astrophysics Data System (ADS)

    Malik Mohamad, Kamaruddin; Deris, Mustafa Mat

    There is a lot more information embedded in a JPEG image than just the graphics. Visualization of its metadata would benefit digital forensic investigators by letting them view embedded data, including in corrupted images where no graphics can be displayed, in order to assist evidence collection for cases such as child pornography or steganography. There are already tools available such as metadata readers, editors and extraction tools, but they mostly focus on visualizing the attribute information of JPEG Exif. However, none consolidates a markers summary, the header structure, the Huffman table and the quantization table in a single program. In this paper, metadata visualization is done by developing a program that is able to summarize all existing markers, the header structure, the Huffman table and the quantization table in a JPEG. The result shows that visualization of metadata helps in viewing the hidden information within a JPEG more easily.
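
    A small sketch of the marker-summary part of such a visualization: walking the JPEG segment structure and reporting each marker with its length, which is enough to locate the quantization (DQT) and Huffman (DHT) tables. The toy byte string at the end stands in for a real file and is not a complete JPEG.

        import struct

        MARKER_NAMES = {0xD8: "SOI", 0xE0: "APP0", 0xE1: "APP1/Exif", 0xDB: "DQT",
                        0xC0: "SOF0", 0xC4: "DHT", 0xDA: "SOS", 0xD9: "EOI"}

        def list_markers(data):
            """Print offset, marker name and segment length for each JPEG marker."""
            offset = 0
            while offset < len(data) - 1:
                if data[offset] != 0xFF:
                    raise ValueError(f"expected a marker at offset {offset}")
                marker = data[offset + 1]
                name = MARKER_NAMES.get(marker, f"0x{marker:02X}")
                if marker in (0xD8, 0xD9):          # SOI/EOI carry no length field
                    print(f"{offset:6d}  {name}")
                    offset += 2
                    continue
                (length,) = struct.unpack(">H", data[offset + 2:offset + 4])
                print(f"{offset:6d}  {name:10s} length={length}")
                if marker == 0xDA:                  # entropy-coded data follows SOS
                    break
                offset += 2 + length

        # Tiny hand-built byte string with the marker layout of a JPEG header
        # (SOI, a 4-byte DQT stub, EOI); real files would be read with open(path, "rb").
        toy = bytes([0xFF, 0xD8]) + bytes([0xFF, 0xDB, 0x00, 0x04, 0x00, 0x01]) + bytes([0xFF, 0xD9])
        list_markers(toy)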

  9. Multi-facetted Metadata - Describing datasets with different metadata schemas at the same time

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Klump, Jens; Bertelmann, Roland

    2013-04-01

    Inspired by the wish to re-use research data, a lot of work is being done to bring the data systems of the earth sciences together. Discovery metadata is disseminated to data portals to allow the building of customized indexes of catalogued dataset items. Data that were once acquired in the context of a scientific project are open for reappraisal and can now be used by scientists who were not part of the original research team. To make data re-use easier, measurement methods and measurement parameters must be documented in an application metadata schema and described in a written publication. Linking datasets to publications - as DataCite [1] does - again requires a specific metadata schema, and every new use context of the measured data may require yet another metadata schema sharing only a subset of information with the meta-information already present. To cope with the problem of metadata schema diversity in our common data repository at GFZ Potsdam, we established a solution to store file-based research data and describe these with an arbitrary number of metadata schemas. The core component of the data repository is an eSciDoc infrastructure that provides versioned container objects, called eSciDoc [2] "items". The eSciDoc content model allows assigning files to "items" and adding any number of metadata records to these "items". The eSciDoc items can be submitted, revised, and finally published, which makes the data and metadata available through the internet worldwide. GFZ Potsdam uses eSciDoc to support its scientific publishing workflow, including mechanisms for data review in peer review processes by providing temporary web links for external reviewers that do not have credentials to access the data. Based on the eSciDoc API, panMetaDocs [3] provides a web portal for data management in research projects. PanMetaDocs, which is based on panMetaWorks [4], is a PHP-based web application that allows data to be described with any XML-based schema. It uses the eSciDoc infrastructures

  10. No More Metadata!

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2014-05-01

    For well-known, technologically motivated reasons, communities have developed the distinction between data and metadata. Mainly this was because data were too big to analyze, and often too complex as well. Therefore, metadata were established as a kind of summary which allows browsing and search, albeit only on the criteria preselected by the metadata provider. The result is that metadata are considered smart, queryable, and agile whereas the underlying data typically are seen as big, difficult to understand and interpret, and unavailable for analysis. Common sense has it that in general data should be touched upon only once a meaningful focusing and downsizing of the topical dataset has been achieved through elaborate metadata retrieval. With the advent of Big Data technology we are in a position to overcome this age-old digital divide. Utilizing NewSQL concepts, query techniques go beyond the classical set paradigm and can also handle large graphs and arrays. Access and retrieval can be accomplished on a high semantic level. In our presentation we show, using the example of array data, how the data/metadata divide can be effectively eliminated today. We do so with queries combining metadata and ground-truth data retrieval, expressed in SQL and XQuery.

  11. Data, Metadata - Who Cares?

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

    There is a traditional saying that metadata are understandable, semantic-rich, and searchable. Data, on the other hand, are big, with no accessible semantics, and just downloadable. Not only has this led to an imbalance of search support from a user perspective, but also, underneath, to a deep technology divide, often using relational databases for metadata and bespoke archive solutions for data. Our vision is that this barrier will be overcome, and data and metadata will become searchable alike, leveraging the potential of semantic technologies in combination with scalability technologies. Ultimately, in this vision, ad-hoc processing and filtering will no longer be distinguished, forming a uniformly accessible data universe. In the European EarthServer initiative, we work towards this vision by federating database-style raster query languages with metadata search and geo broker technology. We present the approach taken, how it can leverage OGC standards, the benefits envisaged, and first results.

  12. ATLAS Metadata Task Force

    SciTech Connect

    ATLAS Collaboration; Costanzo, D.; Cranshaw, J.; Gadomski, S.; Jezequel, S.; Klimentov, A.; Lehmann Miotto, G.; Malon, D.; Mornacchi, G.; Nemethy, P.; Pauly, T.; von der Schmitt, H.; Barberis, D.; Gianotti, F.; Hinchliffe, I.; Mapelli, L.; Quarrie, D.; Stapnes, S.

    2007-04-04

    This document provides an overview of the metadata, which are needed to characterize ATLAS event data at different levels (a complete run, data streams within a run, luminosity blocks within a run, individual events).

  13. Metadata, PICS and Quality.

    ERIC Educational Resources Information Center

    Armstrong, C. J.

    1997-01-01

    Discusses PICS (Platform for Internet Content Selection), the Centre for Information Quality Management (CIQM), and metadata. Highlights include filtering networked information; the quality of information; and standardizing search engines. (LRW)

  14. Discovering Physical Samples Through Identifiers, Metadata, and Brokering

    NASA Astrophysics Data System (ADS)

    Arctur, D. K.; Hills, D. J.; Jenkyns, R.

    2015-12-01

    Physical samples, particularly in the geosciences, are key to understanding the Earth system, its history, and its evolution. Our record of the Earth as captured by physical samples is difficult to explain and mine for understanding, due to incomplete, disconnected, and evolving metadata content. This is further complicated by differing ways of classifying, cataloguing, publishing, and searching the metadata, especially when specimens do not fit neatly into a single domain—for example, fossils cross disciplinary boundaries (mineral and biological). Sometimes even the fundamental classification systems evolve, such as the geological time scale, triggering daunting processes to update existing specimen databases. Increasingly, we need to consider ways of leveraging permanent, unique identifiers, as well as advancements in metadata publishing that link digital records with physical samples in a robust, adaptive way. An NSF EarthCube Research Coordination Network (RCN) called the Internet of Samples (iSamples) is now working to bridge the metadata schemas for biological and geological domains. We are leveraging the International Geo Sample Number (IGSN) that provides a versatile system of registering physical samples, and working to harmonize this with the DataCite schema for Digital Object Identifiers (DOI). A brokering approach for linking disparate catalogues and classification systems could help scale discovery and access to the many large collections now being managed (sometimes millions of specimens per collection). This presentation is about our community building efforts, research directions, and insights to date.

  15. Catalogue of Icelandic Volcanoes

    NASA Astrophysics Data System (ADS)

    Ilyinskaya, Evgenia; Larsen, Gudrun; Gudmundsson, Magnus T.; Vogfjord, Kristin; Pagneux, Emmanuel; Oddsson, Bjorn; Barsotti, Sara; Karlsdottir, Sigrun

    2016-04-01

    The Catalogue of Icelandic Volcanoes is a newly developed open-access web resource in English intended to serve as an official source of information about active volcanoes in Iceland and their characteristics. The Catalogue forms a part of an integrated volcanic risk assessment project in Iceland GOSVÁ (commenced in 2012), as well as being part of the effort of FUTUREVOLC (2012-2016) on establishing an Icelandic volcano supersite. Volcanic activity in Iceland occurs on volcanic systems that usually comprise a central volcano and fissure swarm. Over 30 systems have been active during the Holocene (the time since the end of the last glaciation - approximately the last 11,500 years). In the last 50 years, over 20 eruptions have occurred in Iceland displaying very varied activity in terms of eruption styles, eruptive environments, eruptive products and the distribution of lava and tephra. Although basaltic eruptions are most common, the majority of eruptions are explosive, not the least due to magma-water interaction in ice-covered volcanoes. Extensive research has taken place on Icelandic volcanism, and the results reported in numerous scientific papers and other publications. In 2010, the International Civil Aviation Organisation (ICAO) funded a 3 year project to collate the current state of knowledge and create a comprehensive catalogue readily available to decision makers, stakeholders and the general public. The work on the Catalogue began in 2011, and was then further supported by the Icelandic government and the EU through the FP7 project FUTUREVOLC. The Catalogue of Icelandic Volcanoes is a collaboration of the Icelandic Meteorological Office (the state volcano observatory), the Institute of Earth Sciences at the University of Iceland, and the Civil Protection Department of the National Commissioner of the Iceland Police, with contributions from a large number of specialists in Iceland and elsewhere. The Catalogue is built up of chapters with texts and various

  16. Mercury Metadata Toolset

    SciTech Connect

    2009-09-08

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. A major new version of Mercury (version 3.0) was developed during 2007 and released in early 2008. This Mercury 3.0 version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS delivery of search results, and ready customization to meet the needs of the multiple projects which use Mercury. For the end users, Mercury provides a single portal to very quickly search for data and information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data.
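
    A conceptual sketch of the harvest-then-index pattern described above: metadata from several providers are pulled into one in-memory index, which then answers simple fielded and temporal queries locally. Providers, records and field names are made up; Mercury's real index is of course far more capable.

        # Metadata "harvested" from two hypothetical providers.
        harvested = {
            "provider_a": [{"title": "Arctic soil carbon", "year": 2006, "keywords": ["soil"]}],
            "provider_b": [{"title": "Tropical forest biomass", "year": 2008, "keywords": ["biomass"]}],
        }

        # Build the centralized index: a flat list that remembers each record's origin.
        index = [dict(record, provider=provider)
                 for provider, records in harvested.items()
                 for record in records]

        def fielded_search(field, value, after_year=None):
            """Simple fielded search over the centralized index, with an optional temporal filter."""
            hits = [r for r in index if value.lower() in str(r.get(field, "")).lower()]
            if after_year is not None:
                hits = [r for r in hits if r["year"] >= after_year]
            return hits

        print(fielded_search("title", "forest", after_year=2007))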

  18. Marine Science Film Catalogue.

    ERIC Educational Resources Information Center

    Chapman, Frank L.

    Forty-eight motion picture films and filmstrips in the field of marine science are catalogued in this booklet. Following the alphabetical index, one page is devoted to each film indicating its type, producer, recommended grade level, running time, and presence of color and/or sound. A summary of film content, possible uses, and outstanding…

  19. Revealing Library Collections: NSLA Re-Imagining Libraries Project 8--Description and Cataloguing

    ERIC Educational Resources Information Center

    Gatenby, Pam

    2010-01-01

    One of the most exciting developments to occur within the library profession in recent years has been the re-evaluation of the library's role in resource discovery in the web environment. This has involved recognition of the shortcomings of online library catalogues, a re-affirmation of the importance of metadata and a new focus on the wealth of…

  20. Standard Asteroid Photometric Catalogue

    NASA Astrophysics Data System (ADS)

    Piironen, J.; Lagerkvist, C.-I.; Torppa, J.; Kaasalainen, M.; Warner, B.

    2001-12-01

    The Asteroid Photometric Catalogue (APC) is now in its fifth update with over 8600 lightcurves of more than 1000 asteroids in the database. The APC also has references of over one thousand lightcurves not in digital format. The catalogue has been published by Uppsala University Observatory and is distributed by request (contact: classe@astro.uu.se). The new update also includes a list of known asteroid rotational periods and a CD-ROM containing all the existing digital data in the APC. The total number of observed lightcurves is growing rapidly, not the least because of the new state-of-the-art equipment and growing interest among amateur astronomers. The photometric database is now so large that the present format must be altered to facilitate a user-friendly on-line service for the down- and uploading of data. We are proposing (and have started to construct) a new Internet-based Standard Asteroid Photometric Catalogue (SAPC). The website is planned to open during the first half of the year 2002. In addition to the data files, the site would contain the index and guide to the catalogue, a web-form for reporting observations, and some general observing guidelines (e.g., on filters, timing, etc.). There would also be a list of asteroids for which more observations are needed, together with recommended observing periods. This would be accompanied by an up-to-date collection of physical asteroid models based on photometric data, as well as links to observer network pages and other sites that work in collaboration with the catalogue project. Our aim is to develop this site into a global standard service used by everyone involved in asteroid photometry.

  1. A Metadata Action Language

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Clancy, Dan (Technical Monitor)

    2001-01-01

    The data management problem comprises data processing and data tracking. Data processing is the creation of new data based on existing data sources. Data tracking consists of storing metadata descriptions of available data. This paper addresses the data management problem by casting it as an AI planning problem. Actions are data-processing commands, plans are dataflow programs and goals are metadata descriptions of desired data products. Data manipulation is simply plan generation and execution, and a key component of data tracking is inferring the effects of an observed plan. We introduce a new action language for data management domains, called ADILM. We discuss the connection between data processing and information integration and show how a language for the latter must be modified to support the former. The paper also discusses information gathering within a data-processing framework, and shows how ADILM metadata expressions are a generalization of Local Completeness.

  2. Simplified Metadata Curation via the Metadata Management Tool

    NASA Astrophysics Data System (ADS)

    Shum, D.; Pilone, D.

    2015-12-01

    The Metadata Management Tool (MMT) is the newest capability developed as part of NASA Earth Observing System Data and Information System's (EOSDIS) efforts to simplify metadata creation and improve metadata quality. The MMT was developed via an agile methodology, taking into account inputs from GCMD's science coordinators and other end-users. In its initial release, the MMT uses the Unified Metadata Model for Collections (UMM-C) to allow metadata providers to easily create and update collection records in the ISO-19115 format. Through a simplified UI experience, metadata curators can create and edit collections without full knowledge of the NASA Best Practices implementation of ISO-19115 format, while still generating compliant metadata. More experienced users are also able to access raw metadata to build more complex records as needed. In future releases, the MMT will build upon recent work done in the community to assess metadata quality and compliance with a variety of standards through application of metadata rubrics. The tool will provide users with clear guidance as to how to easily change their metadata in order to improve their quality and compliance. Through these features, the MMT allows data providers to create and maintain compliant and high quality metadata in a short amount of time.

  3. Localisation Standards and Metadata

    NASA Astrophysics Data System (ADS)

    Anastasiou, Dimitra; Vázquez, Lucia Morado

    In this paper we describe a localisation process and focus on localisation standards. Localisation standards provide a common framework for localisers, including authors, translators, engineers, and publishers. Standards with rich semantic metadata generally facilitate, accelerate, and improve the localisation process. We focus particularly on the XML Localisation Interchange File Format (XLIFF), and present our experiment and results: an HTML file, after being converted into XLIFF, travels through different commercial localisation tools and, as a result, data as well as metadata are stripped away. Interoperability between file formats and applications is a key issue for localisation, and thus we stress how this can be achieved.
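
    A hedged sketch of the round-trip check behind the experiment described above: compare which trans-unit level metadata (here, <note> elements) survive after an XLIFF file has passed through a tool. The tiny XLIFF 1.2 snippet and the simulated tool output are fabricated for the example.

        import xml.etree.ElementTree as ET

        XLIFF_NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}

        def unit_metadata(xml_text):
            """Map each trans-unit id to the <note> texts attached to it."""
            root = ET.fromstring(xml_text)
            return {unit.get("id"): [n.text for n in unit.findall("x:note", XLIFF_NS)]
                    for unit in root.iter("{urn:oasis:names:tc:xliff:document:1.2}trans-unit")}

        before = """<xliff xmlns="urn:oasis:names:tc:xliff:document:1.2" version="1.2">
          <file original="page.html" source-language="en" datatype="html"><body>
            <trans-unit id="u1"><source>Hello</source><note>heading, keep short</note></trans-unit>
          </body></file></xliff>"""

        # The same unit as it might come back from a tool that dropped the <note>.
        after = before.replace("<note>heading, keep short</note>", "")

        for unit_id, notes in unit_metadata(before).items():
            if unit_metadata(after).get(unit_id) != notes:
                print(f"metadata lost or changed for trans-unit {unit_id!r}")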

  4. Catalogue of Icelandic volcanoes

    NASA Astrophysics Data System (ADS)

    Ilyinskaya, Evgenia; Larsen, Gudrun; Vogfjörd, Kristin; Tumi Gudmundsson, Magnus; Jonsson, Trausti; Oddsson, Björn; Reynisson, Vidir; Barsotti, Sara; Karlsdottir, Sigrun

    2015-04-01

    Volcanic activity in Iceland occurs on volcanic systems that usually comprise a central volcano and fissure swarm. Over 30 systems have been active during the Holocene. In the last 100 years, over 30 eruptions have occurred displaying very varied activity in terms of eruption styles, eruptive environments, eruptive products and their distribution. Although basaltic eruptions are most common, the majority of eruptions are explosive, not the least due to magma-water interaction in ice-covered volcanoes. Extensive research has taken place on Icelandic volcanism, and the results reported in scientific papers and other publications. In 2010, the International Civil Aviation Organisation funded a 3 year project to collate the current state of knowledge and create a comprehensive catalogue readily available to decision makers, stakeholders and the general public. The work on the Catalogue began in 2011, and was then further supported by the Icelandic government and the EU. The Catalogue forms a part of an integrated volcanic risk assessment project in Iceland (commenced in 2012), and the EU FP7 project FUTUREVOLC (2012-2016), establishing an Icelandic volcano Supersite. The Catalogue is a collaborative effort between the Icelandic Meteorological Office (the state volcano observatory), the Institute of Earth Sciences at the University of Iceland, and the Icelandic Civil Protection, with contributions from a large number of specialists in Iceland and elsewhere. The catalogue is scheduled for opening in the first half of 2015 and once completed, it will be an official publication intended to serve as an accurate and up to date source of information about active volcanoes in Iceland and their characteristics. The Catalogue is an open web resource in English and is composed of individual chapters on each of the volcanic systems. The chapters include information on the geology and structure of the volcano; the eruption history, pattern and products; the known precursory signals

  5. The Year of Metadata.

    ERIC Educational Resources Information Center

    Wason, Tom; Griffin, Steve

    1997-01-01

    Users of the World Wide Web have recognized the need for better search strategies and mechanisms. This article discusses the Educom NLII Instructional Management System (IMS) Project specifications and Java-based software for metadata that will provide information about Web-based materials not currently obtainable with traditional search engines.…

  6. Metadata in Scientific Dialects

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2011-12-01

    Discussions of standards in the scientific community have been compared to religious wars for many years. The only things scientists agree on in these battles are either "standards are not useful" or "everyone can benefit from using my standard". Instead of achieving the goal of facilitating interoperable communities, in many cases the standards have served to build yet another barrier between communities. Some important progress towards diminishing these obstacles has been made in the data layer with the merger of the NetCDF and HDF scientific data formats. The universal adoption of XML as the standard for representing metadata and the recent adoption of ISO metadata standards by many groups around the world suggest that similar convergence is underway in the metadata layer. At the same time, scientists and tools will likely need support for native tongues for some time. I will describe an approach that combines re-usable metadata "components" and RESTful web services that provide those components in many dialects. This approach uses advanced XML concepts of referencing and linking to construct complete records that include reusable components, and builds on the ISO standards as the "unabridged dictionary" that encompasses the content of many other dialects.

  7. Merged infrared catalogue

    NASA Technical Reports Server (NTRS)

    Schmitz, M.; Brown, L. W.; Mead, J. M.; Nagy, T. A.

    1978-01-01

    A compilation of equatorial coordinates, spectral types, magnitudes, and fluxes from five catalogues of infrared observations is presented. This first edition of the Merged Infrared Catalogue contains 11,201 observations from the Two-Micron Sky Survey, Observations of Infrared Radiation from Cool Stars, the Air Force Geophysics Laboratory Four Color Infrared Sky Survey and its Supplemental Catalog, and the Catalog of 10 micron Celestial Objects (HALL). This compilation is a by-product of a computerized infrared data base under development at Goddard Space Flight Center; the objective is to maintain a complete and current record of all infrared observations from 1 micron to 1000 microns of nonsolar system objects. These observations are being placed into a standardized system.

  8. Technology Catalogue. First edition

    SciTech Connect

    Not Available

    1994-02-01

    The Department of Energy's Office of Environmental Restoration and Waste Management (EM) is responsible for remediating its contaminated sites and managing its waste inventory in a safe and efficient manner. EM's Office of Technology Development (OTD) supports applied research and demonstration efforts to develop and transfer innovative, cost-effective technologies to its site clean-up and waste management programs within EM's Office of Environmental Restoration and Office of Waste Management. The purpose of the Technology Catalogue is to provide performance data on OTD-developed technologies to scientists and engineers assessing and recommending technical solutions within the Department's clean-up and waste management programs, as well as to industry, other federal and state agencies, and the academic community. OTD's applied research and demonstration activities are conducted in programs referred to as Integrated Demonstrations (IDs) and Integrated Programs (IPs). The IDs test and evaluate systems, consisting of coupled technologies, at specific sites to address generic problems, such as the sensing, treatment, and disposal of buried waste containers. The IPs support applied research activities in specific application areas, such as in situ remediation, efficient separations processes, and site characterization. The Technology Catalogue is a means for communicating the status of the development of these innovative technologies. The FY93 Technology Catalogue features technologies successfully demonstrated in the field through IDs and sufficiently mature to be used in the near term. Technologies from the following IDs are featured in the FY93 Technology Catalogue: Buried Waste ID (Idaho National Engineering Laboratory, Idaho); Mixed Waste Landfill ID (Sandia National Laboratories, New Mexico); Underground Storage Tank ID (Hanford, Washington); Volatile Organic Compound (VOC) Arid ID (Richland, Washington); and VOC Non-Arid ID (Savannah River Site, South Carolina).

  9. Comparative Study of Metadata Standards and Metadata Repositories

    NASA Astrophysics Data System (ADS)

    Pahuja, Gunjan

    2011-12-01

    A great deal of work is being done in the national and international standards communities to reach consensus on standardizing metadata and on repositories for organizing that metadata. This paper describes several metadata standards and their importance to statistical agencies. Existing repositories based on these standards help to promote interoperability between organizations, systems, and people. Repositories are vehicles for collecting, managing, comparing, reusing, and disseminating the designs, specifications, procedures, and outputs of systems, e.g. statistical surveys.

  10. Searchable solar feature catalogues

    NASA Astrophysics Data System (ADS)

    Zharkova, V. V.; Aboudarham, J.; Zharkov, S.; Ipson, S. S.; Benkhalil, A. K.; Fuller, N.

    The searchable Solar Feature Catalogues (SFCs) are developed from digitized solar images using automated pattern recognition techniques. The techniques were applied to the detection of sunspots, active regions, filaments and line-of-sight magnetic neutral lines in automatically standardized full-disk solar images in Ca II K1, Ca II K3 and Hα lines taken at the Paris-Meudon Observatory, and in white-light images and magnetograms from SOHO/MDI. The results of the automated recognition were verified against manual synoptic maps and available statistical data, which revealed good detection accuracy. Based on the recognized parameters, a structured database of Solar Feature Catalogues was built on a MySQL server for every feature and published with various pre-designed search pages on the Bradford University web site http://www.cyber.brad.ac.uk/egso/SFC/. The SFCs, with nine-year coverage (1996-2004), are to be used for deeper investigation of feature classification and solar activity forecasting.

  11. Partnerships To Mine Unexploited Sources of Metadata.

    ERIC Educational Resources Information Center

    Reynolds, Regina Romano

    This paper discusses the metadata created for other purposes as a potential source of bibliographic data. The first section addresses collecting metadata by means of templates, including the Nordic Metadata Project's Dublin Core Metadata Template. The second section considers potential partnerships for re-purposing metadata for bibliographic use,…

  12. Cytometry metadata in XML

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Stephanie H.

    2016-04-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). CytometryML will serve as a common metadata standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). The datatypes are primarily based on the Flow Cytometry and the Digital Imaging and Communications in Medicine (DICOM) standards. A small section of the code was formatted with standard HTML formatting elements (p, h1, h2, etc.). Results: (1) The part of MIFlowCyt that describes the Experimental Overview, including the specimen, and substantial parts of several other major elements has been implemented as CytometryML XML schemas (www.cytometryml.org). (2) The feasibility of using MIFlowCyt to provide the combination of an overview, table of contents, and/or an index of a scientific paper or a report has been demonstrated. Previously, a sample electronic publication, EPUB, was created that could contain both MIFlowCyt metadata and the binary data. Conclusions: The use of CytometryML technology together with XHTML5 and CSS permits the metadata to be directly formatted and, together with the binary data, stored in an EPUB container. This will facilitate formatting, data mining, presentation, data verification, and inclusion in structured research, clinical, and regulatory documents; it will also demonstrate a publication's adherence to the MIFlowCyt standard, promote interoperability, and allow the textual and numeric data to be published using web technology without any change in composition.

  13. Federating Metadata Catalogs

    NASA Astrophysics Data System (ADS)

    Baru, C.; Lin, K.

    2009-04-01

    The Geosciences Network project (www.geongrid.org) has been developing cyberinfrastructure for data sharing in the Earth Science community based on a service-oriented architecture. The project defines a standard "software stack", which includes a standardized set of software modules and corresponding service interfaces. The system employs Grid certificates for distributed user authentication. The GEON Portal provides online access to these services via a set of portlets. This service-oriented approach has enabled the GEON network to easily expand to new sites and deploy the same infrastructure in new projects. To facilitate interoperation with other distributed geoinformatics environments, service standards are being defined and implemented for catalog services and federated search across distributed catalogs. The need arises because there may be multiple metadata catalogs in a distributed system, for example, for each institution, agency, geographic region, and/or country. Ideally, a geoinformatics user should be able to search across all such catalogs by making a single search request. In this paper, we describe our implementation of such a search capability across federated metadata catalogs in the GEON service-oriented architecture. The GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, thus, can be embedded in any software application. The need for federated catalogs in GEON arises because (i) GEON collaborators at the University of Hyderabad, India have deployed their own catalog, as part of the iGEON-India effort, to register information about local resources for broader access across the network, (ii) GEON collaborators in the GEO Grid (Global Earth Observations Grid) project at AIST, Japan have implemented a catalog for their ASTER data products, and (iii) we have recently deployed a search service to access all data products from the EarthScope project in the US
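
    As an illustration of the federated search capability described above, the following Python sketch fans a single spatial/temporal/keyword query out to several catalog endpoints and merges the responses. The endpoint URLs and query parameters are hypothetical placeholders, not the actual GEON service interface.

      import concurrent.futures
      import requests

      CATALOG_ENDPOINTS = [
          "https://geon.example.org/catalog/search",         # hypothetical GEON catalog
          "https://igeon-india.example.org/catalog/search",   # hypothetical iGEON-India catalog
          "https://geogrid.example.jp/catalog/search",        # hypothetical GEO Grid catalog
      ]

      def search_one(endpoint, bbox, start, end, keywords):
          """Query a single catalog with spatial, temporal, and keyword criteria."""
          params = {
              "bbox": ",".join(map(str, bbox)),   # minx,miny,maxx,maxy
              "start": start,
              "end": end,
              "q": keywords,
          }
          resp = requests.get(endpoint, params=params, timeout=30)
          resp.raise_for_status()
          return resp.json().get("records", [])

      def federated_search(bbox, start, end, keywords):
          """Issue the same query to every registered catalog in parallel and merge."""
          merged = []
          with concurrent.futures.ThreadPoolExecutor() as pool:
              futures = [pool.submit(search_one, ep, bbox, start, end, keywords)
                         for ep in CATALOG_ENDPOINTS]
              for future in concurrent.futures.as_completed(futures):
                  try:
                      merged.extend(future.result())
                  except requests.RequestException:
                      pass  # one unreachable catalog should not fail the whole search
          return merged

      if __name__ == "__main__":
          hits = federated_search(bbox=(-125, 32, -114, 42),
                                  start="2008-01-01", end="2008-12-31",
                                  keywords="seismic")
          print(f"{len(hits)} records found across federated catalogs")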

  14. Technology catalogue. Second edition

    SciTech Connect

    1995-04-01

    The Department of Energy's (DOE's) Office of Environmental Management (EM) is responsible for remediating DOE contaminated sites and managing the DOE waste inventory in a safe and efficient manner. EM's Office of Technology Development (OTD) supports applied research and demonstration efforts to develop and transfer innovative, cost-effective technologies to its site clean-up and waste-management programs within EM. The purpose of the Technology Catalogue is to: (a) provide performance data on OTD-developed technologies to scientists and engineers responsible for preparing Remedial Investigation/Feasibility Studies (RI/FSs) and other compliance documents for the DOE's clean-up and waste-management programs; and (b) identify partnering and commercialization opportunities with industry, other federal and state agencies, and the academic community.

  15. Metadata Realities for Cyberinfrastructure: Data Authors as Metadata Creators

    ERIC Educational Resources Information Center

    Mayernik, Matthew Stephen

    2011-01-01

    As digital data creation technologies become more prevalent, data and metadata management are necessary to make data available, usable, sharable, and storable. Researchers in many scientific settings, however, have little experience or expertise in data and metadata management. In this dissertation, I explore the everyday data and metadata…

  16. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    NASA Astrophysics Data System (ADS)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information is very different from one geo-portal to another, as well as for similar 3D resources within the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at Canadian federal and municipal levels whose metadata resources did not consider 3D models by any definition. Among the remaining 49%, which do refer to 3D models, different definitions of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on the knowledge and current practices in 3D modeling and 3D data acquisition and management, a set of metadata is proposed to increase suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are grouped into three packages: General and Complementary, covering contextual and structural information, and Availability, covering the transition from storage to delivery format. The proposed metadata set is compared with Canadian Geospatial

  17. EUDAT B2FIND : A Cross-Discipline Metadata Service and Discovery Portal

    NASA Astrophysics Data System (ADS)

    Widmann, Heinrich; Thiemann, Hannes

    2016-04-01

    The European Data Infrastructure (EUDAT) project aims at a pan-European environment that supports a variety of research communities and individuals in managing the rising tide of scientific data through advanced data management technologies. This led to the establishment of the community-driven Collaborative Data Infrastructure, which implements common data services and storage resources to tackle the basic requirements and the specific challenges of international and interdisciplinary research data management. The metadata service B2FIND plays a central role in this context by providing a simple and user-friendly discovery portal to find research data collections stored in EUDAT data centers or in other repositories. For this we store the diverse metadata collected from heterogeneous sources in a comprehensive joint metadata catalogue and make them searchable in an open data portal. The implemented metadata ingestion workflow consists of three steps. First the metadata records - provided either by various research communities or via other EUDAT services - are harvested. Afterwards the raw metadata records are converted and mapped to unified key-value dictionaries as specified by the B2FIND schema. The semantic mapping of the non-uniform, community-specific metadata to homogeneously structured datasets is hereby the most subtle and challenging task. To assure and improve the quality of the metadata, this mapping process is accompanied by iterative and intense exchange with the community representatives, usage of controlled vocabularies and community-specific ontologies, and formal and semantic validation. Finally the mapped and checked records are uploaded as datasets to the catalogue, which is based on the open source data portal software CKAN. CKAN provides a rich RESTful JSON API and uses SOLR for dataset indexing, enabling users to query and search the catalogue. The homogenization of the community-specific data models and vocabularies enables not
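
    The following Python sketch illustrates the last two ingestion steps described above under simplifying assumptions: a harvested community record is mapped to a flat key-value dictionary and then created as a dataset through CKAN's action API. The field names and portal URL are illustrative, not the actual B2FIND schema or site.

      import requests

      CKAN_URL = "https://b2find.example.eu"   # hypothetical CKAN portal
      API_KEY = "..."                          # provider API token (placeholder)

      def map_to_unified(community_record):
          """Map a harvested, community-specific record to a unified dictionary."""
          return {
              "name": community_record["identifier"].lower().replace("/", "-"),
              "title": community_record.get("title", "Untitled dataset"),
              "notes": community_record.get("abstract", ""),
              "tags": [{"name": kw} for kw in community_record.get("keywords", [])],
              "extras": [
                  {"key": "Discipline", "value": community_record.get("discipline", "")},
                  {"key": "PublicationYear", "value": community_record.get("year", "")},
              ],
          }

      def upload_to_ckan(dataset):
          """Create the dataset through CKAN's action API and return its id."""
          resp = requests.post(f"{CKAN_URL}/api/3/action/package_create",
                               json=dataset,
                               headers={"Authorization": API_KEY},
                               timeout=30)
          resp.raise_for_status()
          return resp.json()["result"]["id"]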

  18. Java Metadata Facility

    SciTech Connect

    Buttler, D J

    2008-03-06

    The Java Metadata Facility is introduced by Java Specification Request (JSR) 175 [1], and incorporated into the Java language specification [2] in version 1.5 of the language. The specification allows annotations on Java program elements: classes, interfaces, methods, and fields. Annotations give programmers a uniform way to add metadata to program elements that can be used by code checkers, code generators, or other compile-time or runtime components. Annotations are defined by annotation types. These are defined the same way as interfaces, but with the symbol @ preceding the interface keyword. There are additional restrictions on defining annotation types: (1) they cannot be generic; (2) they cannot extend other annotation types or interfaces; (3) methods cannot have any parameters; (4) methods cannot have type parameters; (5) methods cannot throw exceptions; and (6) the return type of a method of an annotation type must be a primitive, a String, a Class, an annotation type, or an array, where the type of the array is restricted to one of the four allowed types. See [2] for additional restrictions and syntax. The methods of an annotation type define the elements that may be used to parameterize the annotation in code. Annotation types may have default values for any of their elements. For example, an annotation that specifies a defect report could give the element defining the defect outcome a default value of "submitted". Annotations may also have zero elements. This could be used to indicate serializability for a class (as opposed to the current Serializable interface).

  19. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, I implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
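
    A minimal Python sketch of the backend idea, assuming the pymongo driver: file-system metadata records are loaded into a per-user MongoDB collection and each queryable attribute is indexed. The connection string and field names are illustrative, not the production schema.

      from pymongo import MongoClient

      client = MongoClient("mongodb://metadata-db.example:27017/")
      db = client["archive_metadata"]

      def ingest_user_metadata(username, records):
          """Store one user's file metadata and index every queryable attribute."""
          coll = db[f"user_{username}"]            # one collection per user
          coll.insert_many(records)
          for field in ("path", "size", "mtime", "owner", "tags"):
              coll.create_index(field)

      # Example records as they might be exported from a GPFS metadata scan
      ingest_user_metadata("alice", [
          {"path": "/archive/alice/run42/output.h5", "size": 1_073_741_824,
           "mtime": "2012-07-01T12:00:00Z", "owner": "alice", "tags": ["run42", "hdf5"]},
      ])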

  20. The New Online Metadata Editor for Generating Structured Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Shrestha, B.; Palanisamy, G.; Hook, L.; Killeffer, T.; Boden, T.; Cook, R. B.; Zolly, L.; Hutchison, V.; Frame, M. T.; Cialella, A. T.; Lazer, K.

    2014-12-01

    Nobody is better suited to "describe" data than the scientist who created it. This "description" of the data is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about the research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [1] [2]. Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. How is OME helping big data centers like the ORNL DAAC? The ORNL DAAC is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers managed by the ESDIS Project. The ORNL DAAC archives data produced by NASA's Terrestrial Ecology Program. The DAAC provides data and information relevant to biogeochemical dynamics, ecological data, and environmental processes, critical for understanding the dynamics relating to the biological components of the Earth's environment. Typically the data produced, archived and analyzed are at a scale of multiple petabytes, which makes data discoverability very challenging. Without proper metadata associated with the data, it is difficult to find the data you are looking for and equally difficult to use and understand the data. OME will allow data centers like the ORNL DAAC to produce meaningful, high quality, standards-based, descriptive information about their data products, in turn helping with the data discoverability and
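
    As a rough illustration of what such an editor produces, the following Python sketch turns form-style entries into a small XML metadata record. The element names and example values are simplified placeholders, not the actual schema emitted by OME.

      import xml.etree.ElementTree as ET

      def build_metadata_xml(entries):
          """Serialize form-style entries as a simple XML metadata record."""
          root = ET.Element("metadata")
          for field in ("title", "project", "parameter", "start_date", "end_date", "location"):
              child = ET.SubElement(root, field)
              child.text = entries.get(field, "")
          return ET.tostring(root, encoding="unicode", xml_declaration=True)

      form = {
          "title": "Soil respiration measurements",
          "project": "Example terrestrial ecology project",   # hypothetical project
          "parameter": "CO2 flux",
          "start_date": "2013-05-01",
          "end_date": "2013-09-30",
          "location": "65.0N, 147.5W",
      }
      print(build_metadata_xml(form))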

  1. Mercury Toolset for Spatiotemporal Metadata

    NASA Technical Reports Server (NTRS)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  2. Mercury Toolset for Spatiotemporal Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
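
    The harvest-and-index pattern described above can be sketched in a few lines of Python: metadata records are pulled from each contributing provider and a centralized inverted index is built so that searches never touch the providers directly. The provider URLs and record fields are hypothetical, and the toy index stands in for Mercury's real indexing engine.

      import collections
      import requests

      PROVIDERS = [
          "https://provider-a.example.org/metadata.json",   # hypothetical provider endpoints
          "https://provider-b.example.org/metadata.json",
      ]

      def harvest():
          """Fetch metadata records from every contributing provider."""
          records = []
          for url in PROVIDERS:
              resp = requests.get(url, timeout=30)
              resp.raise_for_status()
              records.extend(resp.json())
          return records

      def build_index(records):
          """Build a simple centralized inverted index over title and abstract words."""
          index = collections.defaultdict(set)
          for i, rec in enumerate(records):
              text = f"{rec.get('title', '')} {rec.get('abstract', '')}".lower()
              for word in text.split():
                  index[word].add(i)
          return index

      def search(index, records, term):
          """Answer a keyword query from the local index, without contacting providers."""
          return [records[i] for i in index.get(term.lower(), set())]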

  3. Metadata Objects for Linking the Environmental Sciences (MOLES)

    NASA Astrophysics Data System (ADS)

    Lawrence, B.; Cox, S.; Ventouras, S.

    2009-04-01

    MOLES is an information model that provides a framework to support interdisciplinary contextual metadata describing instruments, observation platforms, activities, calibrations and other aspects of the environment associated with observations and simulations. MOLES has been designed as a bridge between discovery metadata - the conventional stuff of catalogues - and the sort of metadata which scientists traditionally store alongside data within files (and more rarely, databases) - "header files" and the like. MOLES can also be thought of as both a metadata structure in its own right and a framework for describing and recording the relationships between aspects of the context described in other metadata formats (such as SensorML and the upcoming Metafor Common Information Model). MOLES was originally conceived during the first NERC DataGrid project, in 2002, and is now at V3 in 2009. V3 differs from previous versions in many significant ways: 1) it has been designed in ISO 19103 compliant UML, and an XML schema implementation is delivered via an automated implementation of the ISO 19118/19136 model driven architecture; 2) it is designed to operate in a Web 2.0 environment with both an Atom serialisation and an OGC Web Feature Service (WFS) friendly XML serialisation; 3) it leverages the OGC Observations and Measurements specification, complements a range of GML application schemas (in particular GeoSciML and CSML), and supports export of a subset of information in ISO 19115/19139 compliance. A software implementation exploiting MOLES V3 is under development. This will be seeded with hundreds of entries available from the MOLES V2 service currently deployed at the STFC Centre for Environmental Data Archival.

  4. The PASTEL catalogue: 2016 version

    NASA Astrophysics Data System (ADS)

    Soubiran, Caroline; Le Campion, Jean-François; Brouillet, Nathalie; Chemin, Laurent

    2016-06-01

    The bibliographical compilation of stellar atmospheric parameters (Teff, log g, [Fe/H]) relying on high-resolution, high signal-to-noise spectroscopy started in the eighties with the so-called [Fe/H] catalogue, and was continued in 2010 with the PASTEL catalogue, which also includes determinations of Teff alone, based on various methods. Here we present an update of the PASTEL catalogue. The main journals and the CDS database have been surveyed to find relevant publications presenting new determinations of atmospheric parameters. As of February 2016, PASTEL includes 64 082 determinations of either Teff or (Teff, log g, [Fe/H]) for 31 401 stars, corresponding to 1142 bibliographical references. Some 11 197 stars have a determination of the three parameters (Teff, log g, [Fe/H]) with a high-quality spectroscopic metallicity. The PASTEL catalogue is available in electronic form at the CDS (http://vizier.u-strasbg.fr/viz-bin/VizieR?-source=B/pastel).

  5. Catalogue of Texas spiders

    PubMed Central

    Dean, David Allen

    2016-01-01

    Abstract This catalogue lists 1,084 species of spiders (three identified to genus only) in 311 genera from 53 families currently recorded from Texas and is based on the “Bibliography of Texas Spiders” published by Bea Vogel in 1970. The online list of species can be found at http://pecanspiders.tamu.edu/spidersoftexas.htm. Many taxonomic revisions have since been published, particularly in the families Araneidae, Gnaphosidae and Leptonetidae. Many genera in other families have been revised. The Anyphaenidae, Ctenidae, Hahniidae, Nesticidae, Sicariidae and Tetragnathidae were also revised. Several families have been added and others split up. Several genera of Corinnidae were transferred to Phrurolithidae and Trachelidae. Two genera from Miturgidae were transferred to Eutichuridae. Zoridae was synonymized under Miturgidae. A single species formerly in Amaurobiidae is now in the Family Amphinectidae. Some trapdoor spiders in the family Ctenizidae have been transferred to Euctenizidae. Gertsch and Mulaik started a list of Texas spiders in 1940. In a letter from Willis J. Gertsch dated October 20, 1982, he stated “Years ago a first listing of the Texas fauna was published by me based largely on Stanley Mulaik material, but it had to be abandoned because of other tasks.” This paper is a compendium of the spiders of Texas with distribution, habitat, collecting method and other data available from revisions and collections. This includes many records and unpublished data (including data from three unpublished studies). One of these studies included 16,000 adult spiders belonging to 177 species in 29 families. All specimens in that study were measured and results are in the appendix. Hidalgo County has 340 species recorded with Brazos County at 323 and Travis County at 314 species. These reflect the amount of collecting in the area. PMID:27103878

  6. Catalogue of Texas spiders.

    PubMed

    Dean, David Allen

    2016-01-01

    This catalogue lists 1,084 species of spiders (three identified to genus only) in 311 genera from 53 families currently recorded from Texas and is based on the "Bibliography of Texas Spiders" published by Bea Vogel in 1970. The online list of species can be found at http://pecanspiders.tamu.edu/spidersoftexas.htm. Many taxonomic revisions have since been published, particularly in the families Araneidae, Gnaphosidae and Leptonetidae. Many genera in other families have been revised. The Anyphaenidae, Ctenidae, Hahniidae, Nesticidae, Sicariidae and Tetragnathidae were also revised. Several families have been added and others split up. Several genera of Corinnidae were transferred to Phrurolithidae and Trachelidae. Two genera from Miturgidae were transferred to Eutichuridae. Zoridae was synonymized under Miturgidae. A single species formerly in Amaurobiidae is now in the Family Amphinectidae. Some trapdoor spiders in the family Ctenizidae have been transferred to Euctenizidae. Gertsch and Mulaik started a list of Texas spiders in 1940. In a letter from Willis J. Gertsch dated October 20, 1982, he stated "Years ago a first listing of the Texas fauna was published by me based largely on Stanley Mulaik material, but it had to be abandoned because of other tasks." This paper is a compendium of the spiders of Texas with distribution, habitat, collecting method and other data available from revisions and collections. This includes many records and unpublished data (including data from three unpublished studies). One of these studies included 16,000 adult spiders belonging to 177 species in 29 families. All specimens in that study were measured and results are in the appendix. Hidalgo County has 340 species recorded with Brazos County at 323 and Travis County at 314 species. These reflect the amount of collecting in the area. PMID:27103878

  7. NCI's national environmental research data collection: metadata management built on standards and preparing for the semantic web

    NASA Astrophysics Data System (ADS)

    Wang, Jingbo; Bastrakova, Irina; Evans, Ben; Gohar, Kashif; Santana, Fabiana; Wyborn, Lesley

    2015-04-01

    National Computational Infrastructure (NCI) manages national environmental research data collections (10+ PB) as part of its specialized high-performance data node of the Research Data Storage Infrastructure (RDSI) program. We manage 40+ data collections using NCI's Data Management Plan (DMP), which is compatible with the ISO 19100 metadata standards. We utilize ISO standards to make sure our metadata are transferable and interoperable for sharing and harvesting. The DMP is used, along with metadata from the data itself, to create a hierarchy of data collection, dataset and time series catalogues that is then exposed through GeoNetwork for standard discoverability. These hierarchical catalogues are linked using parent-child relationships. The hierarchical infrastructure of our GeoNetwork catalogue system aims to address both discoverability and in-house administrative use cases. At NCI, we are currently improving the metadata interoperability in our catalogue by linking with standardized community vocabulary services. These emerging vocabulary services are being established to help harmonise data from different national and international scientific communities. One such vocabulary service is currently being established by the Australian National Data Service (ANDS). Data citation is another important aspect of the NCI data infrastructure; it allows tracking of data usage and infrastructure investment, encourages data sharing, and increases trust in research that relies on these data collections. We incorporate the standard vocabularies into the data citation metadata so that the data citation becomes machine readable and semantically friendly for web-search purposes as well. By standardizing our metadata structure across our entire data corpus, we are laying the foundation to enable the application of appropriate semantic mechanisms to enhance discovery and analysis of NCI's national environmental research data information. We expect that this will further

  8. Making Metadata Better with CMR and MMT

    NASA Technical Reports Server (NTRS)

    Gilman, Jason Arthur; Shum, Dana

    2016-01-01

    Ensuring complete, consistent and high quality metadata is a challenge for metadata providers and curators. The CMR and MMT systems provide providers and curators options to build in metadata quality from the start and also assess and improve the quality of already existing metadata.

  9. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    ERIC Educational Resources Information Center

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  10. Distributed Multi-interface Catalogue for Geospatial Data

    NASA Astrophysics Data System (ADS)

    Nativi, S.; Bigagli, L.; Mazzetti, P.; Mattia, U.; Boldrini, E.

    2007-12-01

    Several geosciences communities (e.g. atmospheric science, oceanography, hydrology) have developed tailored data and metadata models and service protocol specifications for enabling online data discovery, inventory, evaluation, access and download. These specifications are conceived either by profiling geospatial information standards or by extending well-accepted geosciences data models and protocols in order to capture more semantics. These artifacts have generated a set of related catalog and inventory services characterizing different communities, initiatives and projects. In fact, these geospatial data catalogs are discovery and access systems that use metadata as the target for queries on geospatial information. The indexed and searchable metadata provide a disciplined vocabulary against which intelligent geospatial search can be performed within or among communities. There is a clear need to conceive and achieve solutions to implement interoperability among geosciences communities, in the context of the more general geospatial information interoperability framework. Such solutions should provide search and access capabilities across catalogs, inventory lists and their registered resources. Thus, the development of catalog clearinghouse solutions is a near-term challenge in support of fully functional and useful infrastructures for spatial data (e.g. INSPIRE, GMES, NSDI, GEOSS). This implies the implementation of components for query distribution and virtual resource aggregation. These solutions must implement distributed discovery functionalities in a heterogeneous environment, requiring metadata profile harmonization as well as protocol adaptation and mediation. We present a catalog clearinghouse solution for the interoperability of several well-known cataloguing systems (e.g. OGC CSW, THREDDS catalog and data services). The solution implements consistent resource discovery and evaluation over a dynamic federation of several well-known cataloguing and

  11. Metadata based mediator generation

    SciTech Connect

    Critchlow, T

    1998-03-01

    Mediators are a critical component of any data warehouse, particularly one utilizing partially materialized views; they transform data from its source format to the warehouse representation while resolving semantic and syntactic conflicts. The close relationship between mediators and databases requires a mediator to be updated whenever an associated schema is modified. This maintenance may be a significant undertaking if a warehouse integrates several dynamic data sources. However, failure to quickly perform these updates significantly reduces the reliability of the warehouse, because queries do not have access to the most current data. This may result in incorrect or misleading responses, and reduce user confidence in the warehouse. This paper describes a metadata framework, and associated software, designed to automate a significant portion of the mediator generation task and thereby reduce the effort involved in adapting to schema changes. By allowing the DBA to concentrate on identifying the modifications at a high level, instead of reprogramming the mediator, turnaround time is reduced and warehouse reliability is improved.
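
    The following Python sketch illustrates the general idea of generating a mediator from declarative metadata rather than hand-coding it: a field-mapping table stands in for the framework's metadata, and the generated function converts source records into the warehouse representation. The field names and conversions are invented for illustration.

      MAPPING = {
          # warehouse field : (source field, conversion function)
          "station_id":  ("stn", str),
          "temperature": ("temp_f", lambda f: (float(f) - 32) * 5 / 9),  # Fahrenheit -> Celsius
          "observed_at": ("obs_time", str),
      }

      def generate_mediator(mapping):
          """Build a mediator function from the mapping metadata."""
          def mediate(source_record):
              return {target: convert(source_record[src])
                      for target, (src, convert) in mapping.items()}
          return mediate

      # When the source schema changes, only MAPPING is edited; the mediator is regenerated.
      mediator = generate_mediator(MAPPING)
      print(mediator({"stn": 101, "temp_f": "68", "obs_time": "1998-03-01T00:00Z"}))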

  12. Observation metadata handling system at the European Southern Observatory

    NASA Astrophysics Data System (ADS)

    Dobrzycki, Adam; Brandt, Daniel; Giot, David; Lockhart, John; Rodriguez, Jesus; Rossat, Nathalie; Vuong, My Hà

    2006-06-01

    We present the design of the system for handling observation metadata at the Science Archive Facility of the European Southern Observatory using Sybase ASE, Replication Server and Sybase IQ. The system has been re-engineered to enhance the browsing capabilities of Archive contents through searches on any observation parameter, to support on-line updates of all parameters, and to introduce those updates on the fly into files retrieved from the Archive. The system also reduces the replication of duplicate information and simplifies database maintenance.

  13. Master Metadata Repository and Metadata-Management System

    NASA Technical Reports Server (NTRS)

    Armstrong, Edward; Reed, Nate; Zhang, Wen

    2007-01-01

    A master metadata repository (MMR) software system manages the storage and searching of metadata pertaining to data from national and international satellite sources of the Global Ocean Data Assimilation Experiment (GODAE) High Resolution Sea Surface Temperature Pilot Project (GHRSST-PP). These sources produce a total of hundreds of data files daily, each file classified as one of more than ten data products representing global sea-surface temperatures. The MMR is a relational database wherein the metadata are divided into granule-level records [denoted file records (FRs)] for individual satellite files and collection-level records [denoted data set descriptions (DSDs)] that describe metadata common to all the files from a specific data product. FRs and DSDs adhere to the NASA Directory Interchange Format (DIF). The FRs and DSDs are contained in separate subdatabases linked by a common field. The MMR is configured in MySQL database software with custom Practical Extraction and Reporting Language (PERL) programs to validate and ingest the metadata records. The database contents are converted into the Federal Geographic Data Committee (FGDC) standard format by use of the Extensible Markup Language (XML). A Web interface enables users to search for availability of data from all sources.
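
    A minimal sketch of the two-level layout described above, using SQLite in place of MySQL: collection-level DSD records and granule-level FR records live in separate tables linked by a common product identifier, and a join answers a typical availability query. Column names and values are illustrative only.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE dsd (product_id TEXT PRIMARY KEY, title TEXT, source TEXT);
          CREATE TABLE fr  (file_name TEXT PRIMARY KEY, product_id TEXT,
                            start_time TEXT, stop_time TEXT,
                            FOREIGN KEY (product_id) REFERENCES dsd(product_id));
      """)
      # One collection-level record and one granule-level record linked to it
      conn.execute("INSERT INTO dsd VALUES (?, ?, ?)",
                   ("SST_L2P_EXAMPLE", "Example L2P sea surface temperature", "Example satellite"))
      conn.execute("INSERT INTO fr VALUES (?, ?, ?, ?)",
                   ("sst_20070101_0000.nc", "SST_L2P_EXAMPLE",
                    "2007-01-01T00:00:00Z", "2007-01-01T00:10:00Z"))

      # Search for files of a given product within a time window
      rows = conn.execute("""
          SELECT fr.file_name, dsd.title FROM fr JOIN dsd USING (product_id)
          WHERE dsd.product_id = ? AND fr.start_time >= ?
      """, ("SST_L2P_EXAMPLE", "2007-01-01T00:00:00Z")).fetchall()
      print(rows)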

  14. Catalogue of HI PArameters (CHIPA)

    NASA Astrophysics Data System (ADS)

    Saponara, J.; Benaglia, P.; Koribalski, B.; Andruchow, I.

    2015-08-01

    The Catalogue of HI PArameters of galaxies (CHIPA) is the natural continuation of the compilation by M.C. Martin in 1998. CHIPA provides the most important parameters of nearby galaxies derived from observations of the neutral hydrogen line. The catalogue contains information on 1400 galaxies across the sky and of different morphological types. Parameters such as the optical diameter of the galaxy, the blue magnitude, the distance, the morphological type and the HI extension are listed, among others. Maps of the HI distribution, velocity and velocity dispersion can also be displayed in some cases. The main objective of this catalogue is to facilitate bibliographic queries through a database accessible from the internet that will be available in 2015 (the website is under construction). The database was built using the open-source MySQL relational database management system (SQL, Structured Query Language), while the website was built with HTML (Hypertext Markup Language) and PHP (Hypertext Preprocessor).

  15. Metadata-Centric Discovery Service

    NASA Astrophysics Data System (ADS)

    Huang, T.; Chung, N. T.; Gangl, M. E.; Armstrong, E. M.

    2011-12-01

    It is data about data. It is the information describing a picture without looking at the picture. Through the years, the Earth Science community has sought better methods to describe science artifacts to improve the quality and efficiency of information exchange. One purpose is to provide information that guides users in identifying the science artifacts of interest to them. The NASA Distributed Active Archive Centers (DAACs) are the building blocks of a data-centric federation, designed for processing, archiving, and distributing data from NASA's Earth Observation missions, as well as for provision of specialized services to users. The Physical Oceanography Distributed Active Archive Center (PO.DAAC), at the Jet Propulsion Laboratory, archives and distributes science artifacts pertaining to the physical state of the ocean. Part of its high-performance operational Data Management and Archive System (DMAS) is a fast data discovery RESTful web service called the Oceanographic Common Search Interface (OCSI). The web service searches and delivers metadata on all data holdings within PO.DAAC. Currently OCSI supports metadata standards such as ISO 19115, OpenSearch, GCMD, and FGDC, with new metadata standards still being added. While we continue to seek the silver bullet in metadata standards, the Earth Science community in fact uses various standards owing to the specific needs of its users and systems. This presentation focuses on the architecture behind OCSI as a reference implementation for building a metadata-centric discovery service.

  16. THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA

    SciTech Connect

    Devarakonda, Ranjeet; Shrestha, Biva; Palanisamy, Giri; Hook, Leslie A; Killeffer, Terri S; Boden, Thomas A; Cook, Robert B; Zolly, Lisa; Hutchison, Viv; Frame, Mike; Cialella, Alice; Lazer, Kathy

    2014-01-01

    Nobody is better suited to describe data than the scientist who created it. This description of the data is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about the research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA) and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed or on the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files. As an example, an NGEE Arctic scientist uses OME to register

  17. Italian Polar Metadata System

    NASA Astrophysics Data System (ADS)

    Longo, S.; Nativi, S.; Leone, C.; Migliorini, S.; Mazari Villanova, L.

    2012-04-01

    The Italian Antarctic Research Programme (PNRA) is a government initiative funding and coordinating scientific research activities in polar regions. PNRA manages two scientific stations in Antarctica - Concordia (Dome C), jointly operated with the French Polar Institute "Paul Emile Victor", and Mario Zucchelli (Terra Nova Bay, Southern Victoria Land). In addition, the National Research Council of Italy (CNR) manages one scientific station in the Arctic (Ny-Ålesund, Svalbard Islands), named Dirigibile Italia. PNRA started in 1985 with the first Italian expedition in Antarctica. Since then each research group has collected data regarding biology and medicine, the geodetic observatory, geophysics, geology, glaciology, physics and atmospheric chemistry, earth-sun relationships and astrophysics, oceanography and the marine environment, chemical contamination, law and geographic science, technology, and multi- and interdisciplinary research, autonomously and in different formats. In 2010 the Italian Ministry of Research assigned the scientific coordination of the Programme to CNR, which is in charge of the management and sharing of the scientific results carried out in the framework of the PNRA. Therefore, CNR is establishing a new distributed cyber(e)-infrastructure to collect, manage, publish and share polar research results. This is a service-based infrastructure building on Web technologies to implement resource (i.e. data, services and documents) discovery, access and visualization; in addition, semantic-enabled functionalities will be provided. The architecture applies "System of Systems" principles to build incrementally on the existing systems by supplementing but not supplanting their mandates and governance arrangements. This allows the existing capacities to be kept as autonomous as possible. This cyber(e)-infrastructure implements multi-disciplinary interoperability following

  18. Catalogue of Australian Cynipoidea (Hymenoptera)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A catalogue of all families, subfamilies, genera, and species of Cynipoidea present in Australia is presented here. The Australian cynipoid fauna is very poorly known, with 37 genera cited: one each for Austrocynipidae, Ibaliidae, Liopteridae, two for Cynipidae, and 32 for Figitidae. The first Austr...

  19. Catalogue of representative meteor spectra

    NASA Astrophysics Data System (ADS)

    Vojáček, V.; Borovička, J.; Koten, P.; Spurný, P.; Štork, R.

    2016-01-01

    We present a library of low-resolution meteor spectra that includes sporadic meteors, members of minor meteor showers, and major meteor showers. These meteors are in the magnitude range from +2 to ‑3, corresponding to meteoroid sizes from 1 mm to 10 mm. This catalogue is available online at the CDS for those interested in video meteor spectra.

  20. Catalogue of representative meteor spectra

    NASA Astrophysics Data System (ADS)

    Vojáček, V.; Borovička, J.; Koten, P.; Spurný, P.; Štork, R.

    2016-01-01

    We present a library of low-resolution meteor spectra that includes sporadic meteors, members of minor meteor showers, and major meteor showers. These meteors are in the magnitude range from +2 to -3, corresponding to meteoroid sizes from 1 mm to 10 mm. This catalogue is available online at the CDS for those interested in video meteor spectra.

  1. Astronomical Catalogues - Definition Elements and Afterlife

    NASA Astrophysics Data System (ADS)

    Jaschek, C.

    1984-09-01

    Based on a look at the different meanings of the term catalogue (or catalog), a definition is proposed. In an analysis of the main elements, a number of requirements that catalogues should satisfy are pointed out. A section is devoted to problems connected with computer-readable versions of printed catalogues.

  2. Re-using the DataCite Metadata Store as DOI registration proxy and IGSN registry

    NASA Astrophysics Data System (ADS)

    Klump, J.; Ulbricht, D.

    2012-12-01

    Currently a lot of work is being done to stimulate the reuse of data. In joint efforts, research institutions establish infrastructure to facilitate the publication of scientific datasets. To create a citable reference, these datasets must be tagged with persistent identifiers (DOIs) and described with metadata. As most data in the geosciences are derived from samples, it is crucial to be able to uniquely identify the samples from which a set of data were derived. Incomplete documentation of samples in publications and the use of ambiguous sample names are major obstacles for synthesis studies and re-use of data. Access to samples for re-analysis and re-appraisal is limited due to the lack of a central catalogue that allows finding a sample's archiving location. The International Geo Sample Number (IGSN) [1] provides solutions to the questions of unique sample identification and discovery. Use of the IGSN in digital data systems allows building linkages between the digital representation of samples in sample registries, e.g. SESAR [2], and their related data in the literature and in web-accessible digital data repositories. DataCite recently decided to publish their metadata store (DataCite MDS) and accompanying software online [3]. The DataCite software allows registration of handles and deposition of metadata in an XML format, offers a search interface, and is able to disseminate metadata via OAI-PMH. Its REST interface allows easy integration into institutional data workflows. For our applications at GFZ Potsdam we modified the DataCite MDS software to reuse it in two different contexts: as the DOIDB web service for data publications and as the IGSN registry web service for the registration of geological samples. The DOIDB acts as a proxy service to the DataCite Metadata Store and uses its REST interface for registration of DataCite DOIs and associated DOI metadata. Metadata can be deposited in the DataCite or NASA DIF schema. Both schemata can be disseminated via OAI
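
    The registration workflow can be sketched as two REST calls against an MDS-style proxy such as the DOIDB described above: deposit the XML metadata, then bind the identifier to a landing-page URL. The endpoint paths, payload layout, and credentials below are assumptions for illustration, not the documented DOIDB interface.

      import requests

      PROXY = "https://doidb.example.org/mds"   # hypothetical proxy service
      AUTH = ("datacentre", "secret")           # placeholder credentials

      def register(identifier, landing_url, metadata_xml):
          """Deposit metadata, then bind the identifier to its landing page."""
          # 1) deposit the metadata record (e.g., DataCite or NASA DIF XML)
          r = requests.post(f"{PROXY}/metadata", data=metadata_xml.encode("utf-8"),
                            headers={"Content-Type": "application/xml"},
                            auth=AUTH, timeout=30)
          r.raise_for_status()
          # 2) bind the identifier (DOI or IGSN) to the dataset's landing page
          body = f"identifier={identifier}\nurl={landing_url}"
          r = requests.put(f"{PROXY}/doi/{identifier}", data=body.encode("utf-8"),
                           headers={"Content-Type": "text/plain;charset=UTF-8"},
                           auth=AUTH, timeout=30)
          r.raise_for_status()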

  3. Symmetric Active/Active Metadata Service for High Availability Parallel File Systems

    SciTech Connect

    He, X.; Ou, Li; Engelmann, Christian; Chen, Xin; Scott, Stephen L

    2009-01-01

    High availability data storage systems are critical for many applications as research and business become more data-driven. Since metadata management is essential to system availability, multiple metadata services are used to improve the availability of distributed storage systems. Past research focused on the active/standby model, where each active service has at least one redundant idle backup. However, interruption of service and even some loss of service state may occur during a fail-over, depending on the replication technique used. In addition, the replication overhead for multiple metadata services can be very high. The research in this paper targets the symmetric active/active replication model, which uses multiple redundant service nodes running in virtual synchrony. In this model, service node failures do not cause a fail-over to a backup and there is no disruption of service or loss of service state. We further discuss a fast delivery protocol to reduce the latency of the required total order broadcast. Our prototype implementation shows that metadata service high availability can be achieved with an acceptable performance trade-off using our symmetric active/active metadata service solution.

  4. Omics Metadata Management Software (OMMS)

    PubMed Central

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands, of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible, and easily installed and executed. Availability: The OMMS can be obtained at http://omms.sandia.gov. PMID:26124554

  5. Replicating nucleosomes

    PubMed Central

    Ramachandran, Srinivas; Henikoff, Steven

    2015-01-01

    Eukaryotic replication disrupts each nucleosome as the fork passes, followed by reassembly of disrupted nucleosomes and incorporation of newly synthesized histones into nucleosomes in the daughter genomes. In this review, we examine this process of replication-coupled nucleosome assembly to understand how characteristic steady-state nucleosome landscapes are attained. Recent studies have begun to elucidate mechanisms involved in histone transfer during replication and maturation of the nucleosome landscape after disruption by replication. A fuller understanding of replication-coupled nucleosome assembly will be needed to explain how epigenetic information is replicated at every cell division. PMID:26269799

  6. Metadata Evaluation and Improvement Case Studies

    NASA Astrophysics Data System (ADS)

    Habermann, T.; Kozimor, J.; Powers, L. A.; Gordon, S.

    2015-12-01

    Tools have been developed for evaluating metadata records and collections for completeness in terms of specific recommendations or organizational goals, and for providing guidance on improving compliance of metadata with those recommendations. These tools have been applied using several metadata recommendations (OGC-CSW, DataCite, NASA Unified Metadata Model) and metadata dialects used by several organizations: Climate Data Initiative metadata from NASA DAACs in ECHO, DIF, and ISO 19115-2; US Geological Survey metadata from ScienceBase in CSDGM; and ACADIS metadata from NCAR's Earth Observation Lab in ISO 19115-2. The results of this work are designed to help managers understand metadata recommendations (e.g. OGC Catalog Services for the Web, DataCite, and others) and the impact of those recommendations in terms of the dialects used in their organizations (e.g. DIF, CSDGM, ISO). They include comparisons between metadata recommendations and dialect capabilities, scoring of metadata records in terms of the amount of missing content, and identification of specific improvement strategies for particular collections. This information is included in the Earth Science Information Partnership (ESIP) Wiki to encourage broad dissemination and participation.
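
    To make the scoring idea concrete, here is a small sketch of completeness scoring against a recommendation. The field names are simplified stand-ins invented for illustration, not the actual rubrics used by the evaluation tools described in this work.

```python
# Score a metadata record for completeness against a (hypothetical) recommendation.
RECOMMENDED_FIELDS = {
    "title", "abstract", "temporal_extent", "spatial_extent",
    "contact", "distribution_url", "lineage",
}

def completeness(record: dict) -> float:
    """Fraction of recommended fields present and non-empty in a record."""
    present = {f for f in RECOMMENDED_FIELDS if record.get(f)}
    return len(present) / len(RECOMMENDED_FIELDS)

def missing(record: dict) -> list:
    """List the recommended fields that are absent or empty."""
    return sorted(f for f in RECOMMENDED_FIELDS if not record.get(f))

record = {"title": "Sea surface temperature", "abstract": "Daily SST grids",
          "contact": "helpdesk@example.org", "distribution_url": ""}
print(f"score = {completeness(record):.2f}, missing = {missing(record)}")
```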

  7. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, and metadata processing performance improves. Because the client file system keeps these backed-up requests in its memory, the overhead of handling them is much smaller than the overhead incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes non-operational.
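
    A toy illustration of the log-less idea follows: clients keep the requests they have sent in memory, the MDS never journals to non-volatile storage, and after an MDS crash the metadata are rebuilt by replaying the clients' cached requests. This is a simplified sketch under those assumptions, not the authors' implementation; in particular, it glosses over how a real system would order replayed requests across clients.

```python
# Log-less MDS sketch: clients back up sent requests; recovery = replay.
class MetadataServer:
    def __init__(self):
        self.table = {}                      # in-memory metadata only, no journal

    def handle(self, request):
        op, path, attrs = request
        if op == "create":
            self.table[path] = dict(attrs)
        elif op == "setattr":
            self.table[path].update(attrs)

class Client:
    def __init__(self, mds):
        self.mds = mds
        self.backup = []                     # sent requests kept in client RAM

    def send(self, request):
        self.mds.handle(request)
        self.backup.append(request)          # cheap compared with MDS journaling

def recover(clients):
    """Rebuild a fresh MDS by replaying every client's backed-up requests."""
    new_mds = MetadataServer()
    for client in clients:
        for request in client.backup:
            new_mds.handle(request)
    return new_mds

mds = MetadataServer()
c1, c2 = Client(mds), Client(mds)
c1.send(("create", "/a", {"size": 1}))
c2.send(("create", "/b", {"size": 2}))
c1.send(("setattr", "/a", {"size": 3}))

recovered = recover([c1, c2])                # pretend the original MDS crashed
assert recovered.table == mds.table
```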

  8. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and that have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, and metadata processing performance improves. Because the client file system keeps these backed-up requests in its memory, the overhead of handling them is much smaller than the overhead incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O data throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or otherwise becomes non-operational. PMID:24892093

  9. Finding Atmospheric Composition (AC) Metadata

    NASA Technical Reports Server (NTRS)

    Strub, Richard F.; Falke, Stefan; Fiakowski, Ed; Kempler, Steve; Lynnes, Chris; Goussev, Oleg

    2015-01-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System: CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies - OpenSearch and Web Services. We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster describes our experience of the excellence, variety, and challenges we encountered. Conclusions: 1. The significant benefits that the major catalogs provide are their machine-to-machine tools, like OpenSearch and Web Services, rather than any GUI usability improvements, due to the large amount of data in their catalogs. 2. There is a trend at the large catalogs towards simulating small data provider portals through advanced services. 3. Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. 4. The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. 5. Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication. (This is currently being addressed.) 6. Most (if not all
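
    As a rough sketch of the machine-to-machine harvesting mentioned above, the snippet below issues an OpenSearch query and reads an Atom response. The endpoint URL and parameter names are hypothetical placeholders; a real catalogue publishes an OpenSearch description document that defines its own URL template and parameters.

```python
# Hedged sketch of harvesting catalogue entries via an OpenSearch interface.
import requests
import xml.etree.ElementTree as ET

ENDPOINT = "https://catalog.example.org/opensearch"        # hypothetical
params = {"q": "atmospheric composition NO2", "startIndex": 1, "count": 25}

resp = requests.get(ENDPOINT, params=params, timeout=30)
resp.raise_for_status()

# Many OpenSearch responses are Atom feeds; pull out entry titles and links.
ATOM = "{http://www.w3.org/2005/Atom}"
feed = ET.fromstring(resp.content)
for entry in feed.findall(f"{ATOM}entry"):
    title = entry.findtext(f"{ATOM}title")
    link = entry.find(f"{ATOM}link")
    print(title, link.get("href") if link is not None else "")
```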

  10. EOS ODL Metadata On-line Viewer

    NASA Astrophysics Data System (ADS)

    Yang, J.; Rabi, M.; Bane, B.; Ullman, R.

    2002-12-01

    We have recently developed and deployed an EOS ODL metadata on-line viewer. The EOS ODL metadata viewer is a web server that takes: 1) an EOS metadata file in Object Description Language (ODL), and 2) parameters, such as which metadata to view and what style of display to use, and returns an HTML or XML document displaying the requested metadata in the requested style. This tool was developed to address widespread complaints by the science community that the EOS Data and Information System (EOSDIS) metadata files in ODL are difficult to read, by allowing users to upload and view an ODL metadata file in different styles using a web browser. Users can choose to view all the metadata or part of the metadata, such as Collection metadata, Granule metadata, or Unsupported metadata. Choices of display styles include 1) Web: a mouseable display with tabs and turn-down menus, 2) Outline: formatted and colored text, suitable for printing, 3) Generic: simple indented text, a direct representation of the underlying ODL metadata, and 4) None: no stylesheet is applied and the XML generated by the converter is returned directly. Not all display styles are implemented for all the metadata choices. For example, the Web style is only implemented for Collection and Granule metadata groups with known attribute fields, but not for Unsupported, Other, and All metadata. The overall strategy of the ODL viewer is to transform an ODL metadata file into viewable HTML in two steps. The first step converts the ODL metadata file to XML using a Java-based parser/translator called ODL2XML. The second step transforms the XML to HTML using stylesheets. Both operations are done on the server side. This allows a lot of flexibility in the final result, and is very portable across platforms. Perl CGI behind the Apache web server is used to run the Java ODL2XML, and then run the results through an XSLT processor. The EOS ODL viewer can be accessed from either a PC or a Mac using Internet
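
    The following sketch shows only the second step of such a pipeline: transforming converter-produced XML into HTML with an XSLT stylesheet. The element names and stylesheet are invented for illustration; the actual EOSDIS ODL structure and viewer stylesheets differ.

```python
# XML-to-HTML transformation with an XSLT stylesheet (illustrative only).
from lxml import etree

xml_doc = etree.XML(
    "<GranuleMetadata>"
    "<Attribute name='ShortName'>MOD021KM</Attribute>"
    "<Attribute name='ProductionDateTime'>2002-07-04T00:00:00Z</Attribute>"
    "</GranuleMetadata>"
)

xslt = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/GranuleMetadata">
    <table>
      <xsl:for-each select="Attribute">
        <tr><td><xsl:value-of select="@name"/></td>
            <td><xsl:value-of select="."/></td></tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt)
html = transform(xml_doc)
print(etree.tostring(html, pretty_print=True).decode())
```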

  11. Evolution of the architecture of the ATLAS Metadata Interface (AMI)

    NASA Astrophysics Data System (ADS)

    Odier, J.; Aidel, O.; Albrand, S.; Fulachier, J.; Lambert, F.

    2015-12-01

    The ATLAS Metadata Interface (AMI) is now a mature application. Over the years, the number of users and the number of provided functions has dramatically increased. It is necessary to adapt the hardware infrastructure in a seamless way so that the quality of service remains high. We describe the evolution of AMI since its beginnings, from being served by a single MySQL backend database server to the current state: a cluster of virtual machines at the French Tier-1, an Oracle database at Lyon with complementary replication to the Oracle DB at CERN, and an AMI back-up server.

  12. The Solar Stormwatch CME catalogue.

    NASA Astrophysics Data System (ADS)

    Barnard, Luke

    2015-04-01

    Since the launch of the twin STEREO satellites in late 2006, the Heliospheric Imagers have been used, with good results, in tracking transients of solar origin, such as Coronal Mass Ejections (CMEs), out through the inner heliosphere. A frequently used approach is to build a "J-map", in which multiple elongation profiles along a constant position angle are stacked in time, building an image in which radially propagating transients form curved tracks. From this, the time-elongation profile of a solar transient can be manually identified. This is a time-consuming and laborious process, and the results are subjective, depending on the skill and expertise of the investigator. With the Heliospheric Imager data it is possible to follow CMEs from the outer limits of the solar corona all the way to 1 AU. Solar Stormwatch is a citizen science project that employs the power of thousands of volunteers to both identify and track CMEs in the Heliospheric Imager data. The CMEs identified by Solar Stormwatch are tracked many times by multiple users, which allows the calculation of consensus time-elongation profiles for each event and also provides an estimate of the error in the consensus profile. Therefore this system does not suffer from the potential subjectivity of individual researchers identifying and tracking CMEs. In this sense, the Solar Stormwatch system can be thought of as providing a middle ground between manually identified CME catalogues, such as the CDAW list, and CME catalogues generated through fully automated algorithms, such as CACTus and ARTEMIS. We provide a summary of the reduction of the Solar Stormwatch data into a catalogue of CMEs observed by STEREO-A and STEREO-B through the deep minimum of solar cycle 23, and review some key statistical properties of these CMEs. Through some case studies of the propagation of CMEs out into the inner heliosphere we argue that the Solar Stormwatch CME catalogue, which publishes the time
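
    A toy sketch of the J-map construction described above: elongation profiles along a fixed position angle are stacked row by row in time, so an outward-moving transient traces a curved track in the time-elongation plane. The data here are synthetic; real J-maps are built from running-difference Heliospheric Imager frames.

```python
# Assemble a synthetic J-map (time x elongation) with one injected transient.
import numpy as np

n_times, n_elong = 200, 300
times = np.arange(n_times)                      # frame index
elongation = np.linspace(4.0, 74.0, n_elong)    # degrees from the Sun

jmap = np.random.normal(0.0, 0.05, (n_times, n_elong))   # background noise

# Inject a transient whose front moves outward with time (crude kinematics).
for i, t in enumerate(times):
    front = 4.0 + 0.35 * t
    jmap[i] += np.exp(-0.5 * ((elongation - front) / 1.5) ** 2)

# Each row of `jmap` is one elongation profile; plotting rows versus time
# (e.g. with matplotlib's imshow) reveals the curved track that volunteers
# click along to build a time-elongation profile.
```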

  13. MPEG-7: standard metadata for multimedia content

    NASA Astrophysics Data System (ADS)

    Chang, Wo

    2005-08-01

    The eXtensible Markup Language (XML) metadata technology for describing media contents has emerged as a dominant mode of making media searchable for both human and machine consumption. To realize this promise, many online Web applications are pushing this concept to its fullest potential. However, a good metadata model does require a robust standardization effort so that the metadata content and its structure can reach maximum usage across various applications. An effective media content description technology should also use standard metadata structures, especially when dealing with various multimedia contents. A new metadata technology called MPEG-7 content description has emerged from the ISO MPEG standards body with the charter of defining standard metadata to describe audiovisual content. This paper gives an overview of MPEG-7 technology and the impact it can bring to the next generation of multimedia indexing and retrieval applications.

  14. Solar Feature Catalogues In Egso

    NASA Astrophysics Data System (ADS)

    Zharkova, V. V.; Aboudarham, J.; Zharkov, S.; Ipson, S. S.; Benkhalil, A. K.; Fuller, N.

    2005-05-01

    The Solar Feature Catalogues (SFCs) are created from digitized solar images using automated pattern recognition techniques developed in the European Grid of Solar Observations (EGSO) project. The techniques were applied for the detection of sunspots, active regions and filaments in automatically standardized full-disk solar images in Ca II K1, Ca II K3 and Hα taken at the Meudon Observatory, and in white-light images and magnetograms from SOHO/MDI. The results of automated recognition were verified against manual synoptic maps and available statistical data from other observatories, which revealed high detection accuracy. A structured database of the Solar Feature Catalogues is built on a MySQL server for every feature from their recognized parameters and cross-referenced to the original observations. The SFCs are published on the Bradford University web site http://www.cyber.brad.ac.uk/egso/SFC/ with pre-designed web pages for searching by time, size and location. The SFCs, with 9-year coverage (1996-2004), provide all the information that can be extracted from full-disk digital solar images. This information can be used for deeper investigation of feature origins and associations with other features, for their automated classification and for solar activity forecasting.

  15. Evaluating the privacy properties of telephone metadata.

    PubMed

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C

    2016-05-17

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922

  16. Metazen – metadata capture for metagenomes

    DOE PAGES

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; Glass, Elizabeth; Wilke, Andreas; Meyer, Folker

    2014-12-08

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  17. Metazen – metadata capture for metagenomes

    SciTech Connect

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; Glass, Elizabeth; Wilke, Andreas; Meyer, Folker

    2014-12-08

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  18. Evaluating the privacy properties of telephone metadata

    PubMed Central

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C.

    2016-01-01

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922

  19. Metazen – metadata capture for metagenomes

    PubMed Central

    2014-01-01

    Background As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusions Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility. PMID:25780508

  20. Logic programming and metadata specifications

    NASA Technical Reports Server (NTRS)

    Lopez, Antonio M., Jr.; Saacks, Marguerite E.

    1992-01-01

    Artificial intelligence (AI) ideas and techniques are critical to the development of intelligent information systems that will be used to collect, manipulate, and retrieve the vast amounts of space data produced by 'Missions to Planet Earth.' Natural language processing, inference, and expert systems are at the core of this space application of AI. This paper presents logic programming as an AI tool that can support inference (the ability to draw conclusions from a set of complicated and interrelated facts). It reports on the use of logic programming in the study of metadata specifications for a small problem domain of airborne sensors, and the dataset characteristics and pointers that are needed for data access.

  1. Submillimeter, millimeter, and microwave spectral line catalogue

    NASA Technical Reports Server (NTRS)

    Poynter, R. L.; Pickett, H. M.

    1984-01-01

    This report describes a computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 10000 GHz (i.e., wavelengths longer than 30 micrometers). The catalogue can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue has been constructed using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (151 species) as new data appear. The catalogue is available from the authors as a magnetic tape recorded in card images and as a set of microfiche records.

  2. The Herschel Point Source Catalogue

    NASA Astrophysics Data System (ADS)

    Marton, Gabor; Schulz, Bernhard; Altieri, Bruno; Calzoletti, Luca; Kiss, Csaba; Lim, Tanya; Lu, Nanyao; Paladini, Roberta; Papageorgiou, Andreas; Pearson, Chris; Rector, John; Shupe, David; Valtchanov, Ivan; Verebélyi, Erika; Xu, Kevin

    2015-08-01

    The Herschel Space Observatory was the fourth cornerstone mission in the European Space Agency (ESA) science programme, with excellent broad-band imaging capabilities in the submillimetre and far-infrared part of the spectrum. Although the spacecraft finished its observations in 2013, it left a large legacy dataset that is far from having been fully scrutinized and still has potential for new scientific discoveries. This is specifically true for the photometric observations of the PACS and SPIRE instruments that scanned >10% of the sky at 70, 100, 160, 250, 350 and 500 microns. Some source catalogs have already been produced by individual observing programs, but there are many observations that would never be analyzed for their full source content. To maximize the science return of the SPIRE and PACS data sets, our international team of instrument experts is in the process of building the Herschel Point Source Catalog (HPSC) from all scan map observations. Our homogeneous source extraction enables a systematic and unbiased comparison of sensitivity across the different Herschel fields that single programs will generally not be able to provide. The extracted point sources will contain individual YSOs of our Galaxy, unresolved YSO clusters in resolved nearby galaxies, and unresolved galaxies of the local and distant Universe that are related to star formation. Such a huge dataset will help scientists better understand the evolution from interstellar clouds to individual stars. Furthermore, the analysis of stellar clusters and of star formation on galactic scales will add more details to the understanding of star formation laws through time. We present our findings on the comparison of different source detection and photometric tools. First results of the extractions are shown along with a description of our pipelines and catalogue entries. We also provide an additional science product, the structure noise map, that is used for the quality assessment of the catalogue in

  3. Integrated Array/Metadata Analytics

    NASA Astrophysics Data System (ADS)

    Misev, Dimitar; Baumann, Peter

    2015-04-01

    Data comes in various forms and types, and integration usually presents a problem that is often simply ignored and solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive, or non-existent, in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part, seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.

  4. A Dynamic Metadata Community Profile for CUAHSI

    NASA Astrophysics Data System (ADS)

    Bermudez, L.; Piasecki, M.

    2004-12-01

    Common metadata standards typically lack domain-specific elements, have limited extensibility and do not always resolve semantic heterogeneities that can occur in the annotations. To facilitate the use and extension of metadata specifications, a methodology called Dynamic Community Profiles (DCP) is presented. The methodology allows element definitions to be overridden and core elements to be specified as metadata tree paths. DCP uses the Web Ontology Language (OWL), the Resource Description Framework (RDF) and XML syntax to formalize specifications and to create controlled vocabularies in ontologies, which enhances interoperability. This methodology was employed to create a metadata profile for the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). The profile was created by extending the ISO 19115:2003 geographic metadata standard and restricting the permissible values of some elements. The values used as controlled vocabularies were inferred from hydrologic keywords found in the Global Change Master Directory (GCMD) and from measurement units found in the Hydrologic Handbook. Also, a core metadata set for CUAHSI was formally expressed as tree paths, containing the ISO core set plus additional elements. Finally, a tool was developed to test the extension and to allow the creation of metadata instances in RDF/XML that conform to the profile. This tool can also export the core elements to other schema formats such as Metadata Template Files (MTF).

  5. Mapping Entry Vocabulary to Unfamiliar Metadata Vocabularies.

    ERIC Educational Resources Information Center

    Buckland, Michael; Chen, Aitao; Chen, Hui-Min; Kim, Youngin; Lam, Byron; Larson, Ray; Norgard, Barbara; Purat, Jacek; Gey, Frederic

    1999-01-01

    Reports on work at the University of California, Berkeley, on the design and development of English-language indices to metadata vocabularies. Discusses the significance of unfamiliar metadata and describes the Entry Vocabulary Module which helps searchers to be more effective and increases the return on the original investment in generating…

  6. Leveraging Metadata to Create Better Web Services

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2012-01-01

    Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…

  7. A Metadata-Rich File System

    SciTech Connect

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

    Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations, and superior performance on normal file metadata operations.

  8. A Common Metadata System for Marine Data Portals

    NASA Astrophysics Data System (ADS)

    Wosniok, C.; Breitbach, G.; Lehfeldt, R.

    2012-04-01

    ), Web Feature Service (WFS) and Sensor Observation Service (SOS), which ensures interoperability and extensibility. In addition, metadata, as a crucial component for searching and finding information in large data infrastructures, are provided via the Catalogue Service for the Web (CS-W). MDI-DE and COSYNA rely on the metadata information system for marine metadata NOKIS, which reflects a metadata profile tailored to marine data according to the specifications of German coastal authorities. In spite of this common software base, interoperability between the two data collections requires constant alignment of the diverse data processed by the two portals. While monitoring data in the MDI-DE are currently rather campaign-based, COSYNA has to fit constantly evolving time series into metadata sets. With all data following the same metadata profile, we now reach full interoperability between the different data collections. The distributed marine information system provides options to search, find and visualise the harmonised results from continuous monitoring, field campaigns, numerical modeling and other data in one web client.

  9. METADATA REGISTRY, ISO/IEC 11179

    SciTech Connect

    Pon, R K; Buttler, D J

    2008-01-03

    ISO/IEC 11179 is an international standard that documents the standardization and registration of metadata to make data understandable and shareable. This standardization and registration allows for easier locating, retrieving, and transmitting of data from disparate databases. The standard defines how metadata are conceptually modeled and how they are shared among parties, but does not define how data is physically represented as bits and bytes. The standard consists of six parts. Part 1 provides a high-level overview of the standard and defines the basic element of a metadata registry - a data element. Part 2 defines the procedures for registering classification schemes and classifying administered items in a metadata registry (MDR). Part 3 specifies the structure of an MDR. Part 4 specifies requirements and recommendations for constructing definitions for data and metadata. Part 5 defines how administered items are named and identified. Part 6 defines how administered items are registered and assigned an identifier.

  10. Collection Metadata Solutions for Digital Library Applications

    NASA Technical Reports Server (NTRS)

    Hill, Linda L.; Janee, Greg; Dolin, Ron; Frew, James; Larsgaard, Mary

    1999-01-01

    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemas that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata "registers" the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system.

  11. Incorporating ISO Metadata Using HDF Product Designer

    NASA Technical Reports Server (NTRS)

    Jelenak, Aleksandar; Kozimor, John; Habermann, Ted

    2016-01-01

    The need to store increasing amounts of metadata of varying complexity in HDF5 files is rapidly outgrowing the capabilities of the Earth science metadata conventions currently in use. Until now, data producers have had little choice but to come up with ad hoc solutions to this challenge. Such solutions, in turn, pose a wide range of issues for data managers, distributors, and, ultimately, data users. The HDF Group is experimenting with a novel approach of using ISO 19115 metadata objects as a catch-all container for all the metadata that cannot be fitted into the current Earth science data conventions. This presentation will showcase how the HDF Product Designer software can be utilized to help data producers include various ISO metadata objects in their products.
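
    A minimal sketch of the general idea follows: carrying an ISO 19115 record (serialized as ISO 19139 XML) inside an HDF5 file as a string attribute, so that content which does not fit existing conventions still travels with the data. The group and attribute names are illustrative assumptions, not the layout produced by HDF Product Designer.

```python
# Store and read back an ISO XML metadata string inside an HDF5 file.
import h5py

iso_xml = """<gmi:MI_Metadata xmlns:gmi="http://www.isotc211.org/2005/gmi"
    xmlns:gmd="http://www.isotc211.org/2005/gmd">
  <!-- truncated example ISO 19139 record -->
</gmi:MI_Metadata>"""

with h5py.File("example_product.h5", "w") as f:
    grp = f.create_group("metadata")
    grp.attrs["ISO19115"] = iso_xml          # stored as a variable-length string

with h5py.File("example_product.h5", "r") as f:
    print(f["metadata"].attrs["ISO19115"][:60], "...")
```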

  12. Submillimeter, millimeter, and microwave spectral line catalogue

    NASA Technical Reports Server (NTRS)

    Poynter, R. L.; Pickett, H. M.

    1980-01-01

    A computer accessible catalogue of submillimeter, millimeter, and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 micrometers) is discussed. The catalogue can be used as a planning guide and as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances.

  13. Metadata Standards and Workflow Systems

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2012-12-01

    All modern workflow systems include mechanisms for recording inputs, outputs and processes. These descriptions can include details required to reproduce the workflows exactly and, in some cases, can include virtual images of the hardware and operating system. There are several on-going and emerging standards for representing these detailed workflows including the Open Provenance Model (OPM) and the W3C PROV. At the same time, ISO metadata standards include a simple provenance or lineage model that includes many important elements of workflows. The ISO model could play a critical role in sharing and discovering workflow information for collections and perhaps in recording some details in granules. In order for this goal to be reached, connections between the detailed standards and ISO must be understood and conventions for using them must be developed.

  14. Mouse genetics: Catalogue and scissors

    PubMed Central

    Sung, Young Hoon; Baek, In-Jeoung; Seong, Je Kyung; Kim, Jin-Soo; Lee, Han-Woong

    2012-01-01

    Phenotypic analysis of gene-specific knockout (KO) mice has revolutionized our understanding of in vivo gene functions. As the use of mouse embryonic stem (ES) cells is inevitable for conventional gene targeting, the generation of knockout mice remains a very time-consuming and expensive process. To accelerate the large-scale production and phenotype analyses of KO mice, international efforts have organized global consortia such as the International Knockout Mouse Consortium (IKMC) and the International Mouse Phenotyping Consortium (IMPC), and they are persistently expanding the KO mouse catalogue that is publicly available for researchers studying specific genes of interest in vivo. However, new technologies that adopt zinc-finger nucleases (ZFNs) or Transcription Activator-Like Effector Nucleases (TALENs) to edit the mouse genome are now emerging as valuable and effective alternatives to conventional gene targeting using ES cells. Here, we introduce the recent achievements of the IKMC and evaluate the significance of ZFN/TALEN technology in mouse genetics. [BMB Reports 2012; 45(12): 686-692] PMID:23261053

  15. Handling Metadata in a Neurophysiology Laboratory

    PubMed Central

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G.; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework. PMID:27486397

  16. Handling Metadata in a Neurophysiology Laboratory.

    PubMed

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the applied preprocessing steps on the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often overburden researchers to perform such a detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework. PMID:27486397

  17. Magnitude systems in old star catalogues

    NASA Astrophysics Data System (ADS)

    Fujiwara, Tomoko; Yamaoka, Hitoshi

    2005-06-01

    The current system of stellar magnitudes originally introduced by Hipparchus was strictly defined by Norman Pogson in 1856. He based his system on Ptolemy's star catalogue, the Almagest, recorded in about AD137, and defined the magnitude-intensity relationship on a logarithmic scale. Stellar magnitudes observed with the naked eye recorded in seven old star catalogues were analyzed in order to examine the visual magnitude systems. Although psychophysicists have proposed that human visual sensitivity follows a power-law scale, it is shown here that the degree of agreement is far better for a logarithmic scale than for a power-law scale. It is also found that light ratios in each star catalogue are nearly equal to 2.512, if the brightest (1st magnitude) and the faintest (6th magnitude and dimmer) stars are excluded from the study. This means that the visual magnitudes in the old star catalogues agree fully with Pogson's logarithmic scale.
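
    As a worked numerical companion to the logarithmic scale discussed here: Pogson defined five magnitudes as exactly a factor of 100 in intensity, so one magnitude corresponds to a light ratio of 100^(1/5) ≈ 2.512, the value the old catalogues are compared against. The short check below only restates that definition.

```python
# Numerical check of Pogson's logarithmic magnitude scale.
import math

light_ratio = 100 ** (1 / 5)
print(f"one-magnitude light ratio = {light_ratio:.4f}")    # ~2.5119

def magnitude_difference(i1, i2):
    """Pogson's scale: m1 - m2 = -2.5 * log10(I1 / I2)."""
    return -2.5 * math.log10(i1 / i2)

# A star 100x brighter than another is 5 magnitudes brighter (smaller m).
print(magnitude_difference(100.0, 1.0))                    # -5.0
```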

  18. FRBCAT: The Fast Radio Burst Catalogue

    NASA Astrophysics Data System (ADS)

    Petroff, E.; Barr, E. D.; Jameson, A.; Keane, E. F.; Bailes, M.; Kramer, M.; Morello, V.; Tabbara, D.; van Straten, W.

    2016-09-01

    Here, we present a catalogue of known Fast Radio Burst sources in the form of an online catalogue, FRBCAT. The catalogue includes information about the instrumentation used for the observations of each detected burst, the measured quantities from each observation, and model-dependent quantities derived from observed quantities. To aid in consistent comparisons of burst properties such as width and signal-to-noise ratio, we have re-processed all the bursts for which we have access to the raw data, with software which we make available. The originally derived properties are also listed for comparison. The catalogue is hosted online as a MySQL database which can also be downloaded in tabular or plain text format for off-line use. This database will be maintained for use by the community for studies of the Fast Radio Burst population as it grows.

  19. Submillimeter, millimeter, and microwave spectral line catalogue

    NASA Technical Reports Server (NTRS)

    Poynter, R. L.; Pickett, H. M.

    1981-01-01

    A computer accessible catalogue of submillimeter, millimeter and microwave spectral lines in the frequency range between 0 and 3000 GHz (i.e., wavelengths longer than 100 micrometers) is presented, which can be used as a planning guide or as an aid in the identification and analysis of observed spectral lines. The information listed for each spectral line includes the frequency and its estimated error, the intensity, lower state energy, and quantum number assignment. The catalogue was constructed by using theoretical least squares fits of published spectral lines to accepted molecular models. The associated predictions and their estimated errors are based upon the resultant fitted parameters and their covariances. Future versions of this catalogue will add more atoms and molecules and update the present listings (133 species) as new data appear. The catalogue is available as a magnetic tape recorded in card images and as a set of microfiche records.

  20. The History of Ptolemy's Star Catalogue

    NASA Astrophysics Data System (ADS)

    Graßhoff, Gerd

    Table of contents

    Contents: Introduction.- The Stars of the Almagest.- Accusations.- The Rehabilitation of Ptolemy.- The Analysis of the Star Catalogue.- Structures in Ptolemy's Star Catalogue.- Theory and Observation.- Appendix A.- Stars and Constellations.- Identifications.- Appendix B.- Transformation Formulae.- Column Headings.- Appendix C.- Column Headings.- Literature.- Index.

  1. Replicating vaccines

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Early work on fish immunology and disease resistance demonstrated that fish (like animals and humans) that survived infection were typically resistant to re-infection with the same pathogen. The concept of resistance upon reinfection led to the research and development of replicating (live) vaccines in

  2. eXtended MetaData Registry

    2006-10-25

    The purpose of the eXtended MetaData Registry (XMDR) prototype is to demonstrate the feasibility and utility of constructing an extended metadata registry, i.e., one which encompasses richer classification support, facilities for including terminologies, and better support for formal specification of semantics. The prototype registry will also serve as a reference implementation for the revised versions of ISO 11179, Parts 2 and 3 to help guide production implementations.

  3. Science friction: data, metadata, and collaboration.

    PubMed

    Edwards, Paul N; Mayernik, Matthew S; Batcheller, Archer L; Bowker, Geoffrey C; Borgman, Christine L

    2011-10-01

    When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata--usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets--serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable, descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.

  4. What Metadata Principles Apply to Scientific Data?

    NASA Astrophysics Data System (ADS)

    Mayernik, M. S.

    2014-12-01

    Information researchers and professionals based in the library and information science fields often approach their work through developing and applying defined sets of principles. For example, for over 100 years, the evolution of library cataloging practice has largely been driven by debates (which are still ongoing) about the fundamental principles of cataloging and how those principles should manifest in rules for cataloging. Similarly, the development of archival research and practices over the past century has proceeded hand-in-hand with the emergence of principles of archival arrangement and description, such as maintaining the original order of records and documenting provenance. This project examines principles related to the creation of metadata for scientific data. The presentation will outline: 1) how understandings and implementations of metadata can range broadly depending on the institutional context, and 2) how metadata principles developed by the library and information science community might apply to metadata developments for scientific data. The development and formalization of such principles would contribute to the development of metadata practices and standards in a wide range of institutions, including data repositories, libraries, and research centers. Shared metadata principles would potentially be useful in streamlining data discovery and integration, and would also benefit the growing efforts to formalize data curation education.

  5. Hamilton Jeffers and the Double Star Catalogues

    NASA Astrophysics Data System (ADS)

    Tenn, Joseph S.

    2013-01-01

    Astronomers have long tracked double stars in efforts to find those that are gravitationally-bound binaries and then to determine their orbits. Court reporter and amateur astronomer Shelburne Wesley Burnham (1838-1921) published a massive double star catalogue containing more than 13,000 systems in 1906. The next keeper of the double stars was Lick Observatory astronomer Robert Grant Aitken (1864-1951), who produced a much larger catalogue in 1932. Aitken maintained and expanded Burnham’s records of observations on handwritten file cards, eventually turning them over to Lick Observatory astrometrist Hamilton Moore Jeffers (1893-1976). Jeffers further expanded the collection and put all the observations on punched cards. With the aid of Frances M. "Rete" Greeby (1921-2002), he made two catalogues: an Index Catalogue with basic data about each star, and a complete catalogue of observations, with one observation per punched card. He enlisted Willem van den Bos of Johannesburg to add southern stars, and they published the Index Catalogue of Visual Double Stars, 1961.0. As Jeffers approached retirement he became greatly concerned about the disposition of the catalogues. He wanted to be replaced by another "double star man," but Lick Director Albert E. Whitford (1905-2002) had the new 120-inch reflector, the world’s second largest telescope, and he wanted to pursue modern astrophysics instead. Jeffers was vociferously opposed to turning over the card files to another institution, and especially against their coming under the control of Kaj Strand of the U.S. Naval Observatory. In the end the USNO got the files and has maintained the records ever since, first under Charles Worley (1935-1997), and, since 1997, under Brian Mason. Now called the Washington Double Star Catalog (WDS), it is completely online and currently contains more than 1,000,000 measures of more than 100,000 pairs.

  6. New Generation of Catalogues for the New Generation of Users: A Comparison of Six Library Catalogues

    ERIC Educational Resources Information Center

    Mercun, Tanja; Zumer, Maja

    2008-01-01

    Purpose: The purpose of this paper is to describe some of the problems and issues faced by online library catalogues. It aims to establish how libraries have undertaken the mission of developing the next generation catalogues and how they compare to new tools such as Amazon. Design/methodology/approach: An expert study was carried out in January…

  7. A Revised Earthquake Catalogue for South Iceland

    NASA Astrophysics Data System (ADS)

    Panzera, Francesco; Zechar, J. Douglas; Vogfjörd, Kristín S.; Eberhard, David A. J.

    2016-01-01

    In 1991, a new seismic monitoring network named SIL was started in Iceland with a digital seismic system and automatic operation. The system is equipped with software that reports the automatic location and magnitude of earthquakes, usually within 1-2 min of their occurrence. Normally, automatic locations are manually checked and re-estimated with corrected phase picks, but locations are subject to random errors and systematic biases. In this article, we consider the quality of the catalogue and produce a revised catalogue for South Iceland, the area with the highest seismic risk in Iceland. We explore the effects of filtering events using some common recommendations based on network geometry and station spacing and, as an alternative, filtering based on a multivariate analysis that identifies outliers in the hypocentre error distribution. We identify and remove quarry blasts, and we re-estimate the magnitude of many events. This revised catalogue, which we consider to be filtered, cleaned, and corrected, should be valuable for building future seismicity models and for assessing seismic hazard and risk. We present a comparative seismicity analysis using the original and revised catalogues: we report characteristics of South Iceland seismicity in terms of b value and magnitude of completeness. Our work demonstrates the importance of carefully checking an earthquake catalogue before proceeding with seismicity analysis.
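
    For readers unfamiliar with the quantities mentioned at the end of this abstract, the sketch below illustrates, on synthetic magnitudes, the Gutenberg-Richter b value via the Aki (1965) maximum-likelihood estimator b = log10(e) / (mean(M) - Mc), ignoring the magnitude-binning correction. This is a generic illustration, not the study's exact procedure.

```python
# Maximum-likelihood b-value estimate on synthetic Gutenberg-Richter magnitudes.
import numpy as np

rng = np.random.default_rng(0)
mc_true, b_true = 1.0, 1.0
# Magnitudes above completeness follow an exponential distribution whose
# mean excess is log10(e) / b.
mags = mc_true + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)

mc = 1.0                                   # assume completeness magnitude is known
m = mags[mags >= mc]
b_ml = np.log10(np.e) / (m.mean() - mc)
print(f"maximum-likelihood b value ~= {b_ml:.2f}")   # close to 1.0
```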

  8. A Quantitative Categorical Analysis of Metadata Elements in Image-Applicable Metadata Schemas.

    ERIC Educational Resources Information Center

    Greenberg, Jane

    2001-01-01

    Reports on a quantitative categorical analysis of metadata elements in the Dublin Core, VRA (Visual Resource Association) Core, REACH (Record Export for Art and Cultural Heritage), and EAD (Encoded Archival Description) metadata schemas, all of which can be used for organizing and describing images. Introduces a new schema comparison methodology…

  9. Streamlining geospatial metadata in the Semantic Web

    NASA Astrophysics Data System (ADS)

    Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola

    2016-04-01

    In the geospatial realm, data annotation and discovery rely on a number of ad-hoc formats and protocols. These have been created to enable domain-specific use cases for which generalized search is not feasible. Metadata are at the heart of the discovery process; nevertheless, they are often neglected or encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Linked Open Data (LOD) movement has so far not induced a consistent, interlinked baseline in the geospatial domain. In a nutshell, datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans, duplicated across different systems, and seldom consistent. Instead, our workflow for metadata management envisages i) editing via customizable web-based forms, ii) encoding of records in any XML application profile, iii) translation into RDF (involving the semantic lift of metadata records), and finally iv) storage of the metadata as RDF and back-translation into the original XML format with added semantics-aware features. Phase iii) hinges on relating resource metadata to RDF data structures that represent keywords from code lists and controlled vocabularies, toponyms, researchers, institutes, and virtually any description one can retrieve (or directly publish) in the LOD Cloud. In the context of a distributed Spatial Data Infrastructure (SDI) built on free and open-source software, we detail phases iii) and iv) of our workflow for the semantics-aware management of geospatial metadata.
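
    The sketch below illustrates the flavour of the "semantic lift" step (phase iii): a few fields parsed from an XML metadata record are re-expressed as RDF triples that link the dataset to shared vocabulary resources. The URIs and record structure are invented placeholders, not an actual SDI's identifiers or application profile.

```python
# Lift fields from a toy XML metadata record into RDF triples.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

xml_record = """<record>
  <identifier>lake-monitoring-2015</identifier>
  <title>Lake monitoring time series 2015</title>
  <keyword>water temperature</keyword>
</record>"""

root = ET.fromstring(xml_record)
dataset = URIRef("https://sdi.example.org/dataset/" + root.findtext("identifier"))

g = Graph()
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal(root.findtext("title"))))
g.add((dataset, DCAT.keyword, Literal(root.findtext("keyword"))))
# Link the free-text keyword to a controlled-vocabulary concept (placeholder URI).
g.add((dataset, DCTERMS.subject,
       URIRef("https://vocab.example.org/concepts/waterTemperature")))

print(g.serialize(format="turtle"))
```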

  10. Metadata aided run selection at ATLAS

    NASA Astrophysics Data System (ADS)

    Buckingham, R. M.; Gallas, E. J.; C-L Tseng, J.; Viegas, F.; Vinek, E.; ATLAS Collaboration

    2011-12-01

    Management of the large volume of data collected by any large-scale scientific experiment requires the collection of coherent metadata quantities, which can be used by reconstruction or analysis programs and/or user interfaces, to pinpoint collections of data needed for specific purposes. In the ATLAS experiment at the LHC, we have collected metadata from systems storing non-event-wise data (Conditions) into a relational database. The Conditions metadata (COMA) database tables not only contain conditions known at the time of event recording, but also allow for the addition of conditions data collected as a result of later analysis of the data (such as improved measurements of beam conditions or assessments of data quality). A new web-based interface called "runBrowser" makes these Conditions Metadata available as a Run-based selection service. runBrowser, based on PHP and JavaScript, uses jQuery to present selection criteria and report results. It not only facilitates data selection by conditions attributes, but also gives the user information at each stage about the relationship between the conditions chosen and the remaining conditions criteria available. When a set of COMA selections is complete, runBrowser produces a human-readable report as well as an XML file in a standardized ATLAS format. This XML can be saved for later use or refinement in a future runBrowser session, shared with physics/detector groups, or used as input to ELSSI (event level Metadata browser) or other ATLAS run or event processing services.
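
    As a rough illustration of the final step, the snippet below writes a list of selected run numbers to a small XML file. The element names and attributes are invented for the example; they do not reproduce the standardized ATLAS format mentioned above.

```python
# Illustrative only: element names are hypothetical, not the ATLAS XML schema.
import xml.etree.ElementTree as ET

selected_runs = [152166, 152214, 152221]        # runs that passed the selection

root = ET.Element("RunSelection", criteria="stable_beams AND solenoid_on")
for run in selected_runs:
    ET.SubElement(root, "Run", number=str(run))

ET.ElementTree(root).write("run_selection.xml", xml_declaration=True, encoding="utf-8")
print(ET.tostring(root, encoding="unicode"))
```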

  11. Mercury-metadata data management system

    SciTech Connect

    2008-01-03

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source software and software developed at Oak Ridge National Laboratory. It was originally developed for NASA, USGS, and DOE. A major new version of Mercury (version 3.0) was developed during 2007 and released in early 2008. This Mercury 3.0 version provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS delivery of search results, and ready customization to meet the needs of the multiple projects which use Mercury. For the end users, Mercury provides a single portal to very quickly search for data and information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data.
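
    The harvesting-and-indexing pattern described here can be pictured with a very small sketch: records pulled from several providers are merged into one keyword index that answers fielded queries. Provider names and records below are invented; this is not Mercury's implementation.

```python
# Illustrative federated index: invented providers and records.
providers = {
    "provider-a": [{"id": "A1", "title": "Soil moisture over boreal forest"}],
    "provider-b": [{"id": "B7", "title": "Aerosol optical depth, global"}],
}

index = {}                       # term -> list of (provider, record id)
for provider, records in providers.items():
    for rec in records:
        for term in rec["title"].lower().replace(",", "").split():
            index.setdefault(term, []).append((provider, rec["id"]))

print(index.get("soil"))         # keyword lookup across providers
```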

  13. Operational Support for Instrument Stability through ODI-PPA Metadata Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.; Kotulla, R.; Harbeck, D.; Liu, W.

    2015-09-01

    Over long time scales, quality assurance metrics taken from calibration and calibrated data products can aid observatory operations in quantifying the performance and stability of the instrument, and identify potential areas of concern or guide troubleshooting and engineering efforts. Such methods traditionally require manual SQL entries, assuming the requisite metadata has even been ingested into a database. With the ODI-PPA system, QA metadata has been harvested and indexed for all data products produced over the life of the instrument. In this paper we will describe how, utilizing the industry standard Highcharts Javascript charting package with a customized AngularJS-driven user interface, we have made the process of visualizing the long-term behavior of these QA metadata simple and easily replicated. Operators can easily craft a custom query using the powerful and flexible ODI-PPA search interface and visualize the associated metadata in a variety of ways. These customized visualizations can be bookmarked, shared, or embedded externally, and will be dynamically updated as new data products enter the system, enabling operators to monitor the long-term health of their instrument with ease.

  15. Omics Metadata Management Software v. 1 (OMMS)

    SciTech Connect

    2013-09-09

    Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and to perform bioinformatics analyses and information management tasks via a simple and intuitive web-based interface. Several use cases with short-read sequence datasets are provided to showcase the full functionality of the OMMS, from metadata curation tasks, to bioinformatics analyses and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed research teams. Our software was developed with open-source bundles, is flexible, extensible and easily installed and run by operators with general system administration and scripting language literacy.

  16. Mining scientific data archives through metadata generation

    SciTech Connect

    Springmeyer, R.; Werner, N.; Long, J.

    1997-04-01

    Data analysis and management tools typically have not supported the documenting of data, so scientists must manually maintain all information pertaining to the context and history of their work. This metadata is critical to effective retrieval and use of the masses of archived data, yet little of it exists on-line or in an accessible format. Exploration of archived legacy data typically proceeds as a laborious process, using commands to navigate through file structures on several machines. This file-at-a-time approach needs to be replaced with a model that represents data as collections of interrelated objects. The tools that support this model must focus attention on data while hiding the complexity of the computational environment. This problem was addressed by developing a tool for exploring large amounts of data in UNIX directories via automatic generation of metadata summaries. This paper describes the model for metadata summaries of collections and the Data Miner tool for interactively traversing directories and automatically generating metadata that serves as a quick overview and index to the archived data. The summaries include thumbnail images as well as links to the data, related directories, and other metadata. Users may personalize the metadata by adding a title and abstract to the summary, which is presented as an HTML page viewed with a World Wide Web browser. We have designed summaries for 3 types of collections of data: contents of a single directory; virtual directories that represent relations between scattered files; and groups of related calculation files. By focusing on the scientists' view of the data mining task, we have developed techniques that assist in the "detective work" of mining without requiring knowledge of mundane details about formats and commands. Experiences in working with scientists to design these tools are recounted.
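
    A minimal sketch of the general approach, assuming nothing beyond the Python standard library: walk a directory tree, collect basic per-file metadata, and write an HTML summary page that links back to the archived files. This is an illustration of the idea, not the Data Miner tool itself; thumbnail generation and user-supplied titles and abstracts are omitted.

```python
# Illustrative directory-summary generator; not the original Data Miner tool.
import html
import time
from pathlib import Path

def summarize(directory, out_html="summary.html"):
    rows = []
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file():
            st = path.stat()
            rows.append((str(path), st.st_size,
                         time.strftime("%Y-%m-%d", time.localtime(st.st_mtime))))
    with open(out_html, "w") as fh:
        fh.write("<html><body><h1>Metadata summary</h1><table border='1'>\n")
        fh.write("<tr><th>File</th><th>Bytes</th><th>Modified</th></tr>\n")
        for name, size, mtime in rows:
            link = html.escape(name)
            fh.write(f"<tr><td><a href='{link}'>{link}</a></td>"
                     f"<td>{size}</td><td>{mtime}</td></tr>\n")
        fh.write("</table></body></html>\n")

summarize(".")   # index the current directory
```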

  17. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in an HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, the key contacts and responsibilities. The DMP has fields that can be exported to the ISO19115 schema and to the collection level catalogue of GeoNetwork. The subset or file level metadata catalogues are linked with the collection level through parent-child relationship definition using UUID. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data
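
    The parent-child linking between collection-level and file-level records described above can be sketched very simply: the collection record carries a UUID and every subset or file-level record stores that UUID as its parent. The field names and paths below are illustrative, not the NCI schema.

```python
# Illustrative parent-child UUID linking; invented field names and paths.
import uuid

collection = {
    "uuid": str(uuid.uuid4()),
    "title": "Earth system model output (example collection)",
    "metadata_level": "collection",
}

granules = [
    {
        "uuid": str(uuid.uuid4()),
        "parent_uuid": collection["uuid"],   # child points back to the collection
        "file": f"/data/example/output_{i:03d}.nc",
        "metadata_level": "dataset",
    }
    for i in range(3)
]

for g in granules:
    assert g["parent_uuid"] == collection["uuid"]
print(collection["uuid"], [g["file"] for g in granules])
```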

  18. Crustal Dynamics Project: Catalogue of site information

    NASA Technical Reports Server (NTRS)

    1985-01-01

    This document represents a catalogue of site information for the Crustal Dynamics Project. It contains information and descriptions of those sites used by the Project as observing stations for making the precise geodetic measurements useful for studies of the Earth's crustal movements and deformation.

  19. Metadata and data models in the WMO Information System

    NASA Astrophysics Data System (ADS)

    Tandy, Jeremy; Woolf, Andrew; Foreman, Steve; Thomas, David

    2013-04-01

    It is fifty years since the inauguration of the World Weather Watch, through which the WMO (World Meteorological Organization) has coordinated real time exchange of information between national meteorological and hydrological services. At the heart of the data exchange are standard data formats and a dedicated telecommunications system known as the GTS - the Global Telecommunications System. Weather and climate information are now more complex than in the 1960s, and increasingly the information is being used across traditional disciplines. Although the modern GTS still underpins operational weather forecasting, the WMO Information System (WIS) builds on this to make the information more widely visible and more widely accessible. The architecture of WIS is built around three tiers of information provider. National Centres are responsible for sharing information that is gathered nationally, and also for distributing information to users within their country. Many of these are national weather services, but hydrology and oceanography centres have also been designated by some countries. Data Collection or Production Centres have an international role, either collating information from several countries, or generating information that is international in nature (satellite operators are an example). Global Information System Centres have two prime responsibilities: to exchange information between regions, and to publish the global WIS Discovery Metadata Catalogue so that end users can find out what information is available through the WIS. WIS is designed to allow information to be used outside the operational weather community. This means that it has to use protocols and standards that are in general use. The WIS Discovery Metadata records, for example, are expressed using ISO 19115, and in addition to being accessible through the GISCs they are harvested by GEOSS. Weather data are mainly exchanged in formats managed by WMO, but WMO is using GML and the Open Geospatial

  20. The PASTEL catalogue of stellar parameters

    NASA Astrophysics Data System (ADS)

    Soubiran, C.; Le Campion, J.-F.; Cayrel de Strobel, G.; Caillo, A.

    2010-06-01

    Aims: The PASTEL catalogue is an update of the [Fe/H] catalogue, published in 1997 and 2001. It is a bibliographical compilation of stellar atmospheric parameters providing (T_eff, log g, [Fe/H]) determinations obtained from the analysis of high resolution, high signal-to-noise spectra, carried out with model atmospheres. PASTEL also provides determinations of the single parameter T_eff based on various methods. In the future, it is also intended to provide homogenized atmospheric parameters, elemental abundances, and radial and rotational velocities. A web interface has been created to query the catalogue on elaborate criteria. PASTEL is also distributed through the CDS database and VizieR. Methods: To make it as complete as possible, the main journals have been surveyed, as well as the CDS database, to find relevant publications. The catalogue is regularly updated with new determinations found in the literature. Results: As of February 2010, PASTEL includes 30151 determinations of either T_eff or (T_eff, log g, [Fe/H]) for 16 649 different stars corresponding to 865 bibliographical references. Nearly 6000 stars have a determination of the three parameters (T_eff, log g, [Fe/H]) with a high quality spectroscopic metallicity. The catalogue can be queried through a dedicated web interface at http://pastel.obs.u-bordeaux1.fr/. It is also available in electronic form at the Centre de Données Stellaires in Strasbourg (http://vizier.u-strasbg.fr/viz-bin/VizieR?-source=B/pastel), at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/515/A111

  1. A Comparative Study of the Guo Shoujing Star Catalogue and the Ulugh Beg Star Catalogue

    NASA Astrophysics Data System (ADS)

    Sun, Xiaochun; Yang, Fan; Zhao, Yongheng

    2015-08-01

    The Chinese Star Catalogue by Guo Shoujing (1231-1316) contained equatorial coordinates of 678 stars, more than doubling the number of stars in previous Chinese star catalogues. In the period 1420-1437, using astronomical instruments at Samarkand Observatory, Ulugh Beg (1394-1449) made independent observations and determined the positions of 1018 stars. An analysis of the two star catalogues will show the observational techniques behind them and their accuracies. Both astronomers tried to increase the accuracy of measurement by enlarging the astronomical instruments. The Chinese catalogue gives equatorial coordinates of stars. The coordinates were directly read off the armillary sphere, which was equatorially mounted. Sun Xiaochun (1996) suggested that the data of the extant Guo Shoujing catalogue were actually observed around 1380, at the beginning of the Ming dynasty. The Ulugh Beg catalogue gives ecliptic coordinates of stars. Does this mean they were directly measured using an ecliptic instrument? Using Fourier analysis we discover a 3 arc minute systematic error in the declinations derived from the ecliptic coordinates, suggesting the data might have been first measured equatorially and then converted to ecliptic coordinates, following the Ptolemaic tradition. The 3 arc minute systematic error was caused by the misalignment of the instrument's pole and the celestial north pole. Our comparative study might throw some light on the transmission of astronomical knowledge and techniques between China and Central Asia in medieval times.
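
    Deriving declinations from ecliptic coordinates, as done in the analysis above, uses the standard spherical-trigonometry relation sin δ = sin β cos ε + cos β sin ε sin λ. The sketch below applies it with a modern mean obliquity purely for illustration; the value appropriate for a 15th-century epoch would differ slightly.

```python
# Illustrative conversion from ecliptic (lambda, beta) to declination.
import math

def declination_from_ecliptic(lon_deg, lat_deg, obliquity_deg=23.44):
    lam, beta, eps = map(math.radians, (lon_deg, lat_deg, obliquity_deg))
    sin_dec = (math.sin(beta) * math.cos(eps)
               + math.cos(beta) * math.sin(eps) * math.sin(lam))
    return math.degrees(math.asin(sin_dec))

print(declination_from_ecliptic(45.0, 10.0))   # example star position
```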

  2. 32 CFR 575.6 - Catalogue, United States Military Academy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 3 2014-07-01 2014-07-01 false Catalogue, United States Military Academy. 575.6... ADMISSION TO THE UNITED STATES MILITARY ACADEMY § 575.6 Catalogue, United States Military Academy. The latest edition of the catalogue, United States Military Academy, contains additional...

  3. 32 CFR 575.6 - Catalogue, United States Military Academy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 3 2013-07-01 2013-07-01 false Catalogue, United States Military Academy. 575.6... ADMISSION TO THE UNITED STATES MILITARY ACADEMY § 575.6 Catalogue, United States Military Academy. The latest edition of the catalogue, United States Military Academy, contains additional...

  4. 32 CFR 575.6 - Catalogue, United States Military Academy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 3 2011-07-01 2009-07-01 true Catalogue, United States Military Academy. 575.6... ADMISSION TO THE UNITED STATES MILITARY ACADEMY § 575.6 Catalogue, United States Military Academy. The latest edition of the catalogue, United States Military Academy, contains additional...

  5. 32 CFR 575.6 - Catalogue, United States Military Academy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 3 2012-07-01 2009-07-01 true Catalogue, United States Military Academy. 575.6... ADMISSION TO THE UNITED STATES MILITARY ACADEMY § 575.6 Catalogue, United States Military Academy. The latest edition of the catalogue, United States Military Academy, contains additional...

  6. 32 CFR 575.6 - Catalogue, United States Military Academy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Catalogue, United States Military Academy. 575.6... ADMISSION TO THE UNITED STATES MILITARY ACADEMY § 575.6 Catalogue, United States Military Academy. The latest edition of the catalogue, United States Military Academy, contains additional...

  7. Multimedia Learning Systems Based on IEEE Learning Object Metadata (LOM).

    ERIC Educational Resources Information Center

    Holzinger, Andreas; Kleinberger, Thomas; Muller, Paul

    One of the "hottest" topics in recent information systems and computer science is metadata. Learning Object Metadata (LOM) appears to be a very powerful mechanism for representing metadata, because of the great variety of LOM Objects. This is one of the reasons why the LOM standard is repeatedly cited in projects in the field of eLearning Systems.…

  8. Enhancing SCORM Metadata for Assessment Authoring in E-Learning

    ERIC Educational Resources Information Center

    Chang, Wen-Chih; Hsu, Hui-Huang; Smith, Timothy K.; Wang, Chun-Chia

    2004-01-01

    With the rapid development of distance learning and the XML technology, metadata play an important role in e-Learning. Nowadays, many distance learning standards, such as SCORM, AICC CMI, IEEE LTSC LOM and IMS, use metadata to tag learning materials. However, most metadata models are used to define learning materials and test problems. Few…

  9. Progress in defining a standard for file-level metadata

    NASA Technical Reports Server (NTRS)

    Williams, Joel; Kobler, Ben

    1996-01-01

    In the following narrative, metadata required to locate a file on tape or collection of tapes will be referred to as file-level metadata. This paper describes the rationale for and the history of the effort to define a standard for this metadata.

  10. MCM generator: a Java-based tool for generating medical metadata.

    PubMed

    Munoz, F; Hersh, W

    1998-01-01

    In a previous paper we introduced the need to implement a mechanism to facilitate the discovery of relevant Web medical documents. We maintained that the use of META tags, specifically ones that define the medical subject and resource type of a document, help towards this goal. We have now developed a tool to facilitate the generation of these tags for the authors of medical documents. Written entirely in Java, this tool makes use of the SAPHIRE server, and helps the author identify the Medical Subject Heading terms that most appropriately describe the subject of the document. Furthermore, it allows the author to generate metadata tags for the 15 elements that the Dublin Core considers as core elements in the description of a document. This paper describes the use of this tool in the cataloguing of Web and non-Web medical documents, such as images, movie, and sound files.
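
    A minimal sketch of the kind of output such a tool produces: HTML META tags following the Dublin Core convention for the fifteen core elements, with a MeSH-style term as the subject. The helper below is invented for illustration; it is not the MCM generator itself.

```python
# Illustrative Dublin Core META tag generation; not the MCM generator.
DC_ELEMENTS = ["title", "creator", "subject", "description", "publisher",
               "contributor", "date", "type", "format", "identifier",
               "source", "language", "relation", "coverage", "rights"]

def dublin_core_meta(values):
    lines = ['<link rel="schema.DC" href="http://purl.org/dc/elements/1.1/">']
    for element in DC_ELEMENTS:
        if element in values:
            lines.append(f'<meta name="DC.{element}" content="{values[element]}">')
    return "\n".join(lines)

print(dublin_core_meta({
    "title": "Chest radiograph, PA view",
    "subject": "Radiography, Thoracic",   # e.g. a MeSH heading
    "type": "Image",
}))
```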

  12. The International Learning Object Metadata Survey

    ERIC Educational Resources Information Center

    Friesen, Norm

    2004-01-01

    A wide range of projects and organizations is currently making digital learning resources (learning objects) available to instructors, students, and designers via systematic, standards-based infrastructures. One standard that is central to many of these efforts and infrastructures is known as Learning Object Metadata (IEEE 1484.12.1-2002, or LOM).…

  13. Tracking Actual Usage: The Attention Metadata Approach

    ERIC Educational Resources Information Center

    Wolpers, Martin; Najjar, Jehad; Verbert, Katrien; Duval, Erik

    2007-01-01

    The information overload in learning and teaching scenarios is a main hindering factor for efficient and effective learning. New methods are needed to help teachers and students in dealing with the vast amount of available information and learning material. Our approach aims to utilize contextualized attention metadata to capture behavioural…

  14. DIRAC File Replica and Metadata Catalog

    NASA Astrophysics Data System (ADS)

    Tsaregorodtsev, A.; Poss, S.

    2012-12-01

    File replica and metadata catalogs are essential parts of any distributed data management system, and they largely determine its functionality and performance. A new File Catalog (DFC) was developed in the framework of the DIRAC Project that combines both replica and metadata catalog functionality. The DFC design is based on the practical experience with the data management system of the LHCb Collaboration. It is optimized for the most common patterns of catalog usage in order to achieve maximum performance from the user perspective. The DFC supports bulk operations for replica queries and allows quick analysis of the storage usage globally and for each Storage Element separately. It supports flexible ACL rules with plug-ins for various policies that can be adopted by a particular community. The DFC catalog allows storing various types of metadata associated with files and directories and performing efficient queries for the data based on complex metadata combinations. Definition of file ancestor-descendant relation chains is also possible. The DFC catalog is implemented in the general DIRAC distributed computing framework following the standard grid security architecture. In this paper we describe the design of the DFC and its implementation details. The performance measurements are compared with other grid file catalog implementations. The experience of DFC usage in the CLIC detector project is discussed.
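
    The kind of metadata query the DFC is described as supporting, combining equality and range constraints over file metadata, can be illustrated with a small in-memory example. The data, field names, and query helper below are invented; this is not the DIRAC client API.

```python
# Illustrative metadata query over an in-memory "catalog"; not DIRAC code.
catalog = [
    {"lfn": "/clic/prod/evt_001.slcio", "energy_gev": 350, "detector": "CLIC_ILD", "events": 10000},
    {"lfn": "/clic/prod/evt_002.slcio", "energy_gev": 500, "detector": "CLIC_ILD", "events": 20000},
    {"lfn": "/clic/prod/evt_003.slcio", "energy_gev": 500, "detector": "CLIC_SiD", "events": 15000},
]

def find_files(entries, equals=None, at_least=None):
    equals, at_least = equals or {}, at_least or {}
    hits = []
    for entry in entries:
        if all(entry.get(k) == v for k, v in equals.items()) and \
           all(entry.get(k, float("-inf")) >= v for k, v in at_least.items()):
            hits.append(entry["lfn"])
    return hits

print(find_files(catalog, equals={"detector": "CLIC_ILD"}, at_least={"energy_gev": 500}))
```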

  15. Digital Preservation and Metadata: History, Theory, Practice.

    ERIC Educational Resources Information Center

    Lazinger, Susan S.

    This book addresses critical issues of digital preservation, providing guidelines for protecting resources, from dealing with obsolescence to responsibilities, methods of preservation, cost, and metadata formats. It also presents numerous national and international institutions that provide frameworks for digital libraries and archives. The first…

  16. Metadata management and semantics in microarray repositories.

    PubMed

    Kocabaş, F; Can, T; Baykal, N

    2011-12-01

    The number of microarray and other high-throughput experiments on primary repositories keeps increasing, as do the size and complexity of the results in response to biomedical investigations. Initiatives have been started on standardization of content, object model, exchange format and ontology. However, there are backlogs and an inability to exchange data between microarray repositories, which indicate that there is a great need for a standard format and data management. We have introduced a metadata framework that includes a metadata card and semantic nets that make experimental results visible, understandable and usable. These are encoded in syntax encoding schemes and represented in RDF (Resource Description Framework), can be integrated with other metadata cards and semantic nets, and can be exchanged, shared and queried. We demonstrated the performance and potential benefits through a case study on a selected microarray repository. We concluded that the backlogs can be reduced and that exchange of information and asking of knowledge discovery questions can become possible with the use of this metadata framework. PMID:24052712

  17. A Rich Metadata Filesystem for Scientific Data

    ERIC Educational Resources Information Center

    Bui, Hoang

    2012-01-01

    As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…

  18. Seismic Catalogue and Seismic Network in Haiti

    NASA Astrophysics Data System (ADS)

    Belizaire, D.; Benito, B.; Carreño, E.; Meneses, C.; Huerfano, V.; Polanco, E.; McCormack, D.

    2013-05-01

    The destructive earthquake that occurred on January 10, 2010 in Haiti highlighted the lack of preparedness of the country to address seismic phenomena. At the moment of the earthquake, there was no seismic network operating in the country, and only a partial control of the past seismicity was possible, due to the absence of a national catalogue. After the 2010 earthquake, some advances began towards the installation of a national network and the elaboration of a seismic catalogue providing the necessary input for seismic hazard studies. This paper presents the state of the work carried out covering both aspects. First, a seismic catalogue has been built, compiling data of historical and instrumental events that occurred on Hispaniola Island and its surroundings, in the frame of the SISMO-HAITI project, supported by the Technical University of Madrid (UPM) and developed in cooperation with the Observatoire National de l'Environnement et de la Vulnérabilité of Haiti (ONEV). Data from different agencies all over the world were gathered; the role of the Dominican Republic and Puerto Rico seismological services, which provided local data from their national networks, was particularly relevant. Almost 30000 events recorded in the area from 1551 till 2011 were compiled in a first catalogue, among them 7700 events with Mw ranging between 4.0 and 8.3. Since different magnitude scales were given by the different agencies (Ms, mb, MD, ML), this first catalogue was affected by important heterogeneity in the size parameter. It was then homogenized to moment magnitude Mw using the empirical equations developed by Bonzoni et al (2011) for the eastern Caribbean. At present, this is the most exhaustive catalogue of the country, although it is difficult to assess its degree of completeness. Regarding the seismic network, 3 stations were installed just after the 2010 earthquake by the Canadian Government. The data were sent by telemetry through the Canadian system CARINA. In 2012, the Spanish IGN together
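
    The homogenisation step mentioned above maps magnitudes reported on different scales onto moment magnitude Mw, typically through linear conversion relations fitted by regression. The sketch below shows the mechanics with placeholder coefficients; they are not the Bonzoni et al. (2011) relations used for the actual catalogue.

```python
# Placeholder magnitude-scale conversions; coefficients are illustrative only.
CONVERSIONS = {
    "Mw": lambda m: m,                 # already moment magnitude
    "Ms": lambda m: 0.90 * m + 0.60,   # placeholder a*Ms + b
    "mb": lambda m: 1.10 * m - 0.40,   # placeholder a*mb + b
}

def homogenize(events):
    """events: list of (magnitude, scale) tuples -> list of Mw values."""
    return [CONVERSIONS[scale](mag) for mag, scale in events]

print(homogenize([(6.1, "Ms"), (5.0, "mb"), (7.0, "Mw")]))
```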

  19. Metadata for WIS and WIGOS: GAW Profile of ISO19115 and Draft WIGOS Core Metadata Standard

    NASA Astrophysics Data System (ADS)

    Klausen, Jörg; Howe, Brian

    2014-05-01

    The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority to underpin all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). The development of the WIGOS Operational Information Resource (WIR) is central to the WIGOS Framework Implementation Plan (WIGOS-IP). The WIR shall provide information on WIGOS and its observing components, as well as requirements of WMO application areas. An important aspect is the description of the observational capabilities by way of structured metadata. The Global Atmosphere Watch is the WMO program addressing the chemical composition and selected physical properties of the atmosphere. Observational data are collected and archived by GAW World Data Centres (WDCs) and related data centres. The Task Team on GAW WDCs (ET-WDC) has developed a profile of the ISO19115 metadata standard that is compliant with the WMO Information System (WIS) specification for the WMO Core Metadata Profile v1.3. This profile is intended to harmonize certain aspects of the documentation of observations as well as the interoperability of the WDCs. The Inter-Commission-Group on WIGOS (ICG-WIGOS) has established the Task Team on WIGOS Metadata (TT-WMD) with representation from all WMO Technical Commissions and the objective of defining the WIGOS Core Metadata. The result of this effort is a draft semantic standard comprising a set of metadata classes that are considered to be of critical importance for the interpretation of observations relevant to WIGOS. The purpose of the presentation is to acquaint the audience with the standard and to solicit informal feedback from experts in the various disciplines of meteorology and climatology. This feedback will help ET-WDC and TT-WMD to refine the GAW metadata profile and the draft WIGOS metadata standard, thereby increasing their utility and acceptance.

  20. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    ERIC Educational Resources Information Center

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  1. Searchable Solar Feature Catalogues in EGSO

    NASA Astrophysics Data System (ADS)

    Zharkova, V. V.; Aboudarham, J.; Zharkov, S. I.; Ipson, S. S.; Benkhalil, A. K.; Fuller, N.

    2004-12-01

    The searchable Solar Feature Catalogues (SFC) developed using automated pattern recognition techniques from digitized solar images are presented. The techniques were applied for the detection of sunspots, active regions, filaments and line-of-sight magnetic neutral lines in the automatically standardized full disk solar images in Ca II K1, Ca II K3 and Hα taken at the Meudon Observatory, and in white light images and magnetograms from SOHO/MDI. The results of automated recognition were verified against the manual synoptic maps and available statistical data, which revealed good detection accuracy. Based on the recognized parameters, a structured database of the Solar Feature Catalogues was built on a MySQL server for every feature and published with various pre-designed search pages on the Bradford University web site. The SFC, with 10-year coverage (1996-2005), is to be used for deeper investigation of solar activity, activity feature classification and forecasting.

  2. Cosmic web reconstruction through density ridges: catalogue

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Chi; Ho, Shirley; Brinkmann, Jon; Freeman, Peter E.; Genovese, Christopher R.; Schneider, Donald P.; Wasserman, Larry

    2016-10-01

    We construct a catalogue for filaments using a novel approach called SCMS (subspace constrained mean shift). SCMS is a gradient-based method that detects filaments through density ridges (smooth curves tracing high-density regions). A great advantage of SCMS is its uncertainty measure, which allows an evaluation of the errors for the detected filaments. To detect filaments, we use data from the Sloan Digital Sky Survey, which consist of three galaxy samples: the NYU main galaxy sample (MGS), the LOWZ sample and the CMASS sample. Each of the three data sets covers a different redshift region, so that the combined sample allows detection of filaments up to z = 0.7. Our filament catalogue consists of a sequence of two-dimensional filament maps at different redshifts that provide several useful statistics on the evolution of the cosmic web. To construct the maps, we select spectroscopically confirmed galaxies within 0.050 < z < 0.700 and partition them into 130 bins. For each bin, we ignore the redshift, treating the galaxy observations as 2-D data, and detect filaments using SCMS. The filament catalogue consists of 130 individual 2-D filament maps, and each map comprises points on the detected filaments that describe the filamentary structures at a particular redshift. We also apply our filament catalogue to investigate galaxy luminosity and its relation to distance from filaments. Using a volume-limited sample, we find strong evidence (6.1σ-12.3σ) that galaxies close to filaments are generally brighter than those at a significant distance from filaments.
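
    A compact sketch of the SCMS idea on 2-D points is given below: each walker follows the kernel-density mean-shift step, but the step is projected onto the eigenvector of the log-density Hessian with the smallest eigenvalue, so the walkers converge onto density ridges rather than local maxima. The bandwidth, iteration count and toy data are illustrative; this is not the pipeline used for the catalogue.

```python
# Toy subspace constrained mean shift (SCMS) on 2-D points; illustrative only.
import numpy as np

def scms(data, walkers, h=0.5, iters=50):
    data = np.asarray(data, dtype=float)
    x = np.asarray(walkers, dtype=float).copy()
    for _ in range(iters):
        for j in range(len(x)):
            d = data - x[j]                                   # offsets to all points
            w = np.exp(-0.5 * (d ** 2).sum(axis=1) / h ** 2)  # Gaussian kernel weights
            p = w.sum()
            grad = (w[:, None] * d).sum(axis=0) / h ** 2
            hess = (w[:, None, None] * (d[:, :, None] * d[:, None, :] / h ** 2
                    - np.eye(2))).sum(axis=0) / h ** 2
            hess_log = hess / p - np.outer(grad, grad) / p ** 2
            _, eigvec = np.linalg.eigh(hess_log)              # ascending eigenvalues
            v = eigvec[:, :1]                                 # direction across the ridge
            shift = (w[:, None] * data).sum(axis=0) / p - x[j]
            x[j] += (v @ v.T) @ shift                         # constrained mean-shift step
    return x

rng = np.random.default_rng(1)
points = np.c_[rng.uniform(0, 10, 400), rng.normal(0, 0.3, 400)]  # noisy line near y = 0
print(scms(points, points[::40], h=0.7, iters=30)[:3])            # walkers end up near y = 0
```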

  3. Remapping simulated halo catalogues in redshift space

    NASA Astrophysics Data System (ADS)

    Mead, A. J.; Peacock, J. A.

    2014-12-01

    We discuss the extension to redshift space of a rescaling algorithm, designed to alter the effective cosmology of a pre-existing simulated particle distribution or catalogue of dark matter haloes. The rescaling approach was initially developed by Angulo & White and was adapted and applied to halo catalogues in real space in our previous work. This algorithm requires no information other than the initial and target cosmological parameters, and it contains no tuned parameters. It is shown here that the rescaling method also works well in redshift space, and that the rescaled simulations can reproduce the growth rate of cosmological density fluctuations appropriate for the target cosmology. Even when rescaling a grossly non-standard model with Λ = 0 and zero baryons, the redshift-space power spectrum of standard Λ cold dark matter can be reproduced to about 5 per cent error for k < 0.2 h Mpc^-1. The ratio of quadrupole-to-monopole power spectra remains correct to the same tolerance up to k = 1 h Mpc^-1, provided that the input halo catalogue contains measured internal velocity dispersions.

  4. The star catalogue of Hevelius. Machine-readable version and comparison with the modern Hipparcos Catalogue

    NASA Astrophysics Data System (ADS)

    Verbunt, F.; van Gent, R. H.

    2010-06-01

    The catalogue by Johannes Hevelius with the positions and magnitudes of 1564 entries was published by his wife Elisabeth Koopman in 1690. We provide a machine-readable version of the catalogue, and briefly discuss its accuracy on the basis of comparison with data from the modern Hipparcos Catalogue. We compare our results with an earlier analysis by Rybka (1984), finding good overall agreement. The magnitudes given by Hevelius correlate well with modern values. The accuracy of his position measurements is similar to that of Brahe, with σ = 2´ for longitudes and latitudes, but with more errors >5´ than expected for a Gaussian distribution. The position accuracy decreases slowly with magnitude. The fraction of stars with position errors larger than a degree is 1.5%, rather smaller than the fraction of 5% in the star catalogue of Brahe. The star catalogue of Hevelius is available only in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/516/A29

  5. GEO Label Web Services for Dynamic and Effective Communication of Geospatial Metadata Quality

    NASA Astrophysics Data System (ADS)

    Lush, Victoria; Nüst, Daniel; Bastin, Lucy; Masó, Joan; Lumsden, Jo

    2014-05-01

    -like label, which are coloured according to metadata availability and are clickable to allow a user to engage with the original metadata and explore specific aspects in more detail. To support this graphical representation and allow for wider deployment architectures we have implemented two Web services, a PHP and a Java implementation, that generate GEO label representations by combining producer metadata (from standard catalogues or other published locations) with structured user feedback. Both services accept encoded URLs of publicly available metadata documents or metadata XML files as HTTP POST and GET requests and apply XPath and XSLT mappings to transform producer and feedback XML documents into clickable SVG GEO label representations. The label and services are underpinned by two XML-based quality models. The first is a producer model that extends ISO 19115 and 19157 to allow fuller citation of reference data, presentation of pixel- and dataset- level statistical quality information, and encoding of 'traceability' information on the lineage of an actual quality assessment. The second is a user quality model (realised as a feedback server and client) which allows reporting and query of ratings, usage reports, citations, comments and other domain knowledge. Both services are Open Source and are available on GitHub at https://github.com/lushv/geolabel-service and https://github.com/52North/GEO-label-java. The functionality of these services can be tested using our GEO label generation demos, available online at http://www.geolabel.net/demo.html and http://geoviqua.dev.52north.org/glbservice/index.jsf.
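
    The transformation step described above, mapping producer metadata XML onto a clickable SVG representation via XSLT, can be illustrated with a toy stylesheet, assuming the lxml package. The record and stylesheet below are invented; they do not reproduce the GEO label facet model or the actual service mappings.

```python
# Toy XML-to-SVG transformation with XSLT; element names are hypothetical.
from lxml import etree

metadata = etree.fromstring(
    "<record><title>Land cover 2020</title><lineage>present</lineage></record>")

stylesheet = etree.fromstring("""\
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:svg="http://www.w3.org/2000/svg">
  <xsl:template match="/record">
    <svg:svg width="120" height="40">
      <svg:text x="5" y="20"><xsl:value-of select="title"/></svg:text>
      <!-- colour a facet green when the lineage element is present -->
      <svg:circle cx="100" cy="20" r="10">
        <xsl:attribute name="fill">
          <xsl:choose>
            <xsl:when test="lineage">green</xsl:when>
            <xsl:otherwise>grey</xsl:otherwise>
          </xsl:choose>
        </xsl:attribute>
      </svg:circle>
    </svg:svg>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
print(str(transform(metadata)))
```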

  6. New Pulkovo combined catalogues of the radio source positions.

    NASA Astrophysics Data System (ADS)

    Sokolova, Yulia; Malkin, Zinovy

    2012-08-01

    Catalogues of radio source positions (RSC) obtained from Very Long Baseline Interferometry (VLBI) observations have served as realizations of the IAU International Celestial Reference System (ICRS) since 1998. As the volume and accuracy of VLBI observations increase with time, the development of advanced methods for RSC construction is a topical problem for realizing the full potential of VLBI technology. This task is becoming more and more important in view of the realization, expected in the coming years, of the VLBI2010 network and of the highly accurate GAIA catalogue of optical positions of extragalactic objects. One of the commonly used methods of improving the accuracy of source position catalogues is the construction of a combined catalogue. In this paper, we present the new Pulkovo combined catalogues PUL(2012)C01 and PUL(2012)C02, which have been constructed mainly following the strategy developed by Sokolova & Malkin (2007, A&A, 474, 665). Besides using more data, several developments were realized, such as an improved method for determining the optimal number of expansion terms, a more careful investigation of the stochastic errors of the input catalogues, an improved weighting scheme, and additional tests of the quality of the individual and combined RSC. The PUL(2012)C01 catalogue is aimed at the stochastic improvement of the ICRF2; the PUL(2012)C02 catalogue is constructed in an independent system. Results of the comparison of our combined catalogues with individual catalogues and ICRF2 are presented.

  7. PIMMS tools for capturing metadata about simulations

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Devine, Gerard; Tourte, Gregory; Pascoe, Stephen; Lawrence, Bryan; Barjat, Hannah

    2013-04-01

    PIMMS (Portable Infrastructure for the Metafor Metadata System) provides a method for consistent and comprehensive documentation of modelling activities that enables the sharing of simulation data and model configuration information. The aim of PIMMS is to package the metadata infrastructure developed by Metafor for CMIP5 so that it can be used by climate modelling groups in UK Universities. PIMMS tools capture information about simulations from the design of experiments to the implementation of experiments via simulations that run models. PIMMS uses the Metafor methodology which consists of a Common Information Model (CIM), Controlled Vocabularies (CV) and software tools. PIMMS software tools provide for the creation and consumption of CIM content via a web services infrastructure and portal developed by the ES-DOC community. PIMMS metadata integrates with the ESGF data infrastructure via the mapping of vocabularies onto ESGF facets. There are three paradigms of PIMMS metadata collection: Model Intercomparison Projects (MIPs) where a standard set of questions is asked of all models which perform standard sets of experiments. Disciplinary level metadata collection where a standard set of questions is asked of all models but experiments are specified by users. Bespoke metadata creation where the users define questions about both models and experiments. Examples will be shown of how PIMMS has been configured to suit each of these three paradigms. In each case PIMMS allows users to provide additional metadata beyond that which is asked for in an initial deployment. The primary target for PIMMS is the UK climate modelling community where it is common practice to reuse model configurations from other researchers. This culture of collaboration exists in part because climate models are very complex with many variables that can be modified. Therefore it has become common practice to begin a series of experiments by using another climate model configuration as a starting

  8. HIS Central and the Hydrologic Metadata Catalog

    NASA Astrophysics Data System (ADS)

    Whitenack, T.; Zaslavsky, I.; Valentine, D. W.

    2008-12-01

    The CUAHSI Hydrologic Information System project maintains a comprehensive workflow for publishing hydrologic observations data and registering them to the common Hydrologic Metadata Catalog. Once the data are loaded into a database instance conformant with the CUAHSI HIS Observations Data Model (ODM), the user configures an ODM web service template to point to the new database. After this, the hydrologic data become available via the standard CUAHSI HIS web service interface, which includes both data discovery (GetSites, GetVariables, GetSiteInfo, GetVariableInfo) and data retrieval (GetValues) methods. The observational data can then be further exposed via the global semantics-based search engine called Hydroseek. To register the published observations networks to the global search engine, users can now use the HIS Central application (new in HIS 1.1). With this online application, the WaterML-compliant web services can be submitted to the online catalog of data services, along with network metadata and a desired network symbology. Registering services to the HIS Central application triggers a harvester which uses the services to retrieve additional network metadata from the underlying ODM (information about stations, variables, and periods of record). The next step in the HIS Central application is mapping variable names from the newly registered network to the terms used in the global search ontology. Once these steps are completed, the new observations network is added to the map and becomes available for searching and querying. The number of observation networks registered to the Hydrologic Metadata Catalog at SDSC is constantly growing. At the time of submission, the catalog contains 51 registered networks, with an estimated 1.7 million stations.

  9. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a Medical Center, the emphasis may shift from the generation of images to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of the medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. In addition, it also focuses on image metadata content evaluation and metadata quality management.

  10. Leveraging Metadata to Create Interactive Images... Today!

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.

    2011-01-01

    The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard to create a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates and make it accessible in a user-friendly form on the website, but also embed the same metadata within the image files themselves. Thus, images downloaded from the site will carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
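
    Because AVM tags are carried in an XMP packet embedded in the image file, the descriptive information travels with the file itself. The sketch below locates such a packet by scanning the raw bytes for the xmpmeta envelope; a production reader would use a proper XMP library, and the file name and the tag checked here are hypothetical.

```python
# Crude XMP packet extraction by byte scanning; illustrative only.
def extract_xmp(path):
    data = open(path, "rb").read()
    start = data.find(b"<x:xmpmeta")
    end = data.find(b"</x:xmpmeta>")
    if start == -1 or end == -1:
        return None
    return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", errors="replace")

packet = extract_xmp("spitzer_image.jpg")       # hypothetical AVM-tagged file
if packet:
    print("Spatial.Equinox" in packet, len(packet), "characters of XMP")
```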

  11. Analyzing handwriting biometrics in metadata context

    NASA Astrophysics Data System (ADS)

    Scheidat, Tobias; Wolf, Franziska; Vielhauer, Claus

    2006-02-01

    In this article, methods for user recognition by online handwriting are experimentally analyzed using a combination of demographic data of users in relation to their handwriting habits. Online handwriting as a biometric method is characterized by high variations of characteristics that influence the reliability and security of this method. These variations have not been researched in detail so far. Especially in cross-cultural applications it is urgent to reveal the impact of personal background on security aspects in biometrics. Metadata represent the background of writers by introducing cultural, biological and conditional (changing) aspects like first language, country of origin, gender, handedness, and experiences that influence handwriting and language skills. The goal is the revelation of intercultural impacts on handwriting in order to achieve higher security in biometric systems. In our experiments, in order to achieve relatively high coverage, 48 different handwriting tasks accomplished by 47 users from three countries (Germany, India and Italy) have been investigated with respect to the relations between metadata and biometric recognition performance. For this purpose, hypotheses have been formulated and evaluated using the measurement of well-known recognition error rates from biometrics. The evaluation addressed both system reliability and security threats from skilled forgeries. For the latter purpose, a novel forgery type is introduced, which applies the personal metadata to security aspects and includes new methods of security tests. Finally, in our paper we formulate recommendations for specific user groups and handwriting samples.

  12. Replication fork collapse at replication terminator sequences.

    PubMed

    Bidnenko, Vladimir; Ehrlich, S Dusko; Michel, Bénédicte

    2002-07-15

    Replication fork arrest is a source of genome rearrangements, and the recombinogenic properties of blocked forks are likely to depend on the cause of blockage. Here we study the fate of replication forks blocked at natural replication arrest sites. For this purpose, Escherichia coli replication terminator sequences Ter were placed at ectopic positions on the bacterial chromosome. The resulting strain requires recombinational repair for viability, but replication forks blocked at Ter are not broken. Linear DNA molecules are formed upon arrival of a second round of replication forks that copy the DNA strands of the first blocked forks to the end. A model that accounts for the requirement for homologous recombination for viability in spite of the lack of chromosome breakage is proposed. This work shows that natural and accidental replication arrest sites are processed differently.

  13. The BMW-Chandra Serendipitous Source Catalogue

    NASA Astrophysics Data System (ADS)

    Romano, P.; Campana, S.; Mignani, R. P.; Moretti, A.; Panzera, M. R.; Tagliaferri, G.

    We present the BMW-Chandra Source Catalogue drawn from all Chandra ACIS-I pointed observations with an exposure time in excess of 10 ks public as of March 2003 (136 observations). Using the wavelet detection algorithm developed by Lazzati et al. (1999) and Campana et al. (1999), which can characterize point-like as well as extended sources, we identified 21325 sources which were visually inspected and verified. Among them, 16758 are not associated with the targets of the pointings and are considered certain; they have a 0.5-10 keV absorption corrected flux distribution median of ~7 × 10^-15 erg cm^-2 s^-1. The catalogue consists of source positions, count rates, extensions and relative errors in three energy bands (total, 0.5-7 keV; soft, 0.5-2 keV; and hard band, 2-7 keV), as well as additional information drawn from the headers of the original files. We also extracted source counts in four additional energy bands (0.5-1.0 keV, 1.0-2.0 keV, 2.0-4.0 keV and 4.0-7.0 keV). We compute the sky coverage in the soft and hard bands. The complete catalogue provides a sky coverage in the soft band (0.5-2 keV, S/N = 3) of ~8 deg^2 at a limiting flux of ~10^-13 erg cm^-2 s^-1, and ~2 deg^2 at a limiting flux of ~10^-15 erg cm^-2 s^-1. http://www.merate.mi.astro.it/~xanadu/BMC/bmc_home.html

  14. Associations Between the Ancient Star Catalogues

    NASA Astrophysics Data System (ADS)

    Duke, Dennis W.

    2002-07-01

    There are just two substantial sources of star coordinates preserved for us from antiquity: the star catalogue of Ptolemy's Almagest, and the rising, setting, and culmination phenomena, along with some star declinations and right ascensions, from Hipparchus' Commentary to Aratus. Given the controversy associated with the idea that Ptolemy's catalogue is, in whole or in substantial part, a copy of an earlier but now lost catalogue of Hipparchus, it is of interest to try to establish clear and significant associations, or the lack thereof, between the two sets of ancient star data. There have been two complementary efforts to clarify the possible associations. Vogt used the phenomena and declinations to reconstruct the ecliptical coordinates of some 122 stars in Hipparchus' Commentary that also appear in the Almagest catalogue. Vogt's conclusion was that since his reconstructed coordinates and the Almagest coordinates were, in general, different, Ptolemy did not obtain his data from Hipparchus. Vogt did notice five stars with very similar errors and concluded that Ptolemy probably did copy those from Hipparchus. More recently, however, Grasshoff has pointed out that there are several reasons to doubt Vogt's conclusion. Further, Grasshoff worked in the opposite direction, using the Almagest coordinates to compute the Hipparchan phenomena, and concluded, for two reasons, that the Almagest data and the Commentary data share a common origin. First, there are a number of stars that share large common errors, and it is highly unlikely that these agreements could be coincidental. Second, the correlation coefficients between the various error sets are typically large and statistically significant, and this also suggests a common origin of the two data sets. However, Grasshoff provided no analysis of the correlations to support this second conclusion. In this paper I will (1) analyze the correlations between the errors of the phenomena and the predictions of these phenomena

  15. Streamlining The Exchange of Metadata through the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH)

    NASA Astrophysics Data System (ADS)

    Ritz, S.; Major, G.; Olsen, L.

    2005-12-01

    NASA's Global Change Master Directory (GCMD) has been collaborating and exchanging science data and services metadata with many U.S. and international partners for several years. These exchanges were primarily manual and extremely time consuming. Such exchange methods are no longer practical, because the volume of accessible metadata increases significantly each year. Furthermore, the growing number of accessible metadata repositories has demonstrated the need for a standardized protocol to simplify the harvesting task. The Open Archives Initiative answered the call with the PMH. The Protocol for Metadata Harvesting (PMH), developed through the Open Archives Initiative (OAI) community, is making headway in reducing many barriers to the exchange of metadata. By providing a standardized protocol for retrieving metadata from a networked server, it is possible to harvest metadata content without needing to know the server's database architecture. Streamlining the harvesting process is critical because it will reduce the time it takes for data producers to deliver their metadata to accessible directories. By using a harvester client capable of issuing OAI-PMH queries to an OAI-PMH server, all or portions of an external metadata database can be retrieved in a fast and efficient manner. The GCMD has developed an OAI-PMH compliant metadata harvester client for interoperating with several of its partners. The harvester client and server will be demonstrated. Testing and operational difficulties experienced in this process will also be discussed, along with current partnerships.
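
    A harvesting client of the kind described follows a simple loop: issue a ListRecords request, collect the records in the response, and keep resubmitting the resumptionToken until the repository reports no more pages. The sketch below shows that loop for the standard oai_dc metadata format; the base URL is a placeholder, not a GCMD endpoint.

```python
# Minimal OAI-PMH harvesting loop; the endpoint URL is a placeholder.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
BASE_URL = "https://example.org/oai"            # placeholder repository endpoint

def harvest(base_url, metadata_prefix="oai_dc"):
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        for record in root.iter(OAI_NS + "record"):
            yield record
        token = root.find(f".//{OAI_NS}resumptionToken")
        if token is None or not (token.text or "").strip():
            break                                # last page reached
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# for rec in harvest(BASE_URL): process each harvested record element
```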

  16. Broad Absorption Line Quasar catalogues with Supervised Neural Networks

    SciTech Connect

    Scaringi, Simone; Knigge, Christian; Cottis, Christopher E.; Goad, Michael R.

    2008-12-05

    We have applied a Learning Vector Quantization (LVQ) algorithm to SDSS DR5 quasar spectra in order to create a large catalogue of broad absorption line quasars (BALQSOs). We first discuss the problems with BALQSO catalogues constructed using the conventional balnicity and/or absorption indices (BI and AI), and then describe the supervised LVQ network we have trained to recognise BALQSOs. The resulting BALQSO catalogue should be substantially more robust and complete than BI- or AI-based ones.
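
    For readers unfamiliar with the classifier type named above, the sketch below implements the basic LVQ1 update rule in NumPy. It is a generic illustration under stated assumptions, not the authors' network; the feature extraction from the SDSS spectra and the configuration actually used are not reproduced.

        # Bare-bones LVQ1 training loop: move the nearest prototype toward (same class)
        # or away from (different class) each training sample.
        import numpy as np

        def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
            """X: (n, d) feature vectors; y: class labels; prototypes: (k, d) initial codebook."""
            P = prototypes.copy()
            for _ in range(epochs):
                for x, label in zip(X, y):
                    k = np.argmin(np.linalg.norm(P - x, axis=1))   # winning prototype
                    sign = 1.0 if proto_labels[k] == label else -1.0
                    P[k] += sign * lr * (x - P[k])
            return P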

  17. Extending the ISC-GEM Global Earthquake Instrumental Catalogue

    NASA Astrophysics Data System (ADS)

    Di Giacomo, Domenico; Engdhal, Bob; Storchak, Dmitry; Villaseñor, Antonio; Harris, James

    2015-04-01

    After a 27-month project funded by the GEM Foundation (www.globalquakemodel.org), in January 2013 we released the ISC-GEM Global Instrumental Earthquake Catalogue (1900-2009) (www.isc.ac.uk/iscgem/index.php) as a special product for use in seismic hazard studies. The new catalogue was necessary because improved seismic hazard studies require earthquake catalogues that are homogeneous (to the largest extent possible) over time in their fundamental parameters, such as location and magnitude. Due to time and resource limitations, the ISC-GEM catalogue (1900-2009) included earthquakes selected according to the following time-variable cut-off magnitudes: Ms=7.5 for earthquakes occurring before 1918; Ms=6.25 between 1918 and 1963; and Ms=5.5 from 1964 onwards. Because of the importance of having a reliable seismic input for seismic hazard studies, funding from GEM and two commercial companies in the US and UK allowed us to start working on the extension of the ISC-GEM catalogue, both for earthquakes that occurred after 2009 and for earthquakes listed in the International Seismological Summary (ISS) which fell below the cut-off magnitude of 6.25. This extension is part of a four-year programme that aims to include in the ISC-GEM catalogue large global earthquakes that occurred before the beginning of the ISC Bulletin in 1964. In this contribution we present the updated ISC-GEM catalogue, which will include over 1000 more earthquakes that occurred in 2010-2011 and several hundred more between 1950 and 1959. The catalogue extension between 1935 and 1949 is currently underway. The extension of the ISC-GEM catalogue will also be helpful for regional cross-border seismic hazard studies, as the ISC-GEM catalogue should be used as a basis for cross-checking the consistency in location and magnitude of those earthquakes listed both in the ISC-GEM global catalogue and in regional catalogues.

  18. Astrometric Star Catalogues as Combination of Hipparcos/Tycho Catalogues with Ground-Based Observations

    NASA Astrophysics Data System (ADS)

    Vondrak, J.

    The successful ESA mission Hipparcos provided very precise parallaxes, positions and proper motions for many stars at optical wavelengths. It is therefore a primary representation of the International Celestial Reference System at these wavelengths. However, the shortness of the mission (less than four years) causes some problems with the proper motions of stars that are double or multiple. A combination of the positions measured by the Hipparcos satellite with ground-based observations, which have a much longer history, therefore provides a better reference frame that is more stable in time. Several examples of such combinations (ACT, TYCHO-2, FK6, GC+HIP, TYC2+HIP, ARIHIP) are presented and briefly described. Emphasis is placed on the most recent Earth Orientation Catalogue (EOC), which uses about 4.4 million optical observations of latitude/universal time variations (made during the twentieth century at 33 observatories in Earth orientation programmes) in combination with some of the above-mentioned combined catalogues. The second version of the new catalogue, EOC-2, contains 4418 objects, and the precision of their proper motions is far better than that of the Hipparcos Catalogue.

  19. An Approach to Metadata Generation for Learning Objects

    NASA Astrophysics Data System (ADS)

    Menendez D., Victor; Zapata G., Alfredo; Vidal C., Christian; Segura N., Alejandra; Prieto M., Manuel

    Metadata describe instructional resources and define their nature and use. Metadata are required to guarantee the reusability and interchange of instructional resources within e-Learning systems. However, filling in the many metadata attributes is a hard and complex task for almost all Learning Object (LO) developers, and as a consequence many mistakes are made. This can impoverish data quality in the indexing, searching and retrieval processes. We propose a methodology to build Learning Objects from digital resources. The first phase includes automatic preprocessing of resources using techniques from information retrieval. Initial metadata obtained in this first phase are then used to search for similar LOs in order to propose missing metadata. The second phase consists of assisted activities that merge computer advice with human decisions. Suggestions are based on the metadata of similar Learning Objects, using fuzzy logic.

  20. Discussion for the systematic differences of GC and SAOC catalogues.

    NASA Astrophysics Data System (ADS)

    Lu, Peizhen; Xu, Tongqi

    Using the method of Bien et al. for describing systematic differences, the systematic differences of the GC and SAOC catalogues relative to the FK5 catalogue were calculated. The results were compared with those obtained for the GC using the FK4 catalogue as the reference catalogue. The comparison shows that when there are more common stars, the results of the two methods largely coincide. The systematic differences of the SAOC were also compared with those of AGK3 and Perth70. The results are consistent except in the region near the celestial pole.

  1. A new catalogue of eclipsing binary stars with eccentric orbits

    NASA Astrophysics Data System (ADS)

    Bulut, I.; Demircan, O.

    2007-06-01

    A new catalogue of eclipsing binary stars with eccentric orbits is presented. The catalogue lists the physical parameters (including apsidal motion parameters) of 124 eclipsing binaries with eccentric orbits. In addition, the catalogue contains a list of 150 candidate systems, about which not much is known at present. The full version of the catalogue is available online (see the Supplementary Material section at the end of this paper) and in electronic form at the CDS via http://cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/MNRAS/(vol)/(page). E-mail: ibulut@comu.edu.tr

  2. Catalogues of variable stars from Parenago to the present day

    NASA Astrophysics Data System (ADS)

    Samus, N. N.

    2006-04-01

    After World War II, the International Astronomical Union made Soviet astronomers responsible for variable-star catalogues. This work has been continued ever since the first edition of the General Catalogue of Variable Stars compiled by the team headed by P.P. Parenago and B.V. Kukarkin and published in 1948. Currently, the catalogue work is a joint project of the Institute of Astronomy (Russian Academy of Sciences) and the Sternberg Astronomical Institute (Moscow University). This paper is a brief review of recent trends in the field of variable-star catalogues. Problems as well as new prospects related to modern large-scale automatic photometric sky surveys are discussed.

  3. The role of metadata in managing large environmental science datasets. Proceedings

    SciTech Connect

    Melton, R.B.; DeVaney, D.M.; French, J. C.

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  4. Photometric stellar catalogue for TV meteor astronomy

    NASA Astrophysics Data System (ADS)

    Leonov, V. A.; Bagrov, A. V.

    2016-01-01

    Photometry for ordinary astrophysics was carefully developed for its own purposes. As stellar radiation is very similar to blackbody radiation, astronomers measure stellar flux in wide or narrow calibrated spectral bands. This is sufficient for precise stellar photometry and for measuring light flux in these bands in energy units. Meteors are moving objects and do not allow the collection of more photons than they emit, so meteor observers use the whole spectral band covered by the sensitivity of their light sensors. This is why the stellar magnitudes of background stars measured by these sensors do not match catalogued star brightness in standard photometric spectral bands. Here we present a special photometric catalogue of 93 bright non-variable stars of the northern hemisphere that meteor observers can use as a standard background; their brightness is given in energy units as well as in non-system stellar magnitudes over the spectral sensitivity range of the WATEC 902 sensor.

  5. Pan-European catalogue of flood events

    NASA Astrophysics Data System (ADS)

    Parajka, Juraj; Mangini, Walter; Viglione, Alberto; Hundecha, Yeshewatesfa; Ceola, Serena

    2016-04-01

    Numerous extreme flood events have been observed in Europe in recent years. One way to improve our understanding of flood generation mechanisms is to analyse the spatial and temporal variability of a large number of flood events. The aim of this study is to present a pan-European catalogue of flood events developed within the SWITCH-ON EU Project. The flood events are identified from daily discharge observations at 1315 stations listed in the Global Runoff Data Centre database. The average length of the discharge time series for the selected stations is 54 years. For each event, the basin boundary and additional hydrological and weather characteristics are extracted. Hydrological characteristics are extracted from the pan-European HYPE model simulations. Precipitation, together with the corresponding proportions of rainfall and snowfall, snowmelt, and evapotranspiration, is computed as a total amount between the event start date and the event peak date. Soil moisture, soil moisture deficit, and basin-accumulated snow water equivalent are computed for the event start date. Weather characteristics are derived from the weather circulation pattern catalogue developed within the COST 733 Project. The results are generated in an open data-access and tools framework which allows reproduction and extension of the results to other regions. More information about the analysis and the project is available at: http://www.water-switch-on.eu/lab.html.

  6. HELCATS - Heliospheric Cataloguing, Analysis and Techniques Service

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Harrison, R. A.; Davies, J. A.; Byrne, J.; Perry, C. H.; Moestl, C.; Rouillard, A. P.; Bothmer, V.; Rodriguez, L.; Eastwood, J. P.; Kilpua, E.; Odstrcil, D.; Gallagher, P.

    2015-12-01

    Understanding the evolution of the solar wind is fundamental to advancing our knowledge of energy and mass transport in the Solar System, making it crucial to space weather and its prediction. The advent of truly wide-angle heliospheric imaging has revolutionised the study of both transient (CMEs) and background (IRs) solar wind plasma structures, by enabling their direct and continuous observation out to 1 AU and beyond. The EU-funded FP7 HELCATS project combines European expertise in heliospheric imaging, built up in particular through lead involvement in NASA's STEREO mission, with expertise in solar and coronal imaging as well as in-situ and radio measurements of solar wind phenomena, in a programme of work that will enable a much wider exploitation and understanding of heliospheric imaging observations. The HELCATS project endeavours to catalogue transient and background solar wind structures imaged by STEREO/HI throughout the duration of the mission. This catalogue will include estimates of their kinematic properties using a variety of established and more speculative approaches, which are to be evaluated through comparisons with solar source and in-situ measurements. The potential for driving numerical models from these kinematic properties is to be assessed, as is their complementarity to radio observations, specifically Type II bursts and interplanetary scintillation. This presentation provides an overview of the HELCATS project and its progress in the first 18 months of operations.

  7. The CMIP5 Model Documentation Questionnaire: Development of a Metadata Retrieval System for the METAFOR Common Information Model

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry

    2010-05-01

    The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development, which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a Python parser processes the XML files generated by the mind maps, (3) Django (Python) is used to generate the dynamic structure and content of the web-based questionnaire from the processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (Python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who would not ordinarily interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire
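
    As an illustration of step (2) above, the following minimal Python sketch parses a mind-map XML export into a controlled vocabulary. The element and attribute names ("node", "TEXT") follow the FreeMind convention and are assumptions; the METAFOR parser's actual input schema may differ.

        # Illustrative sketch: build {topic: [terms]} from a mind-map XML export.
        import xml.etree.ElementTree as ET
        from collections import defaultdict

        def vocabulary_from_mindmap(path):
            """Map each first-level topic to the list of terms beneath it."""
            vocab = defaultdict(list)
            root = ET.parse(path).getroot()
            for topic in root.findall("./node/node"):        # first-level branches
                name = topic.get("TEXT", "").strip()
                for term in topic.iter("node"):
                    if term is not topic and term.get("TEXT"):
                        vocab[name].append(term.get("TEXT").strip())
            return dict(vocab)

        # vocab = vocabulary_from_mindmap("model_components.mm")   # hypothetical file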

  8. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    NASA Astrophysics Data System (ADS)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential for identifying how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information, such as the cruise ID, name of vessel and research theme, and diving information, such as the dive number, name of submersible and position of the diving point. They are submitted by the chief scientists of research cruises in Microsoft Excel® spreadsheet format and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via the "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is the duplication of effort and the asynchrony of metadata across multiple distribution websites, caused by manual metadata entry into individual websites by administrators. The other is the differing data types and representations of metadata on each website. To solve these problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be propagated from the data management database to several distribution websites. CMO comprises three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility in accommodating any change of metadata. Daily differential uptake of metadata from the data management database into the XML database is processed automatically via the EAI software. Some metadata are entered into the XML database using the web

  9. Content Metadata Standards for Marine Science: A Case Study

    USGS Publications Warehouse

    Riall, Rebecca L.; Marincioni, Fausto; Lightsom, Frances L.

    2004-01-01

    The U.S. Geological Survey developed a content metadata standard to meet the demands of organizing electronic resources in the marine sciences for a broad, heterogeneous audience. These metadata standards are used by the Marine Realms Information Bank project, a Web-based public distributed library of marine science from academic institutions and government agencies. The development and deployment of this metadata standard serve as a model, complete with lessons about mistakes, for the creation of similarly specialized metadata standards for digital libraries.

  10. Syntactic and Semantic Validation without a Metadata Management System

    NASA Technical Reports Server (NTRS)

    Pollack, Janine; Gokey, Christopher D.; Kendig, David; Olsen, Lola; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The ability to maintain quality information is essential to securing confidence in any system for which the information serves as a data source. NASA's Global Change Master Directory (GCMD), an online Earth science data locator, holds over 9000 data set descriptions and is in a constant state of flux as metadata are created and updated on a daily basis. In such a system, maintaining the consistency and integrity of these metadata is crucial. The GCMD has developed a metadata management system utilizing XML, controlled vocabulary, and Java technologies to ensure that the metadata not only adhere to valid syntax, but also exhibit proper semantics.
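
    A minimal Python analogue of the two-level check described above combines schema validation (syntax) with a controlled-vocabulary check (semantics). The schema path, element name, and vocabulary entries below are hypothetical, and the GCMD's own system is Java-based, so this is a sketch of the idea rather than its implementation.

        # Syntactic check: validate against an XSD; semantic check: compare keywords
        # to a controlled vocabulary. Paths and element names are illustrative only.
        from lxml import etree

        CONTROLLED_PARAMETERS = {"EARTH SCIENCE > ATMOSPHERE > PRECIPITATION",
                                 "EARTH SCIENCE > OCEANS > SEA SURFACE TEMPERATURE"}

        def validate(record_path, xsd_path):
            schema = etree.XMLSchema(etree.parse(xsd_path))
            doc = etree.parse(record_path)
            errors = []
            if not schema.validate(doc):                       # syntactic check
                errors.extend(str(e) for e in schema.error_log)
            for kw in doc.findall(".//Parameters"):            # semantic check
                if (kw.text or "").strip() not in CONTROLLED_PARAMETERS:
                    errors.append(f"uncontrolled keyword: {kw.text}")
            return errors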

  11. Cytoplasmic Viral Replication Complexes

    PubMed Central

    den Boon, Johan A.; Diaz, Arturo; Ahlquist, Paul

    2010-01-01

    Many viruses that replicate in the cytoplasm compartmentalize their genome replication and transcription in organelle-like structures that enhance replication efficiency and protection from host defenses. In particular, recent studies with diverse positive-strand RNA viruses have further elucidated the ultrastructure of membrane-bounded RNA replication complexes and their close coordination with virion assembly and budding. The structure, function and assembly of some positive-strand RNA virus replication complexes have parallels and potential evolutionary links with the replicative cores of double-strand RNA virus and retrovirus virions, and more general similarities with the replication factories of cytoplasmic DNA viruses. PMID:20638644

  12. Metadata: A user's view

    SciTech Connect

    Bretherton, F.P.; Singley, P.T.

    1994-12-31

    An analysis is presented of the uses of metadata from four aspects of database operations: (1) search, query, and retrieval; (2) ingest, quality control, and processing; (3) application-to-application transfer; and (4) storage and archive. Typical degrees of database functionality, ranging from simple file retrieval to interdisciplinary global query with metadatabase-user dialog and involving many distributed autonomous databases, are ranked in approximate order of increasing sophistication of the required knowledge representation. An architecture is outlined for implementing such functionality in many different disciplinary domains, utilizing a variety of off-the-shelf database management subsystems and processor software, each specialized to a different abstract data model.

  13. Inheritance rules for Hierarchical Metadata Based on ISO 19115

    NASA Astrophysics Data System (ADS)

    Zabala, A.; Masó, J.; Pons, X.

    2012-04-01

    ISO 19115 has mainly been used to describe metadata for datasets and services. Furthermore, the ISO 19115 standard (as well as the new draft ISO 19115-1) includes a conceptual model that allows metadata to be described at different levels of granularity, structured in hierarchical levels: in aggregated resources, particularly series and datasets, and also in more disaggregated resources such as types of entities (feature type), types of attributes (attribute type), entities (feature instances) and attributes (attribute instances). In theory it is possible to apply a complete metadata structure to all hierarchical levels, from the whole series down to an individual feature attribute, but storing all metadata at all levels is completely impractical. An inheritance mechanism is needed to store each metadata and quality element at the optimum hierarchical level and to allow easy and efficient documentation of metadata, both in an Earth observation scenario such as multiband imagery from a multi-satellite mission, and in a complex vector topographic map that includes several feature types separated in layers (e.g. administrative limits, contour lines, edification polygons, road lines, etc.). Moreover, owing to the traditional splitting of maps into tiles for map handling at detailed scales, or owing to satellite characteristics, each of the previous thematic layers (e.g. 1:5000 roads for a country) or bands (the Landsat-5 TM coverage of the Earth) is tiled into several parts (sheets or scenes, respectively). According to the hierarchy in ISO 19115, the definition of general metadata can be supplemented by spatially specific metadata that, when required, either inherits or overrides the general case (G.1.3). Annex H of the standard states that only metadata exceptions are defined at lower levels, so it is not necessary to generate the full registry of metadata for each level, but rather to link particular values to the general values that they inherit. Conceptually the metadata
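
    The inheritance idea can be sketched with a simple lookup chain: lower levels store only exceptions and fall back to the parent level for everything else. The keys and values below are illustrative placeholders, not actual ISO 19115 element names.

        # Sketch of metadata inheritance across series -> dataset -> tile using ChainMap.
        from collections import ChainMap

        series  = {"spatial_resolution": "30 m", "licence": "open", "lineage": "Landsat-5 TM"}
        dataset = {"acquisition_year": 2011}                            # inherits the rest
        tile    = {"cloud_cover": "12%", "spatial_resolution": "25 m"}  # local override

        effective_tile_metadata = ChainMap(tile, dataset, series)
        print(effective_tile_metadata["spatial_resolution"])   # "25 m"  (overridden)
        print(effective_tile_metadata["licence"])              # "open"  (inherited)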

  14. The Metadata Coverage Index (MCI): A standardized metric for quantifying database metadata richness.

    PubMed

    Liolios, Konstantinos; Schriml, Lynn; Hirschman, Lynette; Pagani, Ioanna; Nosrat, Bahador; Sterk, Peter; White, Owen; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; Kyrpides, Nikos C; Field, Dawn

    2012-07-30

    Variability in the extent of the descriptions of data ('metadata') held in public repositories forces users to assess the quality of records individually, which rapidly becomes impractical. The scoring of records on the richness of their description provides a simple, objective proxy measure for quality that enables filtering to support downstream analysis. Pivotally, such descriptions should spur on improvements. Here, we introduce such a measure - the 'Metadata Coverage Index' (MCI): the percentage of available fields actually filled in a record or description. MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example, to filter, rank or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics of funded projects; or to quantify the value added by curation. Here we demonstrate the utility of MCI scores using metadata from the Genomes Online Database (GOLD), including records compliant with the 'Minimum Information about a Genome Sequence' (MIGS) standard developed by the Genomic Standards Consortium. We discuss challenges and address the further application of MCI scores: to show improvements in annotation quality over time, to inform the work of standards bodies and repository providers on the usability and popularity of their products, and to assess and credit the work of curators. Such an index provides a step towards putting metadata capture practices and, in the future, standards compliance into a quantitative and objective framework.
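
    The index itself is straightforward to compute, as the short sketch below illustrates; the record structure and field names are invented for the example and do not come from GOLD or MIGS.

        # MCI = percentage of available fields that are actually filled in a record.
        def metadata_coverage_index(record, available_fields):
            filled = sum(1 for f in available_fields
                         if record.get(f) not in (None, "", [], {}))
            return 100.0 * filled / len(available_fields)

        record = {"organism": "E. coli", "habitat": "", "sequencing_method": "Illumina"}
        print(metadata_coverage_index(record, ["organism", "habitat",
                                               "sequencing_method", "isolation_site"]))
        # -> 50.0 (2 of 4 fields filled)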

  15. Modern Special Collections Cataloguing: A University of London Case Study

    ERIC Educational Resources Information Center

    Attar, K. E.

    2013-01-01

    Recent years have seen a growing emphasis on modern special collections (in themselves no new phenomenon), with a dichotomy between guidance for detailed cataloguing in "Descriptive Cataloging of Rare Materials (Books)" (DCRM(B), 2007) and the value of clearing cataloguing backlogs expeditiously. This article describes the De la Mare Family…

  16. VizieR Online Data Catalog: MHO Catalogue (Davis+, 2010)

    NASA Astrophysics Data System (ADS)

    Davis, C. J.; Gell, R.; Khanzadyan, T.; Smith, M. D.; Jenness, T.

    2010-02-01

    The catalogue contains almost 1000 objects and covers regions on the sky loosely based on the constellations and associated Giant Molecular Clouds (Perseus, Orion A, Orion B, Taurus, etc.); full details are given in the paper. Note also that this catalogue is being maintained (and updated) at a dedicated, searchable website: http://www.jach.hawaii.edu/UKIRT/MHCat/ (2 data files).

  17. Astronomy Visualization Metadata (AVM) in Action!

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Gauthier, A.; Christensen, L. L.; Wyatt, R.

    2009-12-01

    The Astronomy Visualization Metadata (AVM) standard offers a flexible way of embedding extensive information about an astronomical image, illustration, or photograph within the image file itself. Such information includes a spread of basic info including title, caption, credit, and subject (based on a taxonomy optimized for outreach needs). Astronomical images may also be fully tagged with color assignments (associating wavelengths/observatories to colors in the image) and coordinate projection information. Here we present a status update on current ongoing projects utilizing AVM. Ongoing tagging efforts include a variety of missions and observers (including Spitzer, Chandra, Hubble, and amateurs), and the metadata is serving as a database schema for content-managed website development (Spitzer). Software packages are utilizing AVM coordinate headers to allow images to be tagged (FITS Liberator) and correctly registered against the sky backdrop on import (e.g. WorldWide Telescope, Google Sky). Museums and planetariums are exploring the use of AVM contextual information to enrich the presentation of images (partners include the American Museum of Natural History and the California Academy of Sciences). Astronomical institutions are adopting the AVM standard (e.g. IVOA) and planning services to embed and catalog AVM-tagged images (IRSA/VAMP, Aladin). More information is available at www.virtualastronomy.org

  18. Metadata Access Tool for Climate and Health

    NASA Astrophysics Data System (ADS)

    Trtanji, J.

    2012-12-01

    The need for health information resources to support climate change adaptation and mitigation decisions is growing, both in the United States and around the world, as the manifestations of climate change become more evident and widespread. In many instances, these information resources are not specific to a changing climate, but have either been developed or are highly relevant for addressing health issues related to existing climate variability and weather extremes. To help address the need for more integrated data, the Interagency Cross-Cutting Group on Climate Change and Human Health, a working group of the U.S. Global Change Research Program, has developed the Metadata Access Tool for Climate and Health (MATCH). MATCH is a gateway to relevant information that can be used to solve problems at the nexus of climate science and public health by facilitating research, enabling scientific collaborations in a One Health approach, and promoting data stewardship that will enhance the quality and application of climate and health research. MATCH is a searchable clearinghouse of publicly available Federal metadata including monitoring and surveillance data sets, early warning systems, and tools for characterizing the health impacts of global climate change. Examples of relevant databases include the Centers for Disease Control and Prevention's Environmental Public Health Tracking System and NOAA's National Climate Data Center's national and state temperature and precipitation data. This presentation will introduce the audience to this new web-based geoportal and demonstrate its features and potential applications.

  19. PoroTomo Subtask 6.3 Nodal Seismometers Metadata

    DOE Data Explorer

    Lesley Parker

    2016-03-28

    Metadata for the nodal seismometer array deployed at the POROTOMO's Natural Laboratory in Brady Hot Spring, Nevada during the March 2016 testing. Metadata includes location and timing for each instrument as well as file lists of data to be uploaded in a separate submission.

  20. Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches

    ERIC Educational Resources Information Center

    Forward, Erin; Leahey, Amber; Trimble, Leanne

    2015-01-01

    Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…

  1. To Teach or Not to Teach: The Ethics of Metadata

    ERIC Educational Resources Information Center

    Barnes, Cynthia; Cavaliere, Frank

    2009-01-01

    Metadata is information about computer-generated documents that is often inadvertently transmitted to others. The problems associated with metadata have become more acute over time as word processing and other popular programs have become more receptive to the concept of collaboration. As more people become involved in the preparation of…

  2. An Enterprise Ontology Building the Bases for Automatic Metadata Generation

    NASA Astrophysics Data System (ADS)

    Thönssen, Barbara

    'Information Overload' or 'Document Deluge' is a problem that enterprises and Public Administrations alike are still dealing with. Although commercial products for Enterprise Content or Records Management have been available for more than two decades, they have not caught on, especially in Small and Medium Enterprises and Public Administrations. Because of the wide range of document types and formats, full-text indexing is not sufficient, yet assigning metadata manually is not feasible. Thus, automatic, format-independent generation of metadata for (public) enterprise documents is needed. Using context to infer metadata automatically has been researched, for example, for web documents and learning objects. If (public) enterprise objects were modelled in a machine-understandable way, they could form the context for automatic metadata generation. The approach introduced in this paper is to model this context (the (public) enterprise objects) in an ontology and to use that ontology to infer content-related metadata.

  3. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal.

    PubMed

    Baker, Ed

    2013-01-01

    Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions, such as Adobe Bridge, and online tools, such as Flickr, also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping. PMID:24723768
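
    As a generic illustration of extracting embedded EXIF metadata from an image file, the sketch below uses the Pillow library in Python. The module described in the abstract is a Drupal (PHP) component, so this is not its implementation, and the file name used here is hypothetical.

        # Read embedded EXIF tags from an image and return them as {tag_name: value}.
        from PIL import Image, ExifTags

        def embedded_exif(path):
            """Return EXIF tags as a dict (empty if none are present)."""
            with Image.open(path) as img:
                exif = img.getexif()
                return {ExifTags.TAGS.get(tag_id, str(tag_id)): value
                        for tag_id, value in exif.items()}

        # tags = embedded_exif("specimen_0001.jpg")   # hypothetical file
        # print(tags.get("Artist"), tags.get("Copyright"))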

  4. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    PubMed Central

    2013-01-01

    Abstract Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions, such as Adobe Bridge, and online tools, such as Flickr, also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping. PMID:24723768

  5. Surveying genome replication

    PubMed Central

    Kearsey, Stephen

    2002-01-01

    Two recent studies have added microarrays to the toolkit used to analyze the origins of replication in yeast chromosomes, providing a fuller picture of how genomic DNA replication is organized. PMID:12093380

  6. Replicating repetitive DNA.

    PubMed

    Tognetti, Silvia; Speck, Christian

    2016-05-27

    The function and regulation of repetitive DNA, the 'dark matter' of the genome, is still only rudimentarily understood. Now a study investigating DNA replication of repetitive centromeric chromosome segments has started to expose a fascinating replication program that involves suppression of ATR signalling, in particular during replication stress. PMID:27230530

  7. Catalogue of Meteor Showers and Storms in Korean History

    NASA Astrophysics Data System (ADS)

    Ahn, Sang-Hyeon

    2004-03-01

    We present a more complete and accurate catalogue of the astronomical records of meteor showers and meteor storms that appear in the primary official Korean history books, such as Samguk-sagi, Koryo-sa, Seungjeongwon-ilgi, and Choson-Wangjo-Sillok. Until now the catalogue made by Imoto and Hasegawa in 1958 has been widely used in the international astronomical community. That catalogue is based on a report by Sekiguchi from 1917, which mainly drew on secondary history books, and we observed that it contains a number of errors in either the dates or the sources of the records. We have therefore thoroughly checked the primary official history books, instead of the secondary ones, in order to produce a corrected and extended catalogue. The catalogue contains 25 records of meteor storms, four records of intense meteor showers, and five records of ordinary showers in Korean history. We also find that some of those records seem to correspond to presently active meteor showers such as the Leonids, the Perseids, and the η-Aquarids-Orionids pair. However, a large number of those records do not correspond to such present showers. The catalogue can be useful for various astrophysical studies in the future.

  8. Meta-data based mediator generation

    SciTech Connect

    Critchlaw, T

    1998-06-28

    Mediators are a critical component of any data warehouse; they transform data from source formats to the warehouse representation while resolving semantic and syntactic conflicts. The close relationship between mediators and databases requires a mediator to be updated whenever an associated schema is modified. Failure to quickly perform these updates significantly reduces the reliability of the warehouse because queries do not have access to the most current data. This may result in incorrect or misleading responses, and reduce user confidence in the warehouse. Unfortunately, this maintenance may be a significant undertaking if a warehouse integrates several dynamic data sources. This paper describes a meta-data framework, and associated software, designed to automate a significant portion of the mediator generation task and thereby reduce the effort involved in adapting to schema changes. By allowing the DBA to concentrate on identifying the modifications at a high level, instead of reprogramming the mediator, turnaround time is reduced and warehouse reliability is improved.
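
    The underlying idea can be sketched as generating the mediator from a declarative field mapping held as metadata, so that a schema change means editing the mapping rather than reprogramming the mediator. The field names and conversions below are invented for illustration and do not reflect the paper's actual framework.

        # Generate a mediator (source-row -> warehouse-row transform) from a field mapping.
        FIELD_MAP = {
            "patient_id": ("PAT_NO", str),
            "weight_kg":  ("WT_LBS", lambda lbs: round(float(lbs) * 0.453592, 1)),
        }

        def make_mediator(field_map):
            def mediate(source_row):
                return {target: conv(source_row[src])
                        for target, (src, conv) in field_map.items()}
            return mediate

        mediator = make_mediator(FIELD_MAP)
        print(mediator({"PAT_NO": 1007, "WT_LBS": "176"}))
        # -> {'patient_id': '1007', 'weight_kg': 79.8}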

  9. Semantic Metadata for Heterogeneous Spatial Planning Documents

    NASA Astrophysics Data System (ADS)

    Iwaniak, A.; Kaczmarek, I.; Łukowicz, J.; Strzelecki, M.; Coetzee, S.; Paluszyński, W.

    2016-09-01

    Spatial planning documents contain information about the principles and rights of land use in different zones of a local authority. They are the basis for administrative decision making in support of sustainable development. In Poland these documents are published on the Web according to a prescribed non-extendable XML schema, designed for optimum presentation to humans in HTML web pages. There is no document standard, and limited functionality exists for adding references to external resources. The text in these documents is discoverable and searchable by general-purpose web search engines, but the semantics of the content cannot be discovered or queried. The spatial information in these documents is geographically referenced but not machine-readable. Major manual efforts are required to integrate such heterogeneous spatial planning documents from various local authorities for analysis, scenario planning and decision support. This article presents results of an implementation using machine-readable semantic metadata to identify relationships among regulations in the text, spatial objects in the drawings and links to external resources. A spatial planning ontology was used to annotate different sections of spatial planning documents with semantic metadata in the Resource Description Framework in Attributes (RDFa). The semantic interpretation of the content, links between document elements and links to external resources were embedded in XHTML pages. An example and use case from the spatial planning domain in Poland is presented to evaluate its efficiency and applicability. The solution enables the automated integration of spatial planning documents from multiple local authorities to assist decision makers with understanding and interpreting spatial planning information. The approach is equally applicable to legal documents from other countries and domains, such as cultural heritage and environmental management.

  10. VizieR Online Data Catalog: XMMOMCDFS catalogue (Antonucci+, 2015)

    NASA Astrophysics Data System (ADS)

    Antonucci, M.; Talavera, A.; Vagnetti, F.; Trevese, D.; Comastri, A.; Paolillo, M.; Ranalli, P.; Vignali, C.

    2014-11-01

    We present the catalogue of the Optical Monitor observations performed by XMM-Newton in the Chandra Deep Field South during the XMM-CDFS Deep Survey (Comastri et al. 2011A&A...526L...9C). The main catalogue provides photometric measurements for 1129 CDFS sources from individual observations and/or from the stacked images, and includes optical/UV/X-ray photometric and spectroscopic information from other surveys. The supplementary catalogue contains 44 alternative XMM-OM identifications for 38 EIS/COMBO-17 sources. (2 data files).

  11. Embedding MPEG-7 metadata within a media file format

    NASA Astrophysics Data System (ADS)

    Chang, Wo

    2005-08-01

    Embedding metadata within a media file format is becoming ever more popular for digital media. Traditional digital media files such as MP3 songs and JPEG photos did not carry any metadata structures to describe the media content until these file formats were extended with ID3 and EXIF. Recently, ID3 and EXIF advanced to version 2.4 and version 2.2 respectively, with many new description tags added. Currently, most MP3 players and digital cameras support the latest revisions of these metadata structures as de-facto standard formats. Having metadata to describe the media content is very valuable to consumers for viewing and searching media content. However, ID3 and EXIF were designed with very different approaches in terms of syntax, semantics, and data structures. These two metadata formats are therefore not compatible and cannot be used together in common applications such as a slideshow that plays MP3 music in the background while shuffling through images in the foreground. This paper presents the idea of embedding the ISO/IEC MPEG-7 international standard for metadata descriptions inside the rich ISO/IEC MPEG-4 file format container, so that a general metadata framework can be used for image, audio, and video applications.

  12. Design and Implementation of a Metadata-rich File System

    SciTech Connect

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
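
    A toy sketch of the graph idea follows: files as nodes carrying user-defined attributes, with typed links between them and a simple relationship query. The attribute and link names are invented, and QFS's actual on-disk structures and Quasar query syntax are not reproduced.

        # Files as nodes with user-defined attributes, plus typed relationships between files.
        files = {
            "raw/scan_042.dat":     {"instrument": "CT", "subject": "042"},
            "derived/scan_042.img": {"format": "NIfTI"},
        }
        links = [("derived/scan_042.img", "derivedFrom", "raw/scan_042.dat")]

        def related(target, relation):
            """Return files linked to `target` by `relation`."""
            return [src for src, rel, dst in links if rel == relation and dst == target]

        print(related("raw/scan_042.dat", "derivedFrom"))   # ['derived/scan_042.img']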

  13. Toward a molecular catalogue of synapses.

    PubMed

    Grant, Seth G N

    2007-10-01

    1906 was a landmark year in the history of the study of the nervous system, most notably for the first 'neuroscience' Nobel prize, given to the anatomists Ramón y Cajal and Camillo Golgi. 1906 is less well known for another event, also of great significance for neuroscience, namely the publication of Charles Sherrington's book 'The Integrative Action of the Nervous System'. It was Cajal and Golgi who debated the anatomical evidence for the synapse, and it was Sherrington who laid its foundation in electrophysiological function. In tribute to these pioneers in synaptic biology, this article will address the issue of synapse diversity from the molecular point of view. In particular, I will reflect upon efforts to obtain a complete molecular characterisation of the synapse and the unexpectedly high degree of molecular complexity found within it. A case will be made for developing approaches that can be used to generate a general catalogue of synapse types based on molecular markers, which should have wide application.

  14. Populating and harvesting metadata in a virtual observatory

    NASA Astrophysics Data System (ADS)

    Walker, Raymond; King, Todd; Joy, Steven; Bargatze, Lee; Chi, Peter; Weygand, James

    Founded in 2007, the Virtual Magnetospheric Observatory (VMO) provides one-stop shopping for data and services useful in magnetospheric research. The VMO's purview includes ground-based observations as well as observations from spacecraft. The data, and the services for using and analyzing them, are found at laboratories distributed around the world. The VMO is itself a federated data system with branches at UCLA and the Goddard Space Flight Center (GSFC). These data can be connected by using a common data model, and the VMO has selected the Space Physics Archive Search and Extract (SPASE) metadata standard for this purpose. SPASE metadata are collected and stored in distributed registries that are maintained, along with the data, at the location of the data provider. Populating the registries and extracting the metadata requested for a given study remain major challenges; in general there is little or no money available to data providers to create the metadata and populate the registries. We have taken a two-pronged approach to minimize the effort required to create the metadata and maintain the registries. The first part of the approach is human: we have appointed a group of domain experts called "X-Men", who are experts in both magnetospheric physics and data management and who work closely with data providers to help them prepare the metadata and populate the registries. The second part of our approach is to develop a series of tools to populate and harvest information from the registries. We have developed SPASE editors for high-level metadata and adopted the NASA Planetary Data System's Rule Set approach, in which the science data are used to generate detailed-level SPASE metadata. Finally, we have developed a unique harvesting system to retrieve metadata from distributed registries in response to user queries.

  15. Metadata Creation, Management and Search System for your Scientific Data

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury Search Systems is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders-of-magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-faceted search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format, including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is being used by more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management, data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics, DOI: 10.1007/s12145-010-0073-0, (2010);

  16. Publishing NASA Metadata as Linked Open Data for Semantic Mashups

    NASA Astrophysics Data System (ADS)

    Manipon, G. M.; Wilson, B. D.; Hua, H.

    2013-12-01

    Data providers are now publishing more metadata in more interoperable forms, e.g. Atom/RSS 'casts', as Linked Open Data (LOD), or as ISO Metadata records. A major effort on the part of NASA's Earth Science Data and Information System (ESDIS) project is the aggregation of metadata that enables greater data interoperability among scientific data sets regardless of source or application. Both the Earth Observing System (EOS) ClearingHOuse (ECHO) and the Global Change Master Directory (GCMD) repositories contain metadata records for NASA (and other) datasets and provided services. These records contain typical fields for each dataset (or software service) such as the source, creation date, cognizant institution, related access URL's, and domain & variable keywords to enable discovery. Under a NASA ACCESS grant, we demonstrated how to publish the ECHO and GCMD dataset and services metadata as LOD in the RDF format. Both sets of metadata are now queryable at SPARQL endpoints and available for integration into 'semantic mashups' in the browser. It is straightforward to transform sets of XML metadata, including ISO 19139, into simple RDF and then later refine and improve the RDF predicates by reusing known namespaces such as Dublin Core, GeoRSS, etc. All scientific metadata should be part of the LOD world. In addition, we developed an 'instant' drill-down and browse interface that provides faceted navigation so that the user can discover and explore the 25,000 datasets and 3000 services. Figure 1 shows the first version of the interface for 'instant drill down' into the ECHO datasets. The available facets and the free-text search box appear in the left panel, and the instantly updated results for the dataset search appear in the right panel. The user can constrain the value of a metadata facet simply by clicking on a word (or phrase) in the 'word cloud' of values for each facet. The display section for each dataset includes the important metadata fields, a full
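
    As an illustration of querying such a SPARQL endpoint over HTTP, the sketch below uses Python's requests library. The endpoint URL and the choice of Dublin Core predicate are assumptions for the example, not the actual ESDIS deployment details.

        # Query a SPARQL endpoint for dataset titles matching a free-text filter.
        import requests

        ENDPOINT = "https://example.nasa.gov/sparql"   # hypothetical endpoint
        QUERY = """
        PREFIX dcterms: <http://purl.org/dc/terms/>
        SELECT ?dataset ?title WHERE {
          ?dataset dcterms:title ?title .
          FILTER(CONTAINS(LCASE(?title), "sea surface temperature"))
        } LIMIT 20
        """

        resp = requests.get(ENDPOINT, params={"query": QUERY},
                            headers={"Accept": "application/sparql-results+json"},
                            timeout=60)
        for row in resp.json()["results"]["bindings"]:
            print(row["dataset"]["value"], "-", row["title"]["value"])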

  17. Publishing NASA Metadata as Linked Open Data for Semantic Mashups

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook

    2014-05-01

    Data providers are now publishing more metadata in more interoperable forms, e.g. Atom or RSS 'casts', as Linked Open Data (LOD), or as ISO Metadata records. A major effort on the part of NASA's Earth Science Data and Information System (ESDIS) project is the aggregation of metadata that enables greater data interoperability among scientific data sets regardless of source or application. Both the Earth Observing System (EOS) ClearingHOuse (ECHO) and the Global Change Master Directory (GCMD) repositories contain metadata records for NASA (and other) datasets and provided services. These records contain typical fields for each dataset (or software service) such as the source, creation date, cognizant institution, related access URL's, and domain and variable keywords to enable discovery. Under a NASA ACCESS grant, we demonstrated how to publish the ECHO and GCMD dataset and services metadata as LOD in the RDF format. Both sets of metadata are now queryable at SPARQL endpoints and available for integration into "semantic mashups" in the browser. It is straightforward to reformat sets of XML metadata, including ISO, into simple RDF and then later refine and improve the RDF predicates by reusing known namespaces such as Dublin Core, GeoRSS, etc. All scientific metadata should be part of the LOD world. In addition, we developed an "instant" drill-down and browse interface that provides faceted navigation so that the user can discover and explore the 25,000 datasets and 3000 services. The available facets and the free-text search box appear in the left panel, and the instantly updated results for the dataset search appear in the right panel. The user can constrain the value of a metadata facet simply by clicking on a word (or phrase) in the "word cloud" of values for each facet. The display section for each dataset includes the important metadata fields, a full description of the dataset, potentially some related URL's, and a "search" button that points to an Open

  18. Improving Metadata Compliance for Earth Science Data Records

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Chang, O.; Foster, D.

    2014-12-01

    One of the recurring challenges of creating earth science data records is to ensure a consistent level of metadata compliance at the granule level, where important details of contents, provenance, producer, and data references are necessary to obtain a sufficient level of understanding. These details are important not just for individual data consumers but also for autonomous software systems. Two of the most popular metadata standards at the granule level are the Climate and Forecast (CF) Metadata Conventions and the Attribute Conventions for Dataset Discovery (ACDD). Many data producers have implemented one or both of these models, including the Group for High Resolution Sea Surface Temperature (GHRSST) for their global SST products and the Ocean Biology Processing Group for NASA ocean color and SST products. While both the CF and ACDD models allow various levels of metadata richness, the actual "required" attributes are quite small in number. Metadata at the granule level become much more useful when recommended or optional attributes are implemented that document spatial and temporal ranges, lineage and provenance, sources, keywords, references, etc. In this presentation we report on a new open source tool to check the compliance of netCDF and HDF5 granules with the CF and ACDD metadata models. The tool, written in Python, was originally implemented to support metadata compliance for netCDF records as part of NOAA's Integrated Ocean Observing System. It outputs standardized scoring for metadata compliance with both CF and ACDD, produces an objective summary weight, and can be applied to remote records via OPeNDAP calls. Originally a command-line tool, we have extended it to provide a user-friendly web interface. Reports on metadata testing are grouped in hierarchies that make it easier to track flaws and inconsistencies in the record. We have also extended it to support explicit metadata structures and semantic syntax for the GHRSST project that can be
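
    The kind of granule-level attribute check described above can be sketched in a few lines of Python with the netCDF4 library. The attribute list below is a small illustrative subset of ACDD, the granule file name is hypothetical, and this is a sketch of the idea rather than the tool itself.

        # Score a granule on a small subset of recommended ACDD global attributes.
        from netCDF4 import Dataset

        ACDD_RECOMMENDED = ["title", "summary", "keywords", "creator_name",
                            "time_coverage_start", "time_coverage_end",
                            "geospatial_lat_min", "geospatial_lat_max"]

        def acdd_score(path):
            with Dataset(path) as nc:
                present = [a for a in ACDD_RECOMMENDED
                           if getattr(nc, a, "") not in ("", None)]
                missing = sorted(set(ACDD_RECOMMENDED) - set(present))
            return len(present) / len(ACDD_RECOMMENDED), missing

        # score, missing = acdd_score("ghrsst_granule.nc")   # hypothetical granule
        # print(f"ACDD coverage: {score:.0%}; missing: {missing}")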

  19. A Catalogue of XMM-Newton BL Lacs

    NASA Astrophysics Data System (ADS)

    Racero, E.; De la Calle, I.; Rouco Escorial, A.

    2015-05-01

    A catalogue of XMM-Newton BL Lacs is presented, based on a cross-correlation with the 1374 BL Lac objects listed in the 13th edition of the Veron-Cetty and Veron (2010) catalogue. X-ray counterparts were searched for in the fields of view of more than 10000 pointed observations available in the XMM-Newton Science Archive (XSA) that were public before June 2012. The cross-correlation yielded around 250 XMM-Newton observations, which correspond to 162 different sources. X-ray data from the three EPIC cameras and Optical Monitor data were uniformly analyzed using the latest XMM-Newton Science Analysis System (SAS) version. The catalogue collects the X-ray spectral properties of the sample, including flux variability, in the 0.2-10 keV energy band. All the catalogue products will be made publicly available to the scientific community.

  20. Philosophy and updating of the asteroid photometric catalogue

    NASA Technical Reports Server (NTRS)

    Magnusson, Per; Barucci, M. Antonietta; Capria, M. T.; Dahlgren, Mats; Fulchignoni, Marcello; Lagerkvist, C. I.

    1992-01-01

    The Asteroid Photometric Catalogue now contains photometric lightcurves for 584 asteroids. We discuss some of the guiding principles behind it. This concerns both observers who offer input to it and users of the product.

  1. The evolution of replicators.

    PubMed Central

    Szathmáry, E

    2000-01-01

    Replicators of interest in chemistry, biology and culture are briefly surveyed from a conceptual point of view. Systems with limited heredity have only a limited evolutionary potential because the number of available types is too low. Chemical cycles, such as the formose reaction, are holistic replicators since replication is not based on the successive addition of modules. Replicator networks consisting of catalytic molecules (such as reflexively autocatalytic sets of proteins, or reproducing lipid vesicles) are hypothetical ensemble replicators, and their functioning rests on attractors of their dynamics. Ensemble replicators suffer from the paradox of specificity: while their abstract feasibility seems to require a high number of molecular types, the harmful effect of side reactions calls for a small system size. No satisfactory solution to this problem is known. Phenotypic replicators do not pass on their genotypes, only some aspects of the phenotype are transmitted. Phenotypic replicators with limited heredity include genetic membranes, prions and simple memetic systems. Memes in human culture are unlimited hereditary, phenotypic replicators, based on language. The typical path of evolution goes from limited to unlimited heredity, and from attractor-based to modular (digital) replicators. PMID:11127914

  2. Enhanced Viral Replication by Cellular Replicative Senescence

    PubMed Central

    Kim, Ji-Ae; Seong, Rak-Kyun

    2016-01-01

    Cellular replicative senescence is a major contributing factor to aging and to the development and progression of aging-associated diseases. In this study, we sought to determine the viral replication efficiency of influenza virus (IFV) and Varicella Zoster Virus (VZV) infection in senescent cells. Primary human bronchial epithelial cells (HBE) or human dermal fibroblasts (HDF) were passaged repeatedly to induce replicative senescence. Induction of replicative senescence was validated by positive senescence-associated β-galactosidase staining. Increased susceptibility to IFV and VZV infection was observed in senescent HBE and HDF cells, respectively, resulting in higher numbers of plaques and greater upregulation of major viral antigen expression than in non-senescent cells. Interestingly, the mRNA fold induction of virus-induced type I interferon (IFN) was attenuated by senescence, whereas the IFN-mediated antiviral effect remained robust and potent in virus-infected senescent cells. Additionally, we show that a longevity-promoting gene, sirtuin 1 (SIRT1), has an antiviral role against influenza virus infection. In conclusion, our data indicate that enhanced viral replication by cellular senescence could be due to senescence-mediated reduction of virus-induced type I IFN expression. PMID:27799874

  3. Forum Guide to Metadata: The Meaning behind Education Data. NFES 2009-805

    ERIC Educational Resources Information Center

    National Forum on Education Statistics, 2009

    2009-01-01

    The purpose of this guide is to empower people to more effectively use data as information. To accomplish this, the publication explains what metadata are; why metadata are critical to the development of sound education data systems; what components comprise a metadata system; what value metadata bring to data management and use; and how to…

  4. Minimization of biases in galaxy peculiar velocity catalogues

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.

    2015-07-01

    Galaxy distances and the radial peculiar velocity catalogues derived from them constitute valuable data sets for studying the dynamics of the Local Universe. However, such catalogues suffer from biases whose effects increase with distance: Malmquist biases and a lognormal error distribution affect the catalogues. Velocity fields of the Local Universe reconstructed with these catalogues present a spurious overall infall onto the Local Volume if they are not corrected for biases. Such an infall is observed in the reconstructed velocity field obtained when applying the Bayesian Wiener-Filter technique to the raw second radial peculiar velocity catalogue of the Cosmicflows project. In this paper, an iterative method is presented that reduces spurious non-Gaussianities in the radial peculiar velocity distribution and retroactively derives better distance estimates, thereby minimizing the effects of biases. The method is tested with mock catalogues. To control the cosmic variance, mocks are built out of different constrained cosmological simulations which resemble the Local Universe. To realistically reproduce the effects of biases, the mocks are constructed as lookalikes of the second data release of the Cosmicflows project with respect to size, distribution of data and distribution of errors. Using a suite of mock catalogues, the outcome of the correction is verified to be affected neither by the added error realization, nor by the data point selection, nor by the constrained simulation; results are similar for the different tested mocks. After correction, the general infall is satisfactorily suppressed. The method yields catalogues which, together with the Wiener-Filter technique, give reconstructions approximating unbiased velocity fields at 100-150 km s^-1 (2-3 h^-1 Mpc in terms of linear displacement), the linear theory threshold.

  5. An algorithm to build mock galaxy catalogues using MICE simulations

    NASA Astrophysics Data System (ADS)

    Carretero, J.; Castander, F. J.; Gaztañaga, E.; Crocce, M.; Fosalba, P.

    2015-02-01

    We present a method to build mock galaxy catalogues starting from a halo catalogue, using halo occupation distribution (HOD) recipes as well as the subhalo abundance matching (SHAM) technique. Combining both prescriptions, we are able to push the absolute magnitude of the resulting catalogue to fainter luminosities than using just the SHAM technique, and we can interpret our results in terms of the HOD modelling. We optimize the method by populating friends-of-friends dark matter haloes, extracted from the Marenostrum Institut de Ciències de l'Espai dark matter simulations, with galaxies and comparing them to observational constraints. Our resulting mock galaxy catalogues reproduce the observed local galaxy luminosity function and the colour-magnitude distribution observed by the Sloan Digital Sky Survey. They also reproduce the observed galaxy clustering properties as a function of luminosity and colour. In order to achieve that, the algorithm also includes scatter in the halo mass-galaxy luminosity relation derived from direct SHAM and a modified Navarro-Frenk-White mass density profile to place satellite galaxies in their host dark matter haloes. Improving on the general usage of the HOD, which fits the clustering for given magnitude-limited samples, our catalogues are constructed to fit observations at all luminosities considered and therefore for any luminosity subsample. Overall, our algorithm is an economical procedure for obtaining galaxy mock catalogues down to the faint magnitudes that are necessary to understand and interpret galaxy surveys.
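
    The sketch below illustrates only the HOD step named above - drawing central and satellite counts for a toy halo catalogue - using a commonly adopted parametrization; the functional form and all parameter values are placeholders for illustration, not the paper's calibrated model.

      # Toy HOD population sketch (assumed parametrization and parameter values).
      import numpy as np
      from scipy.special import erf

      rng = np.random.default_rng(42)

      def populate(halo_masses, log_Mmin=12.0, sigma=0.3, M1=10**13.3, alpha=1.0):
          """Return per-halo central (0/1) and Poisson satellite counts."""
          logM = np.log10(halo_masses)
          n_cen_mean = 0.5 * (1.0 + erf((logM - log_Mmin) / sigma))
          n_cen = rng.random(halo_masses.size) < n_cen_mean      # Bernoulli draw
          n_sat_mean = n_cen_mean * (halo_masses / M1) ** alpha
          n_sat = rng.poisson(n_sat_mean) * n_cen                # satellites only around centrals
          return n_cen.astype(int), n_sat

      halo_masses = 10 ** rng.uniform(11.5, 15.0, size=100000)   # toy halo masses [Msun/h]
      n_cen, n_sat = populate(halo_masses)
      print("galaxies placed:", n_cen.sum() + n_sat.sum())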

  6. HELCATS - Heliospheric Cataloguing, Analysis and Techniques Service

    NASA Astrophysics Data System (ADS)

    Harrison, Richard; Davies, Jackie; Perry, Chris; Moestl, Christian; Rouillard, Alexis; Bothmer, Volker; Rodriguez, Luciano; Eastwood, Jonathan; Kilpua, Emilia; Gallagher, Peter

    2016-04-01

    Understanding the evolution of the solar wind is fundamental to advancing our knowledge of energy and mass transport in the solar system, rendering it crucial to space weather and its prediction. The advent of truly wide-angle heliospheric imaging has revolutionised the study of both transient (CMEs) and background (SIRs/CIRs) solar wind plasma structures, by enabling their direct and continuous observation out to 1 AU and beyond. The EU-funded FP7 HELCATS project combines European expertise in heliospheric imaging, built up in particular through lead involvement in NASA's STEREO mission, with expertise in solar and coronal imaging as well as in-situ and radio measurements of solar wind phenomena, in a programme of work that will enable a much wider exploitation and understanding of heliospheric imaging observations. With HELCATS, we are (1.) cataloguing transient and background solar wind structures imaged in the heliosphere by STEREO/HI, from launch in late October 2006 to date, including estimates of their kinematic properties based on a variety of established techniques and more speculative approaches; (2.) evaluating these kinematic properties, and thereby the validity of these techniques, through comparison with solar source observations and in-situ measurements made at multiple points throughout the heliosphere; (3.) appraising the potential for initialising advanced numerical models based on these kinematic properties; (4.) assessing the complementarity of radio observations (in particular of Type II radio bursts and interplanetary scintillation) in combination with heliospheric imagery. We will, in this presentation, provide an overview of progress from the first 18 months of the HELCATS project.

  7. The Compiled Catalogue of Photoelectric UBVR Stellar Magnitudes in the TYCHO2 System

    NASA Astrophysics Data System (ADS)

    Relke, E.; Protsyuk, Yu. I.; Andruk, V. M.

    In order to calibrate the images of astronomical photographic plates from the UkrVO archive, a compiled catalogue of photoelectric UBVR stellar magnitudes was created. It is based on the Kornilov catalogue of 13586 WBVR stellar magnitudes (Kornilov et al., 1991), the Mermilliod catalogue of 68540 UBV stellar magnitudes (Mermilliod, 1991) and the Andruk catalogue of 1141 UBVR stellar magnitudes (Andruk et al., 1995). The original coordinates have different epochs and equinoxes. We cross-referenced the stars from these three catalogues with the Tycho2, UCAC4 and XPM catalogues and created a new photometric catalogue for the epoch and equinox J2000.0.

  8. The star catalogues of Ptolemaios and Ulugh Beg. Machine-readable versions and comparison with the modern Hipparcos Catalogue

    NASA Astrophysics Data System (ADS)

    Verbunt, F.; van Gent, R. H.

    2012-08-01

    In late antiquity and throughout the middle ages, the positions of stars on the celestial sphere were obtained from the star catalogue of Ptolemaios. A catalogue based on new measurements appeared in 1437, with positions by Ulugh Beg, and magnitudes from the 10th-century astronomer al-Sufi. We provide machine-readable versions of these two star catalogues, based on the editions by Toomer (1998, Ptolemy's Almagest, 2nd edn.) and Knobel (1917, Ulugh Beg's catalogue of stars), and determine their accuracies by comparison with the modern Hipparcos Catalogue. The magnitudes in the catalogues correlate well with modern visual magnitudes; the indication "faint" by Ptolemaios is found to correspond to his magnitudes 5 and 6. Gaussian fits to the error distributions in longitude/latitude give widths σ ≃ 27'/23' in the range |Δλ,Δβ| < 50' for Ptolemaios and σ ≃ 22'/18' in Ulugh Beg. Fits to the range |Δλ,Δβ| < 100' give 10-15% larger widths, showing that the error distributions are broader than Gaussians. The fraction of stars with positions wrong by more than 150' is about 2% for Ptolemaios and 0.1% in Ulugh Beg; the numbers of unidentified stars are 1 in Ptolemaios and 3 in Ulugh Beg. These numbers testify to the excellent quality of both star catalogues (as edited by Toomer and Knobel). Machine-readable catalogues are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/544/A31
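
    The quoted widths come from Gaussian fits to the residual distributions; a minimal sketch of such a fit is given below, using synthetic residuals (in arcminutes) as a stand-in for the real catalogue-minus-Hipparcos differences.

      # Illustrative Gaussian fit to positional residuals restricted to |x| < 50'.
      import numpy as np
      from scipy.optimize import curve_fit

      def gaussian(x, amplitude, sigma):
          return amplitude * np.exp(-0.5 * (x / sigma) ** 2)

      rng = np.random.default_rng(0)
      delta_lambda = rng.normal(0.0, 27.0, size=1000)        # synthetic residuals [arcmin]

      mask = np.abs(delta_lambda) < 50.0
      counts, edges = np.histogram(delta_lambda[mask], bins=25)
      centres = 0.5 * (edges[:-1] + edges[1:])
      popt, _ = curve_fit(gaussian, centres, counts, p0=[counts.max(), 20.0])
      print("fitted width sigma = %.1f arcmin" % abs(popt[1]))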

  9. hFits: From Storing Metadata to Publishing ESO Data

    NASA Astrophysics Data System (ADS)

    Vera, I.; Dobrzycki, A.; Vuong, M.; Da Rocha, C.

    2012-09-01

    The ESO Archive holds ca. 20 million FITS files: raw observations taken at the La Silla Paranal Observatory in Chile, data from the APEX and UKIRT(WFCAM) telescopes, pipeline-processed data generated by the Quality Control and Data Processing Group in ESO Garching and, more recently, reduced data delivered by the PIs through the ESO Phase 3 infrastructure. A metadata repository has been developed at the ESO Archive (Dobrzycki et al. 2007; Vera et al. 2011) to hold all the FITS file headers with up-to-date information using data warehouse technology. Presently, the repository contains more than 10 billion keywords from the headers of all ESO FITS files. We have added to the repository a mechanism for keeping track of header ingestion and modification, allowing incremental applications to be built on top of it. The aim is to provide a framework allowing the creation of fast, good-quality metadata query services. We present hFits, a tool for data publishing that allows for metadata enhancement. The tool reads from the metadata repository and inserts the metadata into conventional relational database systems using a simple configuration framework. It utilises the metadata repository tracking mechanism to incrementally refresh the services and transparently propagate any metadata updates. It supports the use of user-defined functions where, for example, WCS coordinates can be calculated or related metadata can be extracted from other information systems, and for each new header it assigns to the archived file the access attributes following the ESO data access policy, publishing the data to the community.
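
    A minimal sketch of the general flow described here - reading selected keywords from FITS headers and publishing them into a relational table - is shown below. It is not the hFits implementation; the keyword list, file names and table schema are assumptions made for the example.

      # Sketch: extract a few header keywords with astropy and store them in SQLite.
      import sqlite3
      from astropy.io import fits

      KEYWORDS = ["OBJECT", "DATE-OBS", "RA", "DEC", "INSTRUME"]   # illustrative selection

      def publish(fits_paths, db_path="metadata.db"):
          con = sqlite3.connect(db_path)
          con.execute("CREATE TABLE IF NOT EXISTS headers "
                      "(filename TEXT, keyword TEXT, value TEXT)")
          for path in fits_paths:
              header = fits.getheader(path)                        # primary HDU header
              for key in KEYWORDS:
                  if key in header:
                      con.execute("INSERT INTO headers VALUES (?, ?, ?)",
                                  (path, key, str(header[key])))
          con.commit()
          con.close()

      publish(["example1.fits", "example2.fits"])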

  10. Massive Meta-Data: A New Data Mining Resource

    NASA Astrophysics Data System (ADS)

    Hugo, W.

    2012-04-01

    Worldwide standardisation, and interoperability initiatives such as GBIF, Open Access and GEOSS (to name but three of many) have led to the emergence of interlinked and overlapping meta-data repositories containing, potentially, tens of millions of entries collectively. This forms the backbone of an emerging global scientific data infrastructure that is both driven by changes in the way we work, and opens up new possibilities in management, research, and collaboration. Several initiatives are concentrated on building a generalised, shared, easily available, scalable, and indefinitely preserved scientific data infrastructure to aid future scientific work. This paper deals with the parallel aspect of the meta-data that will be used to support the global scientific data infrastructure. There are obvious practical issues (semantic interoperability and speed of discovery being the most important), but we are here more concerned with some of the less obvious conceptual questions and opportunities: 1. Can we use meta-data to assess, pinpoint, and reduce duplication of meta-data? 2. Can we use it to reduce overlaps of mandates in data portals, research collaborations, and research networks? 3. What possibilities exist for mining the relationships that exist implicitly in very large meta-data collections? 4. Is it possible to define an explicit 'scientific data infrastructure' as a complex, multi-relational network database, that can become self-maintaining and self-organising in true Web 2.0 and 'social networking' fashion? The paper provides a blueprint for a new approach to massive meta-data collections, and how this can be processed using established analysis techniques to answer the questions posed. It assesses the practical implications of working with standard meta-data definitions (such as ISO 19115, Dublin Core, and EML) in a meta-data mining context, and makes recommendations in respect of extension to support self-organising, semantically oriented 'networks of

  11. The importance of metrological metadata in the environmental monitoring

    NASA Astrophysics Data System (ADS)

    Santana, Márcio A. A.; Guimarães, Patrícia L. O.; Almêida, Eugênio S.; Eklin, Tero

    2016-07-01

    The propagation of metrological metadata contributes significantly to improving the data analysis of meteorological observation systems. An overview of the scenarios for data and metadata treatment in environmental monitoring is presented in this article. We also discuss the ways in which calibration results can be used in meteorological measurement systems, as well as the convergence of the methods used for the treatment of corrections and the estimation of measurement uncertainty in the metrological and meteorological areas.

  12. Using grid-enabled distributed metadata database to index DICOM-SR.

    PubMed

    Blanquer, Ignacio; Hernandez, Vicente; Salavert, José; Segrelles, Damià

    2009-01-01

    Integrating medical data at the inter-centre level implies many challenges that are being tackled from many disciplines and technologies. Medical informatics has devoted considerable effort to describing and standardizing Electronic Health Records, and standardisation has advanced especially far in Medical Imaging. Grid technologies have been extensively used to deal with multi-domain authorisation issues and to provide single access points for accessing DICOM Medical Images, enabling access to and processing of large repositories of data. However, this approach introduces the challenge of efficiently organising data according to their relevance and interest, in which the medical report is a key factor. The present work shows an approach to efficiently code radiology reports to enable the multi-centre federation of data resources. This approach follows the tree-like structure of DICOM-SR reports in a self-organising metadata catalogue based on AMGA. It enables federating different but compatible distributed repositories, automatically reconfiguring the database structure, and preserving the autonomy of each centre in defining the template. Tools developed so far and some performance results are provided to prove the effectiveness of the approach.

  13. Better Living Through Metadata: Examining Archive Usage

    NASA Astrophysics Data System (ADS)

    Becker, G.; Winkelman, S.; Rots, A.

    2013-10-01

    The primary purpose of an observatory's archive is to provide access to the data through various interfaces. User interactions with the archive are recorded in server logs, which can be used to answer basic questions like: Who has downloaded dataset X? When did she do this? Which tools did she use? The answers to questions like these fill in patterns of data access (e.g., how many times dataset X has been downloaded in the past three years). Analysis of server logs provides metrics of archive usage and provides feedback on interface use which can be used to guide future interface development. The Chandra X-ray Observatory is fortunate in that a database to track data access and downloads has been continuously recording such transactions for years; however, it is overdue for an update. We will detail changes we hope to effect and the differences the changes may make to our usage metadata picture. We plan to gather more information about the geographic location of users without compromising privacy; create improved archive statistics; and track and assess the impact of web “crawlers” and other scripted access methods on the archive. With the improvements to our download tracking we hope to gain a better understanding of the dissemination of Chandra's data; how effectively it is being done; and perhaps discover ideas for new services.
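
    As an illustration of the kind of server-log analysis described here (and not the Chandra archive's actual tracking database), the sketch below counts downloads per dataset from Apache-style access logs; the URL pattern and the log file name are hypothetical.

      # Hypothetical download-counting sketch over web server access logs.
      import re
      from collections import Counter

      DOWNLOAD = re.compile(r'GET /archive/download/(?P<dataset>[\w\-]+)\S* HTTP')

      def count_downloads(log_lines):
          counts = Counter()
          for line in log_lines:
              match = DOWNLOAD.search(line)
              if match:
                  counts[match.group("dataset")] += 1
          return counts

      with open("access.log") as logfile:
          per_dataset = count_downloads(logfile)
      for dataset, n in per_dataset.most_common(10):
          print(dataset, n)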

  14. Evolving Metadata in NASA Earth Science Data Systems

    NASA Astrophysics Data System (ADS)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 3500 data products spanning various science disciplines. EOSDIS is currently comprised of 12 discipline-specific data centers that are collocated with centers of science discipline expertise. Metadata is used in all aspects of NASA's Earth Science data lifecycle, from the initial measurement gathering to the accessing of data products. Missions use metadata in their science data products when describing information such as the instrument/sensor, operational plan, and geographic region. Acting as the curator of the data products, data centers employ metadata for preservation, access and manipulation of data. EOSDIS provides a centralized metadata repository called the Earth Observing System (EOS) ClearingHouse (ECHO) for data discovery and access via a service-oriented architecture (SOA) between data centers and science data users. ECHO receives inventory metadata from data centers, who generate metadata files that comply with the ECHO Metadata Model. NASA's Earth Science Data and Information System (ESDIS) Project established a Tiger Team to study and make recommendations regarding the adoption of the international metadata standard ISO 19115 in EOSDIS. The result was a technical report recommending an evolution of NASA data systems towards a consistent application of ISO 19115 and related standards, including the creation of a NASA-specific convention for core ISO 19115 elements. Part of

  15. Maximum likelihood random galaxy catalogues and luminosity function estimation

    NASA Astrophysics Data System (ADS)

    Cole, Shaun

    2011-09-01

    We present a new algorithm to generate a random (unclustered) version of a magnitude-limited observational galaxy redshift catalogue. It takes into account both galaxy evolution and the perturbing effects of large-scale structure. The key to the algorithm is a maximum likelihood (ML) method for jointly estimating both the luminosity function (LF) and the overdensity as a function of redshift. The random catalogue algorithm then works by cloning each galaxy in the original catalogue, with the number of clones determined by the ML solution. Each of these cloned galaxies is then assigned a random redshift uniformly distributed over the accessible survey volume, taking account of the survey magnitude limit(s) and, optionally, both luminosity and number density evolution. The resulting random catalogues, which can be employed in traditional estimates of galaxy clustering, make fuller use of the information available in the original catalogue and hence are superior to simply fitting a functional form to the observed redshift distribution. They are particularly well suited to studies of the dependence of galaxy clustering on galaxy properties, as each galaxy in the random catalogue has the same list of attributes as measured for the galaxies in the genuine catalogue. The derivation of the joint overdensity and LF estimator reveals the limit in which the ML estimate reduces to the standard 1/Vmax LF estimate, namely when one makes the prior assumption that there are no fluctuations in the radial overdensity. The new ML estimator can be viewed as a generalization of the 1/Vmax estimate in which Vmax is replaced by a density-corrected Vdc,max.
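
    A much-simplified sketch of the cloning idea follows: each galaxy is duplicated a fixed number of times and every clone receives a distance drawn uniformly in volume out to the limit set by the survey magnitude limit. The fixed clone count, the magnitude limit and the neglect of K-corrections and evolution are simplifying assumptions; the paper's method instead derives the cloning from the joint ML solution.

      # Simplified random-catalogue sketch (assumed clone count and magnitude limit).
      import numpy as np

      N_CLONES = 10                   # the paper derives this from the ML solution
      MAG_LIMIT = 17.7                # hypothetical survey magnitude limit

      def max_distance(abs_mag, mag_limit=MAG_LIMIT):
          """Distance [Mpc] at which a galaxy of absolute magnitude M reaches the limit."""
          return 10 ** ((mag_limit - abs_mag - 25.0) / 5.0)

      def random_catalogue(abs_mags, rng=np.random.default_rng(1)):
          clone_mags, clone_dists = [], []
          for M in abs_mags:
              d_max = max_distance(M)
              d = d_max * rng.random(N_CLONES) ** (1.0 / 3.0)   # uniform in volume
              clone_mags.append(np.full(N_CLONES, M))
              clone_dists.append(d)
          return np.concatenate(clone_mags), np.concatenate(clone_dists)

      mags, dists = random_catalogue(np.array([-21.3, -20.1, -19.4]))
      print(len(mags), "random points generated")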

  16. A prototype catalogue: DOE National Laboratory technologies for infrastructure modernization

    SciTech Connect

    Currie, J.W.; Wilfert, G.L.; March, F.

    1990-01-01

    The purpose of this report is to provide the Office of Technology Assessment (OTA) with information about selected technologies under development in the Department of Energy (DOE) through its National Laboratory System and its Program Office operations. The technologies selected are those that have the potential to improve the performance of the nation's public works infrastructure. The product is a relational database that we refer to as a "prototype catalogue of technologies." The catalogue contains over 100 entries of DOE-supported technologies having potential application to infrastructure-related problems. The work involved conceptualizing an approach, developing a framework for organizing technology information, and collecting samples of readily available data to be put into a prototype catalogue. In developing the catalogue, our objectives were to demonstrate the concept and provide readily available information to OTA. As such, the catalogue represents a preliminary product. The existing database is not exhaustive and likely represents only a fraction of relevant technologies developed by DOE. In addition, the taxonomy we used to classify technologies is based on the judgment of project staff and has received minimal review by individuals who have been involved in the development and testing of the technologies. Finally, end users will likely identify framework changes and additions that will strengthen the catalogue approach. The framework for the catalogue includes four components: a description of the technology, along with potential uses and other pertinent information; identification of the source of the descriptive information; identification of a person or group knowledgeable about the technology; and a classification of the described technology in terms of its type, application, life-cycle use, function, and readiness.

  17. Who Needs Replication?

    ERIC Educational Resources Information Center

    Porte, Graeme

    2013-01-01

    In this paper, the editor of a recent Cambridge University Press book on research methods discusses replicating previous key studies to throw more light on their reliability and generalizability. Replication research is presented as an accepted method of validating previous research by providing comparability between the original and replicated…

  18. OriDB: a DNA replication origin database.

    PubMed

    Nieduszynski, Conrad A; Hiraga, Shin-ichiro; Ak, Prashanth; Benham, Craig J; Donaldson, Anne D

    2007-01-01

    Replication of eukaryotic chromosomes initiates at multiple sites called replication origins. Replication origins are best understood in the budding yeast Saccharomyces cerevisiae, where several complementary studies have mapped their locations genome-wide. We have collated these datasets, taking account of the resolution of each study, to generate a single list of distinct origin sites. OriDB provides a web-based catalogue of these confirmed and predicted S.cerevisiae DNA replication origin sites. Each proposed or confirmed origin site appears as a record in OriDB, with each record comprising seven pages. These pages provide, in text and graphical formats, the following information: genomic location and chromosome context of the origin site; time of origin replication; DNA sequence of proposed or experimentally confirmed origin elements; free energy required to open the DNA duplex (stress-induced DNA duplex destabilization or SIDD); and phylogenetic conservation of sequence elements. In addition, OriDB encourages community submission of additional information for each origin site through a User Notes facility. Origin sites are linked to several external resources, including the Saccharomyces Genome Database (SGD) and relevant publications at PubMed. Finally, a Chromosome Viewer utility allows users to interactively generate graphical representations of DNA replication data genome-wide. OriDB is available at www.oridb.org.

  19. The AKARI Far-Infrared Bright Source Catalogue

    NASA Astrophysics Data System (ADS)

    Yamamura, Issei

    The AKARI Far-Infrared Bright Source Catalogue is now available to the world-wide astronomical community. The catalogue contains about 0.4 million infrared sources detected and measured in four far-infrared wavelength bands centered at 65, 90, 140, 160 µm. It is expected to serve as one of the standard resources in modern astronomical research. AKARI, the first Japanese infrared astronomical satellite, was launched in February 2006, and carried out an all-sky survey during its 16-month cryogenic mission lifetime. AKARI is equipped with a 68.5 cm cooled telescope and two instruments, the Infrared Camera (IRC; 1.8-26.5 µm) and the Far-Infrared Surveyor (FIS; 50-180 µm). The All-Sky Survey was made in two bands (9 and 18 µm) by the IRC and in four wavelengths by the FIS. The FIR survey covered more than 94 per cent of the entire sky with more than two independent scans. The first public release of the point source catalogue was made in March 2010. The catalogue is intended to provide a source list of uniform detection limit over the entire sky (in fact the limit is different in the Galactic Plane regions). It includes about 0.4 million sources. The detection limit at the most sensitive band, 90 µm, is about 0.6 Jy. The flux uncertainty is 20-40 per cent, depending on the band. The position information is as accurate as about 5 arcsec, enabling us to carry out a precise identification using other catalogues. The Mid-Infrared Point Source Catalogue was also released at the same time. With their higher spatial resolution and better sensitivity, the AKARI catalogues are expected to serve as a standard infrared source list in various fields in astronomy. The catalogue provides a good opportunity for follow-up observations with Herschel, Planck, SOFIA, ALMA, and other ongoing and future facilities. AKARI is a JAXA project with the participation of ESA. The Far-Infrared Bright Source Catalogue is the product of collaborations between the institutes in Japan, Korea, UK, and the

  20. Identifying and relating biological concepts in the Catalogue of Life

    PubMed Central

    2011-01-01

    Background In this paper we describe our experience of adding globally unique identifiers to the Species 2000 and ITIS Catalogue of Life, an on-line index of organisms which is intended, ultimately, to cover all the world's known species. The scientific species names held in the Catalogue are names that already play an extensive role as terms in the organisation of information about living organisms in bioinformatics and other domains, but the effectiveness of their use is hindered by variation in individuals' opinions and understanding of these terms; indeed, in some cases more than one name will have been used to refer to the same organism. This means that it is desirable to be able to give unique labels to each of these differing concepts within the catalogue and to be able to determine which concepts are being used in other systems, in order that they can be associated with the concepts in the catalogue. Not only is this needed, but it is also necessary to know the relationships between alternative concepts that scientists might have employed, as these determine what can be inferred when data associated with related concepts is being processed. A further complication is that the catalogue itself is evolving as scientific opinion changes due to an increasing understanding of life. Results We describe how we are using Life Science Identifiers (LSIDs) as globally unique identifiers in the Catalogue of Life, explaining how the mapping to species concepts is performed, how concepts are associated with specific editions of the catalogue, and how the Taxon Concept Schema has been adopted in order to express information about concepts and their relationships. We explore the implications of using globally unique identifiers in order to refer to abstract concepts such as species, which incorporate at least a measure of subjectivity in their definition, in contrast with the more traditional use of such identifiers to refer to more tangible entities, events, documents

  1. A catalogue of AKARI FIS BSC extragalactic objects

    NASA Astrophysics Data System (ADS)

    Marton, Gabor; Toth, L. Viktor; Gyorgy Balazs, Lajos

    2015-08-01

    We combined photometric data for about 70 thousand point sources from the AKARI Far-Infrared Surveyor Bright Source Catalogue with AllWISE catalogue data to identify galaxies. We used Quadratic Discriminant Analysis (QDA) to classify our sources. The classification was based on a 6D parameter space that contained the AKARI [F65/F90], [F90/F140], [F140/F160] and WISE W1-W2 colours along with WISE W1 magnitudes and AKARI [F140] flux values. Sources were classified into 3 main object types: YSO candidates, evolved stars and galaxies. The training samples were SIMBAD entries of the input point sources wherever an associated SIMBAD object was found within a 30 arcsecond search radius. The QDA yielded more than 5000 AKARI galaxy candidate sources. The selection was tested by cross-correlating our AKARI extragalactic catalogue with the Revised IRAS-FSC Redshift Catalogue (RIFSCz); a very good match was found. A further classification attempt was also made to differentiate between extragalactic subtypes using Support Vector Machines (SVMs). The results of the various methods showed that we can confidently separate cirrus-dominated objects (type 1 of RIFSCz). Some of our “galaxy candidate” sources are associated with 2MASS extended objects and are listed in the NASA Extragalactic Database, so far without clear proof of their extragalactic nature. Examples will be presented in our poster. Finally, other AKARI extragalactic catalogues will also be compared to our statistical selection.
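
    The classification step described above maps naturally onto scikit-learn's QDA implementation; the sketch below mirrors the 6D feature space but uses synthetic placeholder photometry and labels, since the real training set comes from the SIMBAD cross-match.

      # QDA classification sketch with placeholder data (not the real AKARI/WISE photometry).
      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

      features = ["F65/F90", "F90/F140", "F140/F160", "W1-W2", "W1", "F140"]
      classes = ["YSO candidate", "evolved star", "galaxy"]

      rng = np.random.default_rng(3)
      X_train = rng.normal(size=(300, len(features)))     # placeholder training colours/fluxes
      y_train = rng.integers(0, len(classes), size=300)   # placeholder SIMBAD-based labels

      qda = QuadraticDiscriminantAnalysis()
      qda.fit(X_train, y_train)

      X_new = rng.normal(size=(5, len(features)))         # unclassified sources
      print([classes[i] for i in qda.predict(X_new)])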

  2. Creating mock catalogues of stellar haloes from cosmological simulations

    NASA Astrophysics Data System (ADS)

    Lowing, Ben; Wang, Wenting; Cooper, Andrew; Kennedy, Rachel; Helly, John; Cole, Shaun; Frenk, Carlos

    2015-01-01

    We present a new technique for creating mock catalogues of the individual stars that make up the accreted component of stellar haloes in cosmological simulations and show how the catalogues can be used to test and interpret observational data. The catalogues are constructed from a combination of methods. A semi-analytic galaxy formation model is used to calculate the star formation history in haloes in an N-body simulation and dark matter particles are tagged with this stellar mass. The tags are converted into individual stars using a stellar population synthesis model to obtain the number density and evolutionary stage of the stars, together with a phase-space sampling method that distributes the stars while ensuring that the phase-space structure of the original N-body simulation is maintained. A set of catalogues based on the Λ cold dark matter Aquarius simulations of Milky Way mass haloes have been created and made publicly available on a website. Two example applications are discussed that demonstrate the power and flexibility of the mock catalogues. We show how the rich stellar substructure that survives in the stellar halo precludes a simple measurement of its density profile and demonstrate explicitly how pencil-beam surveys can return almost any value for the slope of the profile. We also show that localized variations in the abundance of particular types of stars, a signature of differences in the composition of stellar populations, allow streams to be easily identified.

  3. Catalogue of knowledge and skills for sleep medicine.

    PubMed

    Penzel, Thomas; Pevernagie, Dirk; Dogas, Zoran; Grote, Ludger; de Lacy, Simone; Rodenbeck, Andrea; Bassetti, Claudio; Berg, Søren; Cirignotta, Fabio; d'Ortho, Marie-Pia; Garcia-Borreguero, Diego; Levy, Patrick; Nobili, Lino; Paiva, Teresa; Peigneux, Philippe; Pollmächer, Thomas; Riemann, Dieter; Skene, Debra J; Zucconi, Marco; Espie, Colin

    2014-04-01

    Sleep medicine is evolving globally into a medical subspeciality in its own right, and in parallel, behavioural sleep medicine and sleep technology are expanding rapidly. Educational programmes are being implemented at different levels in many European countries. However, these programmes would benefit from a common, interdisciplinary curriculum. This 'catalogue of knowledge and skills' for sleep medicine is proposed, therefore, as a template for developing more standardized curricula across Europe. The Board and The Sleep Medicine Committee of the European Sleep Research Society (ESRS) have compiled the catalogue based on textbooks, standard of practice publications, systematic reviews and professional experience, validated subsequently by an online survey completed by 110 delegates specialized in sleep medicine from different European countries. The catalogue comprises 10 chapters covering physiology, pathology, diagnostic and treatment procedures to societal and organizational aspects of sleep medicine. Required levels of knowledge and skills are defined, as is a proposed workload of 60 points according to the European Credit Transfer System (ECTS). The catalogue is intended to be a basis for sleep medicine education, for sleep medicine courses and for sleep medicine examinations, serving not only physicians with a medical speciality degree, but also PhD and MSc health professionals such as clinical psychologists and scientists, technologists and nurses, all of whom may be involved professionally in sleep medicine. In the future, the catalogue will be revised in accordance with advances in the field of sleep medicine.

  4. X-ray selected stars in HRC and BHRC catalogues

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.; Paronyan, G. M.

    2014-12-01

    A joint HRC/BHRC Catalogue has been created by merging the Hamburg ROSAT Catalogue (HRC) and the Byurakan Hamburg ROSAT Catalogue (BHRC). Both were made by optical identifications of X-ray sources based on low-dispersion spectra of the Hamburg Quasar Survey (HQS) using ROSAT Catalogues. As a result, the largest such sample, of 8132 (5341+2791) optically identified X-ray sources, was created, having a photon count rate (CR) ≤ 0.04 ct/s in the area with galactic latitudes |b|≤ 20° and declinations d≤ 0°. There are 4253 AGN, 492 galaxies, 1800 stars and 1587 unknown objects in the sample. All stars have been found in GSC 2.3.2, and most of them are also in the GALEX, USNO-B1.0, 2MASS and WISE catalogues. In addition, 1429 are in SDSS DR9 and 204 have SDSS spectra. For these stars we have carried out spectral classification, and along with the bright stars, many new cataclysmic variables (CVs), white dwarfs (WDs) and late-type stars (K-M and C) have been revealed. For all stars, statistical studies of their multiwavelength properties have been made. An attempt was also made to find connections between the radiation fluxes in different bands for different types of sources and to identify their characteristics.

  5. Separation of metadata and bulkdata to speed DICOM tag morphing

    NASA Astrophysics Data System (ADS)

    Ismail, Mahmoud; Ning, Yu; Philbin, James

    2014-03-01

    Most medical images are archived and transmitted using the DICOM format. The DICOM information model combines image pixel data and associated metadata into a single object. It is not possible to access the metadata separately from the pixel data. However, there are important use cases that only need access to metadata, and the DICOM format increases the running time of those use cases. Tag morphing is an example of one such use case. Tag or attribute morphing includes insertion, deletion, or modification of one or more of the metadata attributes in a study. It is typically used for order reconciliation on study acquisition or to localize the Issuer of Patient ID and the Patient ID attributes when data from one Medical Record Number (MRN) domain is transferred to or displayed in a different domain. This work uses the Multi-Series DICOM (MSD) format to reduce the time required for tag morphing. The MSD format separates metadata from pixel data, and at the same time eliminates duplicate attributes. MSD stores studies in two files rather than in the many single-frame files typical of DICOM. The first file contains the de-duplicated study metadata, and the second contains pixel data and other bulkdata. A set of experiments was performed where metadata updates were applied to a set of DICOM studies stored in both the traditional Single Frame DICOM (SFD) format and the MSD format. The time required to perform the updates was recorded for each format. The results show that tag morphing is, on average, more than eight times faster in MSD format.
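
    For readers unfamiliar with attribute morphing, the sketch below shows the kind of edit involved - localizing the Patient ID and Issuer of Patient ID attributes - applied with the pydicom library to a single file. The file names and identifier values are hypothetical; the MSD approach described above applies one such edit to the de-duplicated study metadata rather than to every single-frame file.

      # Attribute ("tag") morphing sketch with pydicom on one DICOM file.
      import pydicom

      def morph_patient_domain(path_in, path_out, new_mrn, new_issuer):
          ds = pydicom.dcmread(path_in)
          ds.PatientID = new_mrn                 # localize Patient ID
          ds.IssuerOfPatientID = new_issuer      # localize Issuer of Patient ID
          ds.save_as(path_out)

      morph_patient_domain("study_frame.dcm", "study_frame_localized.dcm",
                           new_mrn="MRN-12345", new_issuer="HOSPITAL-B")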

  6. Study of the star catalogue (epoch AD 1396.0) recorded in ancient Korean astronomical almanac

    NASA Astrophysics Data System (ADS)

    Jeon, Junhyeok; Lee, Yong Bok; Lee, Yong-Sam

    2015-11-01

    The study of old star catalogues provides important astrometric data. Most research on old star catalogues has focused on manuscripts published in Europe or in the Arabic/Islamic world, while the old star catalogues published in East Asia have received little attention. Therefore, among the East Asian star catalogues we focus on a particular catalogue recorded in a Korean almanac. The catalogue contains 277 stars that are positioned in a region within 10° of the ecliptic plane. The stars in the catalogue were identified using the modern Hipparcos catalogue. We identified 274 of the 277 stars, a rate of 98.9 per cent. The catalogue records the epoch of the stars' positions as AD 1396.0. However, by using all of the identified stars we found that the initial epoch of the catalogue is AD 1363.1 ± 3.2. In conclusion, the star catalogue was compiled and edited from various older star catalogues. We assume a correlation with the Almagest by Ptolemaios. This study presents newly analysed results from historically important astronomical data discovered in East Asia. This star catalogue will therefore become important data for comparison with the star catalogues published in Europe and in the Arabic/Islamic world.

  7. Adenovirus DNA Replication

    PubMed Central

    Hoeben, Rob C.; Uil, Taco G.

    2013-01-01

    Adenoviruses have attracted much attention as probes to study biological processes such as DNA replication, transcription, splicing, and cellular transformation. More recently these viruses have been used as gene-transfer vectors and oncolytic agents. On the other hand, adenoviruses are notorious pathogens in people with compromised immune functions. This article will briefly summarize the basic replication strategy of adenoviruses and the key proteins involved and will deal with the new developments since 2006. In addition, we will cover the development of antivirals that interfere with human adenovirus (HAdV) replication and the impact of HAdV on human disease. PMID:23388625

  8. Recombination and Replication

    PubMed Central

    Syeda, Aisha H.; Hawkins, Michelle; McGlynn, Peter

    2014-01-01

    The links between recombination and replication have been appreciated for decades and it is now generally accepted that these two fundamental aspects of DNA metabolism are inseparable: Homologous recombination is essential for completion of DNA replication and vice versa. This review focuses on the roles that recombination enzymes play in underpinning genome duplication, aiding replication fork movement in the face of the many replisome barriers that challenge genome stability. These links have many conserved features across all domains of life, reflecting the conserved nature of the substrate for these reactions, DNA. PMID:25341919

  9. Reassessing the BATSE Catalogue of Terrestrial Gamma-ray Flashes

    NASA Astrophysics Data System (ADS)

    Sleinkofer, A. M.; Briggs, M. S.; Connaughton, V.

    2015-12-01

    Since Terrestrial Gamma-ray Flashes (TGFs) were discovered by the Burst and Transient Source Experiment (BATSE) on NASA's Compton Gamma-ray Observatory (CGRO) in the 1990s, other observations have increased our knowledge of TGFs. This improved understanding includes characteristics such as the distributions of geographic locations, pulse durations, pulse shapes, and pulse multiplicities. Using this post-BATSE knowledge, we reassessed the BATSE TGF catalogue (http://gammaray.nsstc.nasa.gov/batse/tgf/). Some BATSE triggers have features that can easily identify the trigger as a TGF, while others display different features that are unusual for TGFs. The BATSE triggers of the TGF catalogue were classified into five categories: TGFs, Terrestrial Electron Beams (TEBs), unusual TGFs, uncertain due to insufficient data, and TEB candidates. The triggers with unusual features will be further investigated. A table of our classifications and comments will be added to the online catalogue.

  10. Second ROSAT all-sky survey (2RXS) source catalogue

    NASA Astrophysics Data System (ADS)

    Boller, Th.; Freyberg, M. J.; Trümper, J.; Haberl, F.; Voges, W.; Nandra, K.

    2016-04-01

    Aims: We present the second ROSAT all-sky survey source catalogue, hereafter referred to as the 2RXS catalogue. This is the second publicly released ROSAT catalogue of point-like sources obtained from the ROSAT all-sky survey (RASS) observations performed with the position-sensitive proportional counter (PSPC) between June 1990 and August 1991, and is an extended and revised version of the bright and faint source catalogues. Methods: We used the latest version of the RASS processing to produce overlapping X-ray images of 6.4° × 6.4° sky regions. To create a source catalogue, a likelihood-based detection algorithm was applied to these, which accounts for the variable point-spread function (PSF) across the PSPC field of view. Improvements in the background determination compared to 1RXS were also implemented. X-ray control images showing the source and background extraction regions were generated, which were visually inspected. Simulations were performed to assess the spurious source content of the 2RXS catalogue. X-ray spectra and light curves were extracted for the 2RXS sources, with spectral and variability parameters derived from these products. Results: We obtained about 135 000 X-ray detections in the 0.1-2.4 keV energy band down to a likelihood threshold of 6.5, as adopted in the 1RXS faint source catalogue. Our simulations show that the expected spurious content of the catalogue is a strong function of detection likelihood, and the full catalogue is expected to contain about 30% spurious detections. A more conservative likelihood threshold of 9, on the other hand, yields about 71 000 detections with a 5% spurious fraction. We recommend thresholds appropriate to the scientific application. X-ray images and overlaid X-ray contour lines provide an additional user product to evaluate the detections visually, and we performed our own visual inspections to flag uncertain detections. Intra-day variability in the X-ray light curves was quantified based on the

  11. The DES Science Verification weak lensing shear catalogues

    NASA Astrophysics Data System (ADS)

    Jarvis, M.; Sheldon, E.; Zuntz, J.; Kacprzak, T.; Bridle, S. L.; Amara, A.; Armstrong, R.; Becker, M. R.; Bernstein, G. M.; Bonnett, C.; Chang, C.; Das, R.; Dietrich, J. P.; Drlica-Wagner, A.; Eifler, T. F.; Gangkofner, C.; Gruen, D.; Hirsch, M.; Huff, E. M.; Jain, B.; Kent, S.; Kirk, D.; MacCrann, N.; Melchior, P.; Plazas, A. A.; Refregier, A.; Rowe, B.; Rykoff, E. S.; Samuroff, S.; Sánchez, C.; Suchyta, E.; Troxel, M. A.; Vikram, V.; Abbott, T.; Abdalla, F. B.; Allam, S.; Annis, J.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Capozzi, D.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Clampitt, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gaztanaga, E.; Gerdes, D. W.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Martini, P.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nord, B.; Ogando, R.; Reil, K.; Romer, A. K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Swanson, M. E. C.; Tarle, G.; Thaler, J.; Thomas, D.; Walker, A. R.; Wechsler, R. H.

    2016-08-01

    We present weak lensing shear catalogues for 139 square degrees of data taken during the Science Verification (SV) time for the new Dark Energy Camera (DECam) being used for the Dark Energy Survey (DES). We describe our object selection, point spread function estimation and shear measurement procedures using two independent shear pipelines, IM3SHAPE and NGMIX, which produce catalogues of 2.12 million and 3.44 million galaxies, respectively. We detail a set of null tests for the shear measurements and find that they pass the requirements for systematic errors at the level necessary for weak lensing science applications using the SV data. We also discuss some of the planned algorithmic improvements that will be necessary to produce sufficiently accurate shear catalogues for the full 5-yr DES, which is expected to cover 5000 square degrees.

  12. Euclid Star Catalogue Management for the Fine Guidance Sensor

    NASA Astrophysics Data System (ADS)

    2015-09-01

    The Fine Guidance Sensor is a key element of the AOCS subsystem of the Euclid spacecraft, needed to achieve the required absolute pointing accuracy and pointing stability of the telescope Line of Sight. The Fine Guidance Sensor is able to measure the relative attitude with respect to the first attitude acquired, and the absolute attitude with respect to the inertial reference frame through the use of an on-board Star Catalogue. The presence of at least 3 star-like objects per FoV is needed to compute the attitude; considering the small FGS FoV (0.1 x 0.1 deg), the Star Catalogue shall be complete up to visual magnitude 19 to allow the correct coverage. The paper describes the implementation of the catalogue in the FGS design and the management of the large amount of data on the ground, between ground and spacecraft, and on board.

  13. Three editions of the star catalogue of Tycho Brahe. Machine-readable versions and comparison with the modern Hipparcos Catalogue

    NASA Astrophysics Data System (ADS)

    Verbunt, F.; van Gent, R. H.

    2010-06-01

    Tycho Brahe completed his catalogue with the positions and magnitudes of 1004 fixed stars in 1598. This catalogue circulated in manuscript form. Brahe edited a shorter version with 777 stars, printed in 1602, and Kepler edited the full catalogue of 1004 stars, printed in 1627. We provide machine-readable versions of the three versions of the catalogue, describe the differences between them and briefly discuss their accuracy on the basis of comparison with modern data from the Hipparcos Catalogue. We also compare our results with earlier analyses by Dreyer (1916, Tychonis Brahe Dani Scripta Astronomica, Vol. II) and Rawlins (1993, DIO, 3, 1), finding good overall agreement. The magnitudes given by Brahe correlate well with modern values, his longitudes and latitudes have error distributions with widths of 2´, with excess numbers of stars with larger errors (as compared to Gaussian distributions), in particular for the faintest stars. Errors in positions larger than ≃10´, which comprise about 15% of the entries, are likely due to computing or copying errors. The full tables KeplerE and Variants (see Table 4) and the table with the latin descriptions of the stars are available in electronic form only at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/516/A28

  14. Towards a Next-Generation Catalogue Cross-Match Service

    NASA Astrophysics Data System (ADS)

    Pineau, F.; Boch, T.; Derriere, S.; Arches Consortium

    2015-09-01

    In the past we have developed several catalogue cross-match tools. On one hand, the CDS XMatch service (Pineau et al. 2011) is able to perform basic but very efficient cross-matches, scalable to the largest catalogues on a single regular server. On the other hand, as part of the European project ARCHES, we have been developing a generic and flexible tool which performs potentially complex multi-catalogue cross-matches and which computes probabilities of association based on a novel statistical framework. Although the two approaches have been managed so far as separate tracks, the need for next-generation cross-match services dealing with both efficiency and complexity is becoming pressing with forthcoming projects which will produce huge high-quality catalogues. We are addressing this challenge, which is both theoretical and technical. In ARCHES we generalize to N catalogues the candidate selection criteria - based on the chi-square distribution - described in Pineau et al. (2011). We formulate and test a number of Bayesian hypotheses, a number which necessarily increases dramatically with the number of catalogues. To assign a probability to each hypothesis, we rely on estimated priors which account for local densities of sources. We validated our developments by comparing the theoretical curves we derived with the results of Monte-Carlo simulations. The current prototype is able to take into account heterogeneous positional errors, object extension and proper motion. The technical complexity is managed by OO programming design patterns and SQL-like functionalities. Large tasks are split into smaller independent pieces for scalability. Performance is achieved by resorting to multi-threading, sequential reads and several tree data structures. In addition to kd-trees, we account for heterogeneous positional errors and object extension using M-trees. Proper motions are supported using a modified M-tree we developed, inspired by Time Parametrized R-trees (TPR
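
    The two-catalogue candidate selection criterion mentioned above - keeping pairs whose error-normalized squared separation lies below a chi-square threshold - can be sketched as follows. The coarse kd-tree filter, the tangent-plane coordinates and the error values are illustrative assumptions; the ARCHES tool itself generalizes the selection to N catalogues and adds priors.

      # Chi-square candidate selection sketch for two catalogues with positional errors.
      import numpy as np
      from scipy.spatial import cKDTree
      from scipy.stats import chi2

      def candidates(xy_a, sig_a, xy_b, sig_b, completeness=0.997):
          """xy_*: (N,2) tangent-plane positions [arcsec]; sig_*: 1-sigma errors [arcsec]."""
          threshold = chi2.ppf(completeness, df=2)
          r_max = np.sqrt(threshold) * (sig_a.max() + sig_b.max())
          tree = cKDTree(xy_b)
          pairs = []
          for i, pos in enumerate(xy_a):
              for j in tree.query_ball_point(pos, r_max):      # coarse spatial filter
                  d2 = np.sum((pos - xy_b[j]) ** 2)
                  if d2 / (sig_a[i] ** 2 + sig_b[j] ** 2) < threshold:
                      pairs.append((i, j))
          return pairs

      rng = np.random.default_rng(7)
      xy_a = rng.uniform(0, 3600, size=(500, 2)); sig_a = np.full(500, 0.5)
      xy_b = xy_a + rng.normal(0, 0.5, size=(500, 2)); sig_b = np.full(500, 0.5)
      print(len(candidates(xy_a, sig_a, xy_b, sig_b)), "candidate pairs")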

  15. Semantic Representation of Temporal Metadata in a Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Wang, H.; Rozell, E. A.; West, P.; Zednik, S.; Fox, P. A.

    2011-12-01

    The Virtual Solar-Terrestrial Observatory (VSTO) Portal at vsto.org provides a set of guided workflows to implement use cases designed for solar-terrestrial physics and upper atmospheric science. Semantics are used in VSTO to model abstract instrument and parameter classifications, providing data access to users without extended domain-specific vocabularies. The temporal restrictions used in the workflows are currently implemented via RESTful calls to a remote system with access to a SQL-based metadata catalog. In order to provide a greater range of temporal reasoning and search capabilities for the user, we propose an alternative architecture design for the VSTO Portal, in which the temporal metadata is integrated into the domain ontology. We achieve this integration by converting temporal metadata from the headers of raw data files into RDF using the OWL-Time vocabulary. This presentation covers our work with semantic temporal metadata, including our representation using OWL-Time, issues that we have faced in persistent storage, and the performance and scalability of semantic queries. We conclude with a discussion of the significance of semantic temporal metadata in virtual observatories.
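
    A minimal sketch of expressing a dataset's temporal coverage in RDF with the OWL-Time vocabulary is given below, using rdflib; the dataset identifier, the example namespace and the specific Interval/Instant properties chosen are illustrative assumptions rather than the VSTO ontology's actual design.

      # OWL-Time temporal coverage sketch with rdflib (hypothetical dataset and namespace).
      from rdflib import Graph, Namespace, Literal
      from rdflib.namespace import RDF, XSD

      TIME = Namespace("http://www.w3.org/2006/time#")
      EX = Namespace("http://example.org/vsto/")

      g = Graph()
      g.bind("time", TIME)

      dataset = EX["dataset/example_instrument_run"]
      interval, begin, end = EX["coverage1"], EX["begin1"], EX["end1"]

      g.add((dataset, EX.hasTemporalCoverage, interval))
      g.add((interval, RDF.type, TIME.Interval))
      g.add((interval, TIME.hasBeginning, begin))
      g.add((interval, TIME.hasEnd, end))
      g.add((begin, RDF.type, TIME.Instant))
      g.add((begin, TIME.inXSDDateTime, Literal("2004-01-01T00:00:00Z", datatype=XSD.dateTime)))
      g.add((end, RDF.type, TIME.Instant))
      g.add((end, TIME.inXSDDateTime, Literal("2004-01-02T00:00:00Z", datatype=XSD.dateTime)))

      print(g.serialize(format="turtle"))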

  16. Interoperable Solar Data and Metadata via LISIRD 3

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D. M.; Pankratz, C. K.; Snow, M. A.; Woods, T. N.

    2015-12-01

    LISIRD 3 is a major upgrade of the LASP Interactive Solar Irradiance Data Center (LISIRD), which serves several dozen space based solar irradiance and related data products to the public. Through interactive plots, LISIRD 3 provides data browsing supported by data subsetting and aggregation. Incorporating a semantically enabled metadata repository, LISIRD 3 users see current, vetted, consistent information about the datasets offered. Users can now also search for datasets based on metadata fields such as dataset type and/or spectral or temporal range. This semantic database enables metadata browsing, so users can discover the relationships between datasets, instruments, spacecraft, mission and PI. The database also enables creation and publication of metadata records in a variety of formats, such as SPASE or ISO, making these datasets more discoverable. The database also enables the possibility of a public SPARQL endpoint, making the metadata browsable in an automated fashion. LISIRD 3's data access middleware, LaTiS, provides dynamic, on demand reformatting of data and timestamps, subsetting and aggregation, and other server side functionality via a RESTful OPeNDAP compliant API, enabling interoperability between LASP datasets and many common tools. LISIRD 3's templated front end design, coupled with the uniform data interface offered by LaTiS, allows easy integration of new datasets. Consequently the number and variety of datasets offered by LISIRD has grown to encompass several dozen, with many more to come. This poster will discuss design and implementation of LISIRD 3, including tools used, capabilities enabled, and issues encountered.

  17. CyberSKA Radio Imaging Metadata and VO Compliance Engineering

    NASA Astrophysics Data System (ADS)

    Anderson, K. R.; Rosolowsky, E.; Dowler, P.

    2013-10-01

    The CyberSKA project has written a specification for the metadata encapsulation of radio astronomy data products pursuant to insertion into the VO-compliant Common Archive Observation Model (CAOM) database hosted by the Canadian Astronomy Data Centre (CADC). This specification accommodates radio FITS Image and UV Visibility data, as well as pure CASA Tables Imaging and Visibility Measurement Sets. To extract and engineer radio metadata, we have authored two software packages: metaData (v0.5.0) and mddb (v1.3). Together, these Python packages can convert all the above stated data format types into concise FITS-like header files, engineer the metadata to conform to the CAOM data model, and then insert these engineered data into the CADC database, which subsequently becomes published through the Canadian Virtual Observatory. The metaData and mddb packages have, for the first time, published ALMA imaging data on VO services. Our ongoing work aims to integrate visibility data from ALMA and the SKA into VO services and to enable user-submitted radio data to move seamlessly into the Virtual Observatory.
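
    The metaData and mddb packages themselves are not reproduced here; the hedged sketch below only illustrates their general first step, reducing a radio FITS image to a concise, header-like metadata summary, using astropy for illustration. The keyword selection is an assumption, not the CyberSKA specification.

        from astropy.io import fits

        # Keywords one might carry forward into a concise, CAOM-oriented header
        # summary; the selection is illustrative, not the CyberSKA specification.
        KEYWORDS = ["OBJECT", "TELESCOP", "INSTRUME", "DATE-OBS",
                    "CTYPE1", "CRVAL1", "CTYPE2", "CRVAL2", "BUNIT"]

        def summarise_fits(path):
            """Return a small dict of descriptive metadata from the primary header."""
            with fits.open(path) as hdul:
                header = hdul[0].header
                meta = {key: header.get(key) for key in KEYWORDS if key in header}
                meta["NAXIS"] = header.get("NAXIS")
                meta["SHAPE"] = tuple(header.get(f"NAXIS{i}")
                                      for i in range(1, (header.get("NAXIS") or 0) + 1))
            return meta

        # print(summarise_fits("image.fits"))   # the path is a placeholder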

  18. Mercury- Distributed Metadata Management, Data Discovery and Access System

    NASA Astrophysics Data System (ADS)

    Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.

    2007-12-01

    Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL-developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: Filtering and dynamic sorting of search results, book-markable search results, save, retrieve, and modify search criteria.
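
    As an illustration of the kind of search service listed above, the sketch below issues an OpenSearch-style query and walks the returned RSS items; the endpoint URL and parameter names are placeholders and do not describe the actual Mercury API.

        import requests
        import xml.etree.ElementTree as ET

        # Hypothetical OpenSearch-style query against a Mercury-like portal; the
        # endpoint and parameter names are placeholders, not the real Mercury API.
        ENDPOINT = "https://mercury.example.org/opensearch"
        params = {"q": "soil moisture", "start": 1, "count": 10, "format": "rss"}

        resp = requests.get(ENDPOINT, params=params, timeout=30)
        resp.raise_for_status()

        # RSS 2.0 items live under channel/item; print the title and link of each hit.
        root = ET.fromstring(resp.text)
        for item in root.findall("./channel/item"):
            title = item.findtext("title", default="(untitled)")
            link = item.findtext("link", default="")
            print(f"{title}\n  {link}")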

  19. The Million Optical - Radio/X-ray Associations (MORX) Catalogue

    NASA Astrophysics Data System (ADS)

    Flesch, Eric W.

    2016-10-01

    This automated catalogue combines all the largest published optical, radio, and X-ray sky catalogues to find probable radio/X-ray associations to optical objects, plus double radio lobes, using uniform processing against all input data. In total, 1 002 855 optical objects are presented. Each object is displayed with J2000 astrometry, optical and radio/X-ray identifiers, red and blue photometry, and calculated probabilities and optical field solutions of the associations. This is the third and final edition produced with this method.

  20. The Brera Multi-scale Wavelet ROSAT HRI source catalogue

    NASA Astrophysics Data System (ADS)

    Panzera, M. R.; Campana, S.; Covino, S.; Lazzati, D.; Mignani, R. P.; Moretti, A.; Tagliaferri, G.

    2003-02-01

    We present the Brera Multi-scale Wavelet ROSAT HRI source catalogue (BMW-HRI) derived from all ROSAT HRI pointed observations with exposure times longer than 100 s available in the ROSAT public archives. The data were analyzed automatically using a wavelet detection algorithm suited to the detection and characterization of both point-like and extended sources. This algorithm is able to detect and disentangle sources in very crowded fields and/or in the presence of extended or bright sources. Images have also been visually inspected after the analysis for verification. The final catalogue, derived from 4303 observations, consists of 29 089 sources detected with a detection probability of ≥ 4.2 sigma. For each source, the primary catalogue entries provide name, position, count rate, flux and extension along with the relative errors. In addition, results of cross-correlations with existing catalogues at different wavelengths (FIRST, IRAS, 2MASS and GSC2) are also reported. Some information is available on the web via the DIANA Interface. As an external check, we compared our catalogue with the previously available ROSHRICAT catalogue (both in its short and long versions) and we were able to recover, for the short version, ~ 90% of the entries. We computed the sky coverage of the entire HRI data set by means of simulations. The complete BMW-HRI catalogue provides a sky coverage of 732 deg^2 down to a limiting flux of ~ 10^-12 erg s^-1 cm^-2 and of 10 deg^2 down to ~ 10^-14 erg s^-1 cm^-2. We were able to compute the cosmological log(N)-log(S) distribution down to a flux of ≈ 1.2 x 10^-14 erg s^-1 cm^-2. The catalogue is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/399/351
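
    Since the catalogue is served by the CDS under the VizieR designation quoted above (J/A+A/399/351), it can also be fetched programmatically; a minimal sketch with astroquery follows, where the column selection and row limit are choices made for this example.

        from astroquery.vizier import Vizier

        # Fetch the BMW-HRI catalogue from VizieR using the designation quoted above.
        # The column selection and the row limit are choices made for this example.
        vizier = Vizier(columns=["*"], row_limit=-1)        # -1 = no row limit
        tables = vizier.get_catalogs("J/A+A/399/351")

        for table in tables:
            print(table.meta.get("name"), len(table), "rows")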

  1. Modeling DNA Replication.

    ERIC Educational Resources Information Center

    Bennett, Joan

    1998-01-01

    Recommends the use of a model of DNA made out of Velcro to help students visualize the steps of DNA replication. Includes a materials list, construction directions, and details of the demonstration using the model parts. (DDR)

  2. Abiotic self-replication.

    PubMed

    Meyer, Adam J; Ellefson, Jared W; Ellington, Andrew D

    2012-12-18

    The key to the origins of life is the replication of information. Linear polymers such as nucleic acids that both carry information and can be replicated are currently what we consider to be the basis of living systems. However, these two properties are not necessarily coupled. The ability to mutate in a discrete or quantized way, without frequent reversion, may be an additional requirement for Darwinian evolution, in which case the notion that Darwinian evolution defines life may be less of a tautology than previously thought. In this Account, we examine a variety of in vitro systems of increasing complexity, from simple chemical replicators up to complex systems based on in vitro transcription and translation. Comparing and contrasting these systems provides an interesting window onto the molecular origins of life. For nucleic acids, the story likely begins with simple chemical replication, perhaps of the form A + B → T, in which T serves as a template for the joining of A and B. Molecular variants capable of faster replication would come to dominate a population, and the development of cycles in which templates could foster one another's replication would have led to increasingly complex replicators and from thence to the initial genomes. The initial genomes may have been propagated by RNA replicases, ribozymes capable of joining oligonucleotides and eventually polymerizing mononucleotide substrates. As ribozymes were added to the genome to fill gaps in the chemistry necessary for replication, the backbone of a putative RNA world would have emerged. It is likely that such replicators would have been plagued by molecular parasites, which would have been passively replicated by the RNA world machinery without contributing to it. These molecular parasites would have been a major driver for the development of compartmentalization/cellularization, as more robust compartments could have outcompeted parasite-ridden compartments. The eventual outsourcing of metabolic

  3. Abiotic self-replication.

    PubMed

    Meyer, Adam J; Ellefson, Jared W; Ellington, Andrew D

    2012-12-18

    The key to the origins of life is the replication of information. Linear polymers such as nucleic acids that both carry information and can be replicated are currently what we consider to be the basis of living systems. However, these two properties are not necessarily coupled. The ability to mutate in a discrete or quantized way, without frequent reversion, may be an additional requirement for Darwinian evolution, in which case the notion that Darwinian evolution defines life may be less of a tautology than previously thought. In this Account, we examine a variety of in vitro systems of increasing complexity, from simple chemical replicators up to complex systems based on in vitro transcription and translation. Comparing and contrasting these systems provides an interesting window onto the molecular origins of life. For nucleic acids, the story likely begins with simple chemical replication, perhaps of the form A + B → T, in which T serves as a template for the joining of A and B. Molecular variants capable of faster replication would come to dominate a population, and the development of cycles in which templates could foster one another's replication would have led to increasingly complex replicators and from thence to the initial genomes. The initial genomes may have been propagated by RNA replicases, ribozymes capable of joining oligonucleotides and eventually polymerizing mononucleotide substrates. As ribozymes were added to the genome to fill gaps in the chemistry necessary for replication, the backbone of a putative RNA world would have emerged. It is likely that such replicators would have been plagued by molecular parasites, which would have been passively replicated by the RNA world machinery without contributing to it. These molecular parasites would have been a major driver for the development of compartmentalization/cellularization, as more robust compartments could have outcompeted parasite-ridden compartments. The eventual outsourcing of metabolic

  4. DNA replication origins.

    PubMed

    Leonard, Alan C; Méchali, Marcel

    2013-10-01

    The onset of genomic DNA synthesis requires precise interactions of specialized initiator proteins with DNA at sites where the replication machinery can be loaded. These sites, defined as replication origins, are found at a few unique locations in all of the prokaryotic chromosomes examined so far. However, replication origins are dispersed among tens of thousands of loci in metazoan chromosomes, thereby raising questions regarding the role of specific nucleotide sequences and chromatin environment in origin selection and the mechanisms used by initiators to recognize replication origins. Close examination of bacterial and archaeal replication origins reveals an array of DNA sequence motifs that position individual initiator protein molecules and promote initiator oligomerization on origin DNA. Conversely, the need for specific recognition sequences in eukaryotic replication origins is relaxed. In fact, the primary rule for origin selection appears to be flexibility, a feature that is modulated either by structural elements or by epigenetic mechanisms at least partly linked to the organization of the genome for gene expression.

  5. Replication of lightweight mirrors

    NASA Astrophysics Data System (ADS)

    Chen, Ming Y.; Matson, Lawrence E.; Lee, Heedong; Chen, Chenggang

    2009-08-01

    The fabrication of lightweight mirror assemblies via a replication technique offers great potential for eliminating the high cost and schedule associated with the grinding and polishing steps needed for conventional glass or SiC mirrors. A replication mandrel is polished to an inverse figure shape and to the desired finish quality. It is then coated with a release layer and the appropriate reflective layer, followed by a laminate for coefficient of thermal expansion (CTE) tailorability and strength. This optical membrane is adhered to a mirror structural substrate with a low-shrinkage, CTE-tailored adhesive. Afterwards, the whole assembly is separated from the mandrel. The mandrel is then cleaned and reused for the next replication run. The ultimate goal of replication is to preserve the surface finish and figure of the optical membrane upon its release from the mandrel. Successful replication requires a minimization of the residual stresses within the optical coating stack, the curing stresses from the adhesive, and the thermal stress resulting from CTE mismatch between the structural substrate, the adhesive, and the optical membrane. In this paper, the results of replication trials using both metal/metal and ceramic/ceramic laminates adhered to lightweighted structural substrates made from syntactic foams (both inorganic and organic) will be discussed.

  6. A Generic Metadata Editor Supporting System Using Drupal CMS

    NASA Astrophysics Data System (ADS)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Metadata Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain, such as the Biological Profile of the FGDC CSDGM, or, for geopolitical regions, the European Profile or the North American Profile in the ISO standards. It is therefore desirable to have a software foundation to support metadata creation and editing for multiple standards and profiles, without reinventing the wheel. We have developed a software module as a generic, flexible software system to do just that: to facilitate the support for multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal interdependencies with other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as XML blob content. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on the metadata record before and after submission. It is trivial to store the metadata record as an actual XML file
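
    The module described above is PHP code running inside Drupal; the Python sketch below only illustrates the underlying idea of turning a persisted XML form definition into a nested array that a form API could render. The XML layout and field attributes are invented for illustration.

        import xml.etree.ElementTree as ET

        # Invented form-definition snippet, standing in for the XML blob persisted
        # in the CMS database; the element and attribute names are illustrative.
        FORM_XML = """
        <form name="fgdc_minimal">
          <field name="title"    type="textfield" required="true"  label="Title"/>
          <field name="abstract" type="textarea"  required="true"  label="Abstract"/>
          <field name="theme"    type="select"    required="false" label="Theme keyword"/>
        </form>
        """

        def form_definition_to_array(xml_text):
            """Turn the XML form definition into a nested dict (cf. a PHP form array)."""
            root = ET.fromstring(xml_text)
            form = {"#name": root.get("name"), "#fields": {}}
            for field in root.findall("field"):
                form["#fields"][field.get("name")] = {
                    "#type": field.get("type"),
                    "#title": field.get("label"),
                    "#required": field.get("required") == "true",
                }
            return form

        print(form_definition_to_array(FORM_XML))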

  7. MATCH: Metadata Access Tool for Climate and Health Datasets

    DOE Data Explorer

    MATCH is a searchable clearinghouse of publicly available Federal metadata (i.e. data about data) and links to datasets. Most metadata on MATCH pertain to geospatial data sets ranging from local to global scales. The goals of MATCH are to: 1) Provide an easily accessible clearinghouse of relevant Federal metadata on climate and health that will increase efficiency in solving research problems; 2) Promote application of research and information to understand, mitigate, and adapt to the health effects of climate change; 3) Facilitate multidirectional communication among interested stakeholders to inform and shape Federal research directions; 4) Encourage collaboration among traditional and non-traditional partners in development of new initiatives to address emerging climate and health issues. [copied from http://match.globalchange.gov/geoportal/catalog/content/about.page]

  8. Principles of metadata organization at the ENCODE data coordination center.

    PubMed

    Hong, Eurie L; Sloan, Cricket A; Chan, Esther T; Davidson, Jean M; Malladi, Venkat S; Strattan, J Seth; Hitz, Benjamin C; Gabdank, Idan; Narayanan, Aditi K; Ho, Marcus; Lee, Brian T; Rowe, Laurence D; Dreszer, Timothy R; Roe, Greg R; Podduturi, Nikhil R; Tanaka, Forrest; Hilton, Jason A; Cherry, J Michael

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) Data Coordinating Center (DCC) is responsible for organizing, describing and providing access to the diverse data generated by the ENCODE project. The description of these data, known as metadata, includes the biological sample used as input, the protocols and assays performed on these samples, the data files generated from the results and the computational methods used to analyze the data. Here, we outline the principles and philosophy used to define the ENCODE metadata in order to create a metadata standard that can be applied to diverse assays and multiple genomic projects. In addition, we present how the data are validated and used by the ENCODE DCC in creating the ENCODE Portal (https://www.encodeproject.org/). Database URL: www.encodeproject.org.
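
    The metadata are exposed through the ENCODE Portal cited above; the sketch below issues a simple JSON search against it. The query parameters and response fields follow the portal's public REST interface as commonly documented, but they should be treated as assumptions and checked against the current documentation.

        import requests

        # Query the ENCODE portal's JSON interface for a handful of experiments.
        URL = "https://www.encodeproject.org/search/"
        params = {"type": "Experiment", "assay_title": "ChIP-seq",
                  "format": "json", "limit": 5}

        resp = requests.get(URL, params=params,
                            headers={"Accept": "application/json"}, timeout=30)
        resp.raise_for_status()

        # Results are conventionally returned under "@graph"; field names may change.
        for hit in resp.json().get("@graph", []):
            print(hit.get("accession"), "-", hit.get("assay_title"))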

  9. The Challenges in Metadata Management: 20+ Years of ESO Data

    NASA Astrophysics Data System (ADS)

    Vera, I.; Da Rocha, C.; Dobrzycki, A.; Micol, A.; Vuong, M.

    2015-09-01

    The European Southern Observatory Science Archive Facility has been in operation for more than 20 years. It contains data produced by ESO telescopes as well as the metadata needed for characterizing and distributing those data. This metadata is used to build the different archive services provided by the Archive. Over these years, services have been added, modified or even decommissioned, creating a cocktail of new, evolved and legacy data systems. The challenge for the Archive is to harmonize the differences between those data systems to provide the community with a homogeneous experience when using ESO data. In this paper, we present ESO's experience in three particularly challenging areas. The first discussion is dedicated to the problem of metadata quality over time, the second discusses how to integrate obsolete data models into the current services, and finally we present the challenges of ever-growing databases. We describe our experience dealing with those issues and the solutions adopted to mitigate them.

  10. Serving Fisheries and Ocean Metadata to Communities Around the World

    NASA Technical Reports Server (NTRS)

    Meaux, Melanie

    2006-01-01

    NASA's Global Change Master Directory (GCMD) assists the oceanographic community in the discovery, access, and sharing of scientific data by serving on-line fisheries and ocean metadata to users around the globe. As of January 2006, the directory holds more than 16,300 Earth Science data descriptions and over 1,300 service descriptions. Of these, nearly 4,000 unique ocean-related metadata records are available to the public, with many having direct links to the data. In 2005, the GCMD averaged over 5 million hits a month, with nearly a half million unique hosts for the year. Through the GCMD portal (http://gcmd.nasa.gov/), users can search vast and growing quantities of data and services using controlled keywords, free-text searches or a combination of both. Users may now refine a search based on topic, location, instrument, platform, project, data center, spatial and temporal coverage. The directory also offers data holders a means to post and search their data through customized portals, i.e. online customized subset metadata directories. The discovery metadata standard used is the Directory Interchange Format (DIF), adopted in 1994. This format has evolved to accommodate other national and international standards such as FGDC and ISO 19115. Users can submit metadata through easy-to-use online and offline authoring tools. The directory, which also serves as a coordinating node of the International Directory Network (IDN), has been active at the international, regional and national level for many years through its involvement with the Committee on Earth Observation Satellites (CEOS), federal agencies (such as NASA, NOAA, and USGS), international agencies (such as IOC/IODE, UN, and JAXA) and partnerships (such as ESIP, IOOS/DMAC, GOSIC, GLOBEC, OBIS, and GoMODP), sharing experience and knowledge related to metadata and/or data management and interoperability.

  11. Data warehousing, metadata, and the World Wide Web

    SciTech Connect

    Yow, T.G.; Smith, A.W.; Daugherty, P.F.

    1997-04-16

    The connection between data warehousing and the metadata used to catalog and locate warehouse data is obvious, but what is the connection between data warehousing, metadata, and the World Wide Web (WWW)? Specifically, the WWW can be used to allow users to search metadata (data about the data) and retrieve data from a warehouse database. In addition, the Internet/Intranet can be used to manage the metadata in archive databases and to streamline the database administration functions of a large archive center. The Oak Ridge National Laboratory's (ORNL's) Distributed Active Archive Center (DAAC) is a data archive and distribution center for the National Aeronautics and Space Administration's (NASA's) Earth Observing System Data and Information System (EOSDIS); the ORNL DAAC provides access to tabular and imagery datasets used in ecological and environmental research. To support this effort, we have taken advantage of the rather unique and user-friendly features of the WWW to (1) allow users to search for and download the data we archive and (2) provide DAAC developers with effective metadata and data management tools. In particular, the ORNL DAAC has developed the Biogeochemical Information Ordering Management Environment (BIOME), a WWW search-and-order system, as well as a WWW-based database administrator's (DBA's) tool suite designed to assist the site's DBA in the management of archive metadata and databases and several other DBA functions that are essential to site management. This paper is a case study of how the ORNL DAAC uses the WWW to both manage data and allow access to its data warehouse.

  12. An Interactive, Web-Based Approach to Metadata Authoring

    NASA Technical Reports Server (NTRS)

    Pollack, Janine; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    NASA's Global Change Master Directory (GCMD) serves a growing number of users by assisting the scientific community in the discovery of and linkage to Earth science data sets and related services. The GCMD holds over 8000 data set descriptions in Directory Interchange Format (DIF) and 200 data service descriptions in Service Entry Resource Format (SERF), encompassing the disciplines of geology, hydrology, oceanography, meteorology, and ecology. Data descriptions also contain geographic coverage information, thus allowing researchers to discover data pertaining to a particular geographic location, as well as subject of interest. The GCMD strives to be the preeminent data locator for world-wide directory-level metadata. In this vein, scientists and data providers must have access to intuitive and efficient metadata authoring tools. Existing GCMD tools are not currently attracting widespread usage. With usage being the prime indicator of utility, it has become apparent that current tools must be improved. As a result, the GCMD has released a new suite of web-based authoring tools that enable a user to create new data and service entries, as well as modify existing data entries. With these tools, a more interactive approach to metadata authoring is taken, as they feature a visual "checklist" of data/service fields that automatically update when a field is completed. In this way, the user can quickly gauge which of the required and optional fields have not been populated. With the release of these tools, the Earth science community will be further assisted in efficiently creating quality data and services metadata. Keywords: metadata, Earth science, metadata authoring tools

  13. Applications of the LBA-ECO Metadata Warehouse

    NASA Astrophysics Data System (ADS)

    Wilcox, L.; Morrell, A.; Griffith, P. C.

    2006-05-01

    The LBA-ECO Project Office has developed a system to harvest and warehouse metadata resulting from the Large-Scale Biosphere Atmosphere Experiment in Amazonia. The harvested metadata is used to create dynamically generated reports, available at www.lbaeco.org, which facilitate access to LBA-ECO datasets. The reports are generated for specific controlled vocabulary terms (such as an investigation team or a geospatial region), and are cross-linked with one another via these terms. This approach creates a rich contextual framework enabling researchers to find datasets relevant to their research. It maximizes data discovery by association and provides a greater understanding of the scientific and social context of each dataset. For example, our website provides a profile (e.g. participants, abstract(s), study sites, and publications) for each LBA-ECO investigation. Linked from each profile is a list of associated registered dataset titles, each of which link to a dataset profile that describes the metadata in a user-friendly way. The dataset profiles are generated from the harvested metadata, and are cross-linked with associated reports via controlled vocabulary terms such as geospatial region. The region name appears on the dataset profile as a hyperlinked term. When researchers click on this link, they find a list of reports relevant to that region, including a list of dataset titles associated with that region. Each dataset title in this list is hyperlinked to its corresponding dataset profile. Moreover, each dataset profile contains hyperlinks to each associated data file at its home data repository and to publications that have used the dataset. We also use the harvested metadata in administrative applications to assist quality assurance efforts. These include processes to check for broken hyperlinks to data files, automated emails that inform our administrators when critical metadata fields are updated, dynamically generated reports of metadata records that link

  14. Eighteenth Century Short Title Catalogue on CD-ROM.

    ERIC Educational Resources Information Center

    Richey, Debora J.

    The Eighteenth Century Short Title Catalogue (ESTC) on compact disc provides access to nearly 300,000 printed materials from Britain and the British colonies from 1701 to 1800. The file contains a wide variety of materials (laws, almanacs, posters, catalogs, directories, verses, monographs, advertisements, and flyers) in all languages, and covers…

  15. Modelling and Implementation of Catalogue Cards Using FreeMarker

    ERIC Educational Resources Information Center

    Radjenovic, Jelen; Milosavljevic, Branko; Surla, Dusan

    2009-01-01

    Purpose: The purpose of this paper is to report on a study involving the specification (using Unified Modelling Language (UML) 2.0) of information requirements and implementation of the software components for generating catalogue cards. The implementation in a Java environment is developed using the FreeMarker software.…

  16. Restful Implementation of Catalogue Service for Geospatial Data Provenance

    NASA Astrophysics Data System (ADS)

    Jiang, L. C.; Yue, P.; Lu, X. C.

    2013-10-01

    Provenance, also known as lineage, is important in understanding the derivation history of data products. Geospatial data provenance helps data consumers to evaluate the quality and reliability of geospatial data. In a service-oriented environment, where data are often consumed or produced by distributed services, provenance could be managed by following the same service-oriented paradigm. The Open Geospatial Consortium (OGC) Catalogue Service for the Web (CSW) is used for the registration and query of geospatial data provenance by extending the ebXML Registry Information Model (ebRIM). Recent advances of the REpresentational State Transfer (REST) paradigm have shown great promise for the easy integration of distributed resources. RESTful Web Services aim to provide a standard way for Web clients to communicate with servers based on REST principles. The existing approach to provenance catalogue services could be improved by adopting a RESTful design. This paper presents the design and implementation of a catalogue service for geospatial data provenance following the RESTful architecture style. A middleware named REST Converter is added on top of the legacy catalogue service to support a RESTful-style interface. The REST Converter is composed of a resource request dispatcher and six resource handlers. A prototype service is developed to demonstrate the applicability of the approach.
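
    A minimal sketch of the RESTful style described above: a dispatcher routing provenance resources to handlers, written here with Flask purely for illustration. The resource paths and payloads are invented and are not those of the REST Converter.

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        # Toy in-memory store standing in for the ebRIM-backed catalogue service.
        PROVENANCE = {"ndvi-2013-001": {"input": "MOD09GA", "process": "NDVI calculation"}}

        @app.get("/provenance/<record_id>")
        def get_record(record_id):
            """Resource handler: return one provenance record, or 404 if unknown."""
            record = PROVENANCE.get(record_id)
            return (jsonify(record), 200) if record else (jsonify(error="not found"), 404)

        @app.post("/provenance/<record_id>")
        def register_record(record_id):
            """Resource handler: register a provenance record from the JSON body."""
            PROVENANCE[record_id] = request.get_json(force=True)
            return jsonify(status="registered"), 201

        if __name__ == "__main__":
            app.run(port=8080)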

  17. Catalogue of the Lichenized and Lichenicolous Fungi of Montenegro

    PubMed Central

    Knežević, Branka; Mayrhofer, Helmut

    2011-01-01

    Summary The catalogue is based on a comprehensive evaluation of 169 published sources. The lichen mycota as currently known from Montenegro includes 681 species (with eight subspecies, nine varieties and one form) of lichenized fungi, 12 species of lichenicolous fungi, and nine non-lichenized fungi traditionally included in lichenological literature. PMID:21423858

  18. A reference gene catalogue of the pig gut microbiome.

    PubMed

    Xiao, Liang; Estellé, Jordi; Kiilerich, Pia; Ramayo-Caldas, Yuliaxis; Xia, Zhongkui; Feng, Qiang; Liang, Suisha; Pedersen, Anni Øyan; Kjeldsen, Niels Jørgen; Liu, Chuan; Maguin, Emmanuelle; Doré, Joël; Pons, Nicolas; Le Chatelier, Emmanuelle; Prifti, Edi; Li, Junhua; Jia, Huijue; Liu, Xin; Xu, Xun; Ehrlich, Stanislav D; Madsen, Lise; Kristiansen, Karsten; Rogel-Gaillard, Claire; Wang, Jun

    2016-09-19

    The pig is a major species for livestock production and is also extensively used as the preferred model species for analyses of a wide range of human physiological functions and diseases(1). The importance of the gut microbiota in complementing the physiology and genome of the host is now well recognized(2). Knowledge of the functional interplay between the gut microbiota and host physiology in humans has been advanced by the human gut reference catalogue(3,4). Thus, establishment of a comprehensive pig gut microbiome gene reference catalogue constitutes a logical continuation of the recently published pig genome(5). By deep metagenome sequencing of faecal DNA from 287 pigs, we identified 7.7 million non-redundant genes representing 719 metagenomic species. Of the functional pathways found in the human catalogue, 96% are present in the pig catalogue, supporting the potential use of pigs for biomedical research. We show that sex, age and host genetics are likely to influence the pig gut microbiome. Analysis of the prevalence of antibiotic resistance genes demonstrated the effect of eliminating antibiotics from animal diets and thereby reducing the risk of spreading antibiotic resistance associated with farming systems.

  19. A reference gene catalogue of the pig gut microbiome.

    PubMed

    Xiao, Liang; Estellé, Jordi; Kiilerich, Pia; Ramayo-Caldas, Yuliaxis; Xia, Zhongkui; Feng, Qiang; Liang, Suisha; Pedersen, Anni Øyan; Kjeldsen, Niels Jørgen; Liu, Chuan; Maguin, Emmanuelle; Doré, Joël; Pons, Nicolas; Le Chatelier, Emmanuelle; Prifti, Edi; Li, Junhua; Jia, Huijue; Liu, Xin; Xu, Xun; Ehrlich, Stanislav D; Madsen, Lise; Kristiansen, Karsten; Rogel-Gaillard, Claire; Wang, Jun

    2016-01-01

    The pig is a major species for livestock production and is also extensively used as the preferred model species for analyses of a wide range of human physiological functions and diseases(1). The importance of the gut microbiota in complementing the physiology and genome of the host is now well recognized(2). Knowledge of the functional interplay between the gut microbiota and host physiology in humans has been advanced by the human gut reference catalogue(3,4). Thus, establishment of a comprehensive pig gut microbiome gene reference catalogue constitutes a logical continuation of the recently published pig genome(5). By deep metagenome sequencing of faecal DNA from 287 pigs, we identified 7.7 million non-redundant genes representing 719 metagenomic species. Of the functional pathways found in the human catalogue, 96% are present in the pig catalogue, supporting the potential use of pigs for biomedical research. We show that sex, age and host genetics are likely to influence the pig gut microbiome. Analysis of the prevalence of antibiotic resistance genes demonstrated the effect of eliminating antibiotics from animal diets and thereby reducing the risk of spreading antibiotic resistance associated with farming systems. PMID:27643971

  20. VizieR Online Data Catalog: MEXSAS catalogue (Vagnetti+, 2016)

    NASA Astrophysics Data System (ADS)

    Vagnetti, F.; Middei, R.; Antonucci, M.; Paolillo, M.; Serafinelli, R.

    2016-08-01

    We present the catalog of the Multi-Epoch XMM Serendipitous AGN Sample (MEXSAS), extracted from the fifth release of the XMM-Newton Serendipitous Source Catalogue (XMMSSC-DR5) and cross-matched with the Sloan Digital Sky Survey Quasar catalogs DR7Q and DR12Q. It contains 2700 repeatedly observed AGN, with corrected excess variance information. (1 data file).

  1. The BMW-Chandra survey. Serendipitous Source Catalogue

    NASA Astrophysics Data System (ADS)

    Romano, P.; Mignani, R. P.; Campana, S.; Moretti, A.; Panzera, M. R.; Tagliaferri, G.; Mottini, M.

    2009-07-01

    We present the BMW-Chandra source catalogue derived from Chandra ACIS-I observations (exposure time > 10ks) public as of March 2003 by using a wavelet detection algorithm (Lazzati et al. 1999; Campana et al. 1999). The catalogue contains a total of 21325 sources, 16758 of which are serendipitous. Our sky coverage in the soft band (0.5-2keV, S/N=3) is ~ 8 deg^2 for F_X ≥ 10^-13 erg cm^-2 s^-1, and ~ 2 deg^2 for F_X ≥ 10^-15 erg cm^-2 s^-1. The catalogue contains information on positions, count rates (and errors) in three energy bands (total, 0.5-7keV; soft, 0.5-2keV; and hard, 2-7keV), and in four additional energy bands, SB1 (0.5-1keV), SB2 (1-2keV), HB1 (2-4keV), and HB2 (4-7keV), as well as information on the source extension, and cross-matches with the FIRST, IRAS, 2MASS, and GSC-2 catalogues.

  2. Constructing mock catalogues for the REFLEX II galaxy cluster sample

    NASA Astrophysics Data System (ADS)

    Balaguera-Antolínez, A.; Sánchez, Ariel G.; Böhringer, H.; Collins, C.

    2012-09-01

    We describe the construction of a suite of galaxy cluster mock catalogues from N-body simulations, based on the properties of the new ROSAT-ESO Flux Limited X-Ray (REFLEX II) galaxy cluster catalogue. Our procedure is based on the measurements of the cluster abundance, and involves the calibration of the underlying scaling relation linking the mass of dark matter haloes to the cluster X-ray luminosity determined in the ROSAT energy band 0.1-2.4 keV. In order to reproduce the observed abundance in the luminosity range probed by the REFLEX II X-ray luminosity function [0.01 < L_X/(10^44 erg s^-1 h^-2) < 10], a mass-X-ray luminosity relation deviating from a simple power law is required. We discuss the dependence of the calibration of this scaling relation on the X-ray luminosity and the definition of halo masses and analyse the one- and two-point statistical properties of the mock catalogues. Our set of mock catalogues provides samples with self-calibrated scaling relations of galaxy clusters together with inherent properties of flux-limited surveys. This makes them a useful tool to explore different systematic effects and statistical methods involved in constraining both astrophysical and cosmological information from present and future galaxy cluster surveys.
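
    A toy version of the selection step described above: haloes are assigned X-ray luminosities through a scaling relation and kept if they exceed a flux limit. The paper calibrates a relation that deviates from a simple power law, whereas the sketch below uses a plain power law with made-up parameters purely for illustration.

        import numpy as np

        # Made-up power-law scaling relation L_X = A * (M / M0)**alpha and a flux cut;
        # all numbers are placeholders, not the calibrated REFLEX II values.
        A_44, M0, ALPHA = 1.0, 5e14, 1.6          # L_X in 1e44 erg/s, masses in Msun/h
        FLUX_LIMIT = 1.8e-12                      # erg s^-1 cm^-2, ROSAT-like

        def luminosity(mass):
            return A_44 * (mass / M0) ** ALPHA    # in units of 1e44 erg/s

        def flux(lum_1e44, lum_distance_cm):
            return lum_1e44 * 1e44 / (4.0 * np.pi * lum_distance_cm ** 2)

        # Toy halo sample: masses and (already computed) luminosity distances.
        masses = np.array([1e14, 3e14, 8e14])                 # Msun/h
        d_lum = np.array([3.0e26, 6.0e26, 1.2e27])            # cm

        selected = flux(luminosity(masses), d_lum) >= FLUX_LIMIT
        print(selected)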

  3. 12. Photocopy of photograph (from Catalogue of Drugs, Chemicals, Proprietary ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    12. Photocopy of photograph (from Catalogue of Drugs, Chemicals, Proprietary Medicines, Pharmaceutical Preparations, Druggists' Sundries, Etc. Portland, ME: Cook, Everett, and Pennell, 1896.) ca. 1896, photographer unknown 'MAIN OFFICE AND COUNTING ROOM' - Woodman Building, 140 Middle Street, Portland, Cumberland County, ME

  4. 11. Photocopy of photograph (from Catalogue of Drugs, Chemicals, Proprietary ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Photocopy of photograph (from Catalogue of Drugs, Chemicals, Proprietary Medicines, Pharmaceutical Preparations, Druggists' Sundries, Etc. Portland, ME: Cook, Everett, and Pennell, 1896.) ca. 1896, photographer unknown 'SECTION OF MAIN FLOOR AND ORDER DEPARTMENT' - Woodman Building, 140 Middle Street, Portland, Cumberland County, ME

  5. A novel framework for assessing metadata quality in epidemiological and public health research settings

    PubMed Central

    McMahon, Christiana; Denaxas, Spiros

    2016-01-01

    Metadata are critical in epidemiological and public health research. However, a lack of biomedical metadata quality frameworks and limited awareness of the implications of poor quality metadata renders data analyses problematic. In this study, we created and evaluated a novel framework to assess metadata quality of epidemiological and public health research datasets. We performed a literature review and surveyed stakeholders to enhance our understanding of biomedical metadata quality assessment. The review identified 11 studies and nine quality dimensions; none of which were specifically aimed at biomedical metadata. 96 individuals completed the survey; of those who submitted data, most only assessed metadata quality sometimes, and eight did not at all. Our framework has four sections: a) general information; b) tools and technologies; c) usability; and d) management and curation. We evaluated the framework using three test cases and sought expert feedback. The framework can assess biomedical metadata quality systematically and robustly. PMID:27570670

  6. The Royal Society Catalogue as an Index to Nineteenth Century American Science

    ERIC Educational Resources Information Center

    Elliott, Clark A.

    1970-01-01

    The Royal Society Catalogue of Scientific Papers is investigated in terms of its coverage of American science journals and American scientists. The journals indexed by the catalogue are compared to other standard lists of journals from the nineteenth century. (Author)

  7. Modeling DNA Replication Intermediates

    SciTech Connect

    Broyde, S.; Roy, D.; Shapiro, R.

    1997-06-01

    While a great deal of information on double-stranded DNA is now available from X-ray crystallography, high-resolution NMR and computer modeling, very little is known about structures that are representative of the DNA core of replication intermediates. DNA replication occurs at a single-strand/double-strand junction, and bulged-out intermediates near the junction can lead to frameshift mutations. The single-stranded domains are particularly challenging. Our interest is focused on strategies for modeling the DNA of these types of replication intermediates. Modeling such structures presents special problems in addressing the multiple-minimum problem and in treating the electrostatic component of the force field. We are testing a number of search strategies for locating low-energy structures of these types, and we are also investigating two different distance-dependent dielectric functions in the coulombic term of the force field. We are studying both unmodified DNA and DNA damaged by aromatic amines, carcinogens present in the environment in tobacco smoke, barbecued meats and automobile exhaust. The nature of the structure adopted by the carcinogen-modified DNA at the replication fork plays a key role in determining whether the carcinogen will cause a mutation during replication that can initiate the carcinogenic process. In the present work, results are presented for unmodified DNA.

  8. The SuperCOSMOS all-sky galaxy catalogue

    NASA Astrophysics Data System (ADS)

    Peacock, J. A.; Hambly, N. C.; Bilicki, M.; MacGillivray, H. T.; Miller, L.; Read, M. A.; Tritton, S. B.

    2016-10-01

    We describe the construction of an all-sky galaxy catalogue, using SuperCOSMOS scans of Schmidt photographic plates from the UK Schmidt Telescope and Second Palomar Observatory Sky Survey. The photographic photometry is calibrated using Sloan Digital Sky Survey data, with results that are linear to 2 per cent or better. All-sky photometric uniformity is achieved by matching plate overlaps and also by requiring homogeneity in optical-to-2MASS colours, yielding zero-points that are uniform to 0.03 mag or better. The typical AB depths achieved are B_J < 21, R_F < 19.5 and I_N < 18.5, with little difference between hemispheres. In practice, the I_N plates are shallower than the B_J and R_F plates, so for most purposes we advocate the use of a catalogue selected in these two latter bands. At high Galactic latitudes, this catalogue is approximately 90 per cent complete with 5 per cent stellar contamination; we quantify how the quality degrades towards the Galactic plane. At low latitudes, there are many spurious galaxy candidates resulting from stellar blends: these approximately match the surface density of true galaxies at |b| = 30°. Above this latitude, the catalogue limited in B_J and R_F contains in total about 20 million galaxy candidates, of which 75 per cent are real. This contamination can be removed, and the sky coverage extended, by matching with additional data sets. This SuperCOSMOS catalogue has been matched with 2MASS and with WISE, yielding quasi-all-sky samples of respectively 1.5 million and 18.5 million galaxies, to median redshifts of 0.08 and 0.20. This legacy data set thus continues to offer a valuable resource for large-angle cosmological investigations.

  9. A deep optical/near-infrared catalogue of Serpens

    NASA Astrophysics Data System (ADS)

    Spezzi, L.; Merín, B.; Oliveira, I.; van Dishoeck, E. F.; Brown, J. M.

    2010-04-01

    We present a deep optical/near-infrared imaging survey of the Serpens molecular cloud. This survey constitutes the complementary optical data to the Spitzer “Core To Disk” (c2d) Legacy survey in this cloud. The survey was conducted using the wide field camera at the Isaac Newton Telescope. About 0.96 square degrees were imaged in the R and Z filters, covering the entire region where most of the young stellar objects identified by the c2d survey are located. A total of 26 524 point-like sources were detected in both R and Z bands down to R ≈ 24.5 mag and Z ≈ 23 mag with a signal-to-noise ratio better than 3. The 95% completeness limit of our catalogue corresponds to 0.04 M⊙ for members of the Serpens star-forming region (age 2 Myr and distance 260 pc) in the absence of extinction. Adopting the typical extinction of the observed area (A_V ≈ 7 mag), we estimate a 95% completeness level down to M ≈ 0.1 M⊙. The astrometric accuracy of our catalogue is 0.4 arcsec with respect to the 2MASS catalogue. Our final catalogue contains J2000 celestial coordinates, magnitudes in the R and Z bands calibrated to the SDSS photometric system and, where possible, JHK_S magnitudes from 2MASS for sources in 0.96 square degrees in the direction of Serpens. This data product has already been used within the frame of the c2d Spitzer Legacy Project analysis in Serpens to study the star/disk formation and evolution in this cloud. Here we use it to obtain new indications of the disk-less population in Serpens. The catalogue (in VizieR) is only available in electronic form at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr/viz-bin/Cat

  10. Metadata squared: enhancing its usability for volunteered geographic information and the GeoWeb

    USGS Publications Warehouse

    Poore, Barbara S.; Wolf, Eric B.; Sui, Daniel Z.; Elwood, Sarah; Goodchild, Michael F.

    2013-01-01

    The Internet has brought many changes to the way geographic information is created and shared. One aspect that has not changed is metadata. Static spatial data quality descriptions were standardized in the mid-1990s and cannot accommodate the current climate of data creation where nonexperts are using mobile phones and other location-based devices on a continuous basis to contribute data to Internet mapping platforms. The usability of standard geospatial metadata is being questioned by academics and neogeographers alike. This chapter analyzes current discussions of metadata to demonstrate how the media shift that is occurring has affected requirements for metadata. Two case studies of metadata use are presented—online sharing of environmental information through a regional spatial data infrastructure in the early 2000s, and new types of metadata that are being used today in OpenStreetMap, a map of the world created entirely by volunteers. Changes in metadata requirements are examined for usability, the ease with which metadata supports coproduction of data by communities of users, how metadata enhances findability, and how the relationship between metadata and data has changed. We argue that traditional metadata associated with spatial data infrastructures is inadequate and suggest several research avenues to make this type of metadata more interactive and effective in the GeoWeb.

  11. Turning Data into Information: Assessing and Reporting GIS Metadata Integrity Using Integrated Computing Technologies

    ERIC Educational Resources Information Center

    Mulrooney, Timothy J.

    2009-01-01

    A Geographic Information System (GIS) serves as the tangible and intangible means by which spatially related phenomena can be created, analyzed and rendered. GIS metadata serves as the formal framework to catalog information about a GIS data set. Metadata is independent of the encoded spatial and attribute information. GIS metadata is a subset of…

  12. A Model for the Creation of Human-Generated Metadata within Communities

    ERIC Educational Resources Information Center

    Brasher, Andrew; McAndrew, Patrick

    2005-01-01

    This paper considers situations for which detailed metadata descriptions of learning resources are necessary, and focuses on human generation of such metadata. It describes a model which facilitates human production of good quality metadata by the development and use of structured vocabularies. Using examples, this model is applied to single and…

  13. Avian Retroviral Replication

    PubMed Central

    Justice, James; Beemon, Karen L.

    2013-01-01

    Avian retroviruses have undergone intense study since the beginning of the 20th century. They were originally identified as cancer-inducing filterable agents in chicken neoplasms. Since their discovery, the study of these simple retroviruses has contributed greatly to our understanding of retroviral replication and cancer. Avian retroviruses are continuing to evolve and have great economic importance in the poultry industry worldwide. The aim of this review is to provide a broad overview of the genome, pathology, and replication of avian retroviruses. Notable gaps in our current knowledge are highlighted, and areas where avian retroviruses differ from other retroviruses are also emphasized. PMID:24011707

  14. Replicated Composite Optics Development

    NASA Technical Reports Server (NTRS)

    Engelhaupt, Darell

    1997-01-01

    Advanced optical systems for applications such as grazing incidence Wolter I x-ray mirror assemblies require extraordinary mirror surfaces in terms of fine surface finish and figure. The impeccable mirror surface is on the inside of the rotational mirror form. One practical method of producing devices with these requirements is to first fabricate an exterior surface for the optical device and then replicate that surface to obtain the inverse component with lightweight characteristics. The replicated optic is no better than the master or mandrel from which it is made. This task is a continuation of previous studies to identify methods and materials for forming these extremely low roughness optical components.

  15. Mining and Utilizing Dataset Relevancy from Oceanographic Dataset (MUDROD) Metadata, Usage Metrics, and User Feedback to Improve Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Jiang, Y.

    2015-12-01

    Oceanographic resource discovery is a critical step for developing ocean science applications. With the increasing number of resources available online, many Spatial Data Infrastructure (SDI) components (e.g. catalogues and portals) have been developed to help manage and discover oceanographic resources. However, efficient and accurate resource discovery is still a big challenge because of the lack of data relevancy information. In this article, we propose a search engine framework for mining and utilizing dataset relevancy from oceanographic dataset metadata, usage metrics, and user feedback. The objective is to improve the discovery accuracy of oceanographic data and reduce the time scientists spend discovering, downloading and reformatting data for their projects. Experiments and a search example show that the proposed engine helps both scientists and general users search for more accurate results with enhanced performance and user experience through a user-friendly interface.
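
    A toy relevance score in the spirit of the framework above, blending a crude metadata text match with a usage metric; the fields, weights and sample records are invented and are not MUDROD's actual model.

        # Toy relevance score blending metadata text match with usage popularity;
        # the field names and weights are illustrative, not MUDROD's actual model.
        DATASETS = [
            {"id": "ds1", "summary": "sea surface temperature daily L4", "downloads": 950},
            {"id": "ds2", "summary": "ocean wind speed monthly climatology", "downloads": 120},
            {"id": "ds3", "summary": "sea surface height anomaly", "downloads": 400},
        ]

        def score(query, dataset, w_text=0.7, w_usage=0.3, max_downloads=1000):
            terms = set(query.lower().split())
            words = set(dataset["summary"].lower().split())
            text_score = len(terms & words) / max(len(terms), 1)        # crude term overlap
            usage_score = min(dataset["downloads"] / max_downloads, 1.0)
            return w_text * text_score + w_usage * usage_score

        ranked = sorted(DATASETS, key=lambda d: score("sea surface temperature", d),
                        reverse=True)
        print([d["id"] for d in ranked])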

  16. Big Earth Data Initiative: Metadata Improvement: Case Studies

    NASA Technical Reports Server (NTRS)

    Kozimor, John; Habermann, Ted; Farley, John

    2016-01-01

    The Big Earth Data Initiative (BEDI) invests in standardizing and optimizing the collection, management and delivery of the U.S. Government's civil Earth observation data to improve discovery, access, use, and understanding of Earth observations by the broader user community. Complete and consistent standard metadata helps address all of these goals.

  17. Scalable PGAS Metadata Management on Extreme Scale Systems

    SciTech Connect

    Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP

    2013-05-16

    Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
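
    The space/time tradeoff mentioned above can be made concrete with a toy comparison: a replicated block-to-owner table gives O(1) lookups at O(number of blocks) memory per process, while a computed block-cyclic mapping stores almost no state. Both classes below are illustrative and do not correspond to an actual PGAS runtime.

        # Two toy strategies for mapping global array indices to owning processes.

        class ReplicatedDirectory:
            """Explicit block-to-owner table replicated on every process:
            O(nblocks) memory per process, O(1) lookup."""
            def __init__(self, nblocks, nprocs, block_size):
                self.block_size = block_size
                self.owner = [b % nprocs for b in range(nblocks)]   # stored metadata

            def owner_of(self, global_index):
                return self.owner[global_index // self.block_size]

        class ComputedBlockCyclic:
            """No stored table: the owner is recomputed from distribution parameters,
            trading a little arithmetic per lookup for O(1) metadata."""
            def __init__(self, block_size, nprocs):
                self.block_size, self.nprocs = block_size, nprocs

            def owner_of(self, global_index):
                return (global_index // self.block_size) % self.nprocs

        table = ReplicatedDirectory(nblocks=16384, nprocs=4096, block_size=1024)
        computed = ComputedBlockCyclic(block_size=1024, nprocs=4096)
        assert table.owner_of(10_000_000) == computed.owner_of(10_000_000)
        print(computed.owner_of(10_000_000))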

  18. Syndicating Rich Bibliographic Metadata Using MODS and RSS

    ERIC Educational Resources Information Center

    Ashton, Andrew

    2008-01-01

    Many libraries use RSS to syndicate information about their collections to users. A survey of 65 academic libraries revealed their most common use for RSS is to disseminate information about library holdings, such as lists of new acquisitions. Even though typical RSS feeds are ill suited to the task of carrying rich bibliographic metadata, great…

  19. Metadata management for high content screening in OMERO.

    PubMed

    Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R

    2016-03-01

    High content screening (HCS) experiments create a classic data management challenge-multiple, large sets of heterogeneous structured and unstructured data, that must be integrated and linked to produce a set of "final" results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and where appropriate the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org.

  20. Training and Best Practice Guidelines: Implications for Metadata Creation

    ERIC Educational Resources Information Center

    Chuttur, Mohammad Y.

    2012-01-01

    In response to the rapid development of digital libraries over the past decade, researchers have focused on the use of metadata as an effective means to support resource discovery within online repositories. With the increasing involvement of libraries in digitization projects and the growing number of institutional repositories, it is anticipated…

  1. Metadata Harvesting in Regional Digital Libraries in the PIONIER Network

    ERIC Educational Resources Information Center

    Mazurek, Cezary; Stroinski, Maciej; Werla, Marcin; Weglarz, Jan

    2006-01-01

    Purpose: The paper aims to present the concept of the functionality of metadata harvesting for regional digital libraries, based on the OAI-PMH protocol. This functionality is a part of the regional digital libraries platform created in Poland. The platform was required to reach one of the main objectives of the Polish PIONIER Programme--to enrich the…
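
    A minimal OAI-PMH harvesting loop of the kind such a platform builds on; the repository URL below is a placeholder, while the verb, metadataPrefix and resumptionToken mechanics follow the protocol.

        import requests
        import xml.etree.ElementTree as ET

        OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
                  "dc": "http://purl.org/dc/elements/1.1/"}
        BASE_URL = "https://dlibra.example.pl/oai"          # placeholder repository URL

        def harvest(base_url, metadata_prefix="oai_dc"):
            """Yield Dublin Core (title, identifier) pairs, following resumption tokens."""
            params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
            while True:
                root = ET.fromstring(requests.get(base_url, params=params, timeout=60).text)
                for record in root.iterfind(".//oai:record", OAI_NS):
                    title = record.findtext(".//dc:title", default="", namespaces=OAI_NS)
                    ident = record.findtext(".//dc:identifier", default="", namespaces=OAI_NS)
                    yield title, ident
                token = root.findtext(".//oai:resumptionToken", namespaces=OAI_NS)
                if not token:
                    break
                params = {"verb": "ListRecords", "resumptionToken": token}

        # for title, ident in harvest(BASE_URL):
        #     print(title, ident)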

  2. Metadata management for high content screening in OMERO.

    PubMed

    Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R

    2016-03-01

    High content screening (HCS) experiments create a classic data management challenge-multiple, large sets of heterogeneous structured and unstructured data, that must be integrated and linked to produce a set of "final" results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and where appropriate the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. PMID:26476368

  3. Metadata and Annotations for Multi-scale Electrophysiological Data

    PubMed Central

    Bower, Mark R.; Stead, Matt; Brinkmann, Benjamin H.; Dufendach, Kevin; Worrell, Gregory A.

    2010-01-01

    The increasing use of high-frequency (kHz), long-duration (days) intracranial monitoring from multiple electrodes during pre-surgical evaluation for epilepsy produces large amounts of data that are challenging to store and maintain. Descriptive metadata and clinical annotations of these large data sets also pose challenges to simple, often manual, methods of data analysis. The problems of reliable communication of metadata and annotations between programs, the maintenance of the meanings within that information over long time periods, and the flexibility to re-sort data for analysis place differing demands on data structures and algorithms. Solutions to these individual problem domains (communication, storage and analysis) can be configured to provide easy translation and clarity across the domains. The Multi-scale Annotation Format (MAF) provides an integrated metadata and annotation environment that maximizes code reuse, minimizes error probability and encourages future changes by reducing the tendency to over-fit information technology solutions to current problems. An example of a graphical utility for generating and evaluating metadata and annotations for “big data” files is presented. PMID:19964266

  4. Metadata management for high content screening in OMERO

    PubMed Central

    Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R.

    2016-01-01

    High content screening (HCS) experiments create a classic data management challenge: multiple, large sets of heterogeneous structured and unstructured data that must be integrated and linked to produce a set of “final” results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and, where appropriate, the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. PMID:26476368

  5. Automatic Extraction of Metadata from Scientific Publications for CRIS Systems

    ERIC Educational Resources Information Center

    Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan

    2011-01-01

    Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…

  6. MMI's Metadata and Vocabulary Solutions: 10 Years and Growing

    NASA Astrophysics Data System (ADS)

    Graybeal, J.; Gayanilo, F.; Rueda-Velasquez, C. A.

    2014-12-01

    The Marine Metadata Interoperability project (http://marinemetadata.org) held its public opening at AGU's 2004 Fall Meeting. For 10 years since that debut, the MMI guidance and vocabulary sites have served over 100,000 visitors, with 525 community members and continuous Steering Committee leadership. Originally funded by the National Science Foundation, over the years multiple organizations have supported the MMI mission: "Our goal is to support collaborative research in the marine science domain, by simplifying the incredibly complex world of metadata into specific, straightforward guidance. MMI encourages scientists and data managers at all levels to apply good metadata practices from the start of a project, by providing the best guidance and resources for data management, and developing advanced metadata tools and services needed by the community." Now hosted by the Harte Research Institute at Texas A&M University at Corpus Christi, MMI continues to provide guidance and services to the community, and is planning for marine science and technology needs for the next 10 years. In this presentation we will highlight our major accomplishments, describe our recent achievements and imminent goals, and propose a vision for improving marine data interoperability for the next 10 years, including Ontology Registry and Repository (http://mmisw.org/orr) advancements and applications (http://mmisw.org/cfsn).

  7. ATLAS Metadata Infrastructure Evolution for Run 2 and Beyond

    NASA Astrophysics Data System (ADS)

    van Gemmeren, P.; Cranshaw, J.; Malon, D.; Vaniachine, A.

    2015-12-01

    ATLAS developed and employed for Run 1 of the Large Hadron Collider a sophisticated infrastructure for metadata handling in event processing jobs. This infrastructure profits from a rich feature set provided by the ATLAS execution control framework, including standardized interfaces and invocation mechanisms for tools and services, segregation of transient data stores with concomitant object lifetime management, and mechanisms for handling occurrences asynchronous to the control framework's state machine transitions. This metadata infrastructure is evolving and being extended for Run 2 to allow its use and reuse in downstream physics analyses, analyses that may or may not utilize the ATLAS control framework. At the same time, multiprocessing versions of the control framework and the requirements of future multithreaded frameworks are leading to redesign of components that use an incident-handling approach to asynchrony. The increased use of scatter-gather architectures, both local and distributed, requires further enhancement of metadata infrastructure in order to ensure semantic coherence and robust bookkeeping. This paper describes the evolution of ATLAS metadata infrastructure for Run 2 and beyond, including the transition to dual-use tools—tools that can operate inside or outside the ATLAS control framework—and the implications thereof. It further examines how the design of this infrastructure is changing to accommodate the requirements of future frameworks and emerging event processing architectures.

  8. ONEMercury: Towards Automatic Annotation of Earth Science Metadata

    NASA Astrophysics Data System (ADS)

    Tuarob, S.; Pouchard, L. C.; Noy, N.; Horsburgh, J. S.; Palanisamy, G.

    2012-12-01

    Earth sciences have become more data-intensive, requiring access to heterogeneous data collected from multiple places, times, and thematic scales. For example, research on climate change may involve exploring and analyzing observational data such as the migration of animals and temperature shifts across the earth, as well as various model-observation inter-comparison studies. Recently, DataONE, a federated data network built to facilitate access to and preservation of environmental and ecological data, has come to exist. ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for discovering and accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple data repositories and makes it searchable via a common search interface built upon cutting edge search engine technology, allowing users to interact with the system, intelligently filter the search results on the fly, and fetch the data from distributed data sources. Linking data from heterogeneous sources always has a cost. A problem that ONEMercury faces is the different levels of annotation in the harvested metadata records. Poorly annotated records tend to be missed during the search process as they lack meaningful keywords. Furthermore, such records would not be compatible with the advanced search functionality offered by ONEMercury as the interface requires a metadata record be semantically annotated. The explosion of the number of metadata records harvested from an increasing number of data repositories makes it impossible to annotate the harvested records manually, urging the need for a tool capable of automatically annotating poorly curated metadata records. In this paper, we propose a topic-model (TM) based approach for automatic metadata annotation. Our approach mines topics in the set of well annotated records and suggests keywords for poorly annotated records based on topic similarity. We utilize the
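
    As a rough illustration of the topic-model idea described above (not the ONEMercury implementation itself), the sketch below fits an LDA model to a few invented "well annotated" toy records and suggests keywords for a poorly annotated record from its dominant topic; the records and the choice of scikit-learn are assumptions made purely for the example.

        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.feature_extraction.text import CountVectorizer
        import numpy as np

        # Toy "well annotated" records standing in for harvested metadata (invented).
        well_annotated = [
            "sea ice extent arctic satellite passive microwave",
            "soil moisture flux tower eddy covariance carbon",
            "stream discharge gauge watershed hydrology precipitation",
        ]
        poorly_annotated = "daily measurements from a river gauge station"

        vectorizer = CountVectorizer()
        X = vectorizer.fit_transform(well_annotated)
        lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

        # Infer the topic mixture of the poorly annotated record and take its dominant topic.
        doc_topics = lda.transform(vectorizer.transform([poorly_annotated]))[0]
        top_topic = int(np.argmax(doc_topics))

        # Suggest the highest-weight terms of that topic as candidate keywords.
        terms = np.array(vectorizer.get_feature_names_out())
        suggested = terms[np.argsort(lda.components_[top_topic])[::-1][:5]]
        print("suggested keywords:", list(suggested))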

  9. Data, Big Data, and Metadata in Anesthesiology.

    PubMed

    Levin, Matthew A; Wanderer, Jonathan P; Ehrenfeld, Jesse M

    2015-12-01

    The last decade has seen an explosion in the growth of digital data. Since 2005, the total amount of digital data created or replicated on all platforms and devices has been doubling every 2 years, from an estimated 132 exabytes (132 billion gigabytes) in 2005 to 4.4 zettabytes (4.4 trillion gigabytes) in 2013, and a projected 44 zettabytes (44 trillion gigabytes) in 2020. This growth has been driven in large part by the rise of social media along with more powerful and connected mobile devices, with an estimated 75% of information in the digital universe generated by individuals rather than entities. Transactions and communications including payments, instant messages, Web searches, social media updates, and online posts are all becoming part of a vast pool of data that live "in the cloud" on clusters of servers located in remote data centers. The amount of accumulating data has become so large that it has given rise to the term Big Data. In many ways, Big Data is just a buzzword, a phrase that is often misunderstood and misused to describe any sort of data, no matter the size or complexity. However, there is truth to the assertion that some data sets truly require new management and analysis techniques. PMID:26579664

  10. Serving Fisheries and Ocean Metadata to Communities Around the World

    NASA Technical Reports Server (NTRS)

    Meaux, Melanie F.

    2007-01-01

    NASA's Global Change Master Directory (GCMD) assists the oceanographic community in the discovery, access, and sharing of scientific data by serving on-line fisheries and ocean metadata to users around the globe. As of January 2006, the directory holds more than 16,300 Earth Science data descriptions and over 1,300 service descriptions. Of these, nearly 4,000 unique ocean-related metadata records are available to the public, with many having direct links to the data. In 2005, the GCMD averaged over 5 million hits a month, with nearly a half million unique hosts for the year. Through the GCMD portal (http://gcmd.nasa.gov/), users can search vast and growing quantities of data and services using controlled keywords, free-text searches, or a combination of both. Users may now refine a search based on topic, location, instrument, platform, project, data center, spatial and temporal coverage, and data resolution for selected datasets. The directory also offers data holders a means to advertise and search their data through customized portals, which are subset views of the directory. The discovery metadata standard used is the Directory Interchange Format (DIF), adopted in 1988. This format has evolved to accommodate other national and international standards such as FGDC and ISO 19115. Users can submit metadata through easy-to-use online and offline authoring tools. The directory, which also serves as the International Directory Network (IDN), has been providing its services and sharing its experience and knowledge of metadata at the international, national, regional, and local level for many years. Active partners include the Committee on Earth Observation Satellites (CEOS), federal agencies (such as NASA, NOAA, and USGS), international agencies (such as IOC/IODE, UN, and JAXA) and organizations (such as ESIP, IOOS/DMAC, GOSIC, GLOBEC, OBIS, and GoMODP).

  11. A Solr Powered Architecture for Scientific Metadata Search Applications

    NASA Astrophysics Data System (ADS)

    Reed, S. A.; Billingsley, B. W.; Harper, D.; Kovarik, J.; Brandt, M.

    2014-12-01

    Discovering and obtaining resources for scientific research is increasingly difficult but Open Source tools have been implemented to provide inexpensive solutions for scientific metadata search applications. Common practices used in modern web applications can improve the quality of scientific data as well as increase availability to a wider audience while reducing costs of maintenance. Motivated to improve discovery and access of scientific metadata hosted at NSIDC and the need to aggregate many areas of arctic research, the National Snow and Ice Data Center (NSIDC) and Advanced Cooperative Arctic Data and Information Service (ACADIS) contributed to a shared codebase used by the NSIDC Search and Arctic Data Explorer (ADE) portals. We implemented the NSIDC Search and ADE to improve search and discovery of scientific metadata in many areas of cryospheric research. All parts of the applications are available free and open for reuse in other applications and portals. We have applied common techniques that are widely used by search applications around the web and with the goal of providing quick and easy access to scientific metadata. We adopted keyword search auto-suggest which provides a dynamic list of terms and phrases that closely match characters as the user types. Facet queries are another technique we have implemented to filter results based on aspects of the data like the instrument used or temporal duration of the data set. Service APIs provide a layer between the interface and the database and are shared between the NSIDC Search and ACADIS ADE interfaces. We also implemented a shared data store between both portals using Apache Solr (an Open Source search engine platform that stores and indexes XML documents) and leverage many powerful features including geospatial search and faceting. This presentation will discuss the application architecture as well as tools and techniques used to enhance search and discovery of scientific metadata.
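
    Faceted queries of the kind described above use Solr's standard select API, which can be exercised over plain HTTP. The sketch below shows a generic faceted search request; the host, core name and facet field are placeholders and do not reflect the actual NSIDC or ACADIS deployment.

        import json
        from urllib.parse import urlencode
        from urllib.request import urlopen

        def solr_search(base_url, core, text, facet_field, rows=10):
            """Run a free-text Solr query and request facet counts for one field."""
            params = urlencode({
                "q": text,              # free-text query
                "wt": "json",           # ask for a JSON response
                "rows": rows,
                "facet": "true",
                "facet.field": facet_field,
            })
            with urlopen(f"{base_url}/{core}/select?{params}") as resp:
                return json.load(resp)

        # Example against a hypothetical local deployment:
        # result = solr_search("http://localhost:8983/solr", "metadata", "sea ice", "instrument")
        # docs = result["response"]["docs"]
        # instrument_facets = result["facet_counts"]["facet_fields"]["instrument"]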

  12. RESTful Access to NOAA's Space Weather Data and Metadata

    NASA Astrophysics Data System (ADS)

    Kihn, E. A.; Elespuru, P. R.; Zhizhin, M.

    2010-12-01

    The Space Physics Interactive Data Resource (SPIDR) (http://spidr.ngdc.noaa.gov) is a web-based application for searching, accessing and interacting with NOAA’s space-related data holdings. SPIDR serves as one of several interfaces to the National Geophysical Data Center's archived digital holdings. The SPIDR system, while successful in delivering data and visualization to clients, was also found to be limited in its ability to interact with other programs, its ability to integrate with alternate work-flows and its support for multiple user interfaces (UI). As such, in 2006 the SPIDR development team implemented a SOAP-based interface to SPIDR through which outside developers could make use of the resource. It was our finding, however, that despite our best efforts at documentation, the interface remained elusive to many users. That is to say, a few strong programmers were able to format and use the XML messaging, but in general it did not make the data more accessible. In response, SPIDR has been extended to include a REST-style web services API for all time series data. This provides direct, synchronous, simple programmatic access to over 200 individual parameters representing space weather data directly from the NGDC archive. In addition to the data service, SPIDR has implemented a metadata service which allows users to get Federal Geographic Data Committee (FGDC) style metadata records describing all available data and stations. This metadata will migrate to the NASA Space Physics Archive Search and Extract (SPASE) style in future versions in order to provide further detail. The combination of data, metadata and visualization tools available through SPIDR combines to make it a powerful virtual observatory (VO). When this is combined with a content-rich metadata system, we have experienced vastly greater user response and usage. This talk will present details of the development as well as lessons learned from 10 years of SPIDR development.

  13. Metadata Standards in Theory and Practice: The Human in the Loop

    NASA Astrophysics Data System (ADS)

    Yarmey, L.; Starkweather, S.

    2013-12-01

    Metadata standards are meant to enable interoperability through common, well-defined structures and are a foundation for broader cyberinfrastructure efforts. Standards are central to emerging technologies such as metadata brokering tools supporting distributed data search. However, metadata standards in practice are often poor indicators of standardized, readily interoperable metadata. The International Arctic Systems for Observing the Atmosphere (IASOA) data portal provides discovery and access tools for aggregated datasets from ten long-term international Arctic atmospheric observing stations. The Advanced Cooperative Arctic Data and Information Service (ACADIS) Arctic Data Explorer brokers metadata to provide distributed data search across Arctic repositories. Both the IASOA data portal and the Arctic Data Explorer rely on metadata and metadata standards to support value-add services. Challenges have included: translating between different standards despite existing crosswalks, diverging implementation practices of the same standard across communities, changing metadata practices over time and associated backwards compatibility, reconciling metadata created by data providers with standards, lack of community-accepted definitions for key terms (e.g. 'project'), integrating controlled vocabularies, and others. Metadata record 'validity' or compliance with a standard has been insufficient for interoperability. To overcome these challenges, both projects committed significant work to integrate and offer services over already 'standards compliant' metadata. Both efforts have shown that the 'human-in-the-loop' is still required to fulfill the lofty theoretical promises of metadata standards. In this talk, we 1) summarize the real-world experiences of two data discovery portals working with metadata in standard form, and 2) offer lessons learned for others who work with and rely on metadata and metadata standards.

  14. Replicated spectrographs in astronomy

    NASA Astrophysics Data System (ADS)

    Hill, Gary J.

    2014-06-01

    As telescope apertures increase, the challenge of scaling spectrographic astronomical instruments becomes acute. The next generation of extremely large telescopes (ELTs) strain the availability of glass blanks for optics and engineering to provide sufficient mechanical stability. While breaking the relationship between telescope diameter and instrument pupil size by adaptive optics is a clear path for small fields of view, survey instruments exploiting multiplex advantages will be pressed to find cost-effective solutions. In this review we argue that exploiting the full potential of ELTs will require the barrier of the cost and engineering difficulty of monolithic instruments to be broken by the use of large-scale replication of spectrographs. The first steps in this direction have already been taken with the soon to be commissioned MUSE and VIRUS instruments for the Very Large Telescope and the Hobby-Eberly Telescope, respectively. MUSE employs 24 spectrograph channels, while VIRUS has 150 channels. We compare the information gathering power of these replicated instruments with the present state of the art in more traditional spectrographs, and with instruments under development for ELTs. Design principles for replication are explored along with lessons learned, and we look forward to future technologies that could make massively-replicated instruments even more compelling.

  15. Human Mitochondrial DNA Replication

    PubMed Central

    Holt, Ian J.; Reyes, Aurelio

    2012-01-01

    Elucidation of the process of DNA replication in mitochondria is in its infancy. For many years, maintenance of the mitochondrial genome was regarded as greatly simplified compared to the nucleus. Mammalian mitochondria were reported to lack all DNA repair systems, to eschew DNA recombination, and to possess but a single DNA polymerase, polymerase γ. Polγ was said to replicate mitochondrial DNA exclusively via one mechanism, involving only two priming events and a handful of proteins. In this “strand-displacement model,” leading strand DNA synthesis begins at a specific site and advances approximately two-thirds of the way around the molecule before DNA synthesis is initiated on the “lagging” strand. Although the displaced strand was long-held to be coated with protein, RNA has more recently been proposed in its place. Furthermore, mitochondrial DNA molecules with all the features of products of conventional bidirectional replication have been documented, suggesting that the process and regulation of replication in mitochondria is complex, as befits a genome that is a core factor in human health and longevity. PMID:23143808

  16. SIMS Replications in Ontario.

    ERIC Educational Resources Information Center

    Russell, Howard

    Replication of the Second International Mathematics Study (SIMS) in Ontario, Canada, is described and assessed. The curriculum and testing program covers numerical methods, geometry, and algebra. Whereas classical studies focused on mean scores of a system's students and on percentages of teachers for opportunity to learn (OTL), SIMS aggregates…

  17. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    ERIC Educational Resources Information Center

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  18. Metadata and Providing Access to e-Books

    ERIC Educational Resources Information Center

    Vasileiou, Magdalini; Rowley, Jennifer; Hartley, Richard

    2013-01-01

    In the very near future, students are likely to expect their universities to provide seamless access to e-books through online library catalogues and virtual learning environments. A paradigm change in terms of the format of books, and especially textbooks, which could have far-reaching impact, is on the horizon. Based on interviews with a number…

  19. The National Digital Information Infrastructure Preservation Program; Metadata Principles and Practicalities; Challenges for Service Providers when Importing Metadata in Digital Libraries; Integrated and Aggregated Reference Services.

    ERIC Educational Resources Information Center

    Friedlander, Amy; Duval, Erik; Hodgins, Wayne; Sutton, Stuart; Weibel, Stuart L.; McClelland, Marilyn; McArthur, David; Giersch, Sarah; Geisler, Gary; Hodgkin, Adam

    2002-01-01

    Includes 6 articles that discuss the National Digital Information Infrastructure Preservation Program at the Library of Congress; metadata in digital libraries; integrated reference services on the Web. (LRW)

  20. Cataloguing Rules for Books and other Media in Primary and Secondary Schools. A Simplified Version of Anglo-American Cataloguing Rules, Together with Rules for Cataloguing Non-book Materials. Fifth (Expanded) Edition.

    ERIC Educational Resources Information Center

    Furlong, Norman, Comp.; Platt, Peter, Comp.

    The School Library Association (London, England) presents a brief and simplified version of the 1967 "Anglo-American Cataloguing Rules" along with cataloging rules for non-book materials based on the "Non-book Materials: Cataloguing Rules" of 1973. The 33 rules are briefly stated, and short examples are given for main entries, titles, imprint,…

  1. [Catalogues of therapeutic nursing activities in neurological early rehabilitation].

    PubMed

    Lautenschläger, S; Wallesch, C W

    2015-02-01

    Under the German DRG system, hospital-based rehabilitation of still critically ill patients is becoming increasingly important. The code for early neurological rehabilitation in the DRG system's (Diagnosis Related Groups) list of operations and procedures requires an average daily therapeutic intensity of 300 min, part of which is contributed by therapeutic nursing. As therapeutic aspects are integrated into other nursing activities, it is difficult to separate their time consumption. This problem is pragmatically resolved by catalogues of therapeutic nursing activities which assign plausible amounts of therapeutic minutes to each activity. The 4 catalogues that are used most often are described and compared. Nursing science has not yet focused on therapeutic nursing. PMID:25317957

  2. Chamber catalogues of optical and fluorescent signatures distinguish bioaerosol classes

    NASA Astrophysics Data System (ADS)

    Hernandez, Mark; Perring, Anne E.; McCabe, Kevin; Kok, Greg; Granger, Gary; Baumgardner, Darrel

    2016-07-01

    Rapid bioaerosol characterization has immediate applications in the military, environmental and public health sectors. Recent technological advances have facilitated single-particle detection of fluorescent aerosol in near real time; this leverages controlled ultraviolet exposures with single or multiple wavelengths, followed by the characterization of associated fluorescence. This type of ultraviolet-induced fluorescence has been used to detect airborne microorganisms and their fragments in laboratory studies, and it has been extended to field studies suggesting that bioaerosols compose a substantial fraction of supermicron atmospheric particles. To enhance the information yield that new-generation fluorescence instruments can provide, we report the compilation of a referential aerobiological catalogue including more than 50 pure cultures of common airborne bacteria, fungi and pollens, recovered at water activity equilibrium in a mesoscale chamber (1 m3). This catalogue juxtaposes intrinsic optical properties and select bandwidths of fluorescence emissions, which combine to clearly distinguish between major classes of airborne microbes and pollens.

  3. An annotated catalogue of the Iranian Alysiinae (Hymenoptera: Braconidae).

    PubMed

    Gadallah, Neveen S; Ghahari, Hassan; Peris-Felipo, Francisco Javier; Fischer, Maximilian

    2015-06-19

    In the present study, a catalogue of the Iranian Alysiinae (Hymenoptera: Braconidae) is given. It is based on a detailed study of all available published data. In total 78 species from 15 genera including Alloea Haliday, 1833 (1 species), Angelovia Zaykov, 1980 (1 species), Aphaereta Foerster, 1862 (2 species), Aspilota Foerster, 1862 (2 species), Chorebus Haliday, 1833 (42 species), Coelinidea Viereck, 1913 (2 species), Coloneura Foerster, 1862 (1 species), Dacnusa Haliday, 1833 (10 species), Dinotrema Foerster, 1862 (5 species), Idiasta Foerster, 1862 (1 species), Orthostigma Ratzeburg, 1844 (3 species), Phaenocarpa Foerster, 1862 (1 species), Protodacnusa Griffiths, 1964 (2 species), Pseudopezomachus Mantero, 1905 (2 species), and Synaldis Foerster, 1862 (3 species) are reported in this catalogue. Two species are new records for Iran: Coelinidea elegans (Curtis, 1829) and Dacnusa (Pachysema) aterrima Thomson, 1895. Also, a faunistic list with distribution data and host records is provided.

  4. Catalogue of snout mites (Acariformes: Bdellidae) of the world.

    PubMed

    Hernandes, Fabio A; Skvarla, Michael J; Fisher, J Ray; Dowling, Ashley P G; Ochoa, Ronald; Ueckermann, Edward A; Bauchan, Gary R

    2016-01-01

    Bdellidae (Trombidiformes: Prostigmata) are moderate- to large-sized predatory mites that inhabit soil, leaves, leaf litter, and intertidal rocks. They are readily recognized by an elongated, snout-like gnathosoma and by elbowed pedipalps bearing two (one in Monotrichobdella Baker & Balock) long terminal setae. Despite being among the first mites ever described, with species described by Carl Linnaeus, the knowledge about bdellids has never been compiled into a taxonomic catalogue. Here we present a catalogue listing 278 valid species; for each species we include distribution information, taxonomic literature, and type depository institutions. The genus Rigibdella Tseng, 1978 is considered a junior synonym of Cyta von Heyden, 1826, and Bdellodes Oudemans, 1937 is considered a junior synonym of Odontoscirus Thor, 1913. Illustrated keys to subfamilies and genera are presented, as well as keys to species of each genus. PMID:27615820

  5. Compartmentalization of prokaryotic DNA replication.

    PubMed

    Bravo, Alicia; Serrano-Heras, Gemma; Salas, Margarita

    2005-01-01

    It is now becoming apparent that prokaryotic DNA replication takes place at specific intracellular locations. Early studies indicated that chromosomal DNA replication, as well as plasmid and viral DNA replication, occurs in close association with the bacterial membrane. Moreover, over the last several years, it has been shown that some replication proteins and specific DNA sequences are localized to particular subcellular regions in bacteria, supporting the existence of replication compartments. Although the mechanisms underlying compartmentalization of prokaryotic DNA replication are largely unknown, the docking of replication factors to large organizing structures may be important for the assembly of active replication complexes. In this article, we review the current state of this subject in two bacterial species, Escherichia coli and Bacillus subtilis, focusing our attention on both chromosomal and extrachromosomal DNA replication. A comparison with eukaryotic systems is also presented.

  6. The Distribution of separations of DMSA Hipparcos Catalogue

    NASA Astrophysics Data System (ADS)

    Ling, J. F.; Magdalena, P.; Prieto, C.

    2004-08-01

    We have constructed volume-limited samples of wide binaries in the Hipparcos Catalogue Double and Multiple Systems Annex (DMSA, Section C), out to distances of 100 pc and 200 pc. We study the distribution of linear separations for these samples of binaries. We find that they closely follow Öpik's distribution in the interval of separations between about 10 and 800 AU (for the 100 pc sample), and between 15 and 1400 AU (for the 200 pc sample).

  7. Annotated catalogue of Australian weevils  (Coleoptera: Curculionoidea).

    PubMed

    Pullen, Kimberi R; Jennings, Debbie; Oberprieler, Rolf G

    2014-01-01

    This catalogue presents the first-ever complete inventory of all described taxa of Australian weevils, including both valid and invalid names. The geographical scope spans mainland Australia and its continental islands as well as the subantarctic Heard and McDonald Islands, the Pacific Lord Howe and Norfolk Islands and the Indian-Ocean Christmas Island. 4111 species in 832 genera (including one extinct species and one fossil) are recognised as occurring in this territory, distributed over seven families, 20 subfamilies and 94 tribes. The families and subfamilies are arranged in a currently accepted phylogenetic sequence but the tribes, genera and species in alphabetical order. Introductory chapters outline the discovery and composition of the Australian weevil fauna, the burden of synonymy, the format and conventions of the catalogue and the taxonomic and nomenclatural changes proposed. Sixteen new genera and six new species are described, two new names and 25 new generic and 72 new species synonymies and 189 new combinations are proposed and 46 type species designations are effected. The records of 356 taxa are annotated to justify or explain various taxonomic and nomenclatural acts and issues, covering descriptions of new taxa, new synonymies and generic combinations, artificial taxon concepts, changes in classification and a number of nomenclatural matters. The catalogue of the taxa present in Australia is followed by a list of 19 species incorrectly recorded from Australia or introduced as biocontrol agents but not established, and by one species inquirenda. All these records are also annotated. Two appendices list the 102 species introduced into Australia, both accidental and deliberate (as weed control agents). A bibliography with full references of all original descriptions and pertinent other citations from the literature is provided, and an index to all names concludes the catalogue.

  8. THROES: A Catalogue of Herschel Observations of Evolved Stars

    NASA Astrophysics Data System (ADS)

    Ramos-Medina, J.; Sánchez-Contreras, C.; García-Lario, P.; Rodrigo, C.

    2015-12-01

    We are building a catalogue of fully-reprocessed observations of all evolved stars observed with Herschel (THROES). In a first stage, we focus on observations performed with the PACS instrument in its full range spectroscopy mode. Once finished, the catalogue will offer all reduced data for each observation, as well as complementary information from other observatories. As a first step, we concentrate our efforts on two main activities: 1) the reprocessing and data-reduction of more than 200 individual sources, observed by Herschel/PACS in the 55-210 micron range, available in the Herschel Science Archive (HSA); 2) the creation of an initial catalogue, accessible via the web and the Virtual Observatory (VO), with all the information relative to PACS observations and the classification of the sources. Our ultimate goal will be to carry out a comprehensive and systematic study of the far-infrared properties of low- and intermediate-mass (1-8 M⊙) evolved stars using these data. These objects cover the whole range of possible evolutionary stages in this short-lived phase of stellar evolution, from the AGB phase to the PN stage, displaying a wide variety of chemical and physical properties.

  9. NASA space geodesy program: Catalogue of site information

    NASA Technical Reports Server (NTRS)

    Bryant, M. A.; Noll, C. E.

    1993-01-01

    This is the first edition of the NASA Space Geodesy Program: Catalogue of Site Information. This catalogue supersedes all previous versions of the Crustal Dynamics Project: Catalogue of Site Information, last published in May 1989. This document is prepared under the direction of the Space Geodesy and Altimetry Projects Office (SGAPO), Code 920.1, Goddard Space Flight Center. SGAPO has assumed the responsibilities of the Crustal Dynamics Project, which officially ended December 31, 1991. The catalog contains information on all NASA-supported sites as well as sites from cooperating international partners. This catalog is designed to provide descriptions and occupation histories of high-accuracy geodetic measuring sites employing space-related techniques. The emphasis of the catalog has been in the past, and continues to be with this edition, station information for facilities and remote locations utilizing the Satellite Laser Ranging (SLR), Lunar Laser Ranging (LLR), and Very Long Baseline Interferometry (VLBI) techniques. With the proliferation of high-quality Global Positioning System (GPS) receivers and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) transponders, many co-located at established SLR and VLBI observatories, the requirement for accurate station and localized survey information for an ever-broadening base of scientists and engineers has been recognized. It is our objective to provide accurate station information to scientific groups interested in these facilities.

  10. Developing a Metadata Infrastructure to facilitate data driven science gateway and to provide Inspire/GEMINI compliance for CLIPC

    NASA Astrophysics Data System (ADS)

    Mihajlovski, Andrej; Plieger, Maarten; Som de Cerff, Wim; Page, Christian

    2016-04-01

    indicators. Key is the availability of standardized metadata describing indicator data and services. This will enable standardization and interoperability between the different distributed services of CLIPC. To disseminate CLIPC indicator data, transformed data products for impact assessments, and climate change impact indicators, a standardized metadata infrastructure is provided. The challenge is that compliance of existing metadata with the INSPIRE ISO and GEMINI standards needs to be extended so that the web portal can be generated from the available metadata blueprint. The information provided in the headers of netCDF files available through multiple catalogues allows us to generate ISO-compliant metadata, which is in turn used to generate web-based interface content as well as OGC-compliant web services such as WCS and WMS for the front end and WPS interactions for scientific users to combine and generate new datasets. The goal of the metadata infrastructure is to provide a blueprint for creating a data-driven science portal generated from the underlying GIS data, web services and processing infrastructure. In the presentation we will present the results and lessons learned.
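
    The pattern of deriving ISO-style records from netCDF headers can be sketched as follows. The snippet reads a few global attributes with the netCDF4 library and emits a heavily simplified XML fragment; the attribute names assume CF/ACDD-style conventions and the output is only a stand-in for a full ISO 19139 record, not the CLIPC implementation.

        import xml.etree.ElementTree as ET
        from netCDF4 import Dataset

        GMD = "http://www.isotc211.org/2005/gmd"   # ISO 19139 namespace

        def netcdf_to_iso_stub(path):
            """Copy a few global netCDF attributes into a simplified ISO-style XML fragment."""
            ds = Dataset(path)
            attrs = {name: getattr(ds, name, "") for name in ("title", "summary", "institution")}
            ds.close()

            ET.register_namespace("gmd", GMD)
            root = ET.Element(f"{{{GMD}}}MD_Metadata")
            for name, value in attrs.items():
                ET.SubElement(root, f"{{{GMD}}}{name}").text = str(value)
            return ET.tostring(root, encoding="unicode")

        # print(netcdf_to_iso_stub("tas_indicator.nc"))  # hypothetical indicator file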

  11. Replicators, lineages, and interactors.

    PubMed

    Taylor, Daniel J; Bryson, Joanna J

    2014-06-01

    The target article argues that whole groups can act as interactors in an evolutionary process. We believe that Smaldino's discussion would be advanced by a more thorough analysis of the appropriate replicators and lineages for this model. We show that cultural evolution is necessarily a separate process from cultural group selection, and we also illustrate that the two processes may influence each other as demonstrated by an agent-based model of communicating food-processing skills. PMID:24970423

  12. A New Replicator: A theoretical framework for analysing replication

    PubMed Central

    2010-01-01

    Background Replicators are the crucial entities in evolution. The notion of a replicator, however, is far less exact than the weight of its importance. Without identifying and classifying multiplying entities exactly, their dynamics cannot be determined appropriately. Therefore, it is important to decide the nature and characteristics of any multiplying entity, in a detailed and formal way. Results Replication is basically an autocatalytic process which enables us to rest on the notions of formal chemistry. This statement has major implications. Simple autocatalytic cycle intermediates are considered as non-informational replicators. A consequence is that any autocatalytically multiplying entity is a replicator, be it simple or overly complex (even nests). A stricter definition refers to entities which can inherit acquired changes (informational replicators). Simple autocatalytic molecules (and nests) are excluded from this group. However, in turn, any entity possessing copiable information is to be named a replicator, even multicellular organisms. In order to deal with the situation, an abstract, formal framework is presented, which allows the proper identification of various types of replicators. This sheds light on the old problem of the units and levels of selection and evolution. A hierarchical classification for the partition of the replicator-continuum is provided where specific replicators are nested within more general ones. The classification should be able to be successfully applied to known replicators and also to future candidates. Conclusion This paper redefines the concept of the replicator from a bottom-up theoretical approach. The formal definition and the abstract models presented can distinguish among all possible replicator types, based on their quantity of variable and heritable information. This allows for the exact identification of various replicator types and their underlying dynamics. The most important claim is that

  13. Complexity transmission during replication

    PubMed Central

    Davis, Brian K.

    1979-01-01

    The transmission of complexity during DNA replication has been investigated to clarify the significance of this molecular property in a deterministic process. Complexity was equated with the amount of randomness within an ordered molecular structure and measured by the entropy of a posteriori probabilities for discrete (monomer sequences, atomic bonds) and continuous (torsion angle sequences) structural parameters in polynucleotides, proteins, and ligand molecules. A theoretical analysis revealed that sequence complexity decreases during transmission from DNA to protein. It was also found that sequence complexity limits the attainable complexity in the folding of a polypeptide chain and that a protein cannot interact with a ligand moiety of higher complexity. The analysis indicated, furthermore, that in any deterministic molecular process a cause possesses more complexity than its effect. This outcome broadly complies with Curie's symmetry principle. Results from an analysis of an extensive set of experimental data are presented; they corroborate these findings. It is suggested, therefore, that complexity governs the direction of order—order molecular transformations. Two biological implications are (i) replication of DNA in a stepwise, repetitive manner by a polymerase appears to be a necessary consequence of structural constraints imposed by complexity, and (ii) during evolution, increases in complexity had to involve a nondeterministic mechanism. This latter requirement apparently applied also to development of the first replicating system on earth. PMID:287070

  14. The VIMOS-VLT deep survey: the group catalogue

    NASA Astrophysics Data System (ADS)

    Cucciati, O.; Marinoni, C.; Iovino, A.; Bardelli, S.; Adami, C.; Mazure, A.; Scodeggio, M.; Maccagni, D.; Temporin, S.; Zucca, E.; De Lucia, G.; Blaizot, J.; Garilli, B.; Meneux, B.; Zamorani, G.; Le Fèvre, O.; Cappi, A.; Guzzo, L.; Bottini, D.; Le Brun, V.; Tresse, L.; Vettolani, G.; Zanichelli, A.; Arnouts, S.; Bolzonella, M.; Charlot, S.; Ciliegi, P.; Contini, T.; Foucaud, S.; Franzetti, P.; Gavignaud, I.; Ilbert, O.; Lamareille, F.; McCracken, H. J.; Marano, B.; Merighi, R.; Paltani, S.; Pellò, R.; Pollo, A.; Pozzetti, L.; Vergani, D.; Pérez-Montero, E.

    2010-09-01

    Aims: We present a homogeneous and complete catalogue of optical galaxy groups identified in the purely flux-limited (17.5 ≤ IAB ≤ 24.0) VIMOS-VLT deep redshift Survey (VVDS). Methods: We use mock catalogues extracted from the Millennium Simulation to correct for potential systematics that might affect the overall distribution as well as the individual properties of the identified systems. Simulated samples allow us to forecast the number and properties of groups that can be potentially found in a survey with VVDS-like selection functions. We use them to correct for the expected incompleteness and to assess, in addition, how well galaxy redshifts trace the line-of-sight velocity dispersion of the underlying mass overdensity. In particular, on these mock catalogues we train the adopted group-finding technique, i.e. the Voronoi-Delaunay Method (VDM). The goal is to fine-tune its free parameters, recover in a robust and unbiased way the redshift and velocity dispersion distributions of groups (n(z) and n(σ), respectively), and maximize, at the same time, the level of completeness and purity of the group catalogue. Results: We identify 318 VVDS groups with at least 2 members in the range 0.2 ≤ z ≤ 1.0, among which 144 (/30) with at least 3 (/5) members. The sample has an overall completeness of ~60% and a purity of ~50%. Nearly 45% of the groups with at least 3 members are still recovered if we run the algorithm with a particular parameter set that maximizes the purity (~75%) of the resulting catalogue. We use the group sample to explore the redshift evolution of the fraction fb of blue galaxies (U-B ≤ 1) in the redshift range 0.2 ≤ z ≤ 1. We find that the fraction of blue galaxies is significantly lower in groups than in the global population (i.e. in the whole ensemble of galaxies irrespective of their environment). Both of these quantities increase with redshift, the fraction of blue galaxies in groups exhibiting a marginally significant steeper

  15. Quality in Learning Objects: Evaluating Compliance with Metadata Standards

    NASA Astrophysics Data System (ADS)

    Vidal, C. Christian; Segura, N. Alejandra; Campos, S. Pedro; Sánchez-Alonso, Salvador

    Ensuring a certain level of quality of learning objects used in e-learning is crucial to increase the chances of success of automated systems in recommending or finding these resources. This paper aims to present a proposal for implementation of a quality model for learning objects based on the ISO 9126 international standard for the evaluation of software quality. Feature indicators associated with the conformance sub-characteristic are defined. Some instruments for feature evaluation are suggested, which allow collecting expert opinion on evaluation items. Other quality model features are evaluated using only the information from their metadata using semantic web technologies. Finally, we propose an ontology-based application that allows automatic evaluation of a quality feature. The IEEE LOM metadata standard was used in the experiments, and the results show that most of the learning objects analyzed do not comply with the standard.

  16. OntoSoft: An Ontology for Capturing Scientific Software Metadata

    NASA Astrophysics Data System (ADS)

    Gil, Y.

    2015-12-01

    We have developed OntoSoft, an ontology to describe metadata for scientific software. The ontology is designed considering how scientists would approach the reuse and sharing of software. This includes supporting a scientist to: 1) identify software, 2) understand and assess software, 3) execute software, 4) get support for the software, 5) do research with the software, and 6) update the software. The ontology is available in OWL and contains more than fifty terms. We have used OntoSoft to structure the OntoSoft software registry for geosciences, and to develop user interfaces to capture its metadata. OntoSoft is part of the NSF EarthCube initiative and contributes to its vision of scientific knowledge sharing, in this case about scientific software.

  17. A case for user-generated sensor metadata

    NASA Astrophysics Data System (ADS)

    Nüst, Daniel

    2015-04-01

    Cheap and easy-to-use sensing technology and new developments in ICT towards a global network of sensors and actuators promise previously unthought-of changes for our understanding of the environment. Large professional as well as amateur sensor networks exist, and they are used for specific yet diverse applications across domains such as hydrology, meteorology or early warning systems. However, the impact this "abundance of sensors" has had so far is somewhat disappointing. There is a gap between (community-driven) sensor networks that could provide very useful data and the users of the data. In our presentation, we argue this is due to a lack of metadata which allows determining the fitness for use of a dataset. Efforts towards syntactic and semantic interoperability for sensor webs have made great progress and continue to be an active field of research, yet they are often quite complex, which is of course due to the complexity of the problem at hand. But still, we see that the most generic information for determining fitness for use is a dataset's provenance, because it allows users to make up their own minds independently from existing classification schemes for data quality. In this work we will make the case that curated, user-contributed metadata has the potential to improve this situation. This especially applies to scenarios in which an observed property is applicable in different domains, and for set-ups where the understanding about metadata concepts and (meta-)data quality differs between data provider and user. On the one hand, a citizen does not understand the ISO provenance metadata. On the other hand, a researcher might find issues in publicly accessible time series published by citizens, which the latter might not be aware of or care about. Because users will have to determine fitness for use for each application on their own anyway, we suggest an online collaboration platform for user-generated metadata based on an extremely simplified data model. In the most basic fashion

  18. CellML metadata standards, associated tools and repositories.

    PubMed

    Beard, Daniel A; Britten, Randall; Cooling, Mike T; Garny, Alan; Halstead, Matt D B; Hunter, Peter J; Lawson, James; Lloyd, Catherine M; Marsh, Justin; Miller, Andrew; Nickerson, David P; Nielsen, Poul M F; Nomura, Taishin; Subramanium, Shankar; Wimalaratne, Sarala M; Yu, Tommy

    2009-05-28

    The development of standards for encoding mathematical models is an important component of model building and model sharing among scientists interested in understanding multi-scale physiological processes. CellML provides such a standard, particularly for models based on biophysical mechanisms, and a substantial number of models are now available in the CellML Model Repository. However, there is an urgent need to extend the current CellML metadata standard to provide biological and biophysical annotation of the models in order to facilitate model sharing, automated model reduction and connection to biological databases. This paper gives a broad overview of a number of new developments on CellML metadata and provides links to further methodological details available from the CellML website.

  19. Content-aware network storage system supporting metadata retrieval

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Qin, Leihua; Zhou, Jingli; Nie, Xuejun

    2008-12-01

    Content-based network storage has become a hot research topic in academia and industry [1]. To address the decline in hit rate caused by migration and to enable content-based queries, we develop a new content-aware storage system that supports metadata retrieval to improve query performance. First, we extend the SCSI command descriptor block so that the system can understand these self-defined query requests. Second, the extracted metadata is encoded in Extensible Markup Language (XML) to improve generality. Third, according to the demands of information lifecycle management (ILM), we store data at different storage levels and use corresponding query strategies to retrieve it. Fourth, as the file content identifier plays an important role in locating data and calculating block correlation, we use it to fetch files and sort query results through a friendly user interface. Finally, experiments indicate that the retrieval strategy and sorting algorithm enhance retrieval efficiency and precision.

  20. Solar Dynamics Observatory Data Search using Metadata in the KDC

    NASA Astrophysics Data System (ADS)

    Hwang, E.; Choi, S.; Baek, J.-H.; Park, J.; Lee, J.; Cho, K.

    2015-09-01

    We have constructed the Korean Data Center (KDC) for the Solar Dynamics Observatory (SDO) at the Korea Astronomy and Space Science Institute (KASI). The SDO comprises three instruments: the Atmospheric Imaging Assembly (AIA), the Helioseismic and Magnetic Imager (HMI), and the Extreme Ultraviolet Variability Experiment (EVE). We archive AIA and HMI FITS data. The data volume is about 1 TB per day. The goal of the KDC for SDO is to provide easy and fast access to the data for researchers in Asia. To speed up data searches, we designed the system to locate data without going through a database query. The fields of instrument, wavelength, data path, date, and time are saved as a text file. This metadata file and the SDO FITS data can be accessed directly via HTTP and are open to the public. We present the process of creating the metadata and describe how to access the SDO FITS data in detail.
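
    Because the abstract does not specify the exact layout of the metadata text file, the sketch below assumes a simple whitespace-delimited line of (instrument, wavelength, date, time, path) purely for illustration, and shows how such an index could be filtered and a matching FITS file fetched over HTTP; all URLs are hypothetical.

        from urllib.request import urlopen

        def find_files(index_url, instrument, wavelength):
            """Return data paths from the text index that match an instrument and wavelength."""
            matches = []
            with urlopen(index_url) as resp:
                for line in resp.read().decode("utf-8").splitlines():
                    fields = line.split()
                    if len(fields) < 5:
                        continue  # skip blank or malformed lines
                    inst, wave, date, time, path = fields[:5]
                    if inst == instrument and wave == wavelength:
                        matches.append(path)
            return matches

        # Hypothetical usage:
        # paths = find_files("http://sdo-kdc.example/metadata/20150901.txt", "AIA", "171")
        # with urlopen("http://sdo-kdc.example" + paths[0]) as fits_resp:
        #     fits_bytes = fits_resp.read()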

  1. CAMELOT: Cloud Archive for MEtadata, Library and Online Toolkit

    NASA Astrophysics Data System (ADS)

    Ginsburg, Adam; Kruijssen, J. M. Diederik; Longmore, Steven N.; Koch, Eric; Glover, Simon C. O.; Dale, James E.; Commerçon, Benoît; Giannetti, Andrea; McLeod, Anna F.; Testi, Leonardo; Zahorecz, Sarolta; Rathborne, Jill M.; Zhang, Qizhou; Fontani, Francesco; Beltrán, Maite T.; Rivilla, Victor M.

    2016-05-01

    CAMELOT facilitates the comparison of observational data and simulations of molecular clouds and/or star-forming regions. The central component of CAMELOT is a database summarizing the properties of observational data and simulations in the literature through pertinent metadata. The core functionality allows users to upload metadata, search and visualize the contents of the database to find and match observations/simulations over any range of parameter space. To bridge the fundamental disconnect between inherently 2D observational data and 3D simulations, the code uses key physical properties that, in principle, are straightforward for both observers and simulators to measure — the surface density (Sigma), velocity dispersion (sigma) and radius (R). By determining these in a self-consistent way for all entries in the database, it should be possible to make robust comparisons.
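
    The matching idea built around the shared (Sigma, sigma, R) parameters can be illustrated with a simple nearest-neighbour search in log space; this is only a sketch of the concept with invented entries, not CAMELOT's actual matching code.

        import math

        # Invented catalogue entries: (name, Sigma [Msun/pc^2], sigma [km/s], R [pc]).
        catalogue = [
            ("obs_cloud_A", 120.0, 2.1, 5.0),
            ("sim_run_17", 300.0, 4.5, 2.5),
            ("obs_cloud_B", 80.0, 1.2, 12.0),
        ]

        def log_distance(a, b):
            """Euclidean distance between two (Sigma, sigma, R) tuples in log10 space."""
            return math.sqrt(sum((math.log10(x) - math.log10(y)) ** 2 for x, y in zip(a, b)))

        def closest_match(query, entries):
            """Return the catalogue entry nearest to the query point in (Sigma, sigma, R)."""
            return min(entries, key=lambda entry: log_distance(query, entry[1:]))

        print(closest_match((100.0, 1.8, 6.0), catalogue))  # -> the obs_cloud_A entry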

  2. CellML metadata standards, associated tools and repositories

    PubMed Central

    Beard, Daniel A.; Britten, Randall; Cooling, Mike T.; Garny, Alan; Halstead, Matt D.B.; Hunter, Peter J.; Lawson, James; Lloyd, Catherine M.; Marsh, Justin; Miller, Andrew; Nickerson, David P.; Nielsen, Poul M.F.; Nomura, Taishin; Subramanium, Shankar; Wimalaratne, Sarala M.; Yu, Tommy

    2009-01-01

    The development of standards for encoding mathematical models is an important component of model building and model sharing among scientists interested in understanding multi-scale physiological processes. CellML provides such a standard, particularly for models based on biophysical mechanisms, and a substantial number of models are now available in the CellML Model Repository. However, there is an urgent need to extend the current CellML metadata standard to provide biological and biophysical annotation of the models in order to facilitate model sharing, automated model reduction and connection to biological databases. This paper gives a broad overview of a number of new developments on CellML metadata and provides links to further methodological details available from the CellML website. PMID:19380315

  3. Discovery of Marine Datasets and Geospatial Metadata Visualization

    NASA Astrophysics Data System (ADS)

    Schwehr, K. D.; Brennan, R. T.; Sellars, J.; Smith, S.

    2009-12-01

    NOAA's National Geophysical Data Center (NGDC) provides the deep archive of US multibeam sonar hydrographic surveys. NOAA stores the data as Bathymetric Attributed Grids (BAG; http://www.opennavsurf.org/) that are HDF5-formatted files containing gridded bathymetry, gridded uncertainty, and XML metadata. While NGDC provides the deep store and a basic ESRI ArcIMS interface to the data, additional tools need to be created to increase the frequency with which researchers discover hydrographic surveys that might be beneficial for their research. Using Open Source tools, we have created a draft of a Google Earth visualization of NOAA's complete collection of BAG files as of March 2009. Each survey is represented as a bounding box, an optional preview image of the survey data, and a pop-up placemark. The placemark contains a brief summary of the metadata and links to directly download the BAG survey files and the complete metadata file. Each survey is time-tagged so that users can search both in space and time for surveys that meet their needs. By creating this visualization, we aim to make the entire process of data discovery, validation of relevance, and download much more efficient for research scientists who may not be familiar with NOAA's hydrographic survey efforts or the BAG format. In the process of creating this demonstration, we have identified a number of improvements that can be made to the hydrographic survey process in order to make the results easier to use, especially with respect to metadata generation. With the combination of the NGDC deep archiving infrastructure, a Google Earth virtual globe visualization, and GeoRSS feeds of updates, we hope to increase the utilization of these high-quality gridded bathymetry data. This workflow applies equally well to LIDAR topography and bathymetry. Additionally, with proper referencing and geotagging in journal publications, we hope to close the loop and help the community create a true “Geospatial Scholar
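
    A per-survey element of the kind described above (bounding-box footprint plus a pop-up balloon with metadata and download links) maps onto a small amount of KML. The sketch below emits one such placemark; the survey name, coordinates and download URL are invented, and a real footprint would come from the BAG's own XML metadata.

        def survey_placemark(name, west, south, east, north, download_url):
            """Build one KML placemark with a bounding-box footprint and a download link."""
            # KML coordinates are written as lon,lat[,alt]; the ring closes on itself.
            ring = (f"{west},{south},0 {east},{south},0 {east},{north},0 "
                    f"{west},{north},0 {west},{south},0")
            return (f"<Placemark><name>{name}</name>"
                    f"<description><![CDATA[<a href=\"{download_url}\">Download BAG</a>]]></description>"
                    f"<Polygon><outerBoundaryIs><LinearRing>"
                    f"<coordinates>{ring}</coordinates>"
                    f"</LinearRing></outerBoundaryIs></Polygon></Placemark>")

        kml = ('<?xml version="1.0" encoding="UTF-8"?>'
               '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
               + survey_placemark("H12345", -70.9, 41.2, -70.6, 41.5,
                                  "https://surveys.example.gov/H12345.bag")
               + "</Document></kml>")
        print(kml)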

  4. The Planetary Data System Information Model for Geometry Metadata

    NASA Astrophysics Data System (ADS)

    Guinness, E. A.; Gordon, M. K.

    2014-12-01

    The NASA Planetary Data System (PDS) has recently developed a new set of archiving standards based on a rigorously defined information model. An important part of the new PDS information model is the model for geometry metadata, which includes, for example, attributes of the lighting and viewing angles of observations, position and velocity vectors of a spacecraft relative to Sun and observing body at the time of observation and the location and orientation of an observation on the target. The PDS geometry model is based on requirements gathered from the planetary research community, data producers, and software engineers who build search tools. A key requirement for the model is that it fully supports the breadth of PDS archives that include a wide range of data types from missions and instruments observing many types of solar system bodies such as planets, ring systems, and smaller bodies (moons, comets, and asteroids). Thus, important design aspects of the geometry model are that it standardizes the definition of the geometry attributes and provides consistency of geometry metadata across planetary science disciplines. The model specification also includes parameters so that the context of values can be unambiguously interpreted. For example, the reference frame used for specifying geographic locations on a planetary body is explicitly included with the other geometry metadata parameters. The structure and content of the new PDS geometry model is designed to enable both science analysis and efficient development of search tools. The geometry model is implemented in XML, as is the main PDS information model, and uses XML schema for validation. The initial version of the geometry model is focused on geometry for remote sensing observations conducted by flyby and orbiting spacecraft. Future releases of the PDS geometry model will be expanded to include metadata for landed and rover spacecraft.
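
    The record above notes that the geometry model is implemented in XML and validated with XML Schema. A generic sketch of that validation step using lxml is shown below; the schema and label file names are illustrative placeholders rather than actual PDS4 artifacts.

        from lxml import etree

        # Placeholder file names for a geometry schema and an observation label.
        schema = etree.XMLSchema(etree.parse("PDS4_geometry_schema.xsd"))
        label = etree.parse("observation_label.xml")

        if schema.validate(label):
            print("Geometry metadata is valid against the schema")
        else:
            for error in schema.error_log:   # report each violation with its line number
                print(error.line, error.message)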

  5. Standardizing metadata and taxonomic identification in metabarcoding studies.

    PubMed

    Tedersoo, Leho; Ramirez, Kelly S; Nilsson, R Henrik; Kaljuvee, Aivi; Kõljalg, Urmas; Abarenkov, Kessy

    2015-01-01

    High-throughput sequencing-based metabarcoding studies produce vast amounts of ecological data, but a lack of consensus on standardization of metadata and how to refer to the species recovered severely hampers reanalysis and comparisons among studies. Here we propose an automated workflow covering data submission, compression, storage and public access to allow easy data retrieval and inter-study communication. Such standardized and readily accessible datasets facilitate data management, taxonomic comparisons and compilation of global metastudies. PMID:26236474

  6. The CellML Metadata Framework 2.0 Specification.

    PubMed

    Cooling, Michael T; Hunter, Peter

    2015-01-01

    The CellML Metadata Framework 2.0 is a modular framework that describes how semantic annotations should be made about mathematical models encoded in the CellML (www.cellml.org) format, and their elements. In addition to the Core specification, there are several satellite specifications, each designed to cater for model annotation in a different context. Basic Model Information, Citation, License and Biological Annotation specifications are presented. PMID:26528558

  7. Observations Metadata Database at the European Southern Observatory

    NASA Astrophysics Data System (ADS)

    Dobrzycki, A.; Brandt, D.; Giot, D.; Lockhart, J.; Rodriguez, J.; Rossat, N.; Vuong, M. H.

    2007-10-01

    We describe the design of the new system for handling observations metadata at the Science Archive Facility of the European Southern Observatory using Sybase IQ. The current system, based primarily on Sybase ASE, allows direct access to a subset of observation parameters. The new system will allow for the browsing of Archive contents using searches on any parameter, for off-line updates on all parameters and for the on-the-fly introduction of those updates on files retrieved from the Archive.

  8. Automated Atmospheric Composition Dataset Level Metadata Discovery. Difficulties and Surprises

    NASA Astrophysics Data System (ADS)

    Strub, R. F.; Falke, S. R.; Kempler, S.; Fialkowski, E.; Goussev, O.; Lynnes, C.

    2015-12-01

    The Atmospheric Composition Portal (ACP) is an aggregator and curator of information related to remotely sensed atmospheric composition data and analysis. It uses existing tools and technologies and, where needed, enhances those capabilities to provide interoperable access, tools, and contextual guidance for scientists and value-adding organizations using remotely sensed atmospheric composition data. The initial focus is on Essential Climate Variables identified by the Global Climate Observing System - CH4, CO, CO2, NO2, O3, SO2 and aerosols. This poster addresses our efforts in building the ACP Data Table, an interface to help discover and understand remotely sensed data that are related to atmospheric composition science and applications. We harvested the GCMD, CWIC, and GEOSS metadata catalogs using machine-to-machine technologies - OpenSearch, Web Services. We also manually investigated the plethora of CEOS data provider portals and other catalogs where that data might be aggregated. This poster describes our experience of the excellence, variety, and challenges we encountered. Conclusions: (1) The significant benefits that the major catalogs provide are their machine-to-machine tools like OpenSearch and Web Services, rather than any GUI usability improvements, given the large amount of data in their catalogs. (2) There is a trend at the large catalogs towards simulating small data provider portals through advanced services. (3) Populating metadata catalogs using ISO 19115 is too complex for users to do in a consistent way, difficult to parse visually or with XML libraries, and too complex for Java XML binders like CASTOR. (4) The ability to search for IDs first and then for data (GCMD and ECHO) is better for machine-to-machine operations than the timeouts experienced when returning the entire metadata entry at once. (5) Metadata harvest and export activities between the major catalogs have led to a significant amount of duplication. (This is currently being addressed.) (6) Most (if not
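
    A minimal sketch of the machine-to-machine harvesting pattern referred to above: an OpenSearch keyword query returning an Atom feed, paged and parsed with the standard library. The endpoint and parameter names vary between catalogs and are placeholders here, not the actual GCMD, CWIC or GEOSS interfaces.

        import requests
        import xml.etree.ElementTree as ET

        ATOM = "{http://www.w3.org/2005/Atom}"
        # Placeholder endpoint; each catalog documents its own OpenSearch URL template.
        ENDPOINT = "https://example.org/opensearch/datasets.atom"

        def harvest(keyword, page_size=50):
            """Yield (title, id) pairs for entries matching a keyword."""
            start = 1
            while True:
                resp = requests.get(ENDPOINT,
                                    params={"q": keyword,
                                            "startIndex": start,
                                            "count": page_size},
                                    timeout=60)
                resp.raise_for_status()
                entries = ET.fromstring(resp.content).findall(f"{ATOM}entry")
                if not entries:
                    break
                for entry in entries:
                    yield entry.findtext(f"{ATOM}title"), entry.findtext(f"{ATOM}id")
                start += page_size

        for title, identifier in harvest("NO2"):
            print(title, identifier)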

  9. A Common Model To Support Interoperable Metadata: Progress Report on Reconciling Metadata Requirements from the Dublin Core and INDECS/DOI Communities.

    ERIC Educational Resources Information Center

    Bearman, David; Rust, Godfrey; Weibel, Stuart; Miller, Eric; Trant, Jennifer

    1999-01-01

    The Dublin Core metadata community and the INDECS/DOI community of authors, rights holders, and publishers are seeking common ground in the expression of metadata for information resources. An open "Schema Harmonization" working group has been established to identify a common framework to support interoperability among these communities.…

  10. Precision Pointing Reconstruction and Geometric Metadata Generation for Cassini Images

    NASA Astrophysics Data System (ADS)

    French, Robert S.; Showalter, Mark R.; Gordon, Mitchell K.

    2014-11-01

    Analysis of optical remote sensing (ORS) data from the Cassini spacecraft is a complicated and labor-intensive process. First, small errors in Cassini’s pointing information (up to ~40 pixels for the Imaging Science Subsystem Narrow Angle Camera) must be corrected so that the line of sight vector for each pixel is known. This process involves matching the image contents with known features such as stars, ring edges, or moon limbs. Second, metadata for each pixel must be computed. Depending on the object under observation, this metadata may include lighting geometry, moon or planet latitude and longitude, and/or ring radius and longitude. Both steps require mastering the SPICE toolkit, a highly capable piece of software with a steep learning curve. Only after these steps are completed can the actual scientific investigation begin. We are embarking on a three-year project to perform these steps for all 300,000+ Cassini ISS images as well as images taken by the VIMS, UVIS, and CIRS instruments. The result will be a series of SPICE kernels that include accurate pointing information and a series of backplanes that include precomputed metadata for each pixel. All data will be made public through the PDS Rings Node (http://www.pds-rings.seti.org). We expect this project to dramatically decrease the time required for scientists to analyze Cassini data. In this poster we discuss the project, our current status, and our plans for the next three years.

  11. Experiences Using a Meta-Data Based Integration Infrastructure

    SciTech Connect

    Critchlow, T.; Masick, R.; Slezak, T.

    1999-07-08

    A data warehouse that presents data from many of the genomics community data sources in a consistent, intuitive fashion has long been a goal of bioinformatics. Unfortunately, it is one of the goals that has not yet been achieved. One of the major problems encountered by previous attempts has been the high cost of creating and maintaining a warehouse in a dynamic environment. In this abstract we have outlined a meta-data based approach to integrating data sources that begins to address this problem. We have used this infrastructure to successfully integrate new sources into an existing warehouse in substantially less time than would have traditionally been required--and the resulting mediators are more maintainable than the traditionally defined ones would have been. In the final paper, we will describe in greater detail both our architecture and our experiences using this framework. In particular, we will outline the new, XML based representation of the meta-data, describe how the mediator generator works, and highlight other potential uses for the meta-data.

  12. DicomBrowser: software for viewing and modifying DICOM metadata.

    PubMed

    Archie, Kevin A; Marcus, Daniel S

    2012-10-01

    Digital Imaging and Communications in Medicine (DICOM) is the dominant standard for medical imaging data. DICOM-compliant devices and the data they produce are generally designed for clinical use and often do not match the needs of users in research or clinical trial settings. DicomBrowser is software designed to ease the transition between clinically oriented DICOM tools and the specialized workflows of research imaging. It supports interactive loading and viewing of DICOM images and metadata across multiple studies and provides a rich and flexible system for modifying DICOM metadata. Users can make ad hoc changes in a graphical user interface, write metadata modification scripts for batch operations, use partly automated methods that guide users to modify specific attributes, or combine any of these approaches. DicomBrowser can save modified objects as local files or send them to a DICOM storage service using the C-STORE network protocol. DicomBrowser is open-source software, available for download at http://nrg.wustl.edu/software/dicom-browser. PMID:22349992
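
    DicomBrowser itself is an interactive and scriptable Java tool; as a generic illustration of the same kind of batch DICOM metadata edit, the sketch below uses the pydicom library (an assumption of this example, not part of DicomBrowser) to blank identifying attributes across a directory of files.

        from pathlib import Path
        import pydicom

        def deidentify(src_dir, dst_dir):
            """Blank a few identifying attributes in every DICOM file under src_dir."""
            dst = Path(dst_dir)
            dst.mkdir(parents=True, exist_ok=True)
            for path in Path(src_dir).rglob("*.dcm"):
                ds = pydicom.dcmread(path)
                ds.PatientName = "ANONYMOUS"   # overwrite identifying attributes
                ds.PatientID = "SUBJ-0001"     # placeholder study identifier
                ds.remove_private_tags()       # drop vendor-specific private tags
                ds.save_as(dst / path.name)

        deidentify("incoming_study", "deidentified_study")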

  13. Metadata from data: identifying holidays from anesthesia data.

    PubMed

    Starnes, Joseph R; Wanderer, Jonathan P; Ehrenfeld, Jesse M

    2015-05-01

    The increasingly large databases available to researchers necessitate high-quality metadata that is not always available. We describe a method for generating this metadata independently. Cluster analysis and expectation-maximization were used to separate days into holidays/weekends and regular workdays using anesthesia data from Vanderbilt University Medical Center from 2004 to 2014. This classification was then used to describe differences between the two sets of days over time. We evaluated 3802 days and correctly categorized 3797 based on anesthesia case time (representing an error rate of 0.13%). Use of other metrics for categorization, such as billed anesthesia hours and number of anesthesia cases per day, led to similar results. Analysis of the two categories showed that surgical volume increased more quickly with time for non-holidays than holidays (p < 0.001). We were able to successfully generate metadata from data by distinguishing holidays based on anesthesia data. This data can then be used for economic analysis and scheduling purposes. It is possible that the method can be expanded to similar bimodal and multimodal variables.
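
    A sketch of the general clustering idea, not the authors' exact pipeline: fit a two-component Gaussian mixture to a daily anesthesia workload measure and label the low-volume component as holidays/weekends. The workload values below are synthetic and purely illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def label_holidays(daily_case_hours):
            """Split days into low-volume (holiday/weekend) and regular workdays."""
            X = np.asarray(daily_case_hours, dtype=float).reshape(-1, 1)
            gm = GaussianMixture(n_components=2, random_state=0).fit(X)
            labels = gm.predict(X)
            # The component with the smaller mean corresponds to holidays/weekends.
            low_component = int(np.argmin(gm.means_.ravel()))
            return labels == low_component

        # Toy data: workdays near 400 case-hours, weekends/holidays near 60.
        rng = np.random.default_rng(0)
        days = np.r_[rng.normal(400, 40, 250), rng.normal(60, 15, 115)]
        print(label_holidays(days).sum(), "days flagged as holidays/weekends")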

  14. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    PubMed Central

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    Abstract. The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies’ metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store the deidentified studies in MSD format as it shares the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata. PMID:26158117

  15. ISO, FGDC, DIF and Dublin Core - Making Sense of Metadata Standards for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Jones, P. R.; Ritchey, N. A.; Peng, G.; Toner, V. A.; Brown, H.

    2014-12-01

    Metadata standards provide common definitions of metadata fields for information exchange across user communities. Despite the broad adoption of metadata standards for Earth science data, there are still heterogeneous and incompatible representations of information due to differences between the many standards in use and how each standard is applied. Federal agencies are required to manage and publish metadata in different metadata standards and formats for various data catalogs. In 2014, the NOAA National Climatic Data Center (NCDC) managed metadata for its scientific datasets in ISO 19115-2 in XML, GCMD Directory Interchange Format (DIF) in XML, DataCite Schema in XML, Dublin Core in XML, and Data Catalog Vocabulary (DCAT) in JSON, with more standards and profiles of standards planned. Of these standards, the ISO 19115-series metadata is the most complete and feature-rich, and for this reason it is used by NCDC as the source for the other metadata standards. We will discuss the capabilities of metadata standards and how these standards are being implemented to document datasets. Successful implementations include developing translations and displays using XSLTs, creating links to related data and resources, documenting dataset lineage, and establishing best practices. Benefits, gaps, and challenges will be highlighted with suggestions for improved approaches to metadata storage and maintenance.
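
    One of the implementation details mentioned above is translating the authoritative ISO 19115-2 records into other standards with XSLT. A minimal sketch of that step using lxml follows; the stylesheet and record file names are placeholders for whatever crosswalk a repository actually maintains.

        from lxml import etree

        # Placeholder file names; the stylesheet encodes the ISO-to-DIF field mapping.
        transform = etree.XSLT(etree.parse("iso19115_to_dif.xsl"))
        iso_record = etree.parse("dataset_iso19115.xml")

        dif_record = transform(iso_record)        # apply the crosswalk
        with open("dataset_dif.xml", "wb") as out:
            out.write(etree.tostring(dif_record, pretty_print=True,
                                     xml_declaration=True, encoding="UTF-8"))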

  16. Replication Research and Special Education

    ERIC Educational Resources Information Center

    Travers, Jason C.; Cook, Bryan G.; Therrien, William J.; Coyne, Michael D.

    2016-01-01

    Replicating previously reported empirical research is a necessary aspect of an evidence-based field of special education, but little formal investigation into the prevalence of replication research in the special education research literature has been conducted. Various factors may explain the lack of attention to replication of special education…

  17. Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses

    PubMed Central

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics – technology for comprehensive detection of small molecules in an organism – lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are the difficulty of precisely recording information about complicated analytical experiments (metadata), the existence of various databases with their own metadata descriptions, and the low reusability of published data, all of which leave submitters (the researchers who generate the data) insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata records for analyzed data obtained from 35 biological species are currently published. Metabolonote and related tools are available free of cost at http

  18. ncISO Facilitating Metadata and Scientific Data Discovery

    NASA Astrophysics Data System (ADS)

    Neufeld, D.; Habermann, T.

    2011-12-01

    Increasing the usability and availability of climate and oceanographic datasets for environmental research requires improved metadata and tools to rapidly locate and access relevant information for an area of interest. Because of the distributed nature of most environmental geospatial data, a common approach is to use catalog services that support queries on metadata harvested from remote map and data services. A key component to effectively using these catalog services is the availability of high quality metadata associated with the underlying data sets. In this presentation, we examine the use of ncISO and Geoportal as open source tools that can be used to document and facilitate access to ocean and climate data available from Thematic Realtime Environmental Distributed Data Services (THREDDS) data services. Many atmospheric and oceanographic spatial data sets are stored in the Network Common Data Format (netCDF) and served through the Unidata THREDDS Data Server (TDS). NetCDF and THREDDS are becoming increasingly accepted in both the scientific and geographic research communities as demonstrated by the recent adoption of netCDF as an Open Geospatial Consortium (OGC) standard. One important source for ocean and atmospheric based data sets is NOAA's Unified Access Framework (UAF) which serves over 3000 gridded data sets from across NOAA and NOAA-affiliated partners. Due to the large number of datasets, browsing the data holdings to locate data is impractical. Working with Unidata, we have created a new service for the TDS called "ncISO", which allows automatic generation of ISO 19115-2 metadata from attributes and variables in TDS datasets. The ncISO metadata records can be harvested by catalog services such as ESSI-labs GI-Cat catalog service and ESRI's Geoportal, which supports query through a number of services, including OpenSearch and Catalog Services for the Web (CSW). ESRI's Geoportal Server provides a number of user-friendly search capabilities for end users
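
    As a sketch of consuming the ISO 19115-2 records that ncISO generates: a harvester can fetch the XML for one dataset from a THREDDS Data Server and pull out a few fields. The server URL and dataset path below are placeholders, and the exact service path depends on how the TDS is configured.

        import requests
        import xml.etree.ElementTree as ET

        NS = {"gmd": "http://www.isotc211.org/2005/gmd",
              "gco": "http://www.isotc211.org/2005/gco"}

        # Placeholder endpoint for an ncISO record on a hypothetical TDS.
        url = "https://example.org/thredds/iso/gridded/sst_analysis.nc"

        record = ET.fromstring(requests.get(url, timeout=60).content)
        title = record.find(".//gmd:CI_Citation/gmd:title/gco:CharacterString", NS)
        abstract = record.find(".//gmd:abstract/gco:CharacterString", NS)
        print(title.text if title is not None else "no title found")
        print(abstract.text[:200] if abstract is not None and abstract.text else "")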

  19. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    USGS Publications Warehouse

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

    Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee’s Content Standards for Digital Geospatial Metadata, the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcGIS Desktop to facilitate a semi-automated workflow to create and update metadata records in ESRI’s 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple Graphical User Interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations a significant amount of time and costs by facilitating the development of Federal Geographic Data Committee Content Standards for Digital Geospatial Metadata-compliant metadata for ESRI software users. A working version of the tool is now available for ESRI ArcGIS Desktop, versions 10.0, 10.1, and 10.2 (downloadable at http://www.sciencebase.gov/metadatawizard).

  20. Using a Simple Knowledge Organization System to facilitate Catalogue and Search for the ESA CCI Open Data Portal

    NASA Astrophysics Data System (ADS)

    Wilson, Antony; Bennett, Victoria; Donegan, Steve; Juckes, Martin; Kershaw, Philip; Petrie, Ruth; Stephens, Ag; Waterfall, Alison

    2016-04-01

    The ESA Climate Change Initiative (CCI) is a €75m programme that runs from 2009-2016, with a goal to provide stable, long-term, satellite-based essential climate variable (ECV) data products for climate modellers and researchers. As part of the CCI, ESA have funded the Open Data Portal project to establish a central repository to bring together the data from these multiple sources and make it available in a consistent way, in order to maximise its dissemination amongst the international user community. Search capabilities are a critical component to attaining this goal. To this end, the project is providing dataset-level metadata in the form of ISO 19115 records served via a standard OGC CSW interface. In addition, the Open Data Portal is re-using the search system from the Earth System Grid Federation (ESGF), successfully applied to support CMIP5 (5th Coupled Model Intercomparison Project) and obs4MIPs. This uses a tightly defined controlled vocabulary of metadata terms, the DRS (Data Reference Syntax), which encompasses different aspects of the data. This system has facilitated the construction of a powerful faceted search interface to enable users to discover data at the individual file level of granularity through ESGF's web portal frontend. The use of a consistent set of model experiments for CMIP5 allowed the definition of a uniform DRS for all model data served from ESGF. For CCI, however, there are thirteen ECVs, each of which is derived from multiple sources and different science communities, resulting in highly heterogeneous metadata. An analysis has been undertaken of the concepts in use, with the aim of producing a CCI DRS which could provide a single authoritative source for cataloguing and searching the CCI data for the Open Data Portal. The use of SKOS (Simple Knowledge Organization System) and OWL (Web Ontology Language) to represent the DRS is a natural fit and provides controlled vocabularies as well as a way to represent relationships between

  1. Sea Level Station Metadata for Tsunami Detection, Warning and Research

    NASA Astrophysics Data System (ADS)

    Stroker, K. J.; Marra, J.; Kari, U. S.; Weinstein, S. A.; Kong, L.

    2007-12-01

    The devastating earthquake and tsunami of December 26, 2004 has greatly increased recognition of the need for water level data both from the coasts and the deep-ocean. In 2006, the National Oceanic and Atmospheric Administration (NOAA) completed a Tsunami Data Management Report describing the management of data required to minimize the impact of tsunamis in the United States. One of the major gaps defined in this report is the access to global coastal water level data. NOAA's National Geophysical Data Center (NGDC) and National Climatic Data Center (NCDC) are working cooperatively to bridge this gap. NOAA relies on a network of global data, acquired and processed in real-time to support tsunami detection and warning, as well as high-quality global databases of archived data to support research and advanced scientific modeling. In 2005, parties interested in enhancing the access and use of sea level station data united under the NOAA NCDC's Integrated Data and Environmental Applications (IDEA) Center's Pacific Region Integrated Data Enterprise (PRIDE) program to develop a distributed metadata system describing sea level stations (Kari et al., 2006; Marra et al., in press). This effort started with pilot activities in a regional framework and is targeted at tsunami detection and warning systems being developed by various agencies. It includes development of the components of a prototype sea level station metadata web service and accompanying Google Earth-based client application, which use an XML-based schema to expose, at a minimum, information in the NOAA National Weather Service (NWS) Pacific Tsunami Warning Center (PTWC) station database needed to use the PTWC's Tide Tool application. As identified in the Tsunami Data Management Report, the need also exists for long-term retention of the sea level station data. NOAA envisions that the retrospective water level data and metadata will also be available through web services, using an XML-based schema. Five high

  2. Pragmatic Metadata Management for Integration into Multiple Spatial Data Infrastructure Systems and Platforms

    NASA Astrophysics Data System (ADS)

    Benedict, K. K.; Scott, S.

    2013-12-01

    While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers that aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents the work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed in support of the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases are used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into

  3. Evaluation of the content coverage of SNOMED-CT to represent ICNP Version 1 catalogues.

    PubMed

    Park, Hyeoun-Ae; Lundberg, Cynthia B; Coenen, Amy; Konicek, Debra J

    2009-01-01

    To evaluate the ability of SNOMED-CT (Systematized Nomenclature of Medicine Clinical Terms) to represent ICNP (International Classification for Nursing Practice) nursing diagnosis and intervention catalogue concepts. We selected the 194 nursing diagnosis and 139 nursing intervention catalogue statements from ICNP Version 1.0. From June 2007 through December 2007, the first author mapped the ICNP catalogue concepts to SNOMED-CT using Apelon's TermWorks and CLUE browser 5.0. The second and fourth authors from SNOMED Terminology Solutions and the third author from ICN validated the mapping result. SNOMED-CT covered 172 concepts of 194 nursing diagnosis and 136 concepts of 139 nursing intervention catalogue concepts. SNOMED-CT can represent most (92.5%) of the ICNP nursing diagnosis and intervention catalogue concepts. Improvements to synonymy and adding missing concepts would lead to greater coverage of nursing diagnosis and intervention catalogue concepts.

  4. Astrophysical Aspects of the Contents of the Input Catalogue

    NASA Astrophysics Data System (ADS)

    Egret, D.; Gómez, E.

    1985-08-01

    The authors illustrate some aspects of the astrophysical contents of the HIPPARCOS Input Catalogue through the comparison of the complete merged file L3 with the file L3/SIM obtained as a result of the first simulation. The following astrophysical topics are considered: the upper part of the HR diagram; the nearby red stars; the distance scale; some problems of galactic structure; old population stars; binary stars with known orbits. The analysis shows, besides a generally satisfactory percentage of inclusion for the most important programs, an important limitation affecting the faintest red stars, which are good candidates for parallaxes but are penalized by the observing strategy.

  5. SphinX catalogue of small flares and brightenings

    NASA Astrophysics Data System (ADS)

    Gryciuk, Magdalena; Sylwester, Janusz; Gburek, Szymon; Siarkowski, Marek; Mrozek, Tomasz; Kepa, Anna

    The Solar Photometer in X-rays (SphinX) was designed to measure soft X-ray solar emission in the energy range between 1 keV and 15 keV. The instrument operated from February until November 2009 aboard the CORONAS-Photon satellite, during the phase of an extraordinarily low minimum of solar activity. Thanks to its very high sensitivity, SphinX was able to record a large number of tiny flares and brightenings. A catalogue of events observed by SphinX will be presented. Results of a statistical analysis of the events’ characteristics will be discussed.

  6. Catalogue of geoidal variations for simple seafloor topographic features

    NASA Technical Reports Server (NTRS)

    Bowin, C.

    1975-01-01

    A catalogue is presented of theoretical geoidal variations for three types of structural features common to the earth's surface: seamounts, submarine ridges, and submarine trenches. These structures were simulated by simple geometric shapes modeled in three-dimensions. A computer program calculated the potential and gravitational variations over the models. Profile plots of geoidal variations and free-air gravity anomalies are presented over cross-sections of the structures. A ready reference information set is provided for comparison with satellite altimeter data for ocean areas.

  7. The Splatalogue (Spectral Line Catalogue) and Calibase (Calibration Source Database)

    NASA Astrophysics Data System (ADS)

    Markwick-Kemper, Andrew J.; Remijan, A. J.; Fomalont, E.

    2006-06-01

    The next generation of powerful millimeter/submillimeter observatories (ALMA, Herschel) require extensive resources to help identify spectral line transitions and suitable calibration sources. We describe the compilation of a spectral line catalogue and calibration source database. The Calibase is an extensible repository of measurements of radio and submm calibration sources, building on the SMA, PTCS, VLA and VLBA lists. The Splatalogue is a comprehensive transition-resolved compilation of observed, measured and calculated spectral lines. Extending the JPL and CDMS lists, and updating the Lovas/NIST list of observed astrophysical transitions, it adds atomic and recombination lines, template spectra, and is completely VO-compliant, queryable under the IVOA SLAP standard.

  8. ExoData: Open Exoplanet Catalogue exploration and analysis tool

    NASA Astrophysics Data System (ADS)

    Varley, Ryan

    2015-12-01

    ExoData is a python interface for accessing and exploring the Open Exoplanet Catalogue. It allows searching of planets (including alternate names) and easy navigation of hierarchy, parses spectral types and fills in missing parameters based on programmable specifications, and provides easy reference of planet parameters such as GJ1214b.ra, GJ1214b.T, and GJ1214b.R. It calculates values such as transit duration, can easily rescale units, and can be used as an input catalog for large scale simulation and analysis of planets.

  9. Metadata research and design of ocean color remote sensing data based on web service

    NASA Astrophysics Data System (ADS)

    Kang, Yan; Pan, Delu; He, Xianqiang; Wang, Difeng; Chen, Jianyu

    2010-10-01

    Ocean color remote sensing metadata describes the content, quality, condition, and other characteristics of ocean color remote sensing data. This paper presents a draft metadata standard based on XML and gives the details of the main ocean color remote sensing metadata XML elements. A sharing platform for ocean color remote sensing data is under development as part of the digital ocean system; on this basis, an ocean color remote sensing metadata directory service system based on web services is put forward, which aims to store and manage the ocean color remote sensing metadata effectively. Such metadata are essential for making ocean color remote sensing information easier to retrieve and use.
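
    As an illustration of the kind of XML record such a draft standard might define, the sketch below serializes a few plausible elements (identification, sensor, temporal extent, processing level). The element names are invented for this example and are not the paper's actual schema.

        import xml.etree.ElementTree as ET

        def build_record(dataset_id, sensor, start, end, processing_level):
            """Serialize a small ocean-color metadata record as XML."""
            root = ET.Element("OceanColorMetadata")
            ET.SubElement(root, "DatasetID").text = dataset_id
            ET.SubElement(root, "Sensor").text = sensor
            extent = ET.SubElement(root, "TemporalExtent")
            ET.SubElement(extent, "Start").text = start
            ET.SubElement(extent, "End").text = end
            ET.SubElement(root, "ProcessingLevel").text = processing_level
            return ET.tostring(root, encoding="unicode")

        print(build_record("CHL-2009-07", "MODIS-Aqua",
                           "2009-07-01", "2009-07-31", "L3"))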

  10. The Role of Metadata Standards in EOSDIS Search and Retrieval Applications

    NASA Technical Reports Server (NTRS)

    Pfister, Robin

    1999-01-01

    Metadata standards play a critical role in data search and retrieval systems. Metadata tie software to data so the data can be processed, stored, searched, retrieved and distributed. Without metadata these actions are not possible. The process of populating metadata to describe science data is an important service to the end user community so that a user who is unfamiliar with the data can easily find and learn about a particular dataset before an order decision is made. Once a good set of standards is in place, the accuracy with which data search can be performed depends on the degree to which metadata standards are adhered to during product definition. NASA's Earth Observing System Data and Information System (EOSDIS) provides examples of how metadata standards are used in data search and retrieval.

  11. Metadata distribution algorithm based on directory hash in mass storage system

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Luo, Dong-jian; Pei, Can-hao

    2008-12-01

    The distribution of metadata is very important in mass storage systems. Many storage systems use subtree partitioning or hash algorithms to distribute the metadata among a metadata server cluster. Although system access performance is improved, the scalability problem is notable in most of these algorithms. This paper proposes a new directory hash (DH) algorithm. It treats the directory as the hash key value, implements concentrated storage of metadata, and takes a dynamic load-balancing strategy. It improves the efficiency of metadata distribution and access in mass storage systems by hashing on the directory and placing metadata together at directory granularity. The DH algorithm solves the scalability problems of file hash algorithms, such as changing a directory name or permission, or adding or removing an MDS from the cluster. The DH algorithm reduces the number of additional requests and the scale of each data migration in scaling operations. It enhances the scalability of mass storage systems remarkably.
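
    The core placement rule can be sketched in a few lines: hash the parent directory of a path rather than the full path, so all metadata under one directory lands on the same metadata server and a directory rename or permission change touches only that server. This is an illustration of the idea, not the paper's implementation; the dynamic load-balancing strategy and rebalancing are omitted.

        import hashlib
        import posixpath

        def metadata_server(file_path, num_servers):
            """Pick a metadata server by hashing the parent directory of a path."""
            directory = posixpath.dirname(file_path)       # the directory is the hash key
            digest = hashlib.md5(directory.encode("utf-8")).hexdigest()
            return int(digest, 16) % num_servers

        # Entries under /home/alice/data all map to the same server.
        for p in ("/home/alice/data/a.dat", "/home/alice/data/b.dat", "/home/bob/x.dat"):
            print(p, "->", metadata_server(p, num_servers=8))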

  12. VizieR Online Data Catalog: The PASTEL catalogue (Soubiran+, 2016-)

    NASA Astrophysics Data System (ADS)

    Soubiran, C.; Le Campion, J.-F.; Brouillet, N.; Chemin, L.

    2016-03-01

    PASTEL is a bibliographical catalogue compiling determinations of stellar atmospheric parameters. It provides (Teff, logg, [Fe/H]) determinations obtained from detailed analyses of high resolution, high signal to noise spectra, carried out with the help of model atmospheres. It also provides effective temperatures Teff from various methods. PASTEL is regularly updated. The catalogue supersedes the two previous versions of the [Fe/H] catalogue (Cayrel de Strobel et al., 1997 [Cat. III/200], 2001 [Cat. III/221]). (1 data file).

  13. VizieR Online Data Catalog: The PASTEL catalogue (Soubiran+, 2010-)

    NASA Astrophysics Data System (ADS)

    Soubiran, C.; Le Campion, J.-F.; Cayrel de Strobel, G.; Caillo, A.

    2010-01-01

    PASTEL is a bibliographical catalogue compiling determinations of stellar atmospheric parameters. It provides (Teff, logg, [Fe/H]) determinations obtained from detailed analyses of high resolution, high signal to noise spectra, carried out with the help of model atmospheres. It also provides effective temperatures Teff from various methods. PASTEL is regularly updated. The catalogue supersedes the two previous versions of the [Fe/H] catalogue (Cayrel de Strobel et al., 1997 [Cat. III/200], 2001 [Cat. III/221]). (1 data file).

  14. Byzantine medical manuscripts: towards a new catalogue, with a specimen for an annotated checklist of manuscripts based on an index of Diels' Catalogue.

    PubMed

    Touwaide, Alain

    2009-01-01

    Greek manuscripts containing medical texts were inventoried at the beginning of the 20th century by a team of philologists under the direction of Hermann Diels. The resulting catalogue, however useful it was when new and still is today, needs to be updated not only because some manuscripts have been destroyed, certain collections and single items have changed location, new shelfmark systems have been sometimes adopted and cataloguing has made substantial progress, but also because in Diels' time the concept of ancient medicine was limited, the method used in compiling data was not standardized and, in a time of manual recording and handling of information, mistakes could not be avoided. The present article is an introduction to a new catalogue of Greek medical manuscripts. In the first part, it surveys the history of the heuristic and cataloguing of Greek medical manuscripts from the 16th century forward; in the second part, it highlights the problems in Diels' catalogue and describes the genesis and methods of the new catalogue, together with the plan for its completion; and in the third part, it provides a sample of such a new catalogue, with a list of the Greek medical manuscripts in the libraries of the United Kingdom and Ireland. PMID:20349553

  15. Design and Practice on Metadata Service System of Surveying and Mapping Results Based on Geonetwork

    NASA Astrophysics Data System (ADS)

    Zha, Z.; Zhou, X.

    2011-08-01

    Based on analysis and research on current geographic information sharing and metadata services, we design, develop and deploy a distributed metadata service system based on GeoNetwork covering more than 30 nodes in provincial units of China. By identifying the advantages of GeoNetwork, we design a distributed metadata service system for national surveying and mapping results. It consists of 31 network nodes, a central node and a portal. Network nodes are the direct metadata sources of the system and are distributed around the country. Each network node maintains a metadata service system, responsible for metadata uploading and management. The central node harvests metadata from the network nodes using the OGC CSW 2.0.2 standard interface. The portal shows all metadata in the central node and provides users with a variety of methods and interfaces for metadata search and querying. It also provides management capabilities for connecting the central node and the network nodes together. GeoNetwork also has shortcomings; accordingly, we made improvements and optimizations for large-volume metadata uploading, synchronization and concurrent access. For metadata uploading and synchronization, by carefully analysing the database and index operation logs, we successfully avoided the performance bottlenecks, and with a batch-operation and dynamic memory management solution, data throughput and system performance are significantly improved. For concurrent access, query performance is greatly improved through a request-coding and results-caching solution. To smoothly respond to huge numbers of concurrent requests, a web cluster solution is deployed. This paper also gives an experimental analysis and compares system performance before and after improvement and optimization. The design and practical results have been applied in the national metadata service system of surveying and mapping results. It proved that the improved GeoNetwork service architecture can effectively adapt to distributed deployment
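
    A sketch of the harvest step the central node performs against a network node's CSW 2.0.2 endpoint, here using OWSLib's CSW client; the endpoint URL is a placeholder and paging and error handling are omitted.

        from owslib.csw import CatalogueServiceWeb

        # Placeholder endpoint; each GeoNetwork node exposes its own CSW 2.0.2 service.
        csw = CatalogueServiceWeb("https://example.org/geonetwork/srv/eng/csw")

        csw.getrecords2(esn="full", maxrecords=50)   # fetch one page of metadata records
        for identifier, record in csw.records.items():
            print(identifier, record.title)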

  16. Evaluation of the completeness and accuracy of an earthquake catalogue based on hydroacoustic monitoring

    NASA Astrophysics Data System (ADS)

    Willemann, R. J.

    2002-12-01

    NOAA's Pacific Marine Environmental Laboratory (PMEL) produces a catalogue of Pacific Ocean earthquakes based on hydroacoustic monitoring from April 1996. The International Seismological Centre (ISC) worked without referring to the PMEL catalogue for earthquakes through April 2000, so the ISC and PMEL catalogues are independent until then. The PMEL catalogue includes many more intraplate and mid-ocean ridge earthquakes; more than 20 times as many earthquakes as the ISC catalogue in some areas. In some areas ISC earthquakes are nearly a strict subset of PMEL earthquakes, but elsewhere many ISC earthquakes are not in the PMEL catalogue. Along the Pacific-Antarctic Plate Boundary (45°-70°S, 110°-180°W), for example, the PMEL catalogue misses many ISC earthquakes, including a few MW(Harvard)>5 crustal earthquakes. Near the Cocos Ridge (2°-7°N, 81°-88°E), many of the earthquakes in each catalogue have no corresponding earthquake in the other. Among earthquakes that are in both catalogues, location differences may be much greater than the formal location uncertainties. But formal errors are known to underestimate true location errors, so studying the seismic arrival time residuals with respect to the hydroacoustic origins and the hydroacoustic arrival time residuals with respect to the seismic origins provides a more rigorous evaluation of the intrinsic differences between these two monitoring technologies.

  17. ‘Lorentzian’ analysis of the accuracy of modern catalogues of stellar positions

    NASA Astrophysics Data System (ADS)

    Varaksina, N. Y.; Nefedyev, Y. A.; Churkin, K. O.; Zabbarova, R. R.; Demin, S. A.

    2015-12-01

    A new approach is presented for estimating the positional accuracy and proper motions of stars in astrometric catalogues by comparing the stars' positions in the catalogue under study and in the Hipparcos catalogue at different epochs, reduced to a standard equinox. To verify this method, an analysis of the star positions and proper motions in the UCAC2, PPM, ACRS, Tycho-2, ACT, TRC, FON and Tycho catalogues was carried out. This study showed that the accuracy of the positions and proper motions of the stars in the Tycho-2 and UCAC2 catalogues is approximately equal. The results of the comparison are represented graphically.

  18. Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example

    SciTech Connect

    Devarakonda, Ranjeet; Hook, Leslie A; Killeffer, Terri S; Krassovski, Misha B; Boden, Thomas A; Wullschleger, Stan D

    2015-01-01

    The Online Metadata Editor (OME) is a web-based tool to help document scientific data in a well-structured, popular scientific metadata format. In this paper, we will discuss the newest tool that Oak Ridge National Laboratory (ORNL) has developed to generate, edit, and manage metadata, and how it is helping data-intensive science centers and projects, such as the U.S. Department of Energy's Next Generation Ecosystem Experiments (NGEE) in the Arctic, to prepare metadata and make their big data produce big science and lead to new discoveries.

  19. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow and the comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means for exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD). This paper describes Mercury capabilities with multiple metadata formats, in general, and, more specifically, the results of our OAI-PMH implementations and
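
    A minimal sketch of an OAI-PMH harvest using the protocol's verbs directly: ListRecords with the mandatory oai_dc metadata format, following resumption tokens until the list is exhausted. The repository base URL is a placeholder.

        import requests
        import xml.etree.ElementTree as ET

        OAI = "{http://www.openarchives.org/OAI/2.0/}"
        DC = "{http://purl.org/dc/elements/1.1/}"
        # Placeholder base URL; any OAI-PMH repository answers the same verbs.
        BASE = "https://example.org/oai"

        def list_titles():
            """Yield Dublin Core titles from every record in the repository."""
            params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
            while True:
                root = ET.fromstring(requests.get(BASE, params=params, timeout=60).content)
                for rec in root.iter(f"{OAI}record"):
                    title = rec.find(f".//{DC}title")
                    if title is not None:
                        yield title.text
                token = root.find(f".//{OAI}resumptionToken")
                if token is None or not (token.text or "").strip():
                    break
                params = {"verb": "ListRecords", "resumptionToken": token.text}

        for t in list_titles():
            print(t)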

  20. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    NASA Astrophysics Data System (ADS)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and the Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and the ACADIS Arctic Data Explorer (ADE) use a common infrastructure, which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers costs for research. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and will identify creation and use of Open Source libraries.
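
    A sketch of the kind of request the search clients issue against Solr: a free-text term plus filter queries on indexed temporal and spatial fields, returned as JSON. The core name and field names are placeholders, not the actual NSIDC or ADE schema.

        import requests

        # Placeholder Solr core and field names; the real index schema differs.
        SOLR_SELECT = "http://localhost:8983/solr/metadata/select"

        params = {
            "q": "sea ice extent",                                  # keyword query
            "fq": ["temporal_start:[2007-01-01T00:00:00Z TO *]",    # temporal filter
                   "north_latitude:[60 TO 90]"],                    # crude spatial filter
            "rows": 20,
            "wt": "json",
        }
        response = requests.get(SOLR_SELECT, params=params, timeout=30).json()
        for doc in response["response"]["docs"]:
            print(doc.get("title"), doc.get("id"))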

  1. Empowering Earth Science Communities to Share Data Through Guided Metadata Improvement

    NASA Astrophysics Data System (ADS)

    Powers, L. A.; Habermann, T.; Jones, M. B.; Gordon, S.

    2015-12-01

    Earth Science communities can improve the discoverability, use and understanding of their data by improving the completeness and consistency of their metadata. Despite the potential for a great payoff, resources to invest in this work are often limited. We are working with diverse earth science communities to quantitatively evaluate their metadata and to identify specific strategies to improve the completeness and consistency of their metadata. We have developed an iterative, guided process intended to efficiently improve metadata to better serve their own communities, as well as share data across disciplines. The community specific approach focuses on community metadata requirements, and also provides guidance on adding other metadata concepts to expand the effectiveness of metadata for multiple uses, including data discovery, data understanding, and data re-use. We will present the results of a baseline analysis of more than 25 diverse metadata collections from established data repositories representing communities across the earth and environmental sciences. The baseline analysis describes the current state of the metadata in these collections and highlights areas for improvement. We compare these collections to demonstrate exemplar practitioners that can provide guidance to other communities.

  2. Studies of Big Data metadata segmentation between relational and non-relational databases

    NASA Astrophysics Data System (ADS)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years the concept of Big Data has become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.

  3. Metadata: Standards for Retrieving WWW Documents (and Other Digitized and Non-Digitized Resources)

    NASA Astrophysics Data System (ADS)

    Rusch-Feja, Diann

    The use of metadata for indexing digitized and non-digitized resources for resource discovery in a networked environment is being increasingly implemented all over the world. Greater precision is achieved using metadata than by relying on universal search engines, and furthermore, metadata can be used as a filtering mechanism for search results. An overview of various metadata sets is given, followed by a more focussed presentation of Dublin Core Metadata, including examples of sub-elements and qualifiers. In particular, the use of the Dublin Core Relation element provides connections between the metadata of various related electronic resources, as well as the metadata for physical, non-digitized resources. This facilitates more comprehensive search results without losing precision and brings together different genres of information which would otherwise be only searchable in separate databases. Furthermore, the advantages of Dublin Core Metadata in comparison with library cataloging and the use of universal search engines are discussed briefly, followed by a listing of types of implementation of Dublin Core Metadata.

  4. Quantum Replicator Dynamics

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban

    2006-09-01

    We propose quantization relationships which would let us describe and solve problems arising from conflicting or cooperative behaviors among the members of a system from the point of view of quantum mechanical interactions. The quantum version of the replicator dynamics is the equation of evolution of mixed states from quantum statistical mechanics. A system and all its members will cooperate and rearrange their states to improve their present condition. They strive to reach the best possible state for each of them, which is also the best possible state for the whole system. This led us to propose a quantum equilibrium in which a system is stable only if it maximizes the welfare of the collective above the welfare of the individual. If the welfare of the individual is maximized above the welfare of the collective, the system becomes unstable and eventually collapses.

  5. Creation of Large Catalogues by Using of Virtual Observatories

    NASA Astrophysics Data System (ADS)

    Protsyuk, Yu. I.; Kovalchuk, O. M.

    We developed an application program to search for images in the registers and databases of Virtual Observatories and to download them to a local computer. The program can process XML files in VOTable format to generate links to images, as well as work directly with the astronomical servers. To improve the efficiency of downloading a large number of images, we used a multi-threaded mode. The program runs under the Windows operating system. Using the program in 2014, we found and downloaded more than 145 thousand images of open clusters, with a total volume of about 300 GB. The total download time was about 7 days. To process the downloaded images, we created and configured a complex of 10 virtual machines on two PCs for parallel image processing using the Astrometrica program. The total processing time was about 14 days. An application program was also created to analyse the obtained results, which were used to create four catalogues of stellar coordinates at average epochs of 1953 to 1998. The total number of stars in the catalogues is more than 35 million. The standard error is 0.04" to 0.07", and the average number of observations is 4 to 6. The catalogues are used to improve the proper motions of stars in and around open clusters.

  6. Architectural Historical Heritage: a Tridimensional Multilayers Cataloguing Method

    NASA Astrophysics Data System (ADS)

    Calisi, D.; Tommasetti, A.; Topputo, R.

    2011-09-01

    In the future, digital filing systems will be the method for storing and cataloguing heritage objects, private assets and art collections. Today this elaborate process is confined to library, painting or parietal heritage. What is missing is a digitized acquisition of the architectural heritage, described at multiple levels of representation. Taking a critical look at the urban setting down to the single buildings in their complexity, there is a clear need to establish an open and up-to-date system in order to communicate the different degrees of interaction with the architectural elements that must be preserved and accessed like a work of art. The breakdown and cataloguing into three-dimensional levels affects the different scales of representation of the city, both at the stage of stimulating and interactive fruition for users interested in historical and cognitive research and at the stage of active project implementation. The hierarchy of layers of city-based data storage should be accessible at a basic level to a simple user of the knowledge offered by the digital language of animation and interactivity. This may be the case of a tourist or a citizen eager to deepen his awareness of a building or a neighbourhood, together with its layering of history and architectural value. This article proposes the development of a database that will be used and extended from time to time with new information related to surveys, projects and restorations of the existing building stock.

  7. A photometric catalogue of southern emission-line stars

    NASA Astrophysics Data System (ADS)

    de Winter, D.; van den Ancker, M. E.; Maira, A.; Thé, P. S.; Djie, H. R. E. Tjin A.; Redondo, I.; Eiroa, C.; Molster, F. J.

    2001-12-01

    We present a catalogue of previously unpublished optical and infrared photometry for a sample of 162 emission-line objects and shell stars visible from the southern hemisphere. The data were obtained between 1978 and 1997 in the Walraven (WULBV), Johnson/Cousins (UBV(RI)c) and ESO and SAAO near-infrared (JHKLM) photometric systems. Most of the observed objects are Herbig Ae/Be (HAeBe) stars or HAeBe candidates appearing in the list of HAeBe candidates of Thé et al. (1994), although several B[e] stars, LBVs and T Tauri stars are also included in our sample. For many of the stars the data presented here are the first photo-electric measurements in the literature. The resulting catalogue consists of 1809 photometric measurements. Optical variability was detected in 66 out of the 116 sources that were observed more than once. 15 out of the 50 stars observed multiple times in the infrared showed variability at 2.2 μm (K band). Based on observations collected at the European Southern Observatory, La Silla, Chile and on observations collected at the South African Astronomical Observatory. Tables 2-4 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/380/609

  8. HALOGEN: a tool for fast generation of mock halo catalogues

    NASA Astrophysics Data System (ADS)

    Avila, Santiago; Murray, Steven G.; Knebe, Alexander; Power, Chris; Robotham, Aaron S. G.; Garcia-Bellido, Juan

    2015-06-01

    We present a simple method of generating approximate synthetic halo catalogues: HALOGEN. This method uses a combination of second-order Lagrangian Perturbation Theory (2LPT) in order to generate the large-scale matter distribution, analytical mass functions to generate halo masses, and a single-parameter stochastic model for halo bias to position haloes. HALOGEN represents a simplification of similar recently published methods. Our method is constrained to recover the two-point function at intermediate (10 h-1 Mpc < r < 50 h-1 Mpc) scales, which we show is successful to within 2 per cent. Larger scales (˜100 h-1 Mpc) are reproduced to within 15 per cent. We compare several other statistics (e.g. power spectrum, point distribution function, redshift space distortions) with results from N-body simulations to determine the validity of our method for different purposes. One of the benefits of HALOGEN is its flexibility, and we demonstrate this by showing how it can be adapted to varying cosmologies and simulation specifications. A driving motivation for the development of such approximate schemes is the need to compute covariance matrices and study the systematic errors for large galaxy surveys, which requires thousands of simulated realizations. We discuss the applicability of our method in this context, and conclude that it is well suited to mass production of appropriate halo catalogues. The code is publicly available at https://github.com/savila/halogen.
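
    The single-parameter stochastic bias step can be illustrated with a short sketch (an illustration of the idea only, not the published HALOGEN code; the lognormal field below stands in for a 2LPT density field): haloes are assigned to grid cells with probability proportional to the cell density raised to a power alpha.

        # Sketch of single-parameter stochastic halo placement on a gridded density field.
        import numpy as np

        def place_haloes(density, n_haloes, alpha, rng=None):
            """Choose grid cells for n_haloes haloes with probability proportional to density**alpha."""
            rng = rng or np.random.default_rng()
            weights = np.clip(density, 0.0, None).ravel() ** alpha
            probs = weights / weights.sum()
            cells = rng.choice(weights.size, size=n_haloes, p=probs)
            return np.unravel_index(cells, density.shape)

        rho = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.0, size=(64, 64, 64))
        ix, iy, iz = place_haloes(rho, n_haloes=1000, alpha=2.0)
        print("first halo cell:", ix[0], iy[0], iz[0])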

  9. eGenomics: Cataloguing Our Complete Genome Collection III

    PubMed Central

    Field, Dawn; Garrity, George; Gray, Tanya; Selengut, Jeremy; Sterk, Peter; Thomson, Nick; Tatusova, Tatiana; Cochrane, Guy; Glöckner, Frank Oliver; Kottmann, Renzo; Lister, Allyson L.; Tateno, Yoshio; Vaughan, Robert

    2007-01-01

    This meeting report summarizes the proceedings of the “eGenomics: Cataloguing our Complete Genome Collection III” workshop held September 11–13, 2006, at the National Institute for Environmental eScience (NIEeS), Cambridge, United Kingdom. This 3rd workshop of the Genomic Standards Consortium was divided into two parts. The first half of the three-day workshop was dedicated to reviewing the genomic diversity of our current and future genome and metagenome collection, and exploring linkages to a series of existing projects through formal presentations. The second half was dedicated to strategic discussions. Outcomes of the workshop include a revised “Minimum Information about a Genome Sequence” (MIGS) specification (v1.1), consensus on a variety of features to be added to the Genome Catalogue (GCat), agreement by several researchers to adopt MIGS for imminent genome publications, and an agreement by the EBI and NCBI to input their genome collections into GCat for the purpose of quantifying the amount of optional data already available (e.g., for geographic location coordinates) and working towards a single, global list of all public genomes and metagenomes.

  10. SUMO and KSHV Replication

    PubMed Central

    Chang, Pei-Ching; Kung, Hsing-Jien

    2014-01-01

    Small Ubiquitin-related MOdifier (SUMO) modification was initially identified as a reversible post-translational modification that affects the regulation of diverse cellular processes, including signal transduction, protein trafficking, chromosome segregation, and DNA repair. Increasing evidence suggests that the SUMO system also plays an important role in regulating chromatin organization and transcription. It is thus not surprising that double-stranded DNA viruses, such as Kaposi’s sarcoma-associated herpesvirus (KSHV), have exploited SUMO modification as a means of modulating viral chromatin remodeling during the latent-lytic switch. In addition, SUMO regulation allows the disassembly and assembly of promyelocytic leukemia protein-nuclear bodies (PML-NBs), an intrinsic antiviral host defense, during the viral replication cycle. Overcoming PML-NB-mediated cellular intrinsic immunity is essential to allow the initial transcription and replication of the herpesvirus genome after de novo infection. As a consequence, KSHV has evolved so as to produce multiple SUMO regulatory viral proteins to modulate the cellular SUMO environment in a dynamic way during its life cycle. Remarkably, KSHV encodes one gene product (K-bZIP) with SUMO-ligase activities and one gene product (K-Rta) that exhibits SUMO-targeting ubiquitin ligase (STUbL) activity. In addition, at least two viral products are sumoylated that have functional importance. Furthermore, sumoylation can be modulated by other viral gene products, such as the viral protein kinase Orf36. Interference with the sumoylation of specific viral targets represents a potential therapeutic strategy when treating KSHV, as well as other oncogenic herpesviruses. Here, we summarize the different ways KSHV exploits and manipulates the cellular SUMO system and explore the multi-faceted functions of SUMO during KSHV’s life cycle and pathogenesis. PMID:25268162

  11. SUMO and KSHV Replication.

    PubMed

    Chang, Pei-Ching; Kung, Hsing-Jien

    2014-01-01

    Small Ubiquitin-related MOdifier (SUMO) modification was initially identified as a reversible post-translational modification that affects the regulation of diverse cellular processes, including signal transduction, protein trafficking, chromosome segregation, and DNA repair. Increasing evidence suggests that the SUMO system also plays an important role in regulating chromatin organization and transcription. It is thus not surprising that double-stranded DNA viruses, such as Kaposi's sarcoma-associated herpesvirus (KSHV), have exploited SUMO modification as a means of modulating viral chromatin remodeling during the latent-lytic switch. In addition, SUMO regulation allows the disassembly and assembly of promyelocytic leukemia protein-nuclear bodies (PML-NBs), an intrinsic antiviral host defense, during the viral replication cycle. Overcoming PML-NB-mediated cellular intrinsic immunity is essential to allow the initial transcription and replication of the herpesvirus genome after de novo infection. As a consequence, KSHV has evolved so as to produce multiple SUMO regulatory viral proteins to modulate the cellular SUMO environment in a dynamic way during its life cycle. Remarkably, KSHV encodes one gene product (K-bZIP) with SUMO-ligase activities and one gene product (K-Rta) that exhibits SUMO-targeting ubiquitin ligase (STUbL) activity. In addition, at least two viral products are sumoylated that have functional importance. Furthermore, sumoylation can be modulated by other viral gene products, such as the viral protein kinase Orf36. Interference with the sumoylation of specific viral targets represents a potential therapeutic strategy when treating KSHV, as well as other oncogenic herpesviruses. Here, we summarize the different ways KSHV exploits and manipulates the cellular SUMO system and explore the multi-faceted functions of SUMO during KSHV's life cycle and pathogenesis. PMID:25268162

  12. Automated diagnosis of data-model conflicts using metadata.

    PubMed

    Chen, R O; Altman, R B

    1999-01-01

    The authors describe a methodology for helping computational biologists diagnose discrepancies they encounter between experimental data and the predictions of scientific models. The authors call these discrepancies data-model conflicts. They have built a prototype system to help scientists resolve these conflicts in a more systematic, evidence-based manner. In computational biology, data-model conflicts are the result of complex computations in which data and models are transformed and evaluated. Increasingly, the data, models, and tools employed in these computations come from diverse and distributed resources, contributing to a widening gap between the scientist and the original context in which these resources were produced. This contextual rift can contribute to the misuse of scientific data or tools and amplifies the problem of diagnosing data-model conflicts. The authors' hypothesis is that systematic collection of metadata about a computational process can help bridge the contextual rift and provide information for supporting automated diagnosis of these conflicts. The methodology involves three major steps. First, the authors decompose the data-model evaluation process into abstract functional components. Next, they use this process decomposition to enumerate the possible causes of the data-model conflict and direct the acquisition of diagnostically relevant metadata. Finally, they use evidence statically and dynamically generated from the metadata collected to identify the most likely causes of the given conflict. They describe how these methods are implemented in a knowledge-based system called GRENDEL and show how GRENDEL can be used to help diagnose conflicts between experimental data and computationally built structural models of the 30S ribosomal subunit. PMID:10495098
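
    A toy sketch of the final step (not the GRENDEL system itself; the candidate causes, metadata fields and scoring rules are hypothetical): candidate causes of a data-model conflict are scored against evidence extracted from the collected metadata and returned in ranked order.

        # Toy evidence-based ranking of candidate conflict causes from collected metadata.
        def rank_causes(metadata):
            scores = {
                "stale input data": 0.0,
                "incompatible model version": 0.0,
                "unit mismatch in transformation": 0.0,
            }
            if metadata.get("data_release_date", "") < metadata.get("model_release_date", ""):
                scores["stale input data"] += 1.0  # data predate the model they are tested against
            if metadata.get("model_version") not in metadata.get("tool_supported_versions", []):
                scores["incompatible model version"] += 1.0
            if metadata.get("data_units") != metadata.get("model_expected_units"):
                scores["unit mismatch in transformation"] += 1.0
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        print(rank_causes({
            "data_release_date": "2020-01-01", "model_release_date": "2021-06-01",
            "model_version": "2.3", "tool_supported_versions": ["1.0", "2.0"],
            "data_units": "angstrom", "model_expected_units": "nm",
        }))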

  13. Data and Metadata Management at the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Berriman, G. B.; Holt, J. M.; Mader, J. A.; Tran, H. D.; Goodrich, R. W.; Gelino, C. R.; Laity, A. C.; Kong, M.; Swain, M. A.

    2015-09-01

    A collaboration between the W. M. Keck Observatory (WMKO) in Hawaii and the NASA Exoplanet Science Institute (NExScI) in California, the Keck Observatory Archive (KOA) was commissioned in 2004 to archive data from WMKO, which operates two classically scheduled 10 m ground-based telescopes. The data from Keck are not suitable for direct ingestion into the archive since the metadata contained in the original FITS headers lack the information necessary for proper archiving. The data pose a number of challenges for KOA: different instrument builders used different standards, and the nature of classical observing, where observers have complete control of the instruments and their observations, leads to heterogeneous data sets. For example, it is often difficult to determine if an observation is a science target, a sky frame, or a sky flat. It is also necessary to assign the data to the correct owners and observing programs, which can be a challenge for time-domain and target-of-opportunity observations, or on split nights, during which two or more principal investigators share a given night. In addition, having uniform and adequate calibrations is important for the proper reduction of data. Therefore, KOA needs to distinguish science files from calibration files, identify the type of calibrations available, and associate the appropriate calibration files with each science frame. We describe the methodologies and tools that we have developed to successfully address these difficulties, adding content to the FITS headers and "retrofitting" the metadata in order to support archiving Keck data, especially those obtained before the archive was designed. With the expertise gained from having successfully archived observations taken with all eight currently active instruments at WMKO, we have developed lessons learned from handling this complex array of heterogeneous metadata. These lessons help ensure a smooth ingestion of data not only for current but also future instruments.
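
    The header "retrofitting" can be pictured with a minimal astropy sketch (the keyword names, the classification heuristic and the file name are assumptions for illustration, not KOA's actual rules): derive the missing metadata and write it back into the FITS header.

        # Illustrative retrofitting: derive missing metadata and persist it in the FITS header.
        from astropy.io import fits

        def retrofit(path, program_id):
            with fits.open(path, mode="update") as hdul:
                hdr = hdul[0].header
                exptime = hdr.get("EXPTIME", 0.0)
                shutter_open = str(hdr.get("SHUTTER", "open")).lower() == "open"
                # crude science/calibration split: closed-shutter or zero-second frames are calibrations
                hdr["IMAGETYP"] = "science" if (exptime > 0 and shutter_open) else "calibration"
                hdr["PROGID"] = program_id  # assign the frame to its observing program
                hdul.flush()                # write the updated header back to disk

        retrofit("example_frame.fits", program_id="U123")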

  14. Metadata and data management for the Keck Observatory Archive

    NASA Astrophysics Data System (ADS)

    Tran, H. D.; Holt, J.; Goodrich, R. W.; Mader, J. A.; Swain, M.; Laity, A. C.; Kong, M.; Gelino, C. R.; Berriman, G. B.

    2014-07-01

    A collaboration between the W. M. Keck Observatory (WMKO) in Hawaii and the NASA Exoplanet Science Institute (NExScI) in California, the Keck Observatory Archive (KOA) was commissioned in 2004 to archive observing data from WMKO, which operates two classically scheduled 10 m ground-based telescopes. The observing data from Keck are not suitable for direct ingestion into the archive since the metadata contained in the original FITS headers lack the information necessary for proper archiving. Coupled with different standards among instrument builders and the heterogeneous nature of the data inherent in classical observing, in which observers have complete control of the instruments and their observations, the data pose a number of technical challenges for KOA. For example, it is often difficult to determine if an observation is a science target, a sky frame, or a sky flat. It is also necessary to assign the data to the correct owners and observing programs, which can be a challenge for time-domain and target-of-opportunity observations, or on split nights, during which two or more principal investigators share a given night. In addition, having uniform and adequate calibrations is important for the proper reduction of data. Therefore, KOA needs to distinguish science files from calibration files, identify the type of calibrations available, and associate the appropriate calibration files with each science frame. We describe the methodologies and tools that we have developed to successfully address these difficulties, adding content to the FITS headers and "retrofitting" the metadata in order to support archiving Keck data, especially those obtained before the archive was designed. With the expertise gained from having successfully archived observations taken with all eight currently active instruments at WMKO, we have developed lessons learned from handling this complex array of heterogeneous metadata that help ensure a smooth ingestion of data not only for current but also future instruments.

  15. Capturing Sensor Metadata for Cross-Domain Interoperability

    NASA Astrophysics Data System (ADS)

    Fredericks, J.

    2015-12-01

    Envision a world where a field operator turns on an instrument, and is queried for information needed to create standardized encoded descriptions that, together with the sensor manufacturer knowledge, fully describe the capabilities, limitations and provenance of observational data. The Cross-Domain Observational Metadata Environmental Sensing Network (X-DOMES) pilot project (with support from the NSF/EarthCube IA) is taking the first steps needed in realizing this vision. The knowledge of how an observable physical property becomes a measured observation must be captured at each stage of its creation. Each sensor-based observation is made through the use of applied technologies, each with specific limitations and capabilities. Environmental sensors typically provide a variety of options that can be configured differently for each unique deployment, affecting the observational results. By capturing the information (metadata) at each stage of its generation, a more complete and accurate description of data provenance can be communicated. By documenting the information in machine-harvestable, standards-based encodings, metadata can be shared across disciplinary and geopolitical boundaries. Using standards-based frameworks enables automated harvesting and translation to other community-adopted standards, which facilitates the use of shared tools and workflows. The establishment of a cross-domain network of stakeholders (sensor manufacturers, data providers, domain experts, data centers), called the X-DOMES Network, provides a unifying voice for the specification of content and implementation of standards, as well as a central repository for sensor profiles, vocabularies, guidance and product vetting. The ability to easily share fully described observational data provides a better understanding of data provenance and enables the use of common data processing and assessment workflows, fostering a greater trust in our shared global resources. The X-DOMES Network

  16. Scientific Workflows + Provenance = Better (Meta-)Data Management

    NASA Astrophysics Data System (ADS)

    Ludaescher, B.; Cuevas-Vicenttín, V.; Missier, P.; Dey, S.; Kianmajd, P.; Wei, Y.; Koop, D.; Chirigati, F.; Altintas, I.; Belhajjame, K.; Bowers, S.

    2013-12-01

    The origin and processing history of an artifact is known as its provenance. Data provenance is an important form of metadata that explains how a particular data product came about, e.g., how and when it was derived in a computational process, which parameter settings and input data were used, etc. Provenance information provides transparency and helps to explain and interpret data products. Other common uses and applications of provenance include quality control, data curation, result debugging, and more generally, 'reproducible science'. Scientific workflow systems (e.g. Kepler, Taverna, VisTrails, and others) provide controlled environments for developing computational pipelines with built-in provenance support. Workflow results can then be explained in terms of workflow steps, parameter settings, input data, etc. using provenance that is automatically captured by the system. Scientific workflows themselves provide a user-friendly abstraction of the computational process and are thus a form of ('prospective') provenance in their own right. The full potential of provenance information is realized when combining workflow-level information (prospective provenance) with trace-level information (retrospective provenance). To this end, the DataONE Provenance Working Group (ProvWG) has developed an extension of the W3C PROV standard, called D-PROV. Whereas PROV provides a 'least common denominator' for exchanging and integrating provenance information, D-PROV adds new 'observables' that describe workflow-level information (e.g., the functional steps in a pipeline), as well as workflow-specific trace-level information (timestamps for each workflow step executed, the inputs and outputs used, etc.). Using examples, we will demonstrate how the combination of prospective and retrospective provenance provides added value in managing scientific data. The DataONE ProvWG is also developing tools based on D-PROV that allow scientists to get more mileage from provenance metadata.
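
    The prospective/retrospective distinction can be made concrete with a small, self-contained sketch (not DataONE's D-PROV tooling; the pipeline and record structure are illustrative): the declared pipeline is the prospective provenance, while per-step timestamps, inputs and outputs recorded at run time form the retrospective provenance.

        # Sketch: prospective provenance (declared pipeline) vs retrospective provenance (what actually ran).
        from datetime import datetime, timezone

        prospective = [
            {"step": "clean", "consumes": "raw_table", "produces": "clean_table"},
            {"step": "aggregate", "consumes": "clean_table", "produces": "summary"},
        ]

        def run_step(name, func, inputs, trace):
            started = datetime.now(timezone.utc)
            outputs = func(inputs)
            trace.append({
                "step": name,
                "started": started.isoformat(),
                "ended": datetime.now(timezone.utc).isoformat(),
                "inputs": inputs,
                "outputs": outputs,
            })
            return outputs

        retrospective = []
        cleaned = run_step("clean", lambda xs: [x for x in xs if x is not None], [1, None, 3], retrospective)
        run_step("aggregate", lambda xs: sum(xs), cleaned, retrospective)
        print(retrospective)  # trace-level record complementing the declared pipeline above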

  17. A Metadata based Knowledge Discovery Methodology for Seeding Translational Research.

    PubMed

    Kothari, Cartik R; Payne, Philip R O

    2015-01-01

    In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.

  18. Generation of Multiple Metadata Formats from a Geospatial Data Repository

    NASA Astrophysics Data System (ADS)

    Hudspeth, W. B.; Benedict, K. K.; Scott, S.

    2012-12-01

    The Earth Data Analysis Center (EDAC) at the University of New Mexico is partnering with the CYBERShARE and Environmental Health Group from the Center for Environmental Resource Management (CERM), located at the University of Texas, El Paso (UTEP), the Biodiversity Institute at the University of Kansas (KU), and the New Mexico Geo-Epidemiology Research Network (GERN) to provide a technical infrastructure that enables investigation of a variety of climate-driven human/environmental systems. Two significant goals of this NASA-funded project are: a) to increase the use of NASA Earth observational data at EDAC by various modeling communities through enabling better discovery, access, and use of relevant information, and b) to expose these communities to the benefits of provenance for improving understanding and usability of heterogeneous data sources and derived model products. To realize these goals, EDAC has leveraged the core capabilities of its Geographic Storage, Transformation, and Retrieval Engine (Gstore) platform, developed with support of the NSF EPSCoR Program. The Gstore geospatial services platform provides general purpose web services based upon the REST service model, and is capable of data discovery, access, and publication functions, metadata delivery functions, data transformation, and auto-generated OGC services for those data products that can support those services. Central to the NASA ACCESS project is the delivery of geospatial metadata in a variety of formats, including ISO 19115-2/19139, FGDC CSDGM, and the Proof Markup Language (PML). This presentation details the extraction and persistence of relevant metadata in the Gstore data store, and their transformation into multiple metadata formats that are increasingly utilized by the geospatial community to document not only core library catalog elements (e.g. title, abstract, publication data, geographic extent, projection information, and database elements), but also the processing steps used to generate them.
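
    The general pattern of serving one internal record in several dialects can be sketched as follows (this is not Gstore's implementation, and the element sets are heavily reduced stand-ins; real ISO 19139 and FGDC CSDGM documents contain many more mandatory elements):

        # Sketch: one internal metadata record rendered into reduced ISO-like and FGDC-like XML.
        from xml.sax.saxutils import escape

        record = {"title": "NDVI composite", "abstract": "Derived vegetation index for the study area"}

        def to_iso_like(r):
            return ("<gmd:MD_Metadata>"
                    f"<gmd:title>{escape(r['title'])}</gmd:title>"
                    f"<gmd:abstract>{escape(r['abstract'])}</gmd:abstract>"
                    "</gmd:MD_Metadata>")

        def to_fgdc_like(r):
            return ("<metadata><idinfo>"
                    f"<citation><title>{escape(r['title'])}</title></citation>"
                    f"<descript><abstract>{escape(r['abstract'])}</abstract></descript>"
                    "</idinfo></metadata>")

        for render in (to_iso_like, to_fgdc_like):
            print(render(record))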

  19. Latest developments for the IAGOS database: Interoperability and metadata

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We are also implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, the Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface) which combines model outputs with in situ data for
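
    What CF-compliant delivery looks like in practice can be sketched with the netCDF4 library (the variable names, units and values below are assumptions for illustration, not the IAGOS product definition):

        # Minimal CF-style NetCDF file; names and values are illustrative only.
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("aircraft_ozone_example.nc", "w") as ds:
            ds.Conventions = "CF-1.6"
            ds.title = "Example aircraft ozone time series"
            ds.createDimension("time", None)
            t = ds.createVariable("time", "f8", ("time",))
            t.units = "seconds since 2014-01-01 00:00:00"
            t.standard_name = "time"
            o3 = ds.createVariable("ozone", "f4", ("time",))
            o3.units = "1e-9"  # mole fraction expressed in parts per billion
            o3.standard_name = "mole_fraction_of_ozone_in_air"
            t[:] = np.arange(10, dtype="f8")
            o3[:] = np.full(10, 45.0, dtype="f4")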

  20. Supercoiling, knotting and replication fork reversal in partially replicated plasmids

    PubMed Central

    Olavarrieta, L.; Martínez-Robles, M. L.; Sogo, J. M.; Stasiak, A.; Hernández, P.; Krimer, D. B.; Schvartzman, J. B.

    2002-01-01

    To study the structure of partially replicated plasmids, we cloned the Escherichia coli polar replication terminator TerE in its active orientation at different locations in the ColE1 vector pBR18. The resulting plasmids, pBR18-TerE@StyI and pBR18-TerE@EcoRI, were analyzed by neutral/neutral two-dimensional agarose gel electrophoresis and electron microscopy. Replication forks stop at the Ter–TUS complex, leading to the accumulation of specific replication intermediates with a mass 1.26 times the mass of non-replicating plasmids for pBR18-TerE@StyI and 1.57 times for pBR18-TerE@EcoRI. The number of knotted bubbles detected after digestion with ScaI and the number and electrophoretic mobility of undigested partially replicated topoisomers reflect the changes in plasmid topology that occur in DNA molecules replicated to different extents. Exposure to increasing concentrations of chloroquine or ethidium bromide revealed that partially replicated topoisomers (CCCRIs) do not sustain positive supercoiling as efficiently as their non-replicating counterparts. It was suggested that this occurs because in partially replicated plasmids a positive ΔLk is absorbed by regression of the replication fork. Indeed, we showed by electron microscopy that, at least in the presence of chloroquine, some of the CCCRIs of pBR18-Ter@StyI formed Holliday-like junction structures characteristic of reversed forks. However, not all the positive supercoiling was absorbed by fork reversal in the presence of high concentrations of ethidium bromide. PMID:11809877

  1. KSHV Genome Replication and Maintenance

    PubMed Central

    Purushothaman, Pravinkumar; Dabral, Prerna; Gupta, Namrata; Sarkar, Roni; Verma, Subhash C.

    2016-01-01

    Kaposi's sarcoma associated herpesvirus (KSHV) or human herpesvirus 8 (HHV8) is a major etiological agent for multiple severe malignancies in immune-compromised patients. KSHV establishes lifetime persistence in the infected individuals and displays two distinct life cycles: generally a prolonged, passive latent cycle and a short, productive lytic cycle. During the latent phase, the viral episome is tethered to the host chromosome and replicates once during every cell division. Latency-associated nuclear antigen (LANA) is a predominant multifunctional nuclear protein expressed during latency, which plays a central role in episome tethering, replication and perpetual segregation of the episomes during cell division. LANA binds cooperatively to LANA binding sites (LBS) within the terminal repeat (TR) region of the viral episome as well as to the cellular nucleosomal proteins to tether the viral episome to the host chromosome. LANA has been shown to modulate multiple cellular signaling pathways and recruits various cellular proteins such as chromatin modifying enzymes, replication factors, transcription factors, and the cellular mitotic framework to maintain a successful latent infection. Although many other regions within the KSHV genome can initiate replication, KSHV TR is important for latent DNA replication and possible segregation of the replicated episomes. Binding of LANA to LBS favors the recruitment of various replication factors to initiate LANA dependent DNA replication. In this review, we discuss the molecular mechanisms relevant to KSHV genome replication, segregation, and maintenance of latency. PMID:26870016

  2. Inter-University Upper Atmosphere Global Observation Network (IUGONET) Metadata Database and Its Interoperability

    NASA Astrophysics Data System (ADS)

    Yatagai, A. I.; Iyemori, T.; Ritschel, B.; Koyama, Y.; Hori, T.; Abe, S.; Tanaka, Y.; Shinbori, A.; Umemura, N.; Sato, Y.; Yagi, M.; Ueno, S.; Hashiguchi, N. O.; Kaneda, N.; Belehaki, A.; Hapgood, M. A.

    2013-12-01

    The IUGONET is a Japanese program to build a metadata database for ground-based observations of the upper atmosphere [1]. The project began in 2009 with five Japanese institutions which archive data observed by radars, magnetometers, photometers, radio telescopes and helioscopes, and so on, at various altitudes from the Earth's surface to the Sun. Systems have been developed to allow searching of the above described metadata. We have been updating the system and adding new and updated metadata. The IUGONET development team adopted the SPASE metadata model [2] to describe the upper atmosphere data. This model is used as the common metadata format by the virtual observatories for solar-terrestrial physics. It includes metadata referring to each data file (called a 'Granule'), which enables a search for data files as well as data sets. Further details are described in [2] and [3]. Currently, three additional Japanese institutions are being incorporated in IUGONET. Furthermore, metadata of observations of the troposphere, taken at the observatories of the middle and upper atmosphere radar at Shigaraki and the Meteor radar in Indonesia, have been incorporated. These additions will contribute to efficient interdisciplinary scientific research. In the beginning of 2013, the registration of the 'Observatory' and 'Instrument' metadata was completed, which makes it easy to get an overview of the metadata database. The number of registered metadata records as of the end of July totalled 8.8 million, including 793 observatories and 878 instruments. It is important to promote interoperability and/or metadata exchange between the database development groups. A memorandum of agreement has been signed with the European Near-Earth Space Data Infrastructure for e-Science (ESPAS) project, which has similar objectives to IUGONET, with regard to a framework for formal collaboration. Furthermore, observations by satellites and the International Space Station are being incorporated with a view for

  3. openPDS: protecting the privacy of metadata through SafeAnswers.

    PubMed

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web-searches, is collected and used intensively by organizations and big data researchers. Metadata has however yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research. PMID:25007320
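
    The SafeAnswers idea can be pictured with a toy exchange (not the openPDS code; the metadata store and permitted questions are invented for illustration): the raw metadata never leave the personal data store, and only a low-dimensional answer is returned to the service.

        # Toy SafeAnswers-style exchange: answers are computed locally against personal metadata.
        personal_metadata = {
            "locations": [("2014-01-01", "home"), ("2014-01-02", "office"), ("2014-01-03", "office")],
            "call_log": [{"to": "alice", "minutes": 4}, {"to": "bob", "minutes": 11}],
        }

        def answer(question):
            # Each permitted question maps to code run against the local store only.
            if question == "days_at_office":
                return sum(1 for _, place in personal_metadata["locations"] if place == "office")
            if question == "total_call_minutes":
                return sum(call["minutes"] for call in personal_metadata["call_log"])
            raise ValueError("question not permitted")

        # The third-party service receives a single number, never the underlying records.
        print(answer("days_at_office"))  # -> 2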

  4. Metadata Design in the New PDS4 Standards - Something for Everybody

    NASA Astrophysics Data System (ADS)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data

  5. openPDS: Protecting the Privacy of Metadata through SafeAnswers

    PubMed Central

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S.; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web-searches, is collected and used intensively by organizations and big data researchers. Metadata has however yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research. PMID:25007320

  6. The asymmetry of telomere replication contributes to replicative senescence heterogeneity.

    PubMed

    Bourgeron, Thibault; Xu, Zhou; Doumic, Marie; Teixeira, Maria Teresa

    2015-10-15

    In eukaryotes, the absence of telomerase results in telomere shortening, eventually leading to replicative senescence, an arrested state that prevents further cell divisions. While replicative senescence is mainly controlled by telomere length, the heterogeneity of its onset is not well understood. This study proposes a mathematical model based on the molecular mechanisms of telomere replication and shortening to decipher the causes of this heterogeneity. Using simulations fitted on experimental data obtained from individual lineages of senescent Saccharomyces cerevisiae cells, we decompose the sources of senescence heterogeneity into interclonal and intraclonal components, and show that the latter is based on the asymmetry of the telomere replication mechanism. We also evidence telomere rank-switching events with distinct frequencies in short-lived versus long-lived lineages, revealing that telomere shortening dynamics display important variations. Thus, the intrinsic heterogeneity of replicative senescence and its consequences find their roots in the asymmetric structure of telomeres.
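
    A deliberately simple stochastic sketch conveys how replication asymmetry alone spreads out the onset of senescence (the parameter values are arbitrary placeholders, not the values fitted in the study): at each division the tracked daughter inherits either a strongly or a weakly eroded telomere, and divisions stop below a threshold length.

        # Toy model of asymmetric telomere shortening along single lineages.
        import random

        def divisions_until_senescence(length=10000, threshold=200, heavy_loss=70, light_loss=10, p_heavy=0.5):
            n = 0
            while length > threshold:
                loss = heavy_loss if random.random() < p_heavy else light_loss  # replication asymmetry
                length -= loss
                n += 1
            return n

        lineages = [divisions_until_senescence() for _ in range(1000)]
        print(min(lineages), sum(lineages) / len(lineages), max(lineages))  # heterogeneous onset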

  7. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for the neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPE), which may be unable to account for the complexity of the product between seismic source tensor and medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated to each site. In NDSHA uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate

  8. Air Quality Community Catalog and Rich Metadata for GEOSS

    NASA Astrophysics Data System (ADS)

    Robinson, E. M.; Husar, R. B.; Falke, S. R.; Habermann, R. E.

    2009-04-01

    The GEOSS Air Quality Community of Practice (CoP) is developing a community catalog and community portals that will facilitate the discovery, access and usage of distributed air quality data. The catalog records contain fields common to all datasets, additional fields using ISO 19115 and a link to DataSpaces for additional, community-contributed metadata. Most fields for data discovery will be extracted from the OGC WMS/WCS GetCapabilities file. DataSpaces, wiki-based web pages, will include extended metadata, lineage and information for better understanding of the data. The value of the DataSpaces comes from the ability to connect the dataset community: users, mediators and providers through user feedback, discussion and other community-contributed content. The community catalog will be harvested through the GEOSS Common Infrastructure (GCI) and the GEO and community portals will facilitate finding and distributing AQ datasets. The Air Quality Community Catalog and Portal components are currently being tested as part of the GEOSS Architecture Implementation Pilot - II (AIP-II).
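
    A hedged sketch of harvesting discovery fields from a WMS GetCapabilities response is shown below (the endpoint is hypothetical and layer layout varies between servers; the sketch assumes WMS 1.3.0 with Name, Title and EX_GeographicBoundingBox elements per layer):

        # Sketch: pull basic discovery metadata (name, title, bounding box) from WMS 1.3.0 capabilities.
        import urllib.request
        import xml.etree.ElementTree as ET

        WMS = "{http://www.opengis.net/wms}"

        def harvest(capabilities_url):
            with urllib.request.urlopen(capabilities_url) as resp:
                root = ET.parse(resp).getroot()
            records = []
            for layer in root.iter(WMS + "Layer"):
                name = layer.findtext(WMS + "Name")
                if not name:
                    continue  # container layers without a Name are not directly requestable
                bbox = layer.find(WMS + "EX_GeographicBoundingBox")
                records.append({
                    "name": name,
                    "title": layer.findtext(WMS + "Title"),
                    "bbox": None if bbox is None else {c.tag.replace(WMS, ""): c.text for c in bbox},
                })
            return records

        # Hypothetical endpoint:
        # print(harvest("http://example.org/wms?service=WMS&request=GetCapabilities&version=1.3.0"))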

  9. ARIADNE: a Tracking System for Relationships in LHCb Metadata

    NASA Astrophysics Data System (ADS)

    Shapoval, I.; Clemencic, M.; Cattaneo, M.

    2014-06-01

    The data processing model of the LHCb experiment implies handling of an evolving set of heterogeneous metadata entities and relationships between them. The entities range from software and database states to architecture specifications and software/data deployment locations. For instance, there is an important relationship between the LHCb Conditions Database (CondDB), which provides versioned, time dependent geometry and conditions data, and the LHCb software, that is, the data processing applications (used for simulation, high level triggering, reconstruction and analysis of physics data). The evolution of CondDB and of the LHCb applications is a weakly-homomorphic process. This means that relationships between a CondDB state and an LHCb application state may not be preserved across different database and application generations. These issues may lead to various kinds of problems in LHCb production, ranging from unexpected application crashes to incorrect data processing results. In this paper we present Ariadne - a generic metadata relationships tracking system based on the novel NoSQL Neo4j graph database. Its aim is to track and analyze many thousands of evolving relationships for cases such as the one described above, and several others, which would otherwise remain unmanaged and potentially harmful. The highlights of the paper include the system's implementation and management details, the infrastructure needed for running it, security issues, first experience of usage in LHCb production and the potential of the system to be applied to a wider set of LHCb tasks.
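
    A small sketch of tracking such relationships in a property graph (Cypher via the neo4j Python driver; the node labels, properties and connection details are assumptions, not Ariadne's actual schema):

        # Sketch: record and query a CondDB-state/application-version relationship in Neo4j.
        from neo4j import GraphDatabase

        driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

        def link_states(tx, conddb_tag, app_version):
            tx.run(
                "MERGE (c:CondDBState {tag: $tag}) "
                "MERGE (a:Application {version: $version}) "
                "MERGE (a)-[:COMPATIBLE_WITH]->(c)",
                tag=conddb_tag, version=app_version,
            )

        with driver.session() as session:
            session.execute_write(link_states, "cond-20140101", "app-v45")
            result = session.run(
                "MATCH (a:Application {version: $v})-[:COMPATIBLE_WITH]->(c:CondDBState) RETURN c.tag AS tag",
                v="app-v45",
            )
            print([record["tag"] for record in result])
        driver.close()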

  10. Sharing Images Intelligently: The Astronomical Visualization Metadata Standard

    NASA Astrophysics Data System (ADS)

    Hurt, Robert L.; Christensen, L.; Gauthier, A.

    2006-12-01

    The astronomical education and public outreach (EPO) community plays a key role in conveying the results of scientific research to the general public. A key product of EPO development is a variety of non-scientific public image resources, both derived from scientific observations and created as artistic visualizations of scientific results. This refers to general image formats such as JPEG, TIFF, PNG, GIF, not scientific FITS datasets. Such resources are currently scattered across the internet in a variety of galleries and archives, but are not searchable in any coherent or unified way. Just as Virtual Observatory standards open up all data archives to a common query engine, the EPO community will benefit greatly from a similar mechanism for image search and retrieval. A new standard has been developed for astronomical imagery defining a common set of content fields suited for the needs of astronomical visualizations. This encompasses images derived from data, artist's conceptions, simulations, photography, and can be ultimately extensible to video products. The first generation of tools are now available to tag images with this metadata, which can be embedded with the image file using an XML-based format that functions similarly to a FITS header. As image collections are processed to include astronomy visualization metadata tags, extensive information providing educational context, credits, data sources, and even coordinate information will be readily accessible for uses spanning casual browsing, publication, and interactive media systems.

  11. Aggregating Metadata from Multiple Archives: a Non-VO Approach

    NASA Astrophysics Data System (ADS)

    Gwyn, S. D. J.

    2015-09-01

    The Solar System Object Image Search (SSOIS) tool at the Canadian Astronomy Data Centre allows users to search for images of moving objects taken with a large number of ground-based and space-based telescopes (Gwyn et al. 2012). The ever-growing list of telescopes includes HST, Subaru, CFHT, Gemini, SDSS, AAT, NEAT, NOAO, WISE and the ING and ESO telescopes. The first step in constructing SSOIS is to aggregate the metadata from the various archives. An effective search requires the RA, Dec, and time of the exposure, and the field of view of the instrument. The archives are extremely heterogeneous; in some cases the interface dates back to the 1990s. After scraping these archives, four lessons have been learned: 1) The more primitive the archive, the easier it is to scrape. 2) Simple Image Access Protocol (SIAP) is not an effective means of scraping archives. 3) When scraping an archive with multiple queries, the queries should be done by time rather than by sky position. 4) Retrieving the metadata is relatively easy; the hard work is in the quality control and understanding each telescope/instrument.

  12. Automatic computed tomography patient dose calculation using DICOM header metadata.

    PubMed

    Jahnen, A; Kohler, S; Hermen, J; Tack, D; Back, C

    2011-09-01

    The present work describes a method that calculates patient dose values in computed tomography (CT) based on metadata contained in DICOM images, in support of patient dose studies. The DICOM metadata is preprocessed to extract the necessary calculation parameters. Vendor-specific DICOM header information is harmonized using vendor translation tables, and unavailable DICOM tags can be completed with a graphical user interface. CT-Expo, an MS Excel application for calculating the radiation dose, is used to calculate the patient doses. All relevant data and calculation results are stored for further analysis in a relational database. Final results are compiled by utilizing data mining tools. This solution was successfully used for the 2009 CT dose study in Luxembourg. National diagnostic reference levels for standard examinations were calculated based on data from each of the country's hospitals. This new automatic system saved time and resources during data acquisition and evaluation when compared with earlier questionnaire-based surveys. PMID:21831868
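
    The header preprocessing step can be sketched with pydicom (which tags are populated varies by vendor and model, which is exactly why harmonization tables are needed; the selected fields are illustrative):

        # Sketch: extract dose-relevant CT acquisition parameters from DICOM headers.
        import pydicom

        def extract_parameters(path):
            ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only, skip pixel data
            return {
                "manufacturer": ds.get("Manufacturer"),
                "model": ds.get("ManufacturerModelName"),
                "kvp": ds.get("KVP"),
                "exposure_mAs": ds.get("Exposure"),
                "slice_thickness_mm": ds.get("SliceThickness"),
                "pitch": ds.get("SpiralPitchFactor"),
                "ctdi_vol_mGy": ds.get("CTDIvol"),
            }

        # print(extract_parameters("ct_slice_000.dcm"))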

  13. Metadata in Multiple Dialects and the Rosetta Stone

    NASA Astrophysics Data System (ADS)

    Habermann, T.; Monteleone, K.; Armstrong, E. M.; White, B.

    2012-12-01

    As data are shared across multiple communities and re-used in unexpected ways, it is critical to be able to share metadata about who collected and stewarded the data; where the data are available; how the data were collected and processed; and, how they were used in the past. It is even more critical that the new tools can access this information and present it in ways that new users can understand and, if necessary, integrate into their analyses. Unfortunately, as communities develop and use conventions for these metadata, it becomes more and more difficult to share them across community boundaries. This is true even though these conventions are really dialects of a common documentation language that share many important concepts. Breaking down these barriers requires developing community consensus about these concepts and tools for translating between common representations. Ontologies and connections between them have been used to address this problem across datasets from multiple disciplines. Can these tools help solve similar problems with documentation?

  14. Cataloguing E-Books in UK Higher Education Libraries: Report of a Survey

    ERIC Educational Resources Information Center

    Belanger, Jacqueline

    2007-01-01

    Purpose: The purpose of this paper is to discuss the results of a 2006 survey of UK Higher Education OPACs in order to provide a snapshot of cataloguing practices for e-books. Design/methodology/approach: The OPACs of 30 UK HE libraries were examined in July/August 2006 to determine which e-books were catalogued, and the level of cataloguing…

  15. The Canadian Environmental Education Catalogue: A Guide to Selected Resources and Materials. Premier Edition.

    ERIC Educational Resources Information Center

    Heinrichs, Wally; And Others

    Despite their large numbers, environmental education resources can be difficult to find. The purpose of this catalogue is to broaden the awareness of available resources among educators and curriculum developers and facilitate their accessibility. This first edition of the catalogue contains approximately 1,200 of the more than 4,000 titles that…

  16. An extensive catalogue of early-type galaxies in the nearby Universe

    NASA Astrophysics Data System (ADS)

    Dabringhausen, J.; Fellhauer, M.

    2016-08-01

    We present a catalogue of 1715 early-type galaxies from the literature, spanning the luminosity range from faint dwarf spheroidal galaxies to giant elliptical galaxies. The aim of this catalogue is to be one of the most comprehensive and publicly available collections of data on early-type galaxies. The emphasis in this catalogue lies on dwarf elliptical galaxies, for which some samples with detailed data have been published recently. For almost all of the early-type galaxies included in it, this catalogue contains data on their locations, distances, redshifts, half-light radii, the masses of their stellar populations and apparent magnitudes in various passbands. Data on metallicity and various colours are available for a majority of the galaxies presented here. The data on magnitudes, colours, metallicities and masses of the stellar populations are supplemented with entries that are based on fits to data from simple stellar population models and existing data from observations. Also, some simple transformations have been applied to the data on magnitudes, colours and metallicities in this catalogue, in order to increase the homogeneity of these data. Estimates on the Sérsic profiles, internal velocity dispersions, maximum rotational velocities, dynamical masses and ages are listed for several hundreds of the galaxies in this catalogue. Finally, each quantity listed in this catalogue is accompanied with information on its source, so that users of this catalogue can easily exclude data that they do not consider as reliable enough for their purposes.

  17. VizieR Online Data Catalog: Galaxies in the Tycho-2 catalogue (Metz+, 2004)

    NASA Astrophysics Data System (ADS)

    Metz, M.; Geffert, M.

    2004-01-01

    We have identified galaxies in the Tycho-2 Catalogue (Cat. ) by a position cross correlation with galaxy catalogues. In the table we give the cross references to the PGC (Cat. ) and the Tycho-2 positions of the galaxies. (1 data file).

  18. Catalogue Use by the Petherick Readers of the National Library of Australia

    ERIC Educational Resources Information Center

    Hider, Philip

    2007-01-01

    An online questionnaire survey was distributed amongst the Petherick Readers of the National Library of Australia, a user group of scholars and researchers. The survey asked questions about the readers' use and appreciation of the NLA catalogue. This group of users clearly appreciated the library catalogue and demonstrated that there are still…

  19. iLOG: A Framework for Automatic Annotation of Learning Objects with Empirical Usage Metadata

    ERIC Educational Resources Information Center

    Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen

    2012-01-01

    Learning objects (LOs) are digital or non-digital entities used for learning, education or training commonly stored in repositories searchable by their associated metadata. Unfortunately, based on the current standards, such metadata is often missing or incorrectly entered making search difficult or impossible. In this paper, we investigate…

  20. Characterization of Educational Resources in e-Learning Systems Using an Educational Metadata Profile

    ERIC Educational Resources Information Center

    Solomou, Georgia; Pierrakeas, Christos; Kameas, Achilles

    2015-01-01

    The ability to effectively administrate educational resources in terms of accessibility, reusability and interoperability lies in the adoption of an appropriate metadata schema, able of adequately describing them. A considerable number of different educational metadata schemas can be found in literature, with the IEEE LOM being the most widely…

  1. An Assistant for Loading Learning Object Metadata: An Ontology Based Approach

    ERIC Educational Resources Information Center

    Casali, Ana; Deco, Claudia; Romano, Agustín; Tomé, Guillermo

    2013-01-01

    In the last years, the development of different Repositories of Learning Objects has been increased. Users can retrieve these resources for reuse and personalization through searches in web repositories. The importance of high quality metadata is key for a successful retrieval. Learning Objects are described with metadata usually in the standard…

  2. Manifestations of Metadata: From Alexandria to the Web--Old is New Again

    ERIC Educational Resources Information Center

    Kennedy, Patricia

    2008-01-01

    This paper is a discussion of the use of metadata, in its various manifestations, to access information. Information management standards are discussed. The connection between the ancient world and the modern world is highlighted. Individual perspectives are paramount in fulfilling information seeking. Metadata is interpreted and reflected upon in…

  3. NSIDC Metadata Improvements: Building the foundation for interoperability, discovery, and services

    NASA Astrophysics Data System (ADS)

    Leon, A.; Collins, J. A.; Billingsley, B. W.; Jasiak, E.

    2011-12-01

    The National Snow and Ice Data Center (NSIDC) is actively engaged in efforts to improve metadata acquisition, creation, management, and dissemination. We are replacing a collection of historical databases with an enterprise database (EDB) to manage file and service metadata critical to NSIDC's continued advancement in the areas of data management, stewardship, discovery, analysis and dissemination. Leveraging PostGIS and the ISO 19115 metadata standards, the database will serve as the authoritative, consistent, and extensible representation of NSIDC data holdings. The EDB will support multiple applications, and these applications may present interfaces designed for either human or machine interaction. To serve in this critical role, the content of the EDB must be valid and reliable. Current efforts are focused on developing a user interface to support the input and maintenance of metadata content. Future efforts will include the addition of automated (batch) metadata ingest. Ultimately, the EDB content and services built to interface with the EDB will be leveraged to automate and improve our existing metadata workflows. A solid metadata foundation is critical to the advancement of discovery and services. Building upon a well established standard, like ISO 19115, enables efficient translation to various metadata schemas, support of rich user interfaces, and promotes interoperability with external data services.

  4. Inferring Metadata for a Semantic Web Peer-to-Peer Environment

    ERIC Educational Resources Information Center

    Brase, Jan; Painter, Mark

    2004-01-01

    Learning Objects Metadata (LOM) aims at describing educational resources in order to allow better reusability and retrieval. In this article we show how additional inference rules allow us to derive additional metadata from existing ones. Additionally, using these rules as integrity constraints helps us to define the constraints on LOM elements,…

  5. Contextual Classification in the Metadata Object Manager (M.O.M.).

    ERIC Educational Resources Information Center

    Pole, Thomas

    1999-01-01

    Defines the contextual classification model, comparing it to the traditional metadata models from which it evolved. Using the MetaData Object Manager (M.O.M) as an example, discusses the use of Contextual Classification in developing this system, and the organizational, performance and reliability advantages of using an external (to the data…

  6. Application of Dublin Core Metadata in the Description of Digital Primary Sources in Elementary School Classrooms.

    ERIC Educational Resources Information Center

    Gilliland-Swetland, Anne J.; Kafai, Yasmin B.; Landis, William E.

    2000-01-01

    Reports on the results of research examining the ability of fourth and fifth grade science and social science students to use Dublin Core metadata elements to describe image resources for inclusion in a digital archive. Describes networked learning environments called Digital Portfolio Archives and discusses metadata for historical primary…

  7. Organizing Scientific Data Sets: Studying Similarities and Differences in Metadata and Subject Term Creation

    ERIC Educational Resources Information Center

    White, Hollie C.

    2012-01-01

    Background: According to Salo (2010), the metadata entered into repositories are "disorganized" and metadata schemes underlying repositories are "arcane". This creates a challenging repository environment in regards to personal information management (PIM) and knowledge organization systems (KOSs). This dissertation research is…

  8. A taxonomic catalogue of Japanese nemerteans (phylum Nemertea).

    PubMed

    Kajihara, Hiroshi

    2007-04-01

    A literature-based taxonomic catalogue of the nemertean species (Phylum Nemertea) reported from Japanese waters is provided, listing 19 families, 45 genera, and 120 species as valid. Applications of the following species names to forms previously recorded from Japanese waters are regarded as uncertain: Amphiporus cervicalis, Amphiporus depressus, Amphiporus lactifloreus, Cephalothrix filiformis, Cephalothrix linearis, Cerebratulus fuscus, Lineus vegetus, Lineus bilineatus, Lineus gesserensis, Lineus grubei, Lineus longifissus, Lineus mcintoshii, Nipponnemertes pulchra, Oerstedia venusta, Prostoma graecense, and Prostoma grande. The identities of the taxa referred to by the following four nominal species require clarification through future investigations: Cosmocephala japonica, Dicelis rubra, Dichilus obscurus, and Nareda serpentina. The nominal species established from Japanese waters are tabulated. In addition, a brief history of taxonomic research on Japanese nemerteans is reviewed.

  9. Revealing pulsars hidden in the 2nd Fermi Catalogue

    NASA Astrophysics Data System (ADS)

    Pavlov, George

    2012-09-01

    We propose a mini-survey of unclassified Fermi sources from the 2FGL catalogue. Using an intelligent parameter selection, we have identified a sub-sample that is likely to be dominated by pulsars. We aim to identify 8 new gamma-ray pulsars and their X-ray counterparts, and thus significantly increase the population of pulsars detected in both gamma- and X-rays. The existing limited data hint at an intriguing change in the slope of the L(Edot) dependence at Edot~1e35-36 erg/s, both in X-rays and gamma-rays. By identifying more pulsars in both gamma- and X-rays, especially at lower Edot, we will be able to confirm and study such breaks and relationships. We will also find new X-ray bright pulsars suitable for detailed study.

  10. Hipparcos to deliver its final results catalogue soon

    NASA Astrophysics Data System (ADS)

    1995-10-01

    them, almost 30 years ago, to propose carrying out these observations from the relatively benign environment of space. Hipparcos is, by present standards, a medium-sized satellite, with a 30 cm telescope sensing simply ordinary light. But it has been described as the most imaginative in the short history of space astronomy. This foresight has been amply repaid. In the long history of stargazing it ranks with the surveys by Hipparchus the Greek in the 2nd Century BC and by Tycho Brahe the Dane in the 16th Century AD, both of which transformed human perceptions of the Universe. Positions derived from the Hipparcos satellite are better than a millionth of a degree, and nearly a thousand times more accurate than star positions routinely determined from the ground. This accuracy makes it possible to measure directly the distances to the stars. While 250 years passed between astronomers first setting out on the exacting task of measuring the distance to a star and a stellar distance first being measured, ESA's Hipparcos mission has revolutionised this long, painstaking, and fundamental task by measuring accurate distances and movements of more than one hundred thousand stars. The measurement concept involved the satellite triangulating its way between the stars all around the sky, building up a celestial map in much the same way as land surveyors use triangulation between hill-tops to measure distances accurately. Only the angles involved are much smaller: the accuracy that has been achieved with the Hipparcos Catalogue is such that the two edges of a coin, viewed from the other side of the Atlantic Ocean, could be distinguished. The results from Hipparcos will provide scientists with long-awaited details of our place in the Milky Way Galaxy. Most of the stars visible to the naked eye are, to a large extent, companions of the Sun, in a great orbital march around the centre of the Galaxy, a journey so long that it takes individual stars 250 million years to complete, in

  11. [Ontario Transportation Technology and Energy Branch]. Publications catalogue

    SciTech Connect

    1993-12-31

    The Ontario Transportation Technology and Energy Branch was established as the provincial government's focus for research and development in transportation technology and energy areas that will lead to new and improved products and services. This catalogue lists reports published by the Branch up to February 1993, excluding reports that are extremely outdated, reports superseded by later ones, and reports intended for internal use only. Videos produced by the Branch are also included. Information provided for each report includes title, report number, authors, summary of contents, participating agencies in the research, and publisher. Entries are arranged under such headings as alternative fuels, automotive technology/emissions, commercial vehicles, ride sharing, municipal energy, rail technology, traffic and decisions systems, transportation control technology, and transportation energy management.

  12. Catalogue of Risks: Natural, Technical, Social and Health Risks

    NASA Astrophysics Data System (ADS)

    Ebi, Kristie L.

    2009-01-01

    Financial, geophysical, and terrorist-related disasters have been headline news in the past few months. As amply demonstrated on a regular basis, the recognition and evaluation of risks are skills that could be more widespread. As such, Proske's Catalogue of Risks is timely and of potential interest. The book is a revised and expanded version of an earlier German publication that aims to provide an encyclopedic discussion of issues related to risks and disasters, with a goal of facilitating an understanding of the components and assessment of risk. The book includes chapters that discuss the difficulty of coming to a consensus on a definition of risk, a comprehensive range of risks and disasters, objective risk measures, subjective risk judgment, quality of life measures, and legal aspects of risk. The book ends with an example of applying the concepts discussed to ship impacts against bridges.

  13. Documentation for the machine-readable character coded version of the SKYMAP catalogue

    NASA Technical Reports Server (NTRS)

    Warren, W. H., Jr.

    1981-01-01

    The SKYMAP catalogue is a compilation of astronomical data prepared primarily for purposes of attitude guidance for satellites. In addition to the SKYMAP Master Catalogue data base, a software package of data base management and utility programs is available. The tape version of the SKYMAP Catalogue, as received by the Astronomical Data Center (ADC), contains logical records consisting of a combination of binary and EBCDIC data. Certain character coded data in each record are redundant in that the same data are present in binary form. In order to facilitate wider use of all SKYMAP data by the astronomical community, a formatted (character) version was prepared by eliminating all redundant character data and converting all binary data to character form. The character version of the catalogue is described. The document is intended to fully describe the formatted tape so that users can process the data without problems and guesswork; it should be distributed with any character version of the catalogue.

  14. Charter School Replication. Policy Guide

    ERIC Educational Resources Information Center

    Rhim, Lauren Morando

    2009-01-01

    "Replication" is the practice of a single charter school board or management organization opening several more schools that are each based on the same school model. The most rapid strategy to increase the number of new high-quality charter schools available to children is to encourage the replication of existing quality schools. This policy guide…

  15. A new reference global instrumental earthquake catalogue (1900-2009)

    NASA Astrophysics Data System (ADS)

    Di Giacomo, D.; Engdahl, B.; Bondar, I.; Storchak, D. A.; Villasenor, A.; Bormann, P.; Lee, W.; Dando, B.; Harris, J.

    2011-12-01

    For seismic hazard studies on a global and/or regional scale, accurate knowledge of the spatial distribution of seismicity, the magnitude-frequency relation and the maximum magnitudes is of fundamental importance. However, such information is normally not homogeneous (or not available) for the various seismically active regions of the Earth. To achieve the GEM objectives (www.globalquakemodel.org) of calculating and communicating earthquake risk worldwide, an improved reference global instrumental catalogue for large earthquakes spanning the entire 100+ years period of instrumental seismology is an absolute necessity. To accomplish this task, we apply the most up-to-date techniques and standard observatory practices for computing the earthquake location and magnitude. In particular, the re-location procedure benefits both from the depth determination according to Engdahl and Villaseñor (2002), and the advanced technique recently implemented at the ISC (Bondár and Storchak, 2011) to account for correlated error structure. With regard to magnitude, starting from the re-located hypocenters, the classical surface and body-wave magnitudes are determined following the new IASPEI standards and by using amplitude-period data of phases collected from historical station bulletins (up to 1970), which were not available in digital format before the beginning of this work. Finally, the catalogue will provide moment magnitude values (including uncertainty) for each seismic event via seismic moment, via surface wave magnitude or via other magnitude types using empirical relationships. References Engdahl, E.R., and A. Villaseñor (2002). Global seismicity: 1900-1999. In: International Handbook of Earthquake and Engineering Seismology, eds. W.H.K. Lee, H. Kanamori, J.C. Jennings, and C. Kisslinger, Part A, 665-690, Academic Press, San Diego. Bondár, I., and D. Storchak (2011). Improved location procedures at the International Seismological Centre, Geophys. J. Int., doi:10.1111/j

  16. SCEC Community Modeling Environment (SCEC/CME) - Data and Metadata Management Issues

    NASA Astrophysics Data System (ADS)

    Minster, J.; Faerman, M.; Ely, G.; Maechling, P.; Gupta, A.; Xin, Q.; Kremenek, G.; Shkoller, B.; Olsen, K.; Day, S.; Moore, R.

    2003-12-01

    One of the goals of the SCEC Community Modeling Environment is to facilitate the execution of substantial collections of large numerical simulations. Since such simulations are resource-intensive, and can generate extremely large outputs, implementing this concept raises a host of data and metadata management challenges. Due to the high computational cost involved in running these simulations, one must balance the cost of repeating such simulations against the burden of archiving the produced datasets making them accessible for future use such as post processing or visualization, without the need of re-computation. Further, a carefully selected collection of such data sets might be used as benchmarks for assessing accuracy and performance of future simulations, developing post-processing software such as visualization tools, and testing data and metadata management strategies. The problem is rapidly compounded if one contemplates the possibility of computing ensemble averages for simulations of complex nonlinear systems. The definition and organization of a complete set of metadata to describe fully any given simulation is a surprisingly complex task, which we approach from the point of view of developing a community digital library, which provides the means to organize the material, as well as standard metadata attributes. Web-based discovery mechanisms are then used to support browsing and retrieval of data. A key component is the selection of appropriate descriptive metadata. We compare existing metadata standards from the digital library community, federal standards, and discipline specific metadata attributes. The digital library community has developed a standard for organizing metadata, called the Metadata Encoding and Transmission Standard (METS). This schema supports descriptive (provenance), administrative (location), structural (component relationships), and behavioral (display and manipulation applications). The organization can be augmented with

  17. Regulating DNA Replication in Plants

    PubMed Central

    Sanchez, Maria de la Paz; Costas, Celina; Sequeira-Mendes, Joana; Gutierrez, Crisanto

    2012-01-01

    Chromosomal DNA replication in plants has requirements and constraints similar to those in other eukaryotes. However, some aspects are plant-specific. Studies of DNA replication control in plants, which have unique developmental strategies, can offer unparalleled opportunities of comparing regulatory processes with yeast and, particularly, metazoa to identify common trends and basic rules. In addition to the comparative molecular and biochemical studies, genomic studies in plants that started with Arabidopsis thaliana in the year 2000 have now expanded to several dozens of species. This, together with the applicability of genomic approaches and the availability of a large collection of mutants, underscores the enormous potential to study DNA replication control in a whole developing organism. Recent advances in this field with particular focus on the DNA replication proteins, the nature of replication origins and their epigenetic landscape, and the control of endoreplication will be reviewed. PMID:23209151

  18. Fresh Wounds: Metadata and Usability Lessons from building the Earthdata Search Client

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Murphy, K. J.; Baynes, K.

    2014-12-01

    Data discovery and accessibility are frequent topics in science conferences but are usually discussed in an abstract XML schema kind-of way. In the course of designing and building the NASA Earthdata Search Client, a "concept-car" discovery client for the new Common Metadata Repository (CMR) and NASA Earthdata, we learned important lessons about usability from user studies and our actual use of science metadata. In this talk we challenge the community with the issues we ran into: the critical usability stumbling blocks for even seasoned researchers, "bug reports" from users that were ultimately usability problems in metadata, the challenges and questions that arise from incorporating "visual metadata", and the state of data access services. We intend to show that high quality metadata and real human usability factors are essential to making critical data accessible.

  19. Planck 2015 results. XXVIII. The Planck Catalogue of Galactic cold clumps

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Catalano, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; McGehee, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Pelkonen, V.-M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-08-01

    We present the Planck Catalogue of Galactic Cold Clumps (PGCC), an all-sky catalogue of Galactic cold clump candidates detected by Planck. This catalogue is the full version of the Early Cold Core (ECC) catalogue, which was made available in 2011 with the Early Release Compact Source Catalogue (ERCSC) and which contained 915 high signal-to-noise sources. It is based on the Planck 48-month mission data that are currently being released to the astronomical community. The PGCC catalogue is an observational catalogue consisting exclusively of Galactic cold sources. The three highest Planck bands (857, 545, and 353 GHz) have been combined with IRAS data at 3 THz to perform a multi-frequency detection of sources colder than their local environment. After rejection of possible extragalactic contaminants, the PGCC catalogue contains 13188 Galactic sources spread across the whole sky, i.e., from the Galactic plane to high latitudes, following the spatial distribution of the main molecular cloud complexes. The median temperature of PGCC sources lies between 13 and 14.5 K, depending on the quality of the flux density measurements, with a temperature ranging from 5.8 to 20 K after removing the sources with the top 1% highest temperature estimates. Using seven independent methods, reliable distance estimates have been obtained for 5574 sources, which allows us to derive their physical properties such as their mass, physical size, mean density, and luminosity. The PGCC sources are located mainly in the solar neighbourhood, but also up to a distance of 10.5 kpc in the direction of the Galactic centre, and range from low-mass cores to large molecular clouds. Because of this diversity and because the PGCC catalogue contains sources in very different environments, the catalogue is useful for investigating the evolution from molecular clouds to cores. Finally, it also includes 54 additional sources located in the Small and Large Magellanic Clouds.

  20. Integrating XQuery-Enabled SCORM XML Metadata Repositories into an RDF-Based E-Learning P2P Network

    ERIC Educational Resources Information Center

    Qu, Changtao; Nejdl, Wolfgang

    2004-01-01

    Edutella is an RDF-based E-Learning P2P network that is aimed to accommodate heterogeneous learning resource metadata repositories in a P2P manner and further facilitate the exchange of metadata between these repositories based on RDF. Whereas Edutella provides RDF metadata repositories with a quite natural integration approach, XML metadata…

  1. A Semantically Enabled Metadata Repository for Solar Irradiance Data Products

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Cox, M.; Lindholm, D. M.; Nadiadi, I.; Traver, T.

    2014-12-01

    The Laboratory for Atmospheric and Space Physics, LASP, has been conducting research in Atmospheric and Space science for over 60 years, and providing the associated data products to the public. LASP has a long history, in particular, of making space-based measurements of the solar irradiance, which serves as crucial input to several areas of scientific research, including solar-terrestrial interactions, atmospheric science, and climate. LISIRD, the LASP Interactive Solar Irradiance Data Center, serves these datasets to the public, including solar spectral irradiance (SSI) and total solar irradiance (TSI) data. The LASP extended metadata repository, LEMR, is a database of information about the datasets served by LASP, such as parameters, uncertainties, temporal and spectral ranges, current version, alerts, etc. It serves as the definitive, single source of truth for that information. The database is populated with information garnered via web forms and automated processes. Dataset owners keep the information current and verified for datasets under their purview. This information can be pulled dynamically for many purposes. Web sites such as LISIRD can include this information in web page content as it is rendered, ensuring users get current, accurate information. It can also be pulled to create metadata records in various metadata formats, such as SPASE (for heliophysics) and ISO 19115. Once these records are made available to the appropriate registries, our data will be discoverable by users coming in via those organizations. The database is implemented as an RDF triplestore, a collection of instances of subject-object-predicate data entities identifiable with a URI. This capability coupled with SPARQL over HTTP read access enables semantic queries over the repository contents. To create the repository we leveraged VIVO, an open source semantic web application, to manage and create new ontologies and populate repository content. A variety of ontologies were used in
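
    To make the "SPARQL over HTTP" access pattern mentioned above concrete, here is a minimal sketch of querying a metadata triplestore from Python. The endpoint URL and the ontology terms (IrradianceDataset, currentVersion, temporalStart/temporalEnd) are hypothetical placeholders, not the actual LEMR/VIVO vocabulary.

      import json
      import urllib.parse
      import urllib.request

      SPARQL_ENDPOINT = "https://example.org/lemr/sparql"  # hypothetical endpoint URL

      QUERY = """
      PREFIX ex: <http://example.org/metadata#>
      SELECT ?dataset ?version ?start ?end
      WHERE {
        ?dataset a ex:IrradianceDataset ;
                 ex:currentVersion ?version ;
                 ex:temporalStart ?start ;
                 ex:temporalEnd ?end .
      }
      LIMIT 10
      """

      def run_sparql(endpoint, query):
          """Send a SPARQL SELECT over HTTP GET and return the JSON result bindings."""
          url = endpoint + "?" + urllib.parse.urlencode({"query": query})
          req = urllib.request.Request(url, headers={"Accept": "application/sparql-results+json"})
          with urllib.request.urlopen(req) as resp:
              return json.load(resp)["results"]["bindings"]

      for row in run_sparql(SPARQL_ENDPOINT, QUERY):
          print(row["dataset"]["value"], row["version"]["value"])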

  2. VizieR Online Data Catalog: UltraVISTA Catalogue Release DR1 (McCracken+, 2012)

    NASA Astrophysics Data System (ADS)

    McCracken, H. J.; Milvang-Jensen, B.; Dunlop, J.; Franx, M.; Fynbo, J. P. U.; Le Fevre, O.; Holt, J.; Caputi, K. I.; Goranova, Y.; Buitrago, F.; Emerson, J. P.; Freudling, W.; Hudelot, P.; Lopez-Sanjuan, C.; Magnard, F.; Mellier, Y.; Moller, P.; Nilsson, K. K.; Sutherland, W.; Tasca, L.; Zabl, J.

    2012-08-01

    Matched source catalogue prepared for the first UltraVISTA data release (DR1). To the 5σ limit, our Ks-selected catalogue contains 216268 sources observed in the Y, J, H and Ks bands over the full UltraVISTA deep area (~1.3deg2), with NB118 observation over the "ultra-deep stripes" area. (2 data files).

  3. Metadata-driven Ad Hoc Query of Patient Data

    PubMed Central

    Deshpande, Aniruddha M.; Brandt, Cynthia; Nadkarni, Prakash M.

    2002-01-01

    Clinical study data management systems (CSDMSs) have many similarities to clinical patient record systems (CPRSs) in their focus on recording clinical parameters. Requirements for ad hoc query interfaces for both systems would therefore appear to be highly similar. However, a clinical study is concerned primarily with collective responses of groups of subjects to standardized therapeutic interventions for the same underlying clinical condition. The parameters that are recorded in CSDMSs tend to be more diverse than those required for patient management in non-research settings, because of the greater emphasis on questionnaires for which responses to each question are recorded separately. The differences between CSDMSs and CPRSs are reflected in the metadata that support the respective systems' operation, and need to be reflected in the query interfaces. The authors describe major revisions of their previously described CSDMS ad hoc query interface to meet CSDMS needs more fully, as well as its porting to a Web-based platform. PMID:12087118

  4. Metadata behind the Interoperability of Wireless Sensor Networks.

    PubMed

    Ballari, Daniela; Wachowicz, Monica; Callejo, Miguel Angel Manso

    2009-01-01

    Wireless Sensor Networks (WSNs) produce changes of status that are frequent, dynamic and unpredictable, and cannot be represented using a linear cause-effect approach. Consequently, a new approach is needed to handle these changes in order to support dynamic interoperability. Our approach is to introduce the notion of context as an explicit representation of changes of a WSN status inferred from metadata elements, which in turn, leads towards a decision-making process about how to maintain dynamic interoperability. This paper describes the developed context model to represent and reason over different WSN status based on four types of contexts, which have been identified as sensing, node, network and organisational contexts. The reasoning has been addressed by developing contextualising and bridging rules. As a result, we were able to demonstrate how contextualising rules have been used to reason on changes of WSN status as a first step towards maintaining dynamic interoperability.
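
    A toy sketch of what a "contextualising rule" might look like in code follows; the metadata fields, thresholds, and context labels are invented for illustration and are not taken from the cited context model.

      from dataclasses import dataclass

      @dataclass
      class NodeMetadata:
          battery_level: float   # 0.0 - 1.0
          last_report_s: int     # seconds since the last observation arrived
          link_quality: float    # 0.0 - 1.0

      def node_context(md: NodeMetadata) -> str:
          """Apply simple if-then contextualising rules to a node's metadata."""
          if md.battery_level < 0.1:
              return "node:low-power"
          if md.last_report_s > 600:
              return "node:unresponsive"
          if md.link_quality < 0.3:
              return "network:degraded-link"
          return "node:nominal"

      print(node_context(NodeMetadata(battery_level=0.05, last_report_s=30, link_quality=0.9)))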

  5. Metadata behind the Interoperability of Wireless Sensor Networks

    PubMed Central

    Ballari, Daniela; Wachowicz, Monica; Callejo, Miguel Angel Manso

    2009-01-01

    Wireless Sensor Networks (WSNs) produce changes of status that are frequent, dynamic and unpredictable, and cannot be represented using a linear cause-effect approach. Consequently, a new approach is needed to handle these changes in order to support dynamic interoperability. Our approach is to introduce the notion of context as an explicit representation of changes of a WSN status inferred from metadata elements, which in turn, leads towards a decision-making process about how to maintain dynamic interoperability. This paper describes the developed context model to represent and reason over different WSN status based on four types of contexts, which have been identified as sensing, node, network and organisational contexts. The reasoning has been addressed by developing contextualising and bridging rules. As a result, we were able to demonstrate how contextualising rules have been used to reason on changes of WSN status as a first step towards maintaining dynamic interoperability. PMID:22412330

  6. An integrated content and metadata based retrieval system for art.

    PubMed

    Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James

    2004-03-01

    A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.

  7. Arctic Data Explorer: A Rich Solr Powered Metadata Search Portal

    NASA Astrophysics Data System (ADS)

    Liu, M.; Truslove, I.; Yarmey, L.; Lopez, L.; Reed, S. A.; Brandt, M.

    2013-12-01

    The Advanced Cooperative Arctic Data and Information Service (ACADIS) manages data and is the gateway for all relevant Arctic physical, life, and social science data for the Arctic Sciences (ARC) research community. Arctic Data Explorer (ADE), developed by the National Snow and Ice Data Center (NSIDC) under the ACADIS umbrella, is a data portal that provides users the ability to search across multiple Arctic data catalogs rapidly and precisely. In order to help users quickly find the data they are interested in, we provided a simple search interface -- a search box with spatial and temporal options. The core of the interface is a 'google-like' single search box with logic to handle complex queries behind the scenes. ACADIS collects all metadata through the GI-Cat metadata broker service and indexes it in Solr. The single search box is implemented as a text based search utilizing the powerful tools provided by Solr. In this poster, we briefly explain Solr's indexing and searching capabilities. Several examples are presented to illustrate the rich search functionality the simple search box supports. Then we dive into the implementation details, such as how phrase, wildcard, range, fuzzy, and special query term handling was integrated into ADE search. To provide our users the most relevant answers to their queries as quickly as possible, we worked with the Advisory Committee and the expanding Arctic User Community (scientists and data experts) to collect feedback to improve the search results and adjust the relevance/ranking logic to return more precise search results. The poster has specific examples on how we tuned the relevance ranking to achieve higher quality search results. A planned feature is to provide dataset recommendations based on the user's current search history. Both collaborative filtering and content-based approaches were considered and researched. A feasible solution is proposed based on the content-based approach.
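
    The sketch below illustrates, under assumed field names and an assumed local Solr core, how a single free-text box can be expanded into Solr phrase, wildcard, fuzzy, and range query syntax of the kind the abstract mentions; it is not the ADE implementation.

      import json
      import urllib.parse
      import urllib.request

      SOLR_SELECT = "http://localhost:8983/solr/metadata/select"  # hypothetical Solr core

      def build_query(user_text: str) -> str:
          """Turn raw user input into a Lucene/Solr query string."""
          terms = user_text.split()
          if len(terms) > 1:
              # Exact phrase match on the (assumed) title field, with fuzzy single-term fallback.
              return f'title:"{user_text}" OR {" AND ".join(t + "~1" for t in terms)}'
          # Wildcard prefix plus fuzzy match for a single term.
          return f"{terms[0]}* OR {terms[0]}~1"

      def search(q: str, start_year: int, end_year: int) -> dict:
          params = urllib.parse.urlencode({
              "q": q,
              # Range filter on an assumed temporal field.
              "fq": f"temporal_start:[{start_year}-01-01T00:00:00Z TO {end_year}-12-31T23:59:59Z]",
              "wt": "json",
              "rows": 10,
          })
          with urllib.request.urlopen(f"{SOLR_SELECT}?{params}") as resp:
              return json.load(resp)

      print(build_query("sea ice extent"))
      # search(build_query("sea ice extent"), 2000, 2012)  # requires a running Solr instance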

  8. Provenance Tracking for Earth Science Data and Its Metadata Representation

    NASA Astrophysics Data System (ADS)

    Barkstrom, B. R.

    2007-12-01

    In many cases Earth science data production involves long and complex chains of processes that accept files as input and create new files. It can be demonstrated that these chains form a mathematical graph in which the files and processes are the vertices and the relations between the files and vertices form the edges. There are four types of edges, which can be represented by the relations "this file was produced by that process," "this file is used by those processes," "this process needs these files for input," and "this process produces those files as output." Because Earth science data production often involves using previous data for statistical quality control, provenance graphs can be very large. For example, if previous data are used to develop statistics of clear sky radiances, a particular file may depend on statistics collected on many months of data. For EOS data or for the upcoming NPP and NPOESS missions, the number of files being ingested per day can be in the range of 10,000 to 100,000. As a result, the number of vertices and links can easily be in the millions to hundreds of millions of objects. This suggests that routine inclusion of complete provenance graphs in single files may be prohibitively voluminous, although the increasingly stringent requirements for provenance tracking require maintenance of the information from which the graph can be reliably constructed. The fact that provenance tracking requires traversal of the vertices and edges of the graph makes it difficult to uniquely fit into eXtensible Markup Language (XML). It also makes the construction of the graph difficult to do in standard Structured Query Language (SQL) because the tables for representing the graph require recursive queries. Both of these difficulties require care in constructing the data structures and the software that stores and makes the metadata accessible. This paper will then discuss the representation of this structure in metadata, including the possibilities
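
    Since the abstract notes that representing this graph in SQL requires recursive queries, the following small sketch shows one way to store the "produced by"/"used by" relations and walk a file's ancestry with a recursive common table expression. The table names and the three-file example are invented for illustration and do not reflect any actual EOS or NPP system.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE produced_by (file TEXT, process TEXT);   -- "this file was produced by that process"
      CREATE TABLE used_by     (file TEXT, process TEXT);   -- "this file is used by that process"
      INSERT INTO produced_by VALUES ('L1_granule', 'calibrate'), ('L2_product', 'retrieve');
      INSERT INTO used_by     VALUES ('raw_packet', 'calibrate'), ('L1_granule', 'retrieve');
      """)

      # Recursive CTE: collect every ancestor file that the given file depends on.
      ancestors = conn.execute("""
      WITH RECURSIVE lineage(file) AS (
          SELECT ?
          UNION
          SELECT u.file
          FROM lineage l
          JOIN produced_by p ON p.file = l.file        -- the process that made this file
          JOIN used_by u     ON u.process = p.process  -- the files that process consumed
      )
      SELECT file FROM lineage;
      """, ("L2_product",)).fetchall()

      print(sorted(row[0] for row in ancestors))  # ['L1_granule', 'L2_product', 'raw_packet']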

  9. VizieR Online Data Catalog: Tycho Input Catalogue, Revised version (Egret+ 1992)

    NASA Astrophysics Data System (ADS)

    Egret, D.; Didelon, P.; McLean, B. J.; Russell, J. L.; Turon, C.

    1994-11-01

    A Tycho Input Catalogue of three million stars brighter than V=12.1 has been prepared, for the needs of the Tycho mission (Hipparcos satellite). This catalogue results from the cross-matching of a subset of the Hubble Space Telescope Guide Star Catalog with the Hipparcos INCA database. References to these major catalogues, and details about the cross-matching procedures are to be found in the paper published in Astron. Astrophys. 258, 217-222 (May 1992). Among the 3,154,204 stars of the Tycho Input Catalogue, only a bit more than 1 million will appear in the final Tycho catalogue. A preliminary selection was done in the Recognition process, that was based on the first year of the satellite scientific mission (Halbwachs et al., =1994A&A...281L..25H). 1,049,971 stars were thus selected, and are flagged in this version of the Tycho Input Catalogue. The main file contains 3 154 204 records of 80 characters (total size: 256 Mbytes). It is split into four files tic1 to tic4 for easier manipulations. An annex file contains the following additional data for a subset of the stars: (a) the cross-identification with the Hipparcos Input Catalogue (117 778 records, flag 26) (b) the cross-matching with the INCA database (217 625 records, flag 20). The annex file contains 217 625 records (64 char., 14 Mbytes). (5 data files).

  10. Overview and stellar statistics of the expected Gaia Catalogue using the Gaia Object Generator

    NASA Astrophysics Data System (ADS)

    Luri, X.; Palmer, M.; Arenou, F.; Masana, E.; de Bruijne, J.; Antiche, E.; Babusiaux, C.; Borrachero, R.; Sartoretti, P.; Julbe, F.; Isasi, Y.; Martinez, O.; Robin, A. C.; Reylé, C.; Jordi, C.; Carrasco, J. M.

    2014-06-01

    Aims: An effort has been made to simulate the expected Gaia Catalogue, including the effect of observational errors. We statistically analyse this simulated Gaia data to better understand what can be obtained from the Gaia astrometric mission. This catalogue is used to investigate the potential yield in astrometric, photometric, and spectroscopic information and the extent and effect of observational errors on the true Gaia Catalogue. This article is a follow-up to previous work, where the expected Gaia Catalogue content was reviewed but without the simulation of observational errors. Methods: We analysed the Gaia Object Generator (GOG) catalogue using the Gaia Analysis Tool (GAT), thereby producing a number of statistics about the catalogue. Results: A simulated catalogue of one billion objects is presented, with detailed information on the 523 million individual single stars it contains. Detailed information is provided for the expected errors in parallax, position, proper motion, radial velocity, and the photometry in the four Gaia bands. Information is also given on the expected performance of physical parameter determination, including temperature, metallicity, and line-of-sight extinction.

  11. The MCXC: a meta-catalogue of x-ray detected clusters of galaxies

    NASA Astrophysics Data System (ADS)

    Piffaretti, R.; Arnaud, M.; Pratt, G. W.; Pointecouteau, E.; Melin, J.-B.

    2011-10-01

    We present the compilation and properties of a meta-catalogue of X-ray detected clusters of galaxies, the MCXC. This very large catalogue is based on publicly available ROSAT All Sky Survey-based (NORAS, REFLEX, BCS, SGP, NEP, MACS, and CIZA) and serendipitous (160SD, 400SD, SHARC, WARPS, and EMSS) cluster catalogues. Data have been systematically homogenised to an overdensity of 500, and duplicate entries from overlaps between the survey areas of the individual input catalogues are carefully handled. The MCXC comprises 1743 clusters with virtually no duplicate entries. For each cluster the MCXC provides three identifiers, a redshift, coordinates, membership in the original catalogue, and standardised 0.1-2.4 keV band luminosity L500, total mass M500, and radius R500. The meta-catalogue additionally furnishes information on overlaps between the input catalogues and the luminosity ratios when measurements from different surveys are available, and gives notes on individual objects. The MCXC is available in electronic format for maximum usefulness in X-ray, SZ, and other multiwavelength studies. The catalog is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/534/A109

  12. Dynamic Steering for Improved Sensor Autonomy and Catalogue Maintenance

    NASA Astrophysics Data System (ADS)

    Hobson, T.; Gordon, N.; Clarkson, I.; Rutten, M.; Bessell, T.

    A number of international agencies endeavour to maintain catalogues of the man-made resident space objects (RSOs) currently orbiting the Earth. Such catalogues are primarily created to anticipate and avoid destructive collisions involving important space assets such as manned missions and active satellites. An agency's ability to achieve this objective is dependent on the accuracy, reliability and timeliness of the information used to update its catalogue. A primary means for gathering this information is by regularly making direct observations of the tens-of-thousands of currently detectable RSOs via networks of space surveillance sensors. But operational constraints sometimes prevent accurate and timely reacquisition of all known RSOs, which can cause them to become lost to the tracking system. Furthermore, when comprehensive acquisition of new objects does not occur, these objects, in addition to the lost RSOs, result in uncorrelated detections when next observed. Due to the rising number of space missions and the introduction of newer, more capable space sensors, the number of uncorrelated targets is at an all-time high. The process of differentiating uncorrelated detections caused by once-acquired now-lost RSOs from newly detected RSOs is a difficult and often labour intensive task. Current methods for overcoming this challenge focus on advancements in orbit propagation and object characterisation to improve prediction accuracy and target identification. In this paper, we describe a complementary approach that incorporates increased awareness of error and failed observations into the RSO tracking solution. Our methodology employs a technique called dynamic steering to improve the autonomy and capability of a space surveillance network's steerable sensors. By co-situating each sensor with a low-cost high-performance computer, the steerable sensor can quickly and intelligently decide how to steer itself. The sensor-system uses a dedicated parallel

  13. The key to enduring access: Cross-complex Metadata collaboration. Revision 1

    SciTech Connect

    Lownsbery, B.; Newton, H.; Ringe, A.

    1996-08-01

    The Nuclear Weapons Information Group (NWIG) is a voluntary collaborative effort of government organizations involved in nuclear weapons research, development, production, and testing. Standardized metadata is seen as critical to the locating, accessing, and effective use of the data, information, and knowledge of both past and future weapons activities. This paper will describe the activities of the NWIG Metadata Working Group in developing the metadata elements and authorities which will be used to share information about data stored in computers and vaults across the complex. With the current lack of secure network connectivity, it is impossible to have distributed access. Therefore we have focused on standardizing the form and content of shared metadata. We have adopted an SGML-based neutral exchange form that is completely independent of how the metadata is created and how it will be used. Our efforts have included the definition of a set of metadata elements that can be applied to all data types and additional attributes specific to each data type, such as documents, drawings, radiographs, photos, movies, etc. We have developed a common subject categorization taxonomy and identified several subsets of a standard glossary and thesaurus for inclusion in the metadata to provide consistency of terminology and the capability to link back to the full thesaurus. 2 refs., 2 figs., 2 tabs.

  14. Foundations of a metadata repository for databases of registers and trials.

    PubMed

    Stausberg, Jürgen; Löbe, Matthias; Verplancke, Philippe; Drepper, Johannes; Herre, Heinrich; Löffler, Markus

    2009-01-01

    The planning of case report forms (CRFs) in clinical trials or databases in registers is mostly an informal process starting from scratch involving domain experts, biometricians, and documentation specialists. The Telematikplattform für Medizinische Forschungsnetze, an umbrella organization for medical research in Germany, aims at supporting and improving this process with a metadata repository, covering the variables and value lists used in databases of registers and trials. The use cases for the metadata repository range from a specification of case report forms to the harmonization of variable collections, variables, and value lists through a formal review. The warehouse used for the storage of the metadata should at least fulfill the definition of part 3 "Registry metamodel and basic attributes" of ISO/IEC 11179 Information technology - Metadata registries. An implementation of the metadata repository should offer an import and export of metadata in the Operational Data Model standard of the Clinical Data Interchange Standards Consortium. It will facilitate the creation of CRFs and data models, improve the quality of CRFs and data models, support the harmonization of variables and value lists, and support the mapping of metadata and data. PMID:19745342

  15. The Planck Catalogue of High-z source candidates

    NASA Astrophysics Data System (ADS)

    Montier, Ludovic

    2015-08-01

    The Planck satellite has provided the first FIR/submm all-sky survey with a sensitivity allowing us to identify the rarest, most luminous high-z dusty star-forming sources on the sky. It opens a new window on these extreme star-forming systems at redshift above 1.5, providing a powerful laboratory to study the mechanisms of galaxy evolution and enrichment in the frame of large-scale structure growth. I will describe how the Planck catalogue of high-z source candidates (PHz, Planck 2015 in prep.) has been built and characterized over 25% of the sky by selecting the brightest red submm sources at a 5' resolution. Follow-up observations with Herschel/SPIRE over 228 Planck candidates have shown that 93% of these candidates are actually overdensities of red sources with SEDs peaking at 350um (Planck Int. results. XXVII 2014). Complementarily to this population of objects, 12 Planck high-z candidates have been identified as strongly lensed star forming galaxies at redshift lying between 2.2 and 3.6 (Canameras et al 2015 subm.), with flux densities larger than 400 mJy up to 1 Jy at 350um, and strong magnification factors. These Planck lensed star-forming galaxies are the rarest and brightest lensed sources in the submm range, providing a unique opportunity to extend the exploration of star-forming systems in this range of mass and redshift. I will detail further a specific analysis performed on a proto-cluster candidate, PHz G95.5-61.6, identified as a double structure at z=1.7 and z=2.03, using an extensive follow-up program (Flores-Cacho et al 2015 subm.). This is the first Planck proto-cluster candidate with spectroscopic confirmation, which opens a new field of statistical analysis about the evolution of dusty star-forming galaxies in such accreting structures. I will finally discuss how the PHz catalogue may help to answer some of the fundamental questions like: At what cosmic epoch did massive galaxy clusters form most of their stars? Is star formation more or less vigorous

  16. Value of Hipparcos Catalogue shown by planet assessments

    NASA Astrophysics Data System (ADS)

    1996-08-01

    , or deuterium. Even the "worst-case" mass quoted here for the companion of 47 Ursae Majoris, 22 Jupiter masses, is only a maximum, not a measurement. So the companion is almost certainly a true planet with less than 17 times the mass of Jupiter. For the star 70 Virginis, the distance newly established by Hipparcos is 59 light-years. Even on the least favourable assumptions about its orbit, the companion cannot have more than 65 Jupiter masses. It could be a brown dwarf rather than a planet, but not a true star. Much more ambiguous is the result for 51 Pegasi. Its distance is 50 light-years and theoretically the companion could have more than 500 Jupiter masses, or half the mass of the Sun. This is a peculiar case anyway, because the companion is very close to 51 Pegasi. Small planets of the size of the Earth might be more promising as abodes of life than the large planets detectable by present astronomical methods. Space scientists are now reviewing methods of detecting the presence of life on alien planets via the infrared signature of ozone in a planet's atmosphere. Ozone is a by-product of oxygen gas, which in turn is supposed to be generated only by life similar to that on the Earth. Meanwhile the detection of planets of whatever size is a tour de force for astronomers, and by analogy with the Solar System one may suppose that large planets are often likely to be accompanied by smaller ones. "Hipparcos was not conceived to look for planets," comments Michael Perryman, ESA's project scientist for Hipparcos, "and this example of assistance to our fellow-astronomers involves a very small sample of our measurements. But it is a timely result when we are considering planet-hunting missions for the 21st Century. The possibilities include a super-Hipparcos that could detect directly the wobbles in nearby stars due to the presence of planets." Hipparcos Catalogue ready for use The result from Hipparcos on alien planets coincides with the completion of the Hipparcos

  17. Catalogue of Geadephaga (Coleoptera, Adephaga) of America, north of Mexico

    PubMed Central

    Bousquet, Yves

    2012-01-01

    All scientific names of Trachypachidae, Rhysodidae, and Carabidae (including cicindelines) recorded from America north of Mexico are catalogued. Available species-group names are listed in their original combinations with the author(s), year of publication, page citation, type locality, location of the name-bearing type, and etymology for many patronymic names. In addition, the reference in which a given species-group name is first synonymized is recorded for invalid taxa. Genus-group names are listed with the author(s), year of publication, page citation, type species with way of fixation, and etymology for most. The reference in which a given genus-group name is first synonymized is recorded for many invalid taxa. Family-group names are listed with the author(s), year of publication, page citation, and type genus. The geographical distribution of all species-group taxa is briefly summarized and their state and province records are indicated. One new genus-group taxon, Randallius new subgenus (type species: Chlaenius purpuricollis Randall, 1838), one new replacement name, Pterostichus amadeus new name for Pterostichus vexatus Bousquet, 1985, and three changes in precedence, Ellipsoptera rubicunda (Harris, 1911) for Ellipsoptera marutha (Dow, 1911), Badister micans LeConte, 1844 for Badister ocularis Casey, 1920, and Agonum deplanatum Ménétriés, 1843 for Agonum fallianum (Leng, 1919), are proposed. Five new genus-group synonymies and 65 new species-group synonymies, one new species-group status, and 12 new combinations (see Appendix 5) are established. The work also includes a discussion of the notable private North American carabid collections, a synopsis of all extant world geadephagan tribes and subfamilies, a brief faunistic assessment of the fauna, a list of valid species-group taxa, a list of North American fossil Geadephaga (Appendix 1), a list of North American Geadephaga larvae described or illustrated (Appendix 2), a list of Geadephaga species

  18. The Gaia-ESO Survey: Catalogue of Hα emission stars

    NASA Astrophysics Data System (ADS)

    Traven, G.; Zwitter, T.; Van Eck, S.; Klutsch, A.; Bonito, R.; Lanzafame, A. C.; Alfaro, E. J.; Bayo, A.; Bragaglia, A.; Costado, M. T.; Damiani, F.; Flaccomio, E.; Frasca, A.; Hourihane, A.; Jimenez-Esteban, F.; Lardo, C.; Morbidelli, L.; Pancino, E.; Prisinzano, L.; Sacco, G. G.; Worley, C. C.

    2015-09-01

    We discuss the properties of Hα emission stars across the sample of 22035 spectra from the Gaia-ESO Survey internal data release, observed with the GIRAFFE instrument and largely belonging to stars in young open clusters. Automated fits using two independent Gaussian profiles and a third component that accounts for the nebular emission allow us to discern distinct morphological types of Hα line profiles with the introduction of a simplified classification scheme. All in all, we find 3765 stars with intrinsic emission and sort their spectra into eight distinct morphological categories: single-component emission, emission blend, sharp emission peaks, double emission, P-Cygni, inverted P-Cygni, self-absorption, and emission in absorption. We have more than one observation for 1430 stars in our sample, thus allowing a quantitative discussion of the degree of variability of Hα emission profiles, which is expected for young, active objects. We present a catalogue of stars with properties of their Hα emission line profiles, morphological classification, analysis of variability with time and the supplementary information from the SIMBAD, VizieR, and ADS databases. The records in SIMBAD indicate the presence of Hα emission for roughly 25% of all stars in our catalogue, while at least 305 of them have already been more thoroughly investigated according to the references in ADS. The most frequently identified morphological categories in our sample of spectra are emission blend (23%), emission in absorption (22%), and self-absorption (16%). Objects with repeated observations demonstrate that our classification into discrete categories is generally stable through time, but categories P-Cygni and self-absorption seem less stable, which is the consequence of discrete classification rules, as well as of the fundamental change in profile shape. Such records of emission stars can be valuable for automatic pipelines in large surveys, where it may prove very useful for

  19. DNA recombination: the replication connection.

    PubMed

    Haber, J E

    1999-07-01

    Chromosomal double-strand breaks (DSBs) arise after exposure to ionizing radiation or enzymatic cleavage, but especially during the process of DNA replication itself. Homologous recombination plays a critical role in repair of such DSBs. There has been significant progress in our understanding of two processes that occur in DSB repair: gene conversion and recombination-dependent DNA replication. Recent evidence suggests that gene conversion and break-induced replication are related processes that both begin with the establishment of a replication fork in which both leading- and lagging-strand synthesis occur. There has also been much progress in characterization of the biochemical roles of recombination proteins that are highly conserved from yeast to humans.

  20. Metadata improvements driving new tools and services at a NASA data center

    NASA Astrophysics Data System (ADS)

    Moroni, D. F.; Hausman, J.; Foti, G.; Armstrong, E. M.

    2011-12-01

    The NASA Physical Oceanography DAAC (PO.DAAC) is responsible for distributing and maintaining satellite derived oceanographic data from a number of NASA and non-NASA missions for the physical disciplines of ocean winds, sea surface temperature, ocean topography and gravity. Currently its holdings consist of over 600 datasets with a data archive in excess of 200 terabytes. The PO.DAAC has recently embarked on a metadata quality and completeness project to migrate, update and improve metadata records for over 300 public datasets. An interactive database management tool has been developed to allow data scientists to enter, update and maintain metadata records. This tool communicates directly with PO.DAAC's Data Management and Archiving System (DMAS), which serves as the new archival and distribution backbone as well as a permanent repository of dataset and granule-level metadata. Although we will briefly discuss the tool, more important ramifications are the ability to now expose, propagate and leverage the metadata in a number of ways. First, the metadata are exposed directly through a faceted and free text search interface directly from Drupal-based PO.DAAC web pages allowing for quick browsing and data discovery especially by "drilling" through the various facet levels that organize datasets by time/space resolution, processing level, sensor, measurement type etc. Furthermore, the metadata can now be exposed through web services to produce metadata records in a number of different formats such as FGDC and ISO 19115, or potentially propagated to visualization and subsetting tools, and other discovery interfaces. The fundamental concept is that the metadata forms the essential bridge between the user and the tool or discovery mechanism for a broad range of ocean earth science data records.

  1. Managing biomedical image metadata for search and retrieval of similar images.

    PubMed

    Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris

    2011-08-01

    Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM) to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files using a Web service and parses and stores the metadata in a relational database allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics like disease prevalences. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies. PMID:20844917
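
    As a rough analogue of the "match observations" idea described above, the sketch below ranks images by overlap of their semantic feature (IOC) sets using Jaccard similarity; the feature vocabulary, the similarity measure, and the toy catalogue are assumptions for illustration, not BIMM's actual method.

      def jaccard(a: set, b: set) -> float:
          """Similarity of two annotation sets: intersection over union."""
          return len(a & b) / len(a | b) if a | b else 0.0

      # Imaging observation characteristics (IOCs) recorded as metadata per image ROI.
      catalogue = {
          "img_001": {"margin:smooth", "density:hypodense", "enhancement:rim"},
          "img_002": {"margin:irregular", "density:hypodense", "enhancement:none"},
          "img_003": {"margin:smooth", "density:isodense", "enhancement:rim"},
      }

      def match_observations(query_iocs: set, top_n: int = 2):
          """Rank catalogued images by similarity of their IOC sets to the query."""
          scored = sorted(catalogue.items(),
                          key=lambda kv: jaccard(query_iocs, kv[1]),
                          reverse=True)
          return scored[:top_n]

      print(match_observations({"margin:smooth", "enhancement:rim"}))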

  2. Overexpression of the Replicative Helicase in Escherichia coli Inhibits Replication Initiation and Replication Fork Reloading.

    PubMed

    Brüning, Jan-Gert; Myka, Kamila Katarzyna; McGlynn, Peter

    2016-03-27

    Replicative helicases play central roles in chromosome duplication and their assembly onto DNA is regulated via initiators and helicase loader proteins. The Escherichia coli replicative helicase DnaB and the helicase loader DnaC form a DnaB6-DnaC6 complex that is required for loading DnaB onto single-stranded DNA. Overexpression of dnaC inhibits replication by promoting continual rebinding of DnaC to DnaB and consequent prevention of helicase translocation. Here we show that overexpression of dnaB also inhibits growth and chromosome duplication. This inhibition is countered by co-overexpression of wild-type DnaC but not of a DnaC mutant that cannot interact with DnaB, indicating that a reduction in DnaB6-DnaC6 concentration is responsible for the phenotypes associated with elevated DnaB concentration. Partial defects in the oriC-specific initiator DnaA and in PriA-specific initiation away from oriC during replication repair sensitise cells to dnaB overexpression. Absence of the accessory replicative helicase Rep, resulting in increased replication blockage and thus increased reinitiation away from oriC, also exacerbates DnaB-induced defects. These findings indicate that elevated levels of helicase perturb replication initiation not only at origins of replication but also during fork repair at other sites on the chromosome. Thus, imbalances in levels of the replicative helicase and helicase loader can inhibit replication both via inhibition of DnaB6-DnaC6 complex formation with excess DnaB, as shown here, and promotion of formation of DnaB6-DnaC6 complexes with excess DnaC [Allen GC, Jr., Kornberg A. Fine balance in the regulation of DnaB helicase by DnaC protein in replication in Escherichia coli. J. Biol. Chem. 1991;266:22096-22101; Skarstad K, Wold S. The speed of the Escherichia coli fork in vivo depends on the DnaB:DnaC ratio. Mol. Microbiol. 1995;17:825-831]. Thus, there are two mechanisms by which an imbalance in the replicative helicase and its associated

  3. Overexpression of the Replicative Helicase in Escherichia coli Inhibits Replication Initiation and Replication Fork Reloading

    PubMed Central

    Brüning, Jan-Gert; Myka, Kamila Katarzyna; McGlynn, Peter

    2016-01-01

    Replicative helicases play central roles in chromosome duplication and their assembly onto DNA is regulated via initiators and helicase loader proteins. The Escherichia coli replicative helicase DnaB and the helicase loader DnaC form a DnaB6–DnaC6 complex that is required for loading DnaB onto single-stranded DNA. Overexpression of dnaC inhibits replication by promoting continual rebinding of DnaC to DnaB and consequent prevention of helicase translocation. Here we show that overexpression of dnaB also inhibits growth and chromosome duplication. This inhibition is countered by co-overexpression of wild-type DnaC but not of a DnaC mutant that cannot interact with DnaB, indicating that a reduction in DnaB6–DnaC6 concentration is responsible for the phenotypes associated with elevated DnaB concentration. Partial defects in the oriC-specific initiator DnaA and in PriA-specific initiation away from oriC during replication repair sensitise cells to dnaB overexpression. Absence of the accessory replicative helicase Rep, resulting in increased replication blockage and thus increased reinitiation away from oriC, also exacerbates DnaB-induced defects. These findings indicate that elevated levels of helicase perturb replication initiation not only at origins of replication but also during fork repair at other sites on the chromosome. Thus, imbalances in levels of the replicative helicase and helicase loader can inhibit replication both via inhibition of DnaB6–DnaC6 complex formation with excess DnaB, as shown here, and promotion of formation of DnaB6–DnaC6 complexes with excess DnaC [Allen GC, Jr., Kornberg A. Fine balance in the regulation of DnaB helicase by DnaC protein in replication in Escherichia coli. J. Biol. Chem. 1991;266:22096–22101; Skarstad K, Wold S. The speed of the Escherichia coli fork in vivo depends on the DnaB:DnaC ratio. Mol. Microbiol. 1995;17:825–831]. Thus, there are two mechanisms by which an imbalance in the replicative helicase and its

  4. The Biology of Replicative Senescence

    SciTech Connect

    Campisi, J.

    1996-12-04

    Most cells cannot divide indefinitely due to a process termed cellular or replicative senescence. Replicative senescence appears to be a fundamental feature of somatic cells, with the exception of most tumour cells and possibly certain stem cells. How do cells sense the number of divisions they have completed? Although it has not yet been critically tested, the telomere shortening hypothesis is currently perhaps the best explanation for a cell division 'counting' mechanism. Why do cells irreversibly cease proliferation after completing a finite number of divisions? It is now known that replicative senescence alters the expression of a few crucial growth-regulatory genes. It is not known how these changes in growth-regulatory gene expression are related to telomere shortening in higher eukaryotes. However, lower eukaryotes have provided several plausible mechanisms. Finally, what are the physiological consequences of replicative senescence? Several lines of evidence suggest that, at least in human cells, replicative senescence is a powerful tumour suppressive mechanism. There is also indirect evidence that replicative senescence contributes to ageing. Taken together, current findings suggest that, at least in mammals, replicative senescence may have evolved to curtail tumorigenesis, but may also have the unselected effect of contributing to age-related pathologies, including cancer.

  5. The ANSS Station Information System: A Centralized Station Metadata Repository for Populating, Managing and Distributing Seismic Station Metadata

    NASA Astrophysics Data System (ADS)

    Thomas, V. I.; Yu, E.; Acharya, P.; Jaramillo, J.; Chowdhury, F.

    2015-12-01

    Maintaining and archiving accurate site metadata is critical for seismic network operations. The Advanced National Seismic System (ANSS) Station Information System (SIS) is a repository of seismic network field equipment, equipment response, and other site information. Currently, there are 187 different sensor models and 114 data-logger models in SIS. SIS has a web-based user interface that allows network operators to enter information about seismic equipment and assign response parameters to it. It allows users to log entries for sites, equipment, and data streams. Users can also track when equipment is installed, updated, and/or removed from sites. When seismic equipment configurations change for a site, SIS computes the overall gain of a data channel by combining the response parameters of the underlying hardware components. Users can then distribute this metadata in standardized formats such as FDSN StationXML or dataless SEED. One powerful advantage of SIS is that existing data in the repository can be leveraged: e.g., new instruments can be assigned response parameters from the Incorporated Research Institutions for Seismology (IRIS) Nominal Response Library (NRL), or from a similar instrument already in the inventory, thereby reducing the amount of time needed to determine parameters when new equipment (or models) is introduced into a network. SIS is also useful for managing field equipment that does not produce seismic data (e.g., power systems, telemetry devices or GPS receivers) and gives the network operator a comprehensive view of site field work. SIS allows users to generate field logs to document activities and inventory at sites. Thus, operators can also use SIS reporting capabilities to improve planning and maintenance of the network. Queries such as how many sensors of a certain model are installed or what pieces of equipment have active problem reports are just a few examples of the type of information that is available to SIS users.
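
    A minimal sketch of the gain combination step described above, in which the overall sensitivity of a data channel is the product of the gains of its cascaded hardware stages (the numeric values are illustrative, not taken from SIS or the NRL):

        # Sketch of combining hardware response stages into an overall channel gain,
        # as SIS does when equipment configurations change. Values are illustrative.

        def overall_gain(stage_gains):
            """Overall sensitivity is the product of the gains of the cascaded stages."""
            total = 1.0
            for g in stage_gains:
                total *= g
            return total

        sensor_sensitivity = 1500.0    # V / (m/s), hypothetical broadband sensor
        datalogger_gain = 419430.0     # counts / V, hypothetical 24-bit digitizer
        channel_gain = overall_gain([sensor_sensitivity, datalogger_gain])
        print(f"{channel_gain:.1f} counts per (m/s)")   # ~6.29e8 counts/(m/s)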

  6. Metadata registry and management system based on ISO 11179 for cancer clinical trials information system

    PubMed Central

    Park, Yu Rang; Kim*, Ju Han

    2006-01-01

    Standardized management of data elements (DEs) for Case Report Forms (CRFs) is crucial in a Clinical Trials Information System (CTIS). Traditional CTISs utilize organization-specific definitions and storage methods for DEs and CRFs. We developed a metadata-based DE management system for clinical trials, the Clinical and Histopathological Metadata Registry (CHMR), using the international standard for metadata registries (ISO 11179) for the management of cancer clinical trials information. CHMR was evaluated in cancer clinical trials with 1625 DEs extracted from the College of American Pathologists Cancer Protocols for 20 major cancers. PMID:17238675
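
    A minimal sketch of an ISO 11179-style data element with an enumerated value domain, of the kind a registry such as CHMR manages (the element shown and its permissible values are hypothetical, not taken from the CAP Cancer Protocols):

        # Minimal sketch of an ISO 11179-style data element record with a
        # permissible-value domain; the fields shown are a simplification.
        from dataclasses import dataclass, field

        @dataclass
        class DataElement:
            identifier: str
            name: str
            definition: str
            datatype: str
            permissible_values: list = field(default_factory=list)

            def validate(self, value):
                """Accept a value only if it belongs to the enumerated value domain."""
                return not self.permissible_values or value in self.permissible_values

        tumor_grade = DataElement(
            identifier="DE-0001",
            name="Histologic Grade",
            definition="Degree of tumor cell differentiation reported on the CRF.",
            datatype="string",
            permissible_values=["G1", "G2", "G3", "G4", "GX"],
        )
        print(tumor_grade.validate("G2"))    # True
        print(tumor_grade.validate("high"))  # False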

  7. Central stars of planetary nebulae: New spectral classifications and catalogue

    NASA Astrophysics Data System (ADS)

    Weidmann, W. A.; Gamen, R.

    2011-02-01

    Context. There are more than 3000 confirmed and probable known Galactic planetary nebulae (PNe), but central star spectroscopic information is available for only 13% of them. Aims: We undertook a spectroscopic survey of central stars of PNe at low resolution and compiled a large list of central stars for which information was dispersed in the literature. Methods: We observed 45 PNe using the 2.15 m telescope at CASLEO, Argentina. Results: We present a catalogue of 492 confirmed and probable CSPN and provide a preliminary spectral classification for 45 central stars of PNe. This revises previous values of the proportion of CSPN with hydrogen-poor atmospheres in at least 30% of cases and provides statistical information that allows us to infer the origin of H-poor stars. Based on data collected at the Complejo Astronómico El Leoncito (CASLEO), which is operated under agreement between the Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina y Universidades Nacionales de La Plata, Córdoba y San Juan, Argentina.

  8. VizieR Online Data Catalog: The BMW-HRI source catalogue (Panzera+, 2003)

    NASA Astrophysics Data System (ADS)

    Panzera, M. R.; Campana, S.; Covino, S.; Lazzati, D.; Mignani, R. P.; Moretti, A.; Tagliaferri, G.

    2002-11-01

    The BMW-HRI catalogue is generated from US and German ROSAT HRI observations for which data have been released to the US ROSAT archive at GSFC and to the German ROSAT archive at MPE up to December 2001. A total of 4,303 observations with exposure times longer than 100 s were analyzed automatically using a wavelet detection algorithm. The catalogue consists of 29,089 sources (detection probability greater than or equal to 4.2 sigma). For each source the name, position, count rate, flux and extension, along with the relative errors, are given. The catalogue also reports results of cross-correlations with existing catalogues at different wavelengths (FIRST, GSC2, 2MASS, and IRAS). (1 data file).

  9. Linked Metadata - lightweight semantics for data integration (Invited)

    NASA Astrophysics Data System (ADS)

    Hendler, J. A.

    2013-12-01

    The "Linked Open Data" cloud (http://linkeddata.org) is currently used to show how the linking of datasets, supported by SPARQL endpoints, is creating a growing set of linked data assets. This linked data space has been growing rapidly, and the last version collected is estimated to have had over 35 billion 'triples.' As impressive as this may sound, there is an inherent flaw in the way the linked data story is conceived. The idea is that all of the data is represented in a linked format (generally RDF) and applications will essentially query this cloud and provide mashup capabilities between the various kinds of data that are found. The view of linking in the cloud is fairly simple: links are provided by either shared URIs or by URIs that are asserted to be owl:sameAs. This view of the linking, which primarily focuses on shared objects and subjects in RDF's subject-predicate-object representation, misses a critical aspect of Semantic Web technology. Given triples such as "A:person1 foaf:knows A:person2", "B:person3 foaf:knows B:person4", and "C:person5 foaf:name 'John Doe'", this view would not consider them linked (barring other assertions) even though they share a common vocabulary. In fact, we get significant clues that there are commonalities in these data items from the shared namespaces and predicates, even if the traditional 'graph' view of RDF doesn't appear to join on these. Thus, it is the linking of the data descriptions, whether as metadata or other vocabularies, that provides the linking in these cases. This observation is crucial to scientific data integration where the size of the datasets, or even the individual relationships within them, can be quite large. (Note that this is not restricted to scientific data - search engines, social networks, and massive multiuser games also create huge amounts of data.) To convert all the triples into RDF and provide individual links is often unnecessary, and is both time and space intensive. Those looking to do on the
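
    A small sketch of the point being made: the three example triples share no subject or object URIs, yet grouping them by predicate namespace reveals the common foaf vocabulary (the URIs are the hypothetical examples from the abstract, written out in full):

        # Sketch of the point made above: triples from different sources share no
        # subject/object URIs, yet the shared foaf vocabulary links their descriptions.
        # All URIs here are hypothetical examples.
        from collections import defaultdict

        triples = [
            ("http://a.example/person1", "http://xmlns.com/foaf/0.1/knows", "http://a.example/person2"),
            ("http://b.example/person3", "http://xmlns.com/foaf/0.1/knows", "http://b.example/person4"),
            ("http://c.example/person5", "http://xmlns.com/foaf/0.1/name",  "John Doe"),
        ]

        def namespace(uri):
            """Crude namespace split: everything up to the last '/' or '#'."""
            cut = max(uri.rfind("/"), uri.rfind("#"))
            return uri[:cut + 1]

        by_vocab = defaultdict(list)
        for s, p, o in triples:
            by_vocab[namespace(p)].append((s, p, o))

        # All three triples cluster under the foaf namespace even though no URI is shared.
        print({ns: len(ts) for ns, ts in by_vocab.items()})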

  10. VizieR Online Data Catalog: The ENACS Catalogue. V. (Katgert+ 1998)

    NASA Astrophysics Data System (ADS)

    Katgert, P.; Mazure, A.; den Hartog, R.; Adami, C.; Biviano, A.; Perea, J.

    1998-04-01

    Table enacs presents the full ENACS catalogue: i.e. redshifts and photometry of 5634 galaxies in the directions of 107 rich Southern cluster candidates from the ACO catalogue. Table 2 of this paper lists additional redshifts from the literature for 33 galaxies contained within the Optopus areas of 4 clusters observed in the ENACS. Table 5 of this paper lists the centre of the Optopus plates and the dates of the Optopus observations (3 data files).

  11. The visual magnitudes of stars in the Almagest of Ptolemeus and in later catalogues.

    NASA Astrophysics Data System (ADS)

    Schmidt, H.

    1994-09-01

    The visual magnitudes of the Almagest have been compared with modern photoelectric measurements in V. Later catalogues equally based on visual estimates have been included. The various catalogues correlate rather well. Systematic effects due to extinction and the colour of the stars have been investigated. In spite of the hopes of the early observers no stars with very slow but systematic brightness variations have been found.

  12. Identification of stars in a J1744.0 star catalogue Yixiangkaocheng

    NASA Astrophysics Data System (ADS)

    Ahn, S.-H.

    2012-05-01

    The stars in the Chinese star catalogue Yixiangkaocheng, which was edited by the Jesuit astronomer Kögler in AD 1744 and published in AD 1756, are identified with their counterparts in the Hipparcos catalogue. The equinox of the catalogue is confirmed to be J1744.0. By considering the precession of the equinox, proper motions and nutation, the star closest to the location of each star in Yixiangkaocheng, having a proper magnitude, is selected as the corresponding identified star. I identified 2848 stars and 13 nebulosities out of 3083 objects in Yixiangkaocheng, and so the identification rate reached 92.80 per cent. I find that the magnitude classification system in Yixiangkaocheng agrees with the modern magnitude system. The catalogue includes dim stars, whose visual magnitudes are larger than 7, but most of these stars have Flamsteed designations. I find that the stars whose declination is lower than -30° have relatively larger offsets and different systematic behaviour from other stars. This indicates that there might be two different sources of stars in Yixiangkaocheng. In particular, I find that μ1 Sco and γ1 Sgr approximately mark the boundary between the two different source catalogues. The observer's location, as estimated from these facts, agrees with the latitude of Greenwich, where Flamsteed made his observations. The positional offsets between the Yixiangkaocheng stars and the Hipparcos stars are 0.6 arcmin, which implies that the source catalogue of stars with δ > -30° must have come from telescopic observations. Nebulosities in Yixiangkaocheng are identified with a few double stars, o Cet (the variable star Mira), the Andromeda galaxy, ω Cen and NGC 6231. These entities are associated with listings in Halley's Catalogue of the Southern Stars of AD 1679 as well as Flamsteed's catalogue of AD 1690.

  13. Public release of the ISC-GEM Global Instrumental Earthquake Catalogue (1900-2009)

    USGS Publications Warehouse

    Storchak, Dmitry A.; Di Giacomo, Domenico; Bondár, István; Engdahl, E. Robert; Harris, James; Lee, William H.K.; Villaseñor, Antonio; Bormann, Peter

    2013-01-01

    The International Seismological Centre–Global Earthquake Model (ISC–GEM) Global Instrumental Earthquake Catalogue (1900–2009) is the result of a special effort to substantially extend and improve currently existing global catalogs to serve the requirements of specific user groups who assess and model seismic hazard and risk. The data from the ISC–GEM Catalogue will be used worldwide, but will prove especially essential in those regions where a high seismicity level strongly correlates with a high population density.

  14. VizieR Online Data Catalog: Astrometric catalogue of stars KMAC2 (Lazorenko+, 2015)

    NASA Astrophysics Data System (ADS)

    Lazorenko, P.; Karbovsky, V.; Buromsky, M.; Svachii, L.; Kasjan, S.

    2016-01-01

    The KMAC2 catalogue contains positions (J2000.0) and V magnitudes in an equatorial sky area. Positions are given at the mean epoch of observations. The limiting magnitude is V=17mag. The estimated accuracies are 50-70 mas in positions and 0.05-0.08mag in photometry for V<15mag stars. We also add proper motions taken from the USNO-A2.0 catalogue. (1 data file).

  15. VizieR Online Data Catalog: WATCH Solar X-Ray Burst Catalogue (Crosby+ 1998)

    NASA Astrophysics Data System (ADS)

    Crosby, N.; Lund, N.; Vilmer, N.; Sunyaev, R.

    1998-01-01

    Catalogue containing solar X-ray bursts measured by the Danish Wide Angle Telescope for Cosmic Hard X-Rays (WATCH) experiment aboard the Russian satellite GRANAT in the deca-keV energy range. Table 1 lists the periods during which solar observations with WATCH are available (WATCH ON-TIME) and where the bursts listed in the catalogue have been observed. (2 data files).

  16. Preserving Geological Samples and Metadata from Polar Regions

    NASA Astrophysics Data System (ADS)

    Grunow, A.; Sjunneskog, C. M.

    2011-12-01

    The Office of Polar Programs at the National Science Foundation (NSF-OPP) has long recognized the value of preserving earth science collections due to the inherent logistical challenges and financial costs of collecting geological samples from Polar Regions. NSF-OPP established two national facilities to make Antarctic geological samples and drill cores openly and freely available for research. The Antarctic Marine Geology Research Facility (AMGRF) at Florida State University was established in 1963 and archives Antarctic marine sediment cores, dredge samples and smear slides along with ship logs. The United States Polar Rock Repository (USPRR) at Ohio State University was established in 2003 and archives polar rock samples, marine dredges, unconsolidated materials and terrestrial cores, along with associated materials such as field notes, maps, raw analytical data, paleomagnetic cores, thin sections, microfossil mounts, microslides and residues. The existence of the AMGRF and USPRR helps to minimize redundant sample collecting, lessens the environmental impact of polar field work, facilitates field logistics planning and complies with the data-sharing requirement of the Antarctic Treaty. USPRR acquires collections through donations from institutions and scientists and then makes these samples available as no-cost loans for research, education and museum exhibits. The AMGRF acquires sediment cores from US-based and international collaborative drilling projects in Antarctica. Destructive research techniques are allowed on the loaned samples and loan requests are accepted from any accredited scientific institution in the world. Currently, the USPRR has more than 22,000 cataloged rock samples available to scientists from around the world. All cataloged samples are relabeled with a USPRR number, weighed, photographed and measured for magnetic susceptibility. Many aspects of the sample metadata are included in the database, e.g. geographical location, sample

  17. Metadata-Driven SOA-Based Application for Facilitation of Real-Time Data Warehousing

    NASA Astrophysics Data System (ADS)

    Pintar, Damir; Vranić, Mihaela; Skočir, Zoran

    Service-oriented architecture (SOA) has already been widely recognized as an effective paradigm for achieving integration of diverse information systems. SOA-based applications can cross boundaries of platforms, operating systems and proprietary data standards, commonly through the usage of Web Services technology. On the other hand, metadata is also commonly cited as a potential integration tool, given that standardized metadata objects can provide useful information about the specifics of unfamiliar information systems one wishes to communicate with, an approach commonly called "model-based integration". This paper presents the results of research regarding possible synergy between those two integration facilitators. This is accomplished with a vertical example of a metadata-driven SOA-based business process that provides ETL (Extraction, Transformation and Loading) and metadata services to a data warehousing system in need of real-time ETL support.
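
    A minimal sketch of a metadata-driven ETL step, in which the column mappings and transformations are read from a metadata descriptor rather than being hard-coded (the mapping fields, table names and transform registry are illustrative, not the paper's actual metadata model):

        # Sketch of a metadata-driven ETL step: the extraction/transformation logic is
        # configured by a metadata descriptor instead of being hard-coded.
        # Field names and the transform registry are illustrative only.

        mapping_metadata = {
            "source": "crm.customers",
            "target": "dw.dim_customer",
            "columns": [
                {"from": "cust_name", "to": "customer_name", "transform": "strip"},
                {"from": "country",   "to": "country_code",  "transform": "upper"},
            ],
        }

        TRANSFORMS = {"strip": str.strip, "upper": str.upper, "identity": lambda v: v}

        def run_etl(rows, metadata):
            """Apply the column mappings and transforms described by the metadata."""
            out = []
            for row in rows:
                target_row = {}
                for col in metadata["columns"]:
                    fn = TRANSFORMS.get(col.get("transform", "identity"), lambda v: v)
                    target_row[col["to"]] = fn(row[col["from"]])
                out.append(target_row)
            return out

        source_rows = [{"cust_name": "  Acme Corp ", "country": "de"}]
        print(run_etl(source_rows, mapping_metadata))
        # [{'customer_name': 'Acme Corp', 'country_code': 'DE'}]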

  18. Unsuccessful attempt for dating the Ulugbeige's catalogue by the method of Dambis and Efremov

    NASA Astrophysics Data System (ADS)

    Nickiforov, Michael G.

    In the paper "Dating Ptolemy's star catalogue on the basis of proper motions: a thousand-year problem is solved" [1], A.K. Dambis and Yu.N. Efremov applied the so-called "bulk method" in an attempt to date the "Almagest" star catalogue. They obtained T=89±112 B.C. and concluded that this catalogue was created in the Hipparchan epoch (about 130 B.C.), rejecting Ptolemy's (about 130 A.D.) authorship. In order to check the reliability of the bulk method, we applied this method to date Ulugbeig's catalogue, selecting from it different samples of fast stars. The main variant of the check points to a time interval for the catalogue's completion between 1149 and 1275 A.D. Correspondence to the traditionally accepted epoch of this catalogue, about 1437 A.D., is only achieved upon exclusion of the seven fastest stars. We explain the obtained significant discrepancy as a result of numerous assumptions in the method of Dambis and Efremov, which lead to underestimation of the errors and to displacement of the epoch of dating.

  19. The CORIMP CME Catalogue: Automatically Detecting and Tracking CMEs in Coronagraph Data

    NASA Astrophysics Data System (ADS)

    Byrne, Jason; Morgan, H.; Habbal, S. R.

    2012-05-01

    Studying CMEs in coronagraph data can be challenging due to their diffuse structure and transient nature, and user-specific biases may be introduced through visual inspection of the images. The large amount of data available from the SOHO and STEREO missions also makes manual cataloguing of CMEs tedious, and so a robust method of detection and analysis is required. This has led to the development of automated CME detection and cataloguing packages such as CACTus, SEEDS and ARTEMIS. Here we present the development of the CORIMP (coronal image processing) Catalogue: a new, automated, multiscale, CME detection and tracking catalogue, that overcomes many of the drawbacks of current catalogues. It works by first employing a dynamic CME separation technique to remove the static background, and then characterizing CME structure via a multiscale edge-detection algorithm. The detections are chained through time to determine the CME kinematics and morphological changes as it propagates across the plane-of-sky. The effectiveness of the method is demonstrated by its application to a selection of SOHO/LASCO and STEREO/SECCHI images, as well as to synthetic coronagraph images created from a model corona with a variety of CMEs. These algorithms are being applied to the whole LASCO and SECCHI datasets, and a CORIMP catalogue of results will soon be available to the community.
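
    As a toy, single-scale analogue of the edge-detection step described above (CORIMP itself uses a multiscale algorithm with dynamic background separation, so this is only an illustration), a gradient-magnitude filter applied to a synthetic frame:

        # Single-scale illustration of edge detection on a synthetic "coronagraph"
        # frame; CORIMP's actual pipeline is multiscale and includes background
        # separation, so this is only a toy analogue.
        import numpy as np
        from scipy import ndimage

        frame = np.zeros((128, 128))
        frame[40:80, 60:100] = 1.0                 # synthetic bright feature
        frame += 0.05 * np.random.default_rng(0).standard_normal(frame.shape)

        gx = ndimage.sobel(frame, axis=1)
        gy = ndimage.sobel(frame, axis=0)
        edges = np.hypot(gx, gy)                   # gradient magnitude highlights edges

        print(edges.max(), edges.mean())           # strong response along feature boundary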

  20. An analysis of the first three catalogues of southern star clusters and nebulae

    NASA Astrophysics Data System (ADS)

    Cozens, Glendyn John

    2008-06-01

    "If men like [John] Herschel are to spend the best years of their lives in recording for the benefit of a remote posterity the actual state of the heavens - what a galling discovery to find amongst their own contemporaries men [James Dunlop] who -- from carelessness and culpable apathy hand down to posterity a mass of errors -- [so] that four hundred objects out of six hundred could not be identified in any manner -- with a telescope seven times more powerful than that stated to have been used!" The denigration of James Dunlop and his catalogue of 629 southern nebulae and clusters produced in 1826 originated with John Herschel and was continued by others of his day. Was this criticism justified? Was James Dunlop guilty of "carelessness and culpable apathy"? Were there "four hundred objects out of six hundred" which could not be identified, and if so, was there an explanation for this large shortfall? This question led to a search within Dunlop's 1826 catalogue to rediscover, if possible, some of the missing objects and to reinstate Dunlop, if justified, as a bona fide astronomer. In doing this, Dunlop's personal background, education and experience became relevant, as did a comparison with the catalogue of 42 southern nebulae and clusters produced by Nicolas-Louis de La Caille in 1751-2, and the 1834-8 catalogue of 1708 southern nebulae and clusters by John Herschel, who found the Dunlop catalogue so galling. To place the three southern catalogues in their historical context, a brief overview of these and the first three northern catalogues was made. Biographical information, descriptions of their equipment and comments on their observing techniques were included, where obtainable, for each of the authors of the three southern catalogues. However the main objective of this thesis was to determine which of the 629 objects in the Dunlop catalogue exist and then using these objects in a revised Dunlop catalogue, to statistically analyse and compare it with the content

  1. Comparison of Zero Zone Catalogues of the Fon Program Based on the Kyiv and Kitab Observations

    NASA Astrophysics Data System (ADS)

    Andruk, V. M.; Relke, H.; Protsyuk, Yu. I.; Muminov, M. M.; Ehgamberdiev, Sh. A.; Yuldoshev, Q. X.; Golovnia, V. V.

    Two new catalogues for the zero zone of the FON project were created from the processing of two different collections of digitized photographic plates. The photographic plates were obtained with the DAZ and DWA telescopes of the Kitab observatory of the Republic of Uzbekistan (KO UAS) and of the Main astronomical observatory in Kyiv (MAO NASU), numbering 90 and 120 plates, respectively. The digitization of these photographic plates in the frame of the Ukrainian Virtual Observatory project was performed by means of an Epson Expression 10000XL scanner with a scanning resolution of 1200 dpi. The coordinates of stars and galaxies for both catalogues are determined in the system of the Tycho2 catalogue. The stellar magnitudes of all objects are given in B-magnitudes of the photoelectric standard system. The difference between the calculated and the reference positions is σαδ = ±0.06-0.07". The internal accuracy of both catalogues for all objects is σαδ = ±0.20", σB = ±0.18m and σαδ = ±0.27", σB = ±0.17m, respectively. We present the comparison of the two catalogues with each other and with the Tycho2, UCAC4 and PPMX catalogues, and discuss the results.

  2. [Physiology in the mirror of systematic catalogue of Russian Academy of Sciences Library].

    PubMed

    Orlov, I V; Lazurkina, V B

    2011-07-01

    Representation of general human and animal physiology publications in the systematic catalogue of the Library of the Russian Academy of Sciences is considered. The organization of the catalogue as applied to the problems of physiology, built on the basis of the library-bibliographic classification used in Russian universal scientific libraries, is described. The card files of the systematic catalogue of the Library contain about 8 million cards. Topics that reflect the problems of general physiology comprise 39 headings. For the full range of sciences, including physiology, tables of general types of divisions were developed. They have been marked by indexes using lower-case letters of the Russian alphabet. For further subdivision of these indexes, decimal symbols are used. The indexes are attached directly to the field-of-knowledge index. With the current relatively easy availability of network resources, the value and relevance of any catalogue are reduced. However, this applies much more to journal articles than to the reference books, proceedings of various conferences, bibliographies, personalities, and especially the monographs contained in the systematic catalogue. The card systematic catalogue of the Library remains an important source of information on general physiology issues, as well as on its narrower sections.

  3. [Competency-based medical education: National Catalogue of Learning Objectives in surgery].

    PubMed

    Kadmon, M; Bender, M J; Adili, F; Arbab, D; Heinemann, M K; Hofmann, H S; König, S; Küper, M A; Obertacke, U; Rennekampff, H-O; Rolle, U; Rücker, M; Sader, R; Tingart, M; Tolksdorf, M M; Tronnier, V; Will, B; Walcher, F

    2013-04-01

    Competency-based medical education is a prerequisite to prepare students for the medical profession. A mandatory professional qualification framework is a milestone towards this aim. The National Competency-based Catalogue of Learning Objectives for Undergraduate Medical Education (NKLM) of the German Medical Faculty Association (MFT) and the German Medical Association will constitute a basis for a core curriculum of undergraduate medical training. The Surgical Working Group on Medical Education (CAL) of the German Association of Surgeons (DGCH) aims at formulating a competency-based catalogue of learning objectives for surgical undergraduate training to bridge the gap between the NKLM and the learning objectives of individual medical faculties. This is intended to enhance the prominence and visibility of the surgical discipline in the context of medical education. On the basis of different faculty catalogues of learning objectives, the catalogue of learning objectives of the German Association of Orthopedics and Orthopedic Surgery, and the Swiss Catalogue of Learning Objectives, representatives of all German surgical associations cooperated on a structured selection process of learning objectives and the definition of levels and areas of competencies. After completion, the catalogue of learning objectives will be available online on the webpage of the DGCH.

  4. Update of ECTOM - European catalogue of training opportunities in meteorology

    NASA Astrophysics Data System (ADS)

    Halenka, Tomas; Belda, Michal

    2016-04-01

    After the Bologna Declaration (1999), the process of integration of education at the university level was started in most European countries, with the aim of unifying the system and structure of tertiary education and enabling transnational mobility of students across Europe. The goal was to achieve compatibility between the systems and levels in individual countries to support this mobility. To support this effort, it is useful to provide information about educational opportunities in different countries in a centralised form, with uniform shape and content, but validated at the national level. For meteorology and climatology this could reasonably be done under the auspices of the European Meteorological Society, ideally with contributions from individual National Meteorological Societies and under their guidance. A brief history of the original ECTOM I and of previous attempts to start ECTOM II is given. The need to update the content is discussed, with emphasis on several aspects. There are several reasons for such an update of ECTOM I. First, there are many more new EMS members which could contribute to the catalogue. Second, corrected, new, more precise and expanded information will be available in addition to the existing records, particularly given the changes in the education systems of EC countries and associated countries approaching the EC following the main goals of the Bologna Declaration. Third, contemporary technology should be adopted to organize a real database, with easier navigation and searching of the appropriate information and the feasibility of keeping it permanently up to date through a WWW interface. In this presentation, the engine of the ECTOM II database will be shown, together with practical information on how to find and submit information on education or training possibilities. Finally, as we have started filling the database using freely available information from the web, practical examples of use will

  5. A catalogue of potentially bright close binary gravitational wave sources

    NASA Technical Reports Server (NTRS)

    Webbink, Ronald F.

    1985-01-01

    This is a current print-out of results of a survey, undertaken in the spring of 1985, to identify those known binary stars which might produce significant gravitational wave amplitudes at earth, either dimensionless strain amplitudes exceeding a threshold h = 10(exp -21), or energy fluxes exceeding F = 10(exp -12) erg cm(exp -2) s(exp -1). All real or putative binaries brighter than a certain limiting magnitude (calculated as a function of primary spectral type, orbital period, orbital eccentricity, and bandpass) are included. All double degenerate binaries and Wolf-Rayet binaries with known or suspected orbital periods have also been included. The catalog consists of two parts: a listing of objects in ascending order of Right Ascension (Equinox B1950), followed by an index listing objects by identification number according to all major stellar catalogs. The object listing is a print-out of the spreadsheets on which the catalog is currently maintained. It should be noted that the use of this spreadsheet program imposes some limitations on the display of entries. Text entries which exceed the cell size may appear in truncated form, or may run into adjacent columns. Greek characters are not available; they are represented here by the first two or three letters of their Roman names, the first letter appearing as a capital or lower-case letter according to whether the capital or lower-case Greek character is represented. Neither superscripts nor subscripts are available; they appear here in normal position and type-face. The index provides the Right Ascension and Declination of objects sorted by catalogue number.

  6. Seismic databases and earthquake catalogue of the Caucasus

    NASA Astrophysics Data System (ADS)

    Godoladze, Tea; Javakhishvili, Zurab; Tvaradze, Nino; Tumanova, Nino; Jorjiashvili, Nato; Gok, Rengen

    2016-04-01

    The Caucasus has a documented historical catalog stretching back to the beginning of the Christian era. Most of the largest historical earthquakes prior to the 19th century are assumed to have occurred on active faults of the Greater Caucasus. Important earthquakes include the Samtskhe earthquake of 1283, Ms~7.0, Io=9; the Lechkhumi-Svaneti earthquake of 1350, Ms~7.0, Io=9; and the Alaverdi earthquake of 1742, Ms~6.8, Io=9. Two significant historical earthquakes that may have occurred within the Javakheti plateau in the Lesser Caucasus are the Tmogvi earthquake of 1088, Ms~6.5, Io=9, and the Akhalkalaki earthquake of 1899, Ms~6.3, Io=8-9. Large earthquakes that occurred in the Caucasus within the period of instrumental observation are: Gori 1920; Tabatskuri 1940; Chkhalta 1963; the 1991 Ms=7.0 Racha earthquake, the largest event ever recorded in the region; the 1992 M=6.5 Barisakho earthquake; and the 1988 Ms=6.9 Spitak, Armenia earthquake (100 km south of Tbilisi), which killed over 50,000 people in Armenia. Recently, permanent broadband stations have been deployed across the region as part of various national networks (Georgia (~25 stations), Azerbaijan (~35 stations), Armenia (~14 stations)). The data from the last 10 years of observation provide an opportunity to perform modern, fundamental scientific investigations. A catalog of all instrumentally recorded earthquakes has been compiled by the IES (Institute of Earth Sciences, Ilia State University). The catalog consists of more than 80,000 events. Together with our colleagues from Armenia, Azerbaijan and Turkey, a database of Caucasus seismic events was compiled. We tried to improve the locations of the events and to calculate moment magnitudes for events larger than magnitude 4 in order to obtain a unified magnitude catalogue for the region. The results will serve as the input for the seismic hazard assessment of the region.

  7. Catalogue of far-infrared loops in the Galaxy

    NASA Astrophysics Data System (ADS)

    Könyves, V.; Kiss, Cs.; Moór, A.; Kiss, Z. T.; Tóth, L. V.

    2007-03-01

    Aims: An all-sky survey of loop- and arc-like intensity enhancements has been performed to investigate the large-scale structure of the diffuse far-infrared emission. Methods: We used maps made of 60 and 100 μm processed IRAS data (Sky Survey Atlas and dust infrared emission maps) to identify large-scale structures: loops, arcs, or cavities in the far-infrared emission in the Galaxy. Distances were attributed to a subsample of loops using associated objects. Results: We identified 462 far-infrared loops and analysed their individual FIR properties and their distribution. These data form the Catalogue of Far-Infrared Loops in the Galaxy. We obtained observational estimates of f_in ≈ 30% and f_out ≈ 5% for the hot gas volume filling factor of the inward and outward Galactic neighbourhood of the Solar System. We obtained a slope of the power-law size luminosity function of β = 1.37 for low Galactic latitudes in the outer Milky Way. Conclusions: Deviations in the celestial distribution of far-infrared loops clearly indicate that violent events frequently overwrite the structure of the interstellar matter in the inner Galaxy. Our objects trace out the spiral arm structure of the Galaxy in the neighbourhood of the Sun, and their distribution clearly suggests that there is an efficient process that can generate loop-like features at high Galactic latitudes. Power-law indices of the size luminosity distributions suggest that the structure of the ISM is ruled by supernovae and stellar winds at low Galactic latitudes, while it is governed by supersonic turbulence above the Galactic plane. Appendices B-D are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/463/1227

  8. The Updated IAU MDC Catalogue of Photographic Meteor Orbits

    NASA Technical Reports Server (NTRS)

    Porubcan, V.; Svoren, J.; Neslusan, L.; Schunova, E.

    2011-01-01

    The database of photographic meteor orbits of the IAU Meteor Data Center at the Astronomical Institute SAS has gradually been updated. To the 2003 version of 4581 photographic orbits, compiled from 17 different stations and obtained in the period 1936-1996, an additional 211 new orbits compiled from 7 sources have been added. Thus, the updated version of the catalogue contains 4792 photographic orbits (equinox J2000.0), available either in two separate orbital and geophysical data files or in a file with the merged data. All the updated files with relevant documentation are available on the website of the IAU Meteor Data Center. Meteoroid orbits are a basic tool for investigation of the distribution and spatial structure of the meteoroid population in the close surroundings of the Earth's orbit. However, information about them is usually widely scattered in the literature and often in publications with limited circulation. Therefore, IAU Commission 22, during the 1976 IAU General Assembly, proposed to establish a meteor data center for the collection of meteor orbits recorded by photographic and radio techniques. The decision was confirmed by the next IAU GA in 1982 and the data center was established (Lindblad, 1987). The purpose of the data center was to acquire, format, check and disseminate information on precise meteoroid orbits obtained by multi-station techniques, and the database gradually extended as documented in previous reports on the activity of the Meteor Data Center by Lindblad (1987, 1995, 1999 and 2001) or Lindblad and Steel (1993). Up to the present, the database consists of 4581 photographic meteor orbits (Lindblad et al., 2005); 63,330 radar-determined orbits: Harvard Meteor Project (1961-1965, 1968-1969), Adelaide (1960-1961, 1968-1969), Kharkov (1975), Obninsk (1967-1968), Mogadishu (1969-1970); and 1425 video recordings (Lindblad, 1999), to which an additional 817 video meteor orbits published by Koten el

  9. Mimiviruses: Replication, Purification, and Quantification.

    PubMed

    Abrahão, Jônatas Santos; Oliveira, Graziele Pereira; Ferreira da Silva, Lorena Christine; Dos Santos Silva, Ludmila Karen; Kroon, Erna Geessien; La Scola, Bernard

    2016-01-01

    The aim of this protocol is to describe the replication, purification, and titration of mimiviruses. These viruses belong to the family Mimiviridae, the first member of which was isolated from a cooling tower water sample collected in 1992 during an outbreak of pneumonia in a hospital in Bradford, England. In recent years, several new mimiviruses have been isolated from different environmental conditions. These giant viruses are easily replicated in amoebae of the genus Acanthamoeba, their natural host. Mimiviruses present peculiar features that make them unique viruses, such as their particle and genome size and the genome's complexity. The discovery of these viruses rekindled discussions about their origin and evolution, and their genetic and structural complexity opened up a new field of study. Here, we describe some methods used for mimivirus replication, purification, and titration. © 2016 by John Wiley & Sons, Inc. PMID:27153385

  10. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961
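
    A minimal sketch of metadata normalization across heterogeneous cameras, with a per-camera scale factor standing in for the automatic calibration derived from the 3D human model (all values and field names are illustrative):

        # Sketch of normalizing per-camera object measurements into camera-independent
        # metadata so the same person can be retrieved across heterogeneous cameras.
        # The per-camera scale factors stand in for the automatic calibration step.

        camera_scale_m_per_px = {"cam_A": 0.010, "cam_B": 0.018}   # illustrative values

        def normalize(detection):
            """Convert a pixel-space detection into normalized (metric) metadata."""
            scale = camera_scale_m_per_px[detection["camera"]]
            return {
                "camera": detection["camera"],
                "height_m": round(detection["height_px"] * scale, 2),
                "timestamp": detection["timestamp"],
            }

        detections = [
            {"camera": "cam_A", "height_px": 175, "timestamp": "12:00:01"},
            {"camera": "cam_B", "height_px": 97,  "timestamp": "12:00:05"},
        ]
        records = [normalize(d) for d in detections]
        print(records)   # both map to roughly the same 1.75 m object height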

  11. Recipes for Semantic Web Dog Food — The ESWC and ISWC Metadata Projects

    NASA Astrophysics Data System (ADS)

    Möller, Knud; Heath, Tom; Handschuh, Siegfried; Domingue, John

    Semantic Web conferences such as ESWC and ISWC offer prime opportunities to test and showcase semantic technologies. Conference metadata about people, papers and talks is diverse in nature and neither so small as to be uninteresting nor so big as to be unmanageable. Many metadata-related challenges that may arise in the Semantic Web at large are also present here. Metadata must be generated from sources which are often unstructured and hard to process, and may originate from many different players, therefore suitable workflows must be established. Moreover, the generated metadata must use appropriate formats and vocabularies, and be served in a way that is consistent with the principles of linked data. This paper reports on the metadata efforts from ESWC and ISWC, identifies specific issues and barriers encountered during the projects, and discusses how these were approached. Recommendations are made as to how these may be addressed in the future, and we discuss how these solutions may generalize to metadata production for the Semantic Web at large.

  12. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  13. Replicating systems concepts: Self-replicating lunar factory and demonstration

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Automation of lunar mining and manufacturing facility maintenance and repair is addressed. Designing the factory as an automated, multiproduct, remotely controlled, reprogrammable Lunar Manufacturing Facility capable of constructing duplicates of itself which would themselves be capable of further replication is proposed.

  14. A Replication of Failure, Not a Failure to Replicate

    ERIC Educational Resources Information Center

    Holden, Gary; Barker, Kathleen; Kuppens, Sofie; Rosenberg, Gary; LeBreton, Jonathan

    2015-01-01

    Purpose: The increasing role of systematic reviews in knowledge production demands greater rigor in the literature search process. The performance of the Social Work Abstracts (SWA) database has been examined multiple times over the past three decades. The current study is a replication within this line of research. Method: Issue-level coverage…

  15. Personality and Academic Motivation: Replication, Extension, and Replication

    ERIC Educational Resources Information Center

    Jones, Martin H.; McMichael, Stephanie N.

    2015-01-01

    Previous work examines the relationships between personality traits and intrinsic/extrinsic motivation. We replicate and extend previous work to examine how personality may relate to achievement goals, efficacious beliefs, and mindset about intelligence. Approximately 200 undergraduates responded to the survey, with 150 participants replicating…

  16. Metadata based management and sharing of distributed biomedical data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Liu, Peiya

    2014-01-01

    Biomedical research data sharing is becoming increasingly important for researchers to reuse experiments, pool expertise and validate approaches. However, there are many hurdles to data sharing, including unwillingness to share, the lack of a flexible data model for providing context information, the difficulty of sharing syntactically and semantically consistent data across distributed institutions, and the high cost of providing tools to share the data. SciPort is a web-based collaborative biomedical data sharing platform to support data sharing across distributed organisations. SciPort provides a generic metadata model to flexibly customise and organise the data. To enable convenient data sharing, SciPort provides a central-server-based data sharing architecture with one-click data sharing from a local server. To enable consistency, SciPort provides collaborative distributed schema management across distributed sites. To enable semantic consistency, SciPort provides semantic tagging through controlled vocabularies. SciPort is lightweight and can be easily deployed for building data sharing communities. PMID:24834105
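
    A minimal sketch of semantic tagging against a controlled vocabulary, the mechanism the abstract describes for semantic consistency (the vocabulary categories and terms are illustrative, not SciPort's):

        # Sketch of semantic tagging against a controlled vocabulary, the mechanism
        # SciPort uses for semantic consistency. Vocabulary terms are illustrative.

        CONTROLLED_VOCAB = {
            "organ": {"liver", "lung", "kidney"},
            "modality": {"CT", "MRI", "ultrasound"},
        }

        def tag_document(doc, tags):
            """Attach only those tags whose values appear in the controlled vocabulary."""
            accepted, rejected = {}, {}
            for category, term in tags.items():
                if term in CONTROLLED_VOCAB.get(category, set()):
                    accepted[category] = term
                else:
                    rejected[category] = term
            doc["tags"] = accepted
            return doc, rejected

        doc, rejected = tag_document({"title": "Lesion study 42"},
                                     {"organ": "liver", "modality": "PET"})
        print(doc["tags"])   # {'organ': 'liver'}
        print(rejected)      # {'modality': 'PET'} -- not in the vocabulary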

  17. Establishment of Kawasaki disease database based on metadata standard

    PubMed Central

    Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk

    2016-01-01

    Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients' clinical and genomic data is one of the major issues. For this purpose, the Kawasaki Disease Database (KDD) was developed based on the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 clinical records and genomic samples from 1283 patients at 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients' blood, omics data and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Two genome-wide association study datasets, totalling 482 samples, and whole-exome sequencing data for 12 samples were also collected. In addition, KDD includes the rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr PMID:27630202
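
    A minimal sketch of validating a clinical record against ISO/IEC 11179-style common data elements of the kind KDD uses (the element names, types, ranges and permitted values here are hypothetical):

        # Sketch of checking a clinical record against common data elements (CDEs)
        # of the kind KDD defines; the element names and rules here are hypothetical.

        CDES = {
            "fever_duration_days": {"type": int, "min": 0, "max": 60},
            "ivig_response":       {"type": str, "values": {"responsive", "non-responsive"}},
        }

        def check_record(record):
            """Return a list of CDE violations for one clinical record."""
            problems = []
            for name, rule in CDES.items():
                if name not in record:
                    problems.append(f"missing {name}")
                    continue
                value = record[name]
                if not isinstance(value, rule["type"]):
                    problems.append(f"{name}: wrong type")
                elif "values" in rule and value not in rule["values"]:
                    problems.append(f"{name}: value not permitted")
                elif "min" in rule and not (rule["min"] <= value <= rule["max"]):
                    problems.append(f"{name}: out of range")
            return problems

        print(check_record({"fever_duration_days": 6, "ivig_response": "responsive"}))  # []
        print(check_record({"fever_duration_days": 120}))  # out of range + missing element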

  18. Data to Pictures to Data: Outreach Imaging Software and Metadata

    NASA Astrophysics Data System (ADS)

    Levay, Z.

    2011-07-01

    A convergence between astronomy science and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities, and the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and the application of photo/image-processing techniques. Some additional effort is needed to close the loop and make this imagery conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography now comply with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be used in more sophisticated, imaginative ways exemplified by Sky in Google Earth and WorldWide Telescope.
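
    A minimal sketch of carrying world-coordinate information into photo-style keyword metadata, using astropy and an in-memory WCS (the coordinate values and tag names are illustrative; this is not the actual outreach-metadata toolchain):

        # Build a small celestial WCS in memory (values are illustrative) and show how
        # world-coordinate metadata can be carried over into photo-style keyword tags.
        # Requires astropy.
        from astropy.wcs import WCS

        w = WCS(naxis=2)
        w.wcs.ctype = ["RA---TAN", "DEC--TAN"]
        w.wcs.crval = [83.82, -5.39]        # reference sky position (deg), illustrative
        w.wcs.crpix = [512.0, 512.0]        # reference pixel (1-based FITS convention)
        w.wcs.cdelt = [-0.0002, 0.0002]     # pixel scale (deg/pixel), illustrative

        center = w.pixel_to_world(511.0, 511.0)   # 0-based pixel matching CRPIX
        photo_tags = {
            "Subject": "Orion Nebula (illustrative)",
            "ReferenceRA": float(center.ra.deg),
            "ReferenceDec": float(center.dec.deg),
        }
        print(photo_tags)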

  19. Establishment of Kawasaki disease database based on metadata standard

    PubMed Central

    Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk

    2016-01-01

    Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients' clinical and genomic data is one of the major issues. For this purpose, the Kawasaki Disease Database (KDD) was developed based on the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 clinical records and genomic samples from 1283 patients at 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients' blood, omics data and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Two genome-wide association study datasets, totalling 482 samples, and whole-exome sequencing data for 12 samples were also collected. In addition, KDD includes the rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr

  20. Updated population metadata for United States historical climatology network stations

    USGS Publications Warehouse

    Owen, T.W.; Gallo, K.P.

    2000-01-01

    The United States Historical Climatology Network (HCN) serial temperature dataset comprises 1221 high-quality, long-term climate observing stations. The HCN dataset is available in several versions, one of which includes population-based temperature modifications to adjust urban temperatures for the "heat-island" effect. Unfortunately, the decennial population metadata file is not complete, as missing values are present for 17.6% of the 12,210 population values associated with the 1221 individual stations during the 1900-90 interval. Retrospective grid-based populations within a fixed distance of an HCN station were estimated through the use of a gridded population density dataset and historically available U.S. Census county data. The grid-based populations for the HCN stations provide values derived from a consistent methodology, in contrast to the current HCN populations, which can vary as definitions of the area associated with a city change over time. The use of grid-based populations is appropriate, at a minimum, to augment populations for HCN climate stations that lack any population data, and is recommended when consistent and complete population data are required. The recommended urban temperature adjustments based on the HCN and grid-based methods of estimating station population can be significantly different for individual stations within the HCN dataset.
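
    A minimal sketch of the grid-based estimate described above: population is summed over grid cells whose centres fall within a fixed distance of a station (grid values, cell size and search radius are illustrative, not those of the actual gridded dataset):

        # Sketch of the grid-based population estimate described above: sum population
        # over grid cells whose centres fall within a fixed distance of a station.
        # Grid values, cell size and the search radius are illustrative.
        import numpy as np

        cell_km = 5.0                                    # grid cell size
        density = np.random.default_rng(1).uniform(0, 200, size=(40, 40))  # persons/km^2
        population = density * cell_km**2                # persons per cell

        ys, xs = np.indices(population.shape)
        station_yx = (20, 20)                            # station location in grid indices
        radius_km = 25.0

        dist_km = np.hypot(ys - station_yx[0], xs - station_yx[1]) * cell_km
        station_pop = population[dist_km <= radius_km].sum()
        print(int(station_pop))                          # estimated population near station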