Sample records for cf metadata conventions

  1. Metadata in the Wild: An Empirical Survey of OPeNDAP-accessible Metadata and its Implications for Discovery

    NASA Astrophysics Data System (ADS)

    Hardy, D.; Janée, G.; Gallagher, J.; Frew, J.; Cornillon, P.

    2006-12-01

    The OPeNDAP Data Access Protocol (DAP) is a community standard for sharing scientific data across the Internet. Data providers using DAP have adopted a variety of metadata conventions to improve data utility, such as COARDS (1995) and CF (2003). Our results show, however, that metadata do not follow these conventions in practice. We collected metadata from over a hundred DAP servers, tens of thousands of data objects, and hundreds of collections. We found that a minority claim to adhere to a metadata convention, and a small percentage accurately adhere to their stated convention. We present descriptive statistics of our survey and highlight common traits such as well-populated attributes. Our empirical results indicate that unified search services cannot rely solely on metadata conventions. Although we encourage all providers to adopt a small subset of the CF convention for discovery purposes, we have no evidence to suggest that improved conventions would simplify the fundamental problem of heterogeneity. Large-scale discovery services must find methods for integrating incompatible metadata.

  2. CF Metadata Conventions: Founding Principles, Governance, and Future Directions

    NASA Astrophysics Data System (ADS)

    Taylor, K. E.

    2016-12-01

    The CF Metadata Conventions define attributes that promote sharing of climate and forecasting data and facilitate automated processing by computers. The development, maintenance, and evolution of the conventions have mainly been provided by voluntary community contributions. Nevertheless, an organizational framework has been established, which relies on established rules and web-based discussion to ensure smooth (but relatively efficient) evolution of the standard to accommodate new types of data. The CF standard has been essential to the success of high-profile internationally-coordinated modeling activities (e.g., the Coupled Model Intercomparison Project). A summary of CF's founding principles and the prospects for its future evolution will be discussed.

  3. A data model of the Climate and Forecast metadata conventions (CF-1.6) with a software implementation (cf-python v2.1)

    NASA Astrophysics Data System (ADS)

    Hassell, David; Gregory, Jonathan; Blower, Jon; Lawrence, Bryan N.; Taylor, Karl E.

    2017-12-01

    The CF (Climate and Forecast) metadata conventions are designed to promote the creation, processing, and sharing of climate and forecasting data using Network Common Data Form (netCDF) files and libraries. The CF conventions provide a description of the physical meaning of data and of their spatial and temporal properties, but they depend on the netCDF file encoding which can currently only be fully understood and interpreted by someone familiar with the rules and relationships specified in the conventions documentation. To aid in development of CF-compliant software and to capture with a minimal set of elements all of the information contained in the CF conventions, we propose a formal data model for CF which is independent of netCDF and describes all possible CF-compliant data. Because such data will often be analysed and visualised using software based on other data models, we compare our CF data model with the ISO 19123 coverage model, the Open Geospatial Consortium CF netCDF standard, and the Unidata Common Data Model. To demonstrate that this CF data model can in fact be implemented, we present cf-python, a Python software library that conforms to the model and can manipulate any CF-compliant dataset.
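
    As a brief illustration of the data-model-driven access the paper describes, the sketch below reads a CF-compliant file with cf-python and prints the field constructs it finds. The file name "tas_day.nc" is a hypothetical example, and only a minimal subset of the library is shown.

      # A minimal sketch, assuming a local CF-compliant file "tas_day.nc".
      import cf

      fields = cf.read("tas_day.nc")   # parse the file into CF field constructs
      print(fields)                    # one-line summary of each field found
      fields[0].dump()                 # full description: coordinates, cell methods, units, ...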

  4. The Value of Data and Metadata Standardization for Interoperability in Giovanni Or: Why Your Product's Metadata Causes Us Headaches!

    NASA Technical Reports Server (NTRS)

    Smit, Christine; Hegde, Mahabaleshwara; Strub, Richard; Bryant, Keith; Li, Angela; Petrenko, Maksym

    2017-01-01

    Giovanni is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization.

  5. The Value of Data and Metadata Standardization for Interoperability in Giovanni

    NASA Astrophysics Data System (ADS)

    Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.

    2017-12-01

    Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have both simplified our code base and improved our users' experiences.
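
    The kind of coordinate hunting the abstract alludes to can be sketched as below: without an internal convention, code must probe several CF attributes (axis, standard_name, units) just to locate the time, latitude and longitude variables. The file name "parameter.nc" and the probing order are illustrative assumptions, not Giovanni's actual implementation.

      from netCDF4 import Dataset

      def find_coordinate(nc, axis, standard_name, unit_hints):
          """Return the first variable that looks like the requested coordinate."""
          for name, var in nc.variables.items():
              if getattr(var, "axis", "") == axis:
                  return name
              if getattr(var, "standard_name", "") == standard_name:
                  return name
              if any(hint in getattr(var, "units", "") for hint in unit_hints):
                  return name
          return None

      with Dataset("parameter.nc") as nc:
          print("time:", find_coordinate(nc, "T", "time", ["since"]))
          print("lat: ", find_coordinate(nc, "Y", "latitude", ["degrees_north"]))
          print("lon: ", find_coordinate(nc, "X", "longitude", ["degrees_east"]))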

  6. Improving Metadata Compliance for Earth Science Data Records

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Chang, O.; Foster, D.

    2014-12-01

    One of the recurring challenges of creating earth science data records is to ensure a consistent level of metadata compliance at the granule level, where important details of contents, provenance, producer, and data references are necessary to obtain a sufficient level of understanding. These details are important not just for individual data consumers but also for autonomous software systems. Two of the most popular metadata standards at the granule level are the Climate and Forecast (CF) Metadata Conventions and the Attribute Conventions for Dataset Discovery (ACDD). Many data producers have implemented one or both of these models, including the Group for High Resolution Sea Surface Temperature (GHRSST) for their global SST products and the Ocean Biology Processing Group for NASA ocean color and SST products. While both the CF and ACDD models contain various levels of metadata richness, the actual "required" attributes are quite small in number. Metadata at the granule level becomes much more useful when recommended or optional attributes are implemented that document spatial and temporal ranges, lineage and provenance, sources, keywords, and references, etc. In this presentation we report on a new open source tool to check the compliance of netCDF and HDF5 granules with the CF and ACDD metadata models. The tool, written in Python, was originally implemented to support metadata compliance for netCDF records as part of NOAA's Integrated Ocean Observing System. It outputs standardized scoring for metadata compliance for both CF and ACDD, produces an objective summary weight, and can be implemented for remote records via OPeNDAP calls. Originally a command-line tool, we have extended it to provide a user-friendly web interface. Reports on metadata testing are grouped in hierarchies that make it easier to track flaws and inconsistencies in the record. We have also extended it to support explicit metadata structures and semantic syntax for the GHRSST project that can be easily adapted to other satellite missions as well. Overall, we hope this tool will provide the community with a useful mechanism to improve metadata quality and consistency at the granule level by providing objective scoring and assessment, as well as encourage data producers to improve metadata quality and quantity.
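
    A minimal sketch of the kind of attribute check and scoring such a tool performs is shown below, using the netCDF4 library; the attribute lists are abbreviated illustrations rather than the full CF/ACDD rule sets, and the file name is hypothetical.

      from netCDF4 import Dataset

      REQUIRED = ["Conventions", "title", "summary"]
      RECOMMENDED = ["time_coverage_start", "time_coverage_end",
                     "geospatial_lat_min", "geospatial_lat_max", "keywords", "license"]

      with Dataset("granule.nc") as nc:
          present = set(nc.ncattrs())
          required_hits = [a for a in REQUIRED if a in present]
          recommended_hits = [a for a in RECOMMENDED if a in present]
          # Weight required attributes more heavily than recommended ones.
          score = (2 * len(required_hits) + len(recommended_hits)) / (2 * len(REQUIRED) + len(RECOMMENDED))
          print(f"required attributes:    {len(required_hits)}/{len(REQUIRED)}")
          print(f"recommended attributes: {len(recommended_hits)}/{len(RECOMMENDED)}")
          print(f"summary weight: {score:.2f}")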

  7. Serving Real-Time Point Observation Data in netCDF using Climate and Forecasting Discrete Sampling Geometry Conventions

    NASA Astrophysics Data System (ADS)

    Ward-Garrison, C.; May, R.; Davis, E.; Arms, S. C.

    2016-12-01

    NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The Climate and Forecasting (CF) metadata conventions for netCDF foster the ability to work with netCDF files in general and useful ways. These conventions include metadata attributes for physical units, standard names, and spatial coordinate systems. While these conventions have been successful in easing work with netCDF-formatted output from climate and forecast models, their use for point-based observation data has been less successful. Unidata has prototyped using the discrete sampling geometry (DSG) CF conventions to serve, using the THREDDS Data Server, the real-time point observation data flowing across the Internet Data Distribution (IDD). These data originate in text format reports for individual stations (e.g. METAR surface data or TEMP upper air data) and are converted and stored in netCDF files in real-time. This work discusses the experiences and challenges of using the current CF DSG conventions for storing such real-time data. We also test how parts of netCDF's extended data model can address these challenges, in order to inform decisions for a future version of CF (CF 2.0) that would take advantage of features of the netCDF enhanced data model.
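
    For reference, the sketch below writes a tiny CF-1.6 discrete sampling geometry file of the "timeSeries" feature type, the pattern used for station reports such as METARs. The station, times and values are made up for illustration; this is not Unidata's conversion code.

      import numpy as np
      from netCDF4 import Dataset, stringtochar

      with Dataset("metar_timeseries.nc", "w") as nc:
          nc.Conventions = "CF-1.6"
          nc.featureType = "timeSeries"        # CF DSG feature type for fixed stations

          nc.createDimension("station", 1)
          nc.createDimension("name_strlen", 8)
          nc.createDimension("time", None)

          name = nc.createVariable("station_name", "S1", ("station", "name_strlen"))
          name.cf_role = "timeseries_id"       # identifies each station instance
          name[:] = stringtochar(np.array(["KDEN"], dtype="S8"))

          lat = nc.createVariable("lat", "f4", ("station",))
          lat.standard_name = "latitude"; lat.units = "degrees_north"; lat[:] = [39.85]
          lon = nc.createVariable("lon", "f4", ("station",))
          lon.standard_name = "longitude"; lon.units = "degrees_east"; lon[:] = [-104.65]

          time = nc.createVariable("time", "f8", ("time",))
          time.standard_name = "time"; time.units = "hours since 2016-01-01 00:00:00"
          time[:] = [0.0, 1.0, 2.0]

          temp = nc.createVariable("air_temperature", "f4", ("station", "time"))
          temp.standard_name = "air_temperature"; temp.units = "K"
          temp.coordinates = "time lat lon station_name"
          temp[0, :] = [271.2, 271.0, 270.6]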

  8. Usability and Interoperability Improvements for an EASE-Grid 2.0 Passive Microwave Data Product Using CF Conventions

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Long, D. G.

    2017-12-01

    Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration and sea ice motion. Historical versions of the gridded passive microwave data sets were produced as flat binary files described in human-readable documentation. This format is error-prone and makes it difficult to reliably include all processing and provenance. Funded by NASA MEaSUREs, we have completely reprocessed the gridded data record that includes SMMR, SSM/I-SSMIS and AMSR-E. The new Calibrated Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) files are self-describing. Our approach to the new data set was to create netCDF4 files that use standard metadata conventions and best practices to incorporate file-level, machine- and human-readable contents, geolocation, processing and provenance metadata. We followed the flexible and adaptable Climate and Forecast (CF-1.6) Conventions with respect to their coordinate conventions and map projection parameters. Additionally, we made use of the Attribute Conventions for Dataset Discovery (ACDD-1.3), which provided file-level conventions with spatio-temporal bounds that enable indexing software to search for coverage. Our CETB files also include temporal coverage and spatial resolution in the file-level metadata for human readability. We made use of the JPL CF/ACDD Compliance Checker to guide this work. We tested our file format with real software, for example, the netCDF Command-line Operators (NCO) power tools for unlimited control over spatio-temporal subsetting and concatenation of files. The GDAL tools understand the CF metadata and produce fully-compliant GeoTIFF files from our data. ArcMap can then reproject the GeoTIFF files on-the-fly and work with other geolocated data such as coastlines, with no special work required. We expect this combination of standards and well-tested interoperability to significantly improve the usability of this important ESDR for the Earth Science community.
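
    The CF grid_mapping pattern that underpins this interoperability can be sketched as follows: a dummy variable carries the map-projection parameters and each data variable points to it. The parameter values and variable names below are illustrative, not the CETB product's actual metadata.

      from netCDF4 import Dataset

      with Dataset("cetb_example.nc", "w") as nc:
          nc.Conventions = "CF-1.6, ACDD-1.3"

          nc.createDimension("y", 720)
          nc.createDimension("x", 720)

          # Dummy scalar variable that only holds the projection definition.
          crs = nc.createVariable("crs", "i4")
          crs.grid_mapping_name = "lambert_azimuthal_equal_area"
          crs.latitude_of_projection_origin = 90.0
          crs.longitude_of_projection_origin = 0.0
          crs.false_easting = 0.0
          crs.false_northing = 0.0

          x = nc.createVariable("x", "f8", ("x",))
          x.standard_name = "projection_x_coordinate"; x.units = "m"
          y = nc.createVariable("y", "f8", ("y",))
          y.standard_name = "projection_y_coordinate"; y.units = "m"

          tb = nc.createVariable("TB", "f4", ("y", "x"), zlib=True)
          tb.standard_name = "brightness_temperature"; tb.units = "K"
          tb.grid_mapping = "crs"   # ties the data variable to the projection definition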

  9. Collaborative Sharing of Multidimensional Space-time Data Using HydroShare

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Idaszak, R.; Yi, H.; Blanton, B.

    2015-12-01

    HydroShare is a collaborative environment being developed for sharing hydrological data and models. It includes capability to upload data in many formats as resources that can be shared. The HydroShare data model for resources uses a specific format for the representation of each type of data and specifies metadata common to all resource types as well as metadata unique to specific resource types. The Network Common Data Form (NetCDF) was chosen as the format for multidimensional space-time data in HydroShare. NetCDF is widely used in hydrological and other geoscience modeling because it contains self-describing metadata and supports the creation of array-oriented datasets that may include three spatial dimensions, a time dimension and other user defined dimensions. For example, NetCDF may be used to represent precipitation or surface air temperature fields that have two dimensions in space and one dimension in time. This presentation will illustrate how NetCDF files are used in HydroShare. When a NetCDF file is loaded into HydroShare, header information is extracted using the "ncdump" utility. Python functions developed for the Django web framework on which HydroShare is based, extract science metadata present in the NetCDF file, saving the user from having to enter it. Where the file follows the Climate and Forecast (CF) convention and Attribute Convention for Dataset Discovery (ACDD) standards, this metadata is automatically populated. Users also have the ability to add metadata to the resource that may not have been present in the original NetCDF file. HydroShare's metadata editing functionality then writes this science metadata back into the NetCDF file to maintain consistency between the science metadata in HydroShare and the metadata in the NetCDF file. This further helps researchers easily add metadata information following the CF and ACDD conventions. Additional data inspection and subsetting functions were developed, taking advantage of Python and command line libraries for working with NetCDF files. We describe the design and implementation of these features and illustrate how NetCDF files from a modeling application may be curated in HydroShare and thus enhance reproducibility of the associated research. We also discuss future development planned for multidimensional space-time data in HydroShare.
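
    A minimal sketch of the metadata round trip described above, using the netCDF4 library: read the global attributes from a file, then write edited science metadata back so the file and the catalogue stay consistent. The file name and attribute values are hypothetical.

      from netCDF4 import Dataset

      def read_global_metadata(path):
          with Dataset(path) as nc:
              return {name: nc.getncattr(name) for name in nc.ncattrs()}

      def write_global_metadata(path, updates):
          with Dataset(path, "a") as nc:       # append mode: edit attributes in place
              for name, value in updates.items():
                  nc.setncattr(name, value)

      metadata = read_global_metadata("precipitation.nc")
      print(metadata.get("title", "<no title>"))

      write_global_metadata("precipitation.nc", {
          "title": "Gridded precipitation, Logan River watershed",
          "summary": "Hourly precipitation fields used as model forcing.",
          "keywords": "precipitation, hydrology",      # ACDD-style discovery metadata
      })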

  10. Realising the Benefits of Adopting and Adapting Existing CF Metadata Conventions to a Broader Range of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Druken, K. A.; Trenham, C. E.; Wang, J.; Bastrakova, I.; Evans, B. J. K.; Wyborn, L. A.; Ip, A. I.; Poudjom Djomani, Y.

    2016-12-01

    The National Computational Infrastructure (NCI) hosts one of Australia's largest repositories (10+ PBytes) of research data, colocated with a petascale High Performance Computer and a highly integrated research cloud. Key to maximizing the benefit of NCI's collections and computational capabilities is ensuring seamless interoperable access to these datasets. This presents considerable data management challenges across the diverse range of geoscience data; spanning disciplines where netCDF-CF is commonly utilized (e.g., climate, weather, remote-sensing), through to the geophysics and seismology fields that employ more traditional domain- and study-specific data formats. These data are stored in a variety of gridded, irregularly spaced (i.e., trajectories, point clouds, profiles), and raster image structures. They often have diverse coordinate projections and resolutions, thus complicating the task of comparison and inter-discipline analysis. Nevertheless, much can be learned from the netCDF-CF model that has long served the climate community, providing a common data structure for the atmospheric, ocean and cryospheric sciences. We are extending the application of the existing Climate and Forecast (CF) metadata conventions to NCI's broader geoscience data collections. We present simple implementations that can significantly improve interoperability of the research collections, particularly in the case of line survey data. NCI has developed a compliance checker to assist with the data quality across all hosted netCDF-CF collections. The tool is an extension to one of the main existing CF Convention checkers, which we have modified to incorporate the Attribute Convention for Data Discovery (ACDD) and ISO19115 standards, and to perform parallelised checks over collections of files, ensuring compliance and consistency across the NCI data collections as a whole. It is complemented by a checker that also verifies functionality against a range of scientific analysis, programming, and data visualisation tools. By design, these tests are not necessarily domain-specific, and demonstrate that verified data is accessible to end-users, thus allowing for seamless interoperability with other datasets across a wide range of fields.
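
    A sketch of parallelised checking over a collection of files, in the spirit of the extended checker described above, might look like the following; the expected-attribute list, paths and use of a process pool are illustrative assumptions rather than NCI's actual tool.

      import glob
      from concurrent.futures import ProcessPoolExecutor
      from netCDF4 import Dataset

      EXPECTED = ["Conventions", "title", "summary", "keywords"]

      def check_file(path):
          with Dataset(path) as nc:
              missing = [a for a in EXPECTED if a not in nc.ncattrs()]
          return path, missing

      if __name__ == "__main__":
          paths = glob.glob("/g/data/collection/**/*.nc", recursive=True)
          with ProcessPoolExecutor() as pool:
              for path, missing in pool.map(check_file, paths):
                  status = "OK" if not missing else "missing: " + ", ".join(missing)
                  print(f"{path}: {status}")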

  11. NetCDF4/HDF5 and Linked Data in the Real World - Enriching Geoscientific Metadata without Bloat

    NASA Astrophysics Data System (ADS)

    Ip, Alex; Car, Nicholas; Druken, Kelsey; Poudjom-Djomani, Yvette; Butcher, Stirling; Evans, Ben; Wyborn, Lesley

    2017-04-01

    NetCDF4 has become the dominant generic format for many forms of geoscientific data, leveraging (and constraining) the versatile HDF5 container format, while providing metadata conventions for interoperability. However, the encapsulation of detailed metadata within each file can lead to metadata "bloat", and difficulty in maintaining consistency where metadata is replicated to multiple locations. Complex conceptual relationships are also difficult to represent in simple key-value netCDF metadata. Linked Data provides a practical mechanism to address these issues by associating the netCDF files and their internal variables with complex metadata stored in Semantic Web vocabularies and ontologies, while complying with and complementing existing metadata conventions. One of the stated objectives of the netCDF4/HDF5 formats is that they should be self-describing: containing metadata sufficient for cataloguing and using the data. However, this objective can be regarded as only partially met where details of conventions and definitions are maintained externally to the data files. For example, one of the most widely used netCDF community standards, the Climate and Forecasting (CF) Metadata Convention, maintains standard vocabularies for a broad range of disciplines across the geosciences, but this metadata is currently neither readily discoverable nor machine-readable. We have previously implemented useful Linked Data and netCDF tooling (ncskos) that associates netCDF files, and individual variables within those files, with concepts in vocabularies formulated using the Simple Knowledge Organization System (SKOS) ontology. NetCDF files contain Uniform Resource Identifier (URI) links to terms represented as SKOS Concepts, rather than plain-text representations of those terms, so we can use simple, standardised web queries to collect and use rich metadata for the terms from any Linked Data-presented SKOS vocabulary. Geoscience Australia (GA) manages a large volume of diverse geoscientific data, much of which is being translated from proprietary formats to netCDF at NCI Australia. This data is made available through the NCI National Environmental Research Data Interoperability Platform (NERDIP) for programmatic access and interdisciplinary analysis. The netCDF files contain not only scientific data variables (e.g. gravity, magnetic or radiometric values), but also domain-specific operational values (e.g. specific instrument parameters) best described fully in formal vocabularies. Our ncskos codebase provides access to multiple stores of detailed external metadata in a standardised fashion. Geophysical datasets are generated from a "survey" event, and GA maintains corporate databases of all surveys and their associated metadata. It is impractical to replicate the full source survey metadata into each netCDF dataset so, instead, we link the netCDF files to survey metadata using public Linked Data URIs. These URIs link to Survey class objects which we model as a subclass of Activity objects as defined by the PROV Ontology, and we provide URI resolution for them via a custom Linked Data API which draws current survey metadata from GA's in-house databases. We have demonstrated that Linked Data is a practical way to associate netCDF data with detailed, external metadata. This allows us to ensure that catalogued metadata is kept consistent with metadata points-of-truth, and we can infer complex conceptual relationships not possible with netCDF key-value attributes alone.
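
    The Linked Data pattern can be sketched as below: a variable carries a URI pointing at a SKOS concept, and a client dereferences that URI to retrieve rich, machine-readable metadata. The attribute name "concept_uri", the file, and the vocabulary URL are illustrative assumptions, not the ncskos tool's actual conventions.

      import requests
      from netCDF4 import Dataset

      # Attach a vocabulary URI to a variable (hypothetical file and attribute name).
      with Dataset("gravity_survey.nc", "a") as nc:
          var = nc.variables["bouguer_anomaly"]
          var.concept_uri = "http://vocabs.example.org/geophysics/bouguer_anomaly"

      # Later, any client can dereference the URI for machine-readable (RDF/Turtle) metadata.
      with Dataset("gravity_survey.nc") as nc:
          uri = nc.variables["bouguer_anomaly"].concept_uri
          response = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=30)
          print(response.text[:500])   # SKOS prefLabel, definition, broader/narrower links, ...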

  12. Advances in a distributed approach for ocean model data interoperability

    USGS Publications Warehouse

    Signell, Richard P.; Snowden, Derrick P.

    2014-01-01

    An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
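
    A minimal sketch of the standards-based remote access this infrastructure enables: a CF-compliant dataset is opened over OPeNDAP and subset remotely, with no file download. The URL and variable name are hypothetical, and the netCDF4 library must be built with DAP support.

      from netCDF4 import Dataset   # requires a netCDF4 build with OPeNDAP support

      url = "http://example.org/thredds/dodsC/ocean_model/forecast.nc"
      with Dataset(url) as nc:
          print(nc.Conventions)                      # e.g. "CF-1.6"
          sst = nc.variables["sea_surface_temperature"]
          print(sst.units, sst.shape)
          surface_slice = sst[0, :, :]               # only this slice crosses the network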

  13. The UGRID Reader - A ParaView Plugin for the Visualization of Unstructured Climate Model Data in NetCDF Format

    NASA Astrophysics Data System (ADS)

    Brisc, Felicia; Vater, Stefan; Behrens, Joern

    2016-04-01

    We present the UGRID Reader, a visualization software component that implements the UGRID Conventions in ParaView. It currently supports the reading and visualization of 2D unstructured triangular, quadrilateral and mixed triangle/quadrilateral meshes, while the data can be defined per cell or per vertex. The Climate and Forecast Metadata Conventions (CF Conventions) have been established for many years as the standard framework for climate data written in NetCDF format. While they allow storing unstructured data simply as data defined at a series of points, they do not currently address the topology of the underlying unstructured mesh. However, it is often necessary to have additional mesh topology information, i.e. whether it is a one-dimensional network, a 2D triangular mesh or a flexible mixed triangle/quadrilateral mesh, a 2D mesh with vertical layers, or a fully unstructured 3D mesh. The UGRID Conventions proposed by the UGRID Interoperability group attempt to fill this void by extending the CF Conventions with topology specifications. As the UGRID Conventions are increasingly popular with an important subset of the CF community, they warrant the development of a customized tool for the visualization and exploration of UGRID-conforming data. The implementation of the UGRID Reader has been designed corresponding to the ParaView plugin architecture. This approach allowed us to tap into the powerful reading and rendering capabilities of ParaView, while keeping the reader easy to install. We aim to support parallelism in order to process large data sets. Furthermore, our current application of the reader is the visualization of higher order simulation output, which demands a special representation of the data within a cell.
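
    The UGRID-style encoding such a reader consumes can be sketched as below: a dummy "mesh" variable declares the topology and data variables reference it. The two-triangle mesh and values are made-up illustrations.

      import numpy as np
      from netCDF4 import Dataset

      with Dataset("ugrid_example.nc", "w") as nc:
          nc.Conventions = "CF-1.6, UGRID-1.0"

          nc.createDimension("node", 4)
          nc.createDimension("face", 2)
          nc.createDimension("max_face_nodes", 3)

          mesh = nc.createVariable("mesh", "i4")       # dummy variable holding the topology
          mesh.cf_role = "mesh_topology"
          mesh.topology_dimension = 2
          mesh.node_coordinates = "node_lon node_lat"
          mesh.face_node_connectivity = "face_nodes"

          lon = nc.createVariable("node_lon", "f8", ("node",)); lon.units = "degrees_east"
          lat = nc.createVariable("node_lat", "f8", ("node",)); lat.units = "degrees_north"
          lon[:] = [0.0, 1.0, 1.0, 0.0]
          lat[:] = [0.0, 0.0, 1.0, 1.0]

          faces = nc.createVariable("face_nodes", "i4", ("face", "max_face_nodes"))
          faces.start_index = 0                         # node indices are zero-based
          faces[:] = np.array([[0, 1, 2], [0, 2, 3]])

          h = nc.createVariable("sea_surface_height", "f8", ("face",))
          h.units = "m"
          h.mesh = "mesh"                               # ties the data to the mesh topology
          h.location = "face"                           # values are defined per cell
          h[:] = [0.12, 0.15]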

  14. Representing Simple Geometry Types in NetCDF-CF

    NASA Astrophysics Data System (ADS)

    Blodgett, D. L.; Koziol, B. W.; Whiteaker, T. L.; Simons, R.

    2016-12-01

    The Climate and Forecast (CF) metadata convention is well-suited for representing gridded and point-based observational datasets. However, CF currently has no accepted mechanism for representing simple geometry types such as lines and polygons. Lack of support for simple geometries within CF has unintentionally excluded a broad set of geoscientific data types from NetCDF-CF data encodings. For example, hydrologic datasets often contain polygon watershed catchments and polyline stream reaches in addition to point sampling stations and water management infrastructure. The latter has an associated CF specification. In the interest of supporting all simple geometry types within CF, a working group was formed following an EarthCube workshop on Advancing NetCDF-CF [1] to draft a CF specification for simple geometries: points, lines, polygons, and their associated multi-geometry representations [2]. The draft also includes parametric geometry types such as circles and ellipses. This presentation will provide an overview of the scope and content of the proposed specification focusing on mechanisms for representing coordinate arrays using variable length or continuous ragged arrays, capturing multi-geometries, and accounting for type-specific geometry artifacts such as polygon holes/interiors, node ordering, etc. The concepts contained in the specification proposal will be described with a use case representing streamflow in rivers and evapotranspiration from HUC12 watersheds. We will also introduce Python and R reference implementations developed alongside the technical specification. These in-development, open source Python and R libraries convert between commonly used GIS software objects (i.e. GEOS-based primitives) and their associated simple geometry CF representation. [1] http://www.unidata.ucar.edu/events/2016CFWorkshop/ [2] https://github.com/bekozi/netCDF-CF-simple-geometry
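
    The general pattern of the proposed encoding is sketched below for a single, made-up watershed polygon carrying an evapotranspiration time series; the attribute names follow the draft specification (later adopted into CF-1.8), and the details should be read as an assumption rather than the final standard.

      import numpy as np
      from netCDF4 import Dataset

      with Dataset("huc12_et.nc", "w") as nc:
          nc.Conventions = "CF-1.8"
          nc.featureType = "timeSeries"

          nc.createDimension("instance", 1)   # one watershed
          nc.createDimension("node", 4)       # polygon nodes
          nc.createDimension("time", 2)

          # Geometry container: describes where the node coordinates live.
          geom = nc.createVariable("geometry_container", "i4")
          geom.geometry_type = "polygon"
          geom.node_count = "node_count"
          geom.node_coordinates = "x y"

          count = nc.createVariable("node_count", "i4", ("instance",)); count[:] = [4]
          x = nc.createVariable("x", "f8", ("node",)); x.units = "degrees_east"
          y = nc.createVariable("y", "f8", ("node",)); y.units = "degrees_north"
          x[:] = [-111.9, -111.7, -111.7, -111.9]
          y[:] = [41.6, 41.6, 41.8, 41.8]

          time = nc.createVariable("time", "f8", ("time",))
          time.units = "days since 2016-01-01"; time[:] = [0, 1]

          et = nc.createVariable("evapotranspiration", "f8", ("instance", "time"))
          et.units = "mm day-1"
          et.geometry = "geometry_container"   # link the data to its polygon geometry
          et[:] = np.array([[2.4, 2.1]])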

  15. ERDDAP: Reducing Data Friction with an Open Source Data Platform

    NASA Astrophysics Data System (ADS)

    O'Brien, K.

    2017-12-01

    Data friction is not just an issue facing interdisciplinary research. Oftentimes, even within disciplines, significant data friction can exist. Differing formats, limited metadata and non-existent machine-to-machine data access all exist within disciplines and make successful interdisciplinary cooperation that much harder. Therefore, reducing data friction within disciplines is a crucial first step in providing better overall collaboration. ERDDAP, an open source data platform developed at NOAA's Southwest Fisheries Science Center, is well poised to improve data usability and understanding and reduce data friction, both in single and multi-disciplinary research. By virtue of its ability to integrate data of varying formats and provide RESTful-based user access to data and metadata, use of ERDDAP has grown substantially throughout the ocean data community. ERDDAP also supports standards such as the DAP data protocol, the Climate and Forecast (CF) metadata conventions and the BagIt document standard for data archival. In this presentation, we will discuss the advantages of using ERDDAP as a data platform. We will also show specific use cases where utilizing ERDDAP has reduced friction within a single discipline (physical oceanography) and improved interdisciplinary collaboration as well.
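
    The RESTful access pattern ERDDAP provides can be sketched as a tabledap query encoded in a URL, with the response read straight into an analysis environment; the server address and dataset ID below are hypothetical.

      import pandas as pd

      base = "https://coastwatch.example.gov/erddap/tabledap"
      query = (
          "buoy_ctd.csv?"                                   # dataset ID + response format
          "time,latitude,longitude,sea_water_temperature"   # variables to return
          "&time>=2017-06-01T00:00:00Z&time<=2017-06-02T00:00:00Z"
      )
      # ERDDAP's CSV responses carry a units line as the second row; skip it.
      df = pd.read_csv(f"{base}/{query}", skiprows=[1])
      print(df.head())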

  16. Web Services as Building Blocks for an Open Coastal Observing System

    NASA Astrophysics Data System (ADS)

    Breitbach, G.; Krasemann, H.

    2012-04-01

    Coastal observing systems need to integrate different observing methods, such as remote sensing, in-situ measurements, and models, into a synoptic view of the state of the observed region. This integration can be based solely on web services combining data and metadata. Such an approach is pursued for COSYNA (Coastal Observing System for Northern and Arctic seas). Data from satellite and radar remote sensing and measurements from buoys, stations and FerryBoxes form the observation part of COSYNA. These data are assimilated into models to create pre-operational forecasts. For discovering data, an OGC Web Feature Service (WFS) is used by the COSYNA data portal. This Web Feature Service holds the metadata necessary not only for finding data, but also the URLs of web services to view and download the data. To make the data from different sources comparable, a common vocabulary is needed. For COSYNA, the standard names from the CF conventions are stored within the metadata whenever possible. For the metadata, an INSPIRE- and ISO 19115-compatible data format is used. The WFS is fed from the metadata system using database views. Actual data are stored in two different formats: in NetCDF files for gridded data and in an RDBMS for time-series-like data. The web services are largely standards-based, the standards being mainly OGC standards. Maps are created from NetCDF files with the help of the ncWMS tool, whereas a self-developed Java servlet is used for maps of moving measurement platforms; in this case, download of data is offered via OGC SOS. For NetCDF files, OPeNDAP is used for data download. The OGC CSW is used for accessing extended metadata. The concept of data management in COSYNA, which is independent of the specific services used, will be presented. This concept is parameter- and data-centric and might be useful for other observing systems.
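
    The discovery pattern can be sketched as below: query the WFS for dataset features, then follow the view/download URLs carried in each feature's properties. The endpoint, feature type and property names are hypothetical assumptions, not COSYNA's actual service.

      import requests

      wfs_url = "https://data.example.org/geoserver/cosyna/ows"
      params = {
          "service": "WFS",
          "version": "1.1.0",
          "request": "GetFeature",
          "typeName": "cosyna:datasets",     # hypothetical feature type listing datasets
          "outputFormat": "application/json",
      }
      features = requests.get(wfs_url, params=params, timeout=60).json()["features"]

      for feature in features[:5]:
          props = feature["properties"]
          # Each feature is assumed to carry a CF standard_name plus service URLs.
          print(props.get("standard_name"), "->", props.get("download_url"))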

  17. Air Quality uFIND: User-oriented Tool Set for Air Quality Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Hoijarvi, K.; Robinson, E. M.; Husar, R. B.; Falke, S. R.; Schultz, M. G.; Keating, T. J.

    2012-12-01

    Historically, there have been major impediments to seamless and effective data usage encountered by both data providers and users. Over the last five years, the international Air Quality (AQ) Community has worked through forums such as the Group on Earth Observations AQ Community of Practice, the ESIP AQ Working Group, and the Task Force on Hemispheric Transport of Air Pollution to converge on data format standards (e.g., netCDF), data access standards (e.g., Open Geospatial Consortium Web Coverage Services), metadata standards (e.g., ISO 19115), as well as other conventions (e.g., CF Naming Convention) in order to build an Air Quality Data Network. The centerpiece of the AQ Data Network is the web service-based tool set: user-oriented Filtering and Identification of Networked Data (uFIND). The purpose of uFIND is to provide rich and powerful facilities for the user to: a) discover and choose a desired dataset by navigation through the multi-dimensional metadata space using faceted search, b) seamlessly access and browse datasets, and c) use uFIND's facilities as a web service for mashups with other AQ applications and portals. In a user-centric information system such as uFIND, the user experience is improved by metadata that includes the general fields for discovery as well as community-specific metadata to narrow the search beyond space, time and generic keyword searches. However, even with the community-specific additions, the ISO 19115 records were formed in compliance with the standard, so that other standards-based search interfaces could leverage this additional information. To identify the fields necessary for metadata discovery we started with the ISO 19115 Core Metadata fields and fields that were needed for a Catalog Service for the Web (CSW) Record. This fulfilled two goals - one to create valid ISO 19115 records and the other to be able to retrieve the records through a Catalog Service for the Web query. Beyond the required set of fields, the AQ Community added additional fields using a combination of keywords and ISO 19115 fields. These extensions allow discovery by measurement platform or observed phenomena. Beyond discovery metadata, the AQ records include service identification objects that allow standards-based clients, such as some brokers, to access the data found via OGC WCS or WMS data access protocols. uFIND is one such smart client; this combination of discovery and access metadata allows the user to preview each registered dataset through spatial and temporal views, observe the data access and usage pattern, and also find links to dataset-specific metadata directly in uFIND. The AQ data providers also benefit from this architecture since their data products are easier to find and re-use, enhancing the relevance and importance of their products. Finally, the earth science community at large benefits from the Service Oriented Architecture of uFIND, since it is a service itself and allows service-based interfacing with providers and users of the metadata, allowing uFIND facets to be further refined for a particular AQ application or completely repurposed for other Earth Science domains that use the same set of data access and metadata standards.

  18. CfRadial - CF NetCDF for Radar and Lidar Data in Polar Coordinates.

    NASA Astrophysics Data System (ADS)

    Dixon, M. J.; Lee, W. C.; Michelson, D.; Curtis, M.

    2016-12-01

    Since 1990, NCAR has supported over 20 different data formats for radar and lidar data in polar coordinates. Researchers, students and operational users spend unnecessary time handling a multitude of unique formats. CfRadial grew out of the need to simplify the use of these data and thereby to improve efficiency in research and operations. CfRadial adopts the well-known NetCDF framework, along with the Climate and Forecasting (CF) conventions such that data and metadata are accurately represented. Mobile platforms are also supported. The first major release, CfRadial version 1.1, occurred in February 2011, followed by minor updates. CfRadial has been adopted by NCAR as well as other agencies in the US and the UK. CfRadial development was boosted in 2015 through a two-year NSF EarthCube grant to improve CF in general. Version 1.4 was agreed upon in May 2016, adding explicit support for quality control fields and spectra. In Europe and Australia, EUMETNET OPERA's HDF5-based ODIM_H5 standard has been rapidly embraced as the modern standard for exchanging weather radar data for operations. ODIM_H5 exploits data groups, hierarchies, and built-in compression, characteristics that have been added to NetCDF4. A meeting of the WMO Task Team on Weather Radar Data Exchange (TT-WRDE) was held at NCAR in Boulder in July 2016, with a goal of identifying a single global standard for radar and lidar data in polar coordinates. CfRadial and ODIM_H5 were considered alongside the older and more rigid table-driven WMO BUFR and GRIB2 formats. TT-WRDE recommended that CfRadial 1.4 be merged with the sweep-oriented structure of ODIM_H5, making use of NetCDF groups, to produce a single format that will encompass the best ideas of both formats. That has led to the emergence of the CfRadial 2.0 standard. This format should meet the objectives of both the NSF EarthCube CF 2.0 initiative and the WMO TT-WRDE. It has the added benefit of improving data exchange between operational and research users, making operational data more readily available to researchers, and research algorithms more accessible to operational agencies.

  19. Scientific Platform as a Service - Tools and solutions for efficient access to and analysis of oceanographic data

    NASA Astrophysics Data System (ADS)

    Vines, Aleksander; Hansen, Morten W.; Korosov, Anton

    2017-04-01

    Existing international and Norwegian infrastructure projects, e.g., NorDataNet, NMDC and NORMAP, provide open data access through the OPeNDAP protocol following the CF (Climate and Forecast) metadata conventions, designed to promote the processing and sharing of files created with the NetCDF application programming interface (API). This approach is now also being implemented in the Norwegian Sentinel Data Hub (satellittdata.no) to provide satellite EO data to the user community. Simultaneously with providing simplified and unified data access, these projects also seek to use and establish common standards for use and discovery metadata. This then allows development of standardized tools for data search and (subset) streaming over the internet to perform actual scientific analysis. A combination of software tools, which we call a Scientific Platform as a Service (SPaaS), will take advantage of these opportunities to harmonize and streamline the search, retrieval and analysis of integrated satellite and auxiliary observations of the oceans in a seamless system. The SPaaS is a cloud solution for integration of analysis tools with scientific datasets via an API. The core part of the SPaaS is a distributed metadata catalog to store granular metadata describing the structure, location and content of available satellite, model, and in situ datasets. The analysis tools include software for visualization (also online), interactive in-depth analysis, and server-based processing chains. The API conveys search requests between system nodes (i.e., interactive and server tools) and provides easy access to the metadata catalog, data repositories, and the tools. The SPaaS components are integrated in virtual machines, whose provisioning and deployment are automated using existing state-of-the-art open-source tools (e.g., Vagrant, Ansible, Docker). The open-source code for scientific tools and virtual machine configurations is under version control at https://github.com/nansencenter/, and is coupled to an online continuous integration system (e.g., Travis CI).

  20. Pragmatic Metadata Management for Integration into Multiple Spatial Data Infrastructure Systems and Platforms

    NASA Astrophysics Data System (ADS)

    Benedict, K. K.; Scott, S.

    2013-12-01

    While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers that aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents the work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed in support of the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases are used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into multiple data infrastructures, as has been demonstrated through EDAC's testing and deployment of metadata into multiple external systems: Data.Gov, the GEOSS Registry, the DataONE network, the DSpace-based institutional repository at UNM and semantic mediation systems developed as part of the NASA ACCESS ELSeWEB project. Each of these systems requires valid metadata as a first step, but to make most effective use of the delivered metadata each also has a set of conventions that are specific to the system. This presentation will provide an overview of the underlying metadata management model, the processes and web services that have been developed to automatically generate metadata in a variety of standard formats, and highlight some of the specific modifications made to the output metadata content to support the different conventions used by the multiple metadata integration endpoints.
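
    The direct XSLT crosswalk the abstract cautions about can be sketched with lxml; any ISO content the stylesheet has no mapping for is silently dropped, which is exactly the imperfect transfer described. The stylesheet and record file names are hypothetical.

      from lxml import etree

      # Load a hypothetical crosswalk stylesheet and an ISO 19139 metadata record.
      stylesheet = etree.XSLT(etree.parse("iso19139_to_dublin_core.xsl"))
      iso_record = etree.parse("record_iso19139.xml")

      # Apply the transform; fields without a mapping in the stylesheet are lost.
      dc_record = stylesheet(iso_record)
      print(str(dc_record))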

  1. Incorporating ISO Metadata Using HDF Product Designer

    NASA Technical Reports Server (NTRS)

    Jelenak, Aleksandar; Kozimor, John; Habermann, Ted

    2016-01-01

    The need to store increasing amounts of metadata of varying complexity in HDF5 files is rapidly outgrowing the capabilities of the Earth science metadata conventions currently in use. Until now, data producers have had little choice but to devise ad hoc solutions to this challenge. Such solutions, in turn, pose a wide range of issues for data managers, distributors, and, ultimately, data users. The HDF Group is experimenting with a novel approach that uses ISO 19115 metadata objects as a catch-all container for all the metadata that cannot be fitted into the current Earth science data conventions. This presentation will showcase how the HDF Product Designer software can be utilized to help data producers include various ISO metadata objects in their products.

  2. Using GDAL to Convert NetCDF 4 CF 1.6 to GeoTIFF: Interoperability Problems and Solutions for Data Providers and Distributors

    NASA Astrophysics Data System (ADS)

    Haran, T. M.; Brodzik, M. J.; Nordgren, B.; Estilow, T.; Scott, D. J.

    2015-12-01

    An increasing number of new Earth science datasets are being produced by data providers in self-describing, machine-independent file formats including Hierarchical Data Format version 5 (HDF5) and Network Common Data Form version 4 (netCDF-4). Furthermore, data providers may be producing netCDF-4 files that follow the conventions for Climate and Forecast metadata version 1.6 (CF 1.6) which, for datasets mapped to a projected raster grid covering all or a portion of the earth, includes the Coordinate Reference System (CRS) used to define how latitude and longitude are mapped to grid coordinates, i.e. columns and rows, and vice versa. One problem that users may encounter is that their preferred visualization and analysis tool may not yet include support for one of these newer formats. Moreover, data distributors such as NASA's NSIDC DAAC may not yet include support for on-the-fly conversion of data files for all data sets produced in a new format to a preferred older distributed format. There do exist open source solutions to this dilemma in the form of software packages that can translate files in one of the new formats to one of the preferred formats. However, these software packages require that the file to be translated conform to the specifications of its respective format. Although an online CF-Convention compliance checker is available from cfconventions.org, a recent NSIDC user services incident described here in detail involved an NSIDC-supported data set that passed the (then current) CF Checker Version 2.0.6, but was in fact lacking two variables necessary for conformance. This problem was not detected until GDAL, a software package which relied on the missing variables, was employed by a user in an attempt to translate the data into a different file format, namely GeoTIFF. Based on this incident, testing a candidate data product with one or more software products written to accept the advertised conventions is proposed as a practice which improves interoperability. Differences between data file contents and software package expectations are exposed, affording an opportunity to improve conformance of software, data or both. The incident can also serve as a demonstration that data providers, distributors, and users can work together to improve data product quality and interoperability.
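
    The conversion that exposed the conformance gap can be sketched with GDAL's Python bindings; GDAL reads the CF grid-mapping and coordinate variables from the netCDF-4/CF-1.6 file in order to write a georeferenced GeoTIFF, so missing variables cause the translation to fail. The file and variable names below are hypothetical.

      from osgeo import gdal

      gdal.UseExceptions()    # raise Python exceptions instead of silent error codes

      # Subdataset syntax selects one variable from the netCDF file.
      src = 'NETCDF:"snow_cover_extent.nc":snow_cover'
      gdal.Translate("snow_cover_extent.tif", src, format="GTiff")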

  3. NetCDF-CF: Supporting Earth System Science with Data Access, Analysis, and Visualization

    NASA Astrophysics Data System (ADS)

    Davis, E.; Zender, C. S.; Arctur, D. K.; O'Brien, K.; Jelenak, A.; Santek, D.; Dixon, M. J.; Whiteaker, T. L.; Yang, K.

    2017-12-01

    NetCDF-CF is a community-developed convention for storing and describing earth system science data in the netCDF binary data format. It is an OGC-recognized standard, and numerous existing FOSS (Free and Open Source Software) and commercial software tools can explore, analyze, and visualize data that are stored and described as netCDF-CF. To better support a larger segment of the earth system science community, a number of efforts are underway to extend the netCDF-CF convention with the goal of increasing the types of data that can be represented as netCDF-CF data. This presentation will provide an overview and update of work to extend the existing netCDF-CF convention. It will detail the types of earth system science data currently supported by netCDF-CF and the types of data targeted for support by current netCDF-CF convention development efforts. It will also describe some of the tools that support the use of netCDF-CF compliant datasets, the types of data they support, and efforts to extend them to handle the new data types that netCDF-CF will support.
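
    What "stored and described as netCDF-CF" means in practice can be sketched as below: coordinate variables with standard names and units, plus a Conventions attribute that tools key on. The values are synthetic.

      import numpy as np
      from netCDF4 import Dataset

      with Dataset("tas_example.nc", "w") as nc:
          nc.Conventions = "CF-1.7"
          nc.title = "Example near-surface air temperature field"

          nc.createDimension("time", None)
          nc.createDimension("lat", 3)
          nc.createDimension("lon", 4)

          time = nc.createVariable("time", "f8", ("time",))
          time.standard_name = "time"
          time.units = "days since 2017-01-01 00:00:00"

          lat = nc.createVariable("lat", "f4", ("lat",))
          lat.standard_name = "latitude"; lat.units = "degrees_north"
          lon = nc.createVariable("lon", "f4", ("lon",))
          lon.standard_name = "longitude"; lon.units = "degrees_east"

          tas = nc.createVariable("tas", "f4", ("time", "lat", "lon"))
          tas.standard_name = "air_temperature"; tas.units = "K"

          time[:] = [0.0]
          lat[:] = [-10.0, 0.0, 10.0]
          lon[:] = [0.0, 90.0, 180.0, 270.0]
          tas[0, :, :] = 273.15 + np.random.rand(3, 4)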

  4. Standardised online data access and publishing for Earth Systems and Climate data in Australia

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Druken, K. A.; Trenham, C.; Wang, J.; Wyborn, L. A.; Smillie, J.; Allen, C.; Porter, D.

    2015-12-01

    The National Computational Infrastructure (NCI) hosts Australia's largest repository (10+ PB) of research data collections spanning a wide range of fields from climate, coasts, oceans, and geophysics through to astronomy, bioinformatics, and the social sciences. Spatial scales range from global to local ultra-high resolution, requiring storage volumes from MB to PB. The data have been organised to be highly connected to both the NCI HPC and cloud resources (e.g., interactive visualisation and analysis environments). Researchers can login to utilise the high performance infrastructure for these data collections, or access the data via standards-based web services. Our aim is to provide a trusted platform to support interdisciplinary research across all the collections as well as services for use of the data within individual communities. We thus cater to a wide range of researcher needs, whilst needing to maintain a consistent approach to data management and publishing. All research data collections hosted at NCI are governed by a data management plan, prior to being published through a variety of platforms and web services such as OPeNDAP, HTTP, and WMS. The data management plan ensures the use of standard formats (when available) that comply with relevant data conventions (e.g., CF-Convention) and metadata standards (e.g., ISO19115). Digital Object Identifiers (DOIs) can be minted at NCI and assigned to datasets and collections. Large scale data growth and use in a variety of research fields has led to a rise in, and acceptance of, open spatial data formats such as NetCDF4/HDF5, prompting a need to extend these data conventions to fields such as geophysics and satellite Earth observations. The fusion of DOI-minted data that is discoverable and accessible via metadata and web services, creates a complete picture of data hosting, discovery, use, and citation. This enables standardised and reproducible data analysis.

  5. Strategies for Implementing Cell-Free DNA Testing.

    PubMed

    Cuckle, Howard

    2016-06-01

    Maternal plasma cell-free (cf) DNA testing has higher discriminatory power for aneuploidy than any conventional multi-marker screening test. Several strategies have been suggested for introducing it into clinical practice. Secondary cfDNA, restricted only to women with positive conventional screening test, is generally cost saving and minimizes the need for invasive prenatal diagnosis but leads to a small loss in detection. Primary cfDNA, replacing conventional screening or retaining the nuchal translucency scan, is not currently cost-effective for third-party payers. Contingent cfDNA, testing about 20% of women with the highest risks based on a conventional test, is the preferred approach. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Log-less metadata management on metadata server for parallel file systems.

    PubMed

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage in order to achieve a highly available metadata service, as well as better performance in metadata processing. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead incurred by the metadata server when it adopts logging or journaling to yield a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and render better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a non-operational state.
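
    A schematic sketch of the idea, not the paper's implementation: clients keep an in-memory backup of the metadata requests they have sent, so a recovering MDS can rebuild its state by replaying those backups instead of reading a local journal.

      class ClientFS:
          def __init__(self):
              self.backup = {}                 # request_id -> request kept in client memory

          def send(self, mds, request_id, request):
              self.backup[request_id] = request
              mds.handle(request_id, request)  # MDS applies it in memory, without logging

          def acknowledge_durable(self, request_id):
              self.backup.pop(request_id, None)   # dropped once the MDS has made it durable


      class MDS:
          def __init__(self):
              self.metadata = {}               # in-memory metadata table, no journal

          def handle(self, request_id, request):
              op, path, value = request
              if op == "set":
                  self.metadata[path] = value

          def recover(self, clients):
              # After a crash: rebuild state by replaying every client's backed-up requests.
              self.metadata = {}
              for client in clients:
                  for request_id in sorted(client.backup):
                      self.handle(request_id, client.backup[request_id])


      client = ClientFS()
      mds = MDS()
      client.send(mds, 1, ("set", "/data/run1", {"size": 4096}))

      mds = MDS()                              # simulate an MDS crash and restart
      mds.recover([client])
      print(mds.metadata)                      # {'/data/run1': {'size': 4096}}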

  7. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    PubMed Central

    Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the sent metadata requests, which have been handled by the metadata server, so that the MDS does not need to log metadata changes to nonvolatile storage in order to achieve a highly available metadata service, as well as better performance in metadata processing. Because the client file system backs up certain sent metadata requests in its memory, the overhead of handling these backup requests is much smaller than the overhead incurred by the metadata server when it adopts logging or journaling to yield a highly available metadata service. The experimental results show that this newly proposed mechanism can significantly improve the speed of metadata processing and render better I/O data throughput, in contrast to conventional metadata management schemes, that is, logging or journaling on the MDS. Besides, a complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a non-operational state. PMID:24892093

  8. Evolving Metadata in NASA Earth Science Data Systems

    NASA Astrophysics Data System (ADS)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions, representing over 3500 data products spanning various science disciplines. EOSDIS is currently comprised of 12 discipline-specific data centers that are collocated with centers of science discipline expertise. Metadata is used in all aspects of NASA's Earth Science data lifecycle from the initial measurement gathering to the accessing of data products. Missions use metadata in their science data products when describing information such as the instrument/sensor, operational plan, and geographic region. Acting as the curator of the data products, data centers employ metadata for preservation, access and manipulation of data. EOSDIS provides a centralized metadata repository called the Earth Observing System (EOS) ClearingHouse (ECHO) for data discovery and access via a service-oriented architecture (SOA) between data centers and science data users. ECHO receives inventory metadata from data centers, which generate metadata files that comply with the ECHO Metadata Model. NASA's Earth Science Data and Information System (ESDIS) Project established a Tiger Team to study and make recommendations regarding the adoption of the international metadata standard ISO 19115 in EOSDIS. The result was a technical report recommending an evolution of NASA data systems towards a consistent application of ISO 19115 and related standards, including the creation of a NASA-specific convention for core ISO 19115 elements. Part of NASA's effort to continually evolve its data systems led ECHO to enhance the method by which it receives inventory metadata from the data centers to allow for multiple metadata formats, including ISO 19115. ECHO's metadata model will also be mapped to the NASA-specific convention for ingesting science metadata into the ECHO system. As NASA's new Earth Science missions and data centers migrate to the ISO 19115 standards, EOSDIS is developing metadata management resources to assist in reading, writing and parsing ISO 19115-compliant metadata. To foster interoperability with other agencies and international partners, NASA is working to ensure that a common ISO 19115 convention is developed, enhancing data sharing capabilities and other data analysis initiatives. NASA is also investigating the use of ISO 19115 standards to encode data quality, lineage and provenance with stored values. A common metadata standard across NASA's Earth Science data systems promotes interoperability, enhances data utilization and removes levels of uncertainty found in data products.

  9. Latest developments for the IAGOS database: Interoperability and metadata

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled under an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format including the necessary metadata and information on data processing, data quality and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We are also implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, the Aircraft Research DLR database and the Jülich WCS web application JOIN (Jülich OWS Interface), which combines model outputs with in situ data for intercomparison. The optimal data transfer protocol is being investigated to ensure interoperability. To facilitate satellite and model validation, tools will be made available for co-location and comparison with IAGOS. We will enhance the JOIN application in order to properly display aircraft data as vertical profiles and along individual flight tracks and to allow for graphical comparison to model results that are accessible through interoperable web services, such as the daily products from the GMES/Copernicus atmospheric service.
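
    For readers unfamiliar with NetCDF-CF, a minimal Python sketch of writing a CF-annotated in situ time series with the netCDF4 library follows. It is illustrative only and not IAGOS production code; the file name, variable choices and values are invented.

        # Minimal sketch (not IAGOS production code): a CF-annotated in situ
        # ozone time series written with the netCDF4 library; names and values
        # are invented.
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("iagos_flight_example.nc", "w") as nc:   # hypothetical file
            nc.Conventions = "CF-1.6"
            nc.createDimension("time", 3)
            t = nc.createVariable("time", "f8", ("time",))
            t.standard_name = "time"
            t.units = "seconds since 2014-01-01 00:00:00"
            t[:] = [0.0, 4.0, 8.0]
            o3 = nc.createVariable("ozone", "f4", ("time",))
            o3.standard_name = "mole_fraction_of_ozone_in_air"
            o3.units = "1e-9"    # i.e. nmol/mol, a udunits-style scaled unit
            o3[:] = np.array([42.0, 43.5, 41.8], dtype="f4")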

  10. Rosetta: Ensuring the Preservation and Usability of ASCII-based Data into the Future

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Arms, S. C.

    2015-12-01

    Field data obtained from dataloggers often take the form of comma separated value (CSV) ASCII text files. While ASCII based data formats have positive aspects, such as the ease of accessing the data from disk and the wide variety of tools available for data analysis, there are some drawbacks, especially when viewing the situation through the lens of data interoperability and stewardship. The Unidata data translation tool, Rosetta, is a web-based service that provides an easy, wizard-based interface for data collectors to transform their datalogger generated ASCII output into Climate and Forecast (CF) compliant netCDF files following the CF-1.6 discrete sampling geometries. These files are complete with metadata describing what data are contained in the file, the instruments used to collect the data, and other critical information that otherwise may be lost in one of many README files. The choice of the machine readable netCDF data format and data model, coupled with the CF conventions, ensures long-term preservation and interoperability, and that future users will have enough information to responsibly use the data. However, with the understanding that the observational community appreciates the ease of use of ASCII files, methods for transforming the netCDF back into a CSV or spreadsheet format are also built-in. One benefit of translating ASCII data into a machine readable format that follows open community-driven standards is that they are instantly able to take advantage of data services provided by the many open-source data server tools, such as the THREDDS Data Server (TDS). While Rosetta is currently a stand-alone service, this talk will also highlight efforts to couple Rosetta with the TDS, thus allowing self-publishing of thoroughly documented datasets by the data producers themselves.
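
    The following rough Python sketch illustrates the kind of translation Rosetta automates, turning a datalogger CSV into a CF-1.6 discrete-sampling-geometry timeSeries file. It is not Rosetta itself; the input file name, column names and units are assumptions, and coordinate variables such as latitude and longitude are omitted for brevity.

        # Rough sketch of the kind of translation Rosetta automates (not Rosetta
        # itself): a datalogger CSV becomes a CF-1.6 discrete-sampling-geometry
        # timeSeries file. Input columns and units are assumptions.
        import csv
        from datetime import datetime
        from netCDF4 import Dataset, date2num

        with open("datalogger.csv", newline="") as f:          # hypothetical input
            rows = list(csv.DictReader(f))

        with Dataset("datalogger.nc", "w") as nc:
            nc.Conventions = "CF-1.6"
            nc.featureType = "timeSeries"
            nc.createDimension("time", len(rows))
            sid = nc.createVariable("station", "i4", ())
            sid.cf_role = "timeseries_id"
            sid.long_name = "station identifier"
            sid.assignValue(1)
            t = nc.createVariable("time", "f8", ("time",))
            t.standard_name = "time"
            t.units = "hours since 2015-01-01 00:00:00"
            temp = nc.createVariable("air_temperature", "f4", ("time",))
            temp.standard_name = "air_temperature"
            temp.units = "degC"
            times = [datetime.strptime(r["timestamp"], "%Y-%m-%d %H:%M") for r in rows]
            t[:] = date2num(times, t.units)
            temp[:] = [float(r["temp_c"]) for r in rows]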

  11. A Free and Open Source Web-based Data Catalog Evaluation Tool

    NASA Astrophysics Data System (ADS)

    O'Brien, K.; Schweitzer, R.; Burger, E. F.

    2015-12-01

    For many years, the Unified Access Framework (UAF) project has worked to provide improved access to scientific data by leveraging widely used data standards and conventions. These standards include the Climate and Forecast (CF) metadata conventions, the Data Access Protocol (DAP) and various Open Geospatial Consortium (OGC) standards such as WMS and WCS. The UAF has also worked to create a unified access point for scientific data access through THREDDS and ERDDAP catalogs. A significant effort was made by the UAF project to build a catalog-crawling tool that was designed to crawl remote catalogs, analyze their content and then build a clean catalog that 1) represented only CF-compliant data; 2) provided a uniform set of access services; and 3) where possible, aggregated data in time. That catalog is available at http://ferret.pmel.noaa.gov/geoide/geoIDECleanCatalog.html. Although this tool has proved immensely valuable in allowing the UAF project to create a high quality data catalog, the need for a catalog evaluation service or tool that operates on a more local level also exists. Many programs that generate data of interest to the public are recognizing the utility and power of using the THREDDS Data Server (TDS) to serve that data. However, for some groups that lack the resources to maintain dedicated IT personnel, it can be difficult to set up a properly configured TDS. The TDS catalog evaluation service that is under development, and that will be discussed in this presentation, is an effort, through the UAF project, to bridge that gap. Based upon the power of the original UAF catalog cleaner, the web evaluator will have the ability to scan and crawl a local TDS catalog, evaluate the contents for compliance with CF standards, analyze the services offered, and identify datasets where temporal aggregation would benefit data access. The results of the catalog evaluator will guide the configuration of the datasets in the TDS to ensure that they meet the standards promoted by the UAF framework.
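
    A hedged sketch of crawling a THREDDS catalog in the spirit of the UAF catalog cleaner is shown below; it uses Unidata's siphon package, and the starting URL is a placeholder rather than the UAF clean catalog itself.

        # Hedged sketch of walking a THREDDS catalog with Unidata's siphon
        # package; the starting URL is a placeholder, not the UAF clean catalog.
        from siphon.catalog import TDSCatalog

        def walk(cat, depth=0):
            for name, ds in cat.datasets.items():
                # ds.access_urls lists the services (OPeNDAP, WMS, ...) per dataset
                print("  " * depth, name, sorted(ds.access_urls))
            for ref in cat.catalog_refs.values():
                walk(ref.follow(), depth + 1)       # descend into nested catalogs

        walk(TDSCatalog("https://example.org/thredds/catalog/catalog.xml"))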

  12. Collaboration tools and techniques for large model datasets

    USGS Publications Warehouse

    Signell, R.P.; Carniel, S.; Chiggiato, J.; Janekovic, I.; Pullen, J.; Sherwood, C.R.

    2008-01-01

    In MREA and many other marine applications, it is common to have multiple models running with different grids, run by different institutions. Techniques and tools are described for low-bandwidth delivery of data from large multidimensional datasets, such as those from meteorological and oceanographic models, directly into generic analysis and visualization tools. Output is stored using the NetCDF CF Metadata Conventions, and then delivered to collaborators over the web via OPeNDAP. OPeNDAP datasets served by different institutions are then organized via THREDDS catalogs. Tools and procedures are described that enable scientists to explore data on the original model grids using software they are already familiar with. The approach is also bandwidth-efficient, enabling users to extract just the data they require, an important feature for access from ships or remote areas. The entire implementation is simple enough to be handled by modelers working with their webmasters - no advanced programming support is necessary. © 2007 Elsevier B.V. All rights reserved.
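
    As a minimal illustration of the OPeNDAP access pattern described above (not code from the paper), the following Python snippet reads a small subset of a remote model field so that only the requested slab crosses the network; the endpoint URL and variable layout are hypothetical.

        # Minimal sketch (not from the paper): read a small subset of a remote
        # model field over OPeNDAP so only the requested slab crosses the network.
        # The endpoint and variable layout are hypothetical.
        from netCDF4 import Dataset

        url = "http://example.org/thredds/dodsC/roms/his.nc"
        with Dataset(url) as nc:
            temp = nc.variables["temp"]          # e.g. (time, s_rho, eta_rho, xi_rho)
            surface = temp[-1, -1, :, :]         # last time step, top layer only
            print(surface.shape, temp.units)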

  13. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.
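
    To make the Web Map Service piece of this tool chain concrete, here is a hedged Python sketch of a GetMap request using OWSLib; the server URL, layer name and time value are invented, and this is not the Mission Support System code.

        # Hedged sketch of a Web Map Service GetMap request with OWSLib; the
        # server URL, layer name and time value are invented, and this is not
        # the Mission Support System code.
        from owslib.wms import WebMapService

        wms = WebMapService("https://example.org/forecast/wms", version="1.1.1")
        img = wms.getmap(layers=["ecmwf_t2m"],
                         styles=[""],
                         srs="EPSG:4326",
                         bbox=(-30.0, 30.0, 40.0, 75.0),
                         size=(800, 600),
                         format="image/png",
                         time="2013-04-01T12:00:00Z")
        with open("t2m_forecast.png", "wb") as f:
            f.write(img.read())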

  14. Challenges to Standardization: A Case Study Using Coastal and Deep-Ocean Water Level Data

    NASA Astrophysics Data System (ADS)

    Sweeney, A. D.; Stroker, K. J.; Mungov, G.; McLean, S. J.

    2015-12-01

    Sea levels recorded at coastal stations and inferred from deep-ocean pressure observations at the seafloor are submitted for archive in multiple data and metadata formats. These formats include two forms of schema-less XML and a custom binary format accompanied by metadata in a spreadsheet. The authors report on efforts to use existing standards to make these data more discoverable and more useful beyond their initial use in detecting tsunamis. An initial review of data formats for sea level data around the globe revealed heterogeneity in presentation and content. In the absence of a widely-used domain-specific format, we adopted the general model for structuring data and metadata expressed by the Network Common Data Form (netCDF). netCDF has been endorsed by the Open Geospatial Consortium; it is compact compared to an equivalent plain-text representation and provides a standard way of embedding metadata in the same file. We followed the orthogonal time-series profile of the Climate and Forecast discrete sampling geometries as the convention for structuring the data and describing metadata relevant for use. We adhered to the Attribute Convention for Data Discovery for capturing metadata to support user search. Beyond making it possible to structure data and metadata in a standard way, netCDF is supported by multiple software tools that provide programmatic cataloging, access, subsetting, and transformation to other formats. We will describe our successes and failures in adhering to existing standards and provide requirements for either augmenting existing conventions or developing new ones. Some of these enhancements are specific to sea level data, while others are applicable to time-series data in general.
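
    For illustration only, the following Python sketch attaches the kind of ACDD discovery attributes mentioned above to a netCDF file; the attribute values and file name are made up, and the data variables themselves are omitted.

        # Illustrative only: attaching ACDD discovery attributes to a netCDF
        # file with the netCDF4 library; values and file name are made up, and
        # the data variables themselves are omitted.
        from netCDF4 import Dataset

        with Dataset("water_level_station.nc", "w") as nc:
            nc.Conventions = "CF-1.6, ACDD-1.3"
            nc.featureType = "timeSeries"
            nc.title = "Coastal water level, example station"
            nc.summary = "1-minute water level observations used for tsunami detection."
            nc.keywords = "sea level, tsunami, tide gauge"
            nc.geospatial_lat_min = 19.73
            nc.geospatial_lat_max = 19.73
            nc.geospatial_lon_min = -155.06
            nc.geospatial_lon_max = -155.06
            nc.time_coverage_start = "2011-03-01T00:00:00Z"
            nc.time_coverage_end = "2011-03-31T23:59:00Z"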

  15. EnviroAtlas One Meter Resolution Urban Land Cover Data (2008-2012) Web Service

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas ). The EnviroAtlas One Meter-scale Urban Land Cover (MULC) Data were generated individually for each EnviroAtlas community. Source imagery varies by community. Land cover classes mapped also vary by community and include the following: water, impervious surfaces, soil and barren land, trees, shrub, grass and herbaceous, agriculture, orchards, woody wetlands, and emergent wetlands. Accuracy assessments were completed for each community's classification. For specific information about methods and accuracy of each community's land cover classification, consult their individual metadata records: Austin, TX (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B91A32A9D-96F5-4FA0-BC97-73BAD5D1F158%7D); Cleveland, OH (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B82ab1edf-8fc8-4667-9c52-5a5acffffa34%7D); Des Moines, IA (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7BA4152198-978D-4C0B-959F-42EABA9C4E1B%7D); Durham, NC (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B2FF66877-A037-4693-9718-D1870AA3F084%7D); Fresno, CA (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B87041CF3-05BC-43C3-82DA-F066267C9871%7D); Green Bay, WI (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7BD602E7C9-7F53-4C24

  16. Incorporating clinical metadata with digital image features for automated identification of cutaneous melanoma.

    PubMed

    Liu, Z; Sun, J; Smith, M; Smith, L; Warr, R

    2013-11-01

    Computer-assisted diagnosis (CAD) of malignant melanoma (MM) has been advocated to help clinicians achieve a more objective and reliable assessment. However, conventional CAD systems examine only the features extracted from digital photographs of lesions; failure to incorporate patients' personal information constrains their applicability in clinical settings. The objective was to develop a new CAD system that improves the performance of automatic diagnosis of melanoma and, for the first time, incorporates digital features of lesions together with important patient metadata in the learning process. Thirty-two features were extracted from digital photographs to characterize skin lesions. Patients' personal information, such as age, gender and lesion site, and their combinations, was quantified as metadata. The integration of digital features and metadata was realized through an extended Laplacian eigenmap, a dimensionality-reduction method grouping lesions with similar digital features and metadata into the same classes. The diagnosis reached 82.1% sensitivity and 86.1% specificity when only multidimensional digital features were used, but improved to 95.2% sensitivity and 91.0% specificity after metadata were incorporated appropriately. The proposed system achieves a level of sensitivity comparable with experienced dermatologists aided by conventional dermoscopes. This demonstrates the potential of our method for assisting clinicians in diagnosing melanoma, and the benefit it could provide to patients and hospitals by greatly reducing unnecessary excisions of benign naevi. This paper proposes an enhanced CAD system incorporating clinical metadata into the learning process for automatic classification of melanoma. Results demonstrate that the additional metadata and the mechanism to incorporate them are useful for improving CAD of melanoma. © 2013 British Association of Dermatologists.
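
    As a generic stand-in for the approach described above (not the authors' extended Laplacian eigenmap), the following Python sketch embeds image features concatenated with scaled metadata using scikit-learn's spectral embedding and then classifies in the embedded space; all data here are synthetic.

        # Generic stand-in (not the authors' extended Laplacian eigenmap):
        # embed image features concatenated with scaled metadata via spectral
        # embedding, then classify in the embedded space. All data are synthetic.
        import numpy as np
        from sklearn.manifold import SpectralEmbedding
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        image_features = rng.normal(size=(120, 32))    # 32 lesion descriptors
        metadata = rng.normal(size=(120, 3))           # e.g. encoded age, gender, site
        labels = rng.integers(0, 2, size=120)          # 0 = benign, 1 = melanoma (synthetic)

        X = np.hstack([StandardScaler().fit_transform(image_features),
                       StandardScaler().fit_transform(metadata)])
        embedded = SpectralEmbedding(n_components=5, random_state=0).fit_transform(X)
        clf = KNeighborsClassifier(n_neighbors=5).fit(embedded[:80], labels[:80])
        print("held-out accuracy:", clf.score(embedded[80:], labels[80:]))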

  17. Improving Earth Science Metadata: Modernizing ncISO

    NASA Astrophysics Data System (ADS)

    O'Brien, K.; Schweitzer, R.; Neufeld, D.; Burger, E. F.; Signell, R. P.; Arms, S. C.; Wilcox, K.

    2016-12-01

    ncISO is a package of tools developed at NOAA's National Center for Environmental Information (NCEI) that facilitates the generation of ISO 19115-2 metadata from NetCDF data sources. The tool currently exists in two iterations: a command line utility and a web-accessible service within the THREDDS Data Server (TDS). Several projects, including NOAA's Unified Access Framework (UAF), depend upon ncISO to generate the ISO-compliant metadata from their data holdings and use the resulting information to populate discovery tools such as NCEI's ESRI Geoportal and NOAA's data.noaa.gov CKAN system. In addition to generating ISO 19115-2 metadata, the tool calculates a rubric score based on how well the dataset follows the Attribute Conventions for Dataset Discovery (ACDD). The result of this rubric calculation, along with information about what has been included and what is missing, is displayed in an HTML document generated by the ncISO software package. Recently, ncISO has fallen behind in supporting updates to conventions such as the ACDD. With the blessing of the original programmer, NOAA's UAF has been working to modernize the ncISO software base. In addition to upgrading ncISO to utilize version 1.3 of the ACDD, we have been working with partners at Unidata and IOOS to unify the tool's code base. In essence, we are merging the command line capabilities into the same software that will now be used by the TDS service, allowing easier updates when conventions such as ACDD are updated in the future. In this presentation, we will discuss the work the UAF project has done to support updated conventions within ncISO, as well as describe how the updated tool is helping to improve metadata throughout the earth and ocean sciences.
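
    The rubric idea can be illustrated with a simplified Python sketch (not the actual ncISO scoring code): count which ACDD discovery attributes a file's global metadata provides. The attribute list below is abbreviated and the file name is hypothetical.

        # Simplified sketch of the rubric idea (not the ncISO scoring code):
        # report which ACDD discovery attributes a file's global metadata
        # provides. Attribute list abbreviated; the file name is hypothetical.
        from netCDF4 import Dataset

        ACDD_ATTRS = ["title", "summary", "keywords", "id", "naming_authority",
                      "license", "creator_name",
                      "time_coverage_start", "time_coverage_end",
                      "geospatial_lat_min", "geospatial_lat_max",
                      "geospatial_lon_min", "geospatial_lon_max"]

        def acdd_rubric(path):
            with Dataset(path) as nc:
                present = set(nc.ncattrs())
            found = [a for a in ACDD_ATTRS if a in present]
            missing = [a for a in ACDD_ATTRS if a not in present]
            return len(found) / len(ACDD_ATTRS), missing

        score, missing = acdd_rubric("some_dataset.nc")
        print(f"rubric score {score:.0%}; missing: {missing}")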

  18. Playing the Metadata Game: Technologies and Strategies Used by Climate Diagnostics Center for Cataloging and Distributing Climate Data.

    NASA Astrophysics Data System (ADS)

    Schweitzer, R. H.

    2001-05-01

    The Climate Diagnostics Center maintains a collection of gridded climate data primarily for use by local researchers. Because this data is available on fast digital storage and because it has been converted to netCDF using a standard metadata convention (called COARDS), we recognize that this data collection is also useful to the community at large. At CDC we try to use technology and metadata standards to reduce our costs associated with making these data available to the public. The World Wide Web has been an excellent technology platform for meeting that goal. Specifically we have developed Web-based user interfaces that allow users to search, plot and download subsets from the data collection. We have also been exploring use of the Pacific Marine Environmental Laboratory's Live Access Server (LAS) as an engine for this task. This would result in further savings by allowing us to concentrate on customizing the LAS where needed, rather than developing and maintaining our own system. One such customization currently under development is the use of Java Servlets and JavaServer Pages in conjunction with a metadata database to produce a hierarchical user interface to LAS. In addition to these Web-based user interfaces all of our data are available via the Distributed Oceanographic Data System (DODS). This allows other sites using LAS and individuals using DODS-enabled clients to use our data as if it were a local file. All of these technology systems are driven by metadata. When we began to create netCDF files, we collaborated with several other agencies to develop a netCDF convention (COARDS) for metadata. At CDC we have extended that convention to incorporate additional metadata elements to make the netCDF files as self-describing as possible. Part of the local metadata is a set of controlled names for the variable, level in the atmosphere and ocean, statistic and data set for each netCDF file. To allow searching and easy reorganization of these metadata, we loaded the metadata from the netCDF files into a MySQL database. The combination of the MySQL database and the controlled names makes it possible to automate the construction of user interfaces and standard format metadata descriptions, like Federal Geographic Data Committee (FGDC) and Directory Interchange Format (DIF). These standard descriptions also include an association between our controlled names and standard keywords such as those developed by the Global Change Master Directory (GCMD). This talk will give an overview of each of these technology and metadata standards as they apply to work at the Climate Diagnostics Center. The talk will also discuss the pros and cons of each approach and discuss areas for future development.
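
    The metadata-harvesting step described above might look roughly like the following Python sketch, which uses SQLite rather than MySQL so it runs anywhere; the table layout, directory path and the level_desc attribute are assumptions, not CDC's actual schema.

        # Illustrative sketch of the metadata-harvesting step (SQLite instead of
        # MySQL so it runs anywhere); the table layout, directory path and the
        # level_desc attribute are assumptions, not CDC's actual schema.
        import glob
        import sqlite3
        from netCDF4 import Dataset

        db = sqlite3.connect("catalog.db")
        db.execute("""CREATE TABLE IF NOT EXISTS variables
                      (file TEXT, varname TEXT, long_name TEXT, units TEXT, level TEXT)""")
        for path in glob.glob("/data/netcdf/*.nc"):        # hypothetical data directory
            with Dataset(path) as nc:
                for name, var in nc.variables.items():
                    db.execute("INSERT INTO variables VALUES (?,?,?,?,?)",
                               (path, name,
                                getattr(var, "long_name", ""),
                                getattr(var, "units", ""),
                                getattr(var, "level_desc", "")))
        db.commit()
        # The table can now drive generated user interfaces or FGDC/DIF templates.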

  19. EnviroAtlas Estimated Percent Tree Cover Along Walkable Roads Web Service

    EPA Pesticide Factsheets

    This EnviroAtlas dataset estimates tree cover along walkable roads. The road width is estimated for each road and percent tree cover is calculated in a 8.5 meter strip beginning at the estimated road edge. Percent tree cover is calculated for each block between road intersections. Tree cover provides valuable benefits to neighborhood residents and walkers by providing shade, improved aesthetics, and outdoor gathering spaces. For specific information about each community's Estimated Percent Tree Cover Along Walkable Roads layer, consult their individual metadata records: Austin, TX (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B4876FD99-C14A-464A-9E31-5CB5F2225687%7D); Cleveland, OH (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B28e3f937-6f22-45c5-98cf-1707b0fc92df%7D); Des Moines, IA (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B09FE7D60-B636-405C-BB07-68147DFE8CAF%7D); Durham, NC (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7BF341A26B-4972-4C6B-B675-9B5E02F4F25F%7D); Fresno, CA (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7BB71334B9-C53A-4674-A739-1031969E5163%7D); Green Bay, WI (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7BB9AFEBED-9C29-4DB0-8B54-0CAF58BE5A2D%7D); Memphis, TN (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7BBE552E7A-A789-4AA9-ADF9-234109C6517E%7D); Mi

  20. OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4.

    PubMed

    Schober, Daniel; Tudose, Ilinca; Svatek, Vojtech; Boeker, Martin

    2012-09-21

    Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This fact is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. We provide a plugin for the Protégé Ontology editor to allow for easy checks on compliance towards ontology naming conventions and metadata completeness, as well as curation in case of found violations. In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigate the needed capabilities for software tools to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies. Based on these test results, the plugin could be refined, also by the integration of new functionalities. The new Protégé plugin, OntoCheck, allows for ontology tests to be carried out on OWL ontologies. In particular the OntoCheck plugin helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Found test violations can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints like name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts and briefly outline improvements resulting from its application. Further, we discuss OntoChecks capabilities in the context of related tools and highlight potential future expansions. The OntoCheck plugin facilitates labelling error detection and curation, contributing to lexical quality assurance in OWL ontologies. Ultimately, we hope this Protégé extension will ease ontology alignments as well as lexical post-processing of annotated data and hence can increase overall secondary data usage by humans and computers.

  1. Verification of a New NOAA/NSIDC Passive Microwave Sea-Ice Concentration Climate Record

    NASA Technical Reports Server (NTRS)

    Meier, Walter N.; Peng, Ge; Scott, Donna J.; Savoie, Matt H.

    2014-01-01

    A new satellite-based passive microwave sea-ice concentration product developed for the National Oceanic and Atmospheric Administration (NOAA) Climate Data Record (CDR) programme is evaluated via comparison with other passive microwave-derived estimates. The new product leverages two well-established concentration algorithms, known as the NASA Team and Bootstrap, both developed at and produced by the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC). The sea ice estimates compare well with similar GSFC products while also fulfilling all NOAA CDR initial operation capability (IOC) requirements, including (1) a self-describing file format, (2) ISO 19115-2 compliant collection-level metadata, (3) Climate and Forecast (CF) compliant file-level metadata, (4) grid-cell level metadata (data quality fields), (5) fully automated and reproducible processing, and (6) open online access to full documentation with version control, including source code and an algorithm theoretical basis document. The primary limitations of the GSFC products are lack of metadata and use of untracked manual corrections to the output fields. Smaller differences occur from minor variations in processing methods by the National Snow and Ice Data Center (for the CDR fields) and NASA (for the GSFC fields). The CDR concentrations do have some differences from the constituent GSFC concentrations, but trends and variability are not substantially different.

  2. Intelligent Discovery for Learning Objects Using Semantic Web Technologies

    ERIC Educational Resources Information Center

    Hsu, I-Ching

    2012-01-01

    The concept of learning objects has been applied in the e-learning field to promote the accessibility, reusability, and interoperability of learning content. Learning Object Metadata (LOM) was developed to achieve these goals by describing learning objects in order to provide meaningful metadata. Unfortunately, the conventional LOM lacks the…

  3. McIDAS-V: Advanced Visualization for 3D Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Rink, T.; Achtor, T. H.

    2010-12-01

    McIDAS-V is a Java-based, open-source, freely available software package for analysis and visualization of geophysical data. Its advanced capabilities provide very interactive 4-D displays, including 3D volumetric rendering and fast sub-manifold slicing, linked to an abstract mathematical data model with built-in metadata for units, coordinate system transforms and sampling topology. A Jython interface provides user-defined analysis and computation in terms of the internal data model. These powerful capabilities to integrate data, analysis and visualization are being applied to hyper-spectral sounding retrievals, e.g. AIRS and IASI, of moisture and cloud density to interrogate and analyze their 3D structure, as well as to validate against instruments such as CALIPSO, CloudSat and MODIS. The object-oriented framework design allows for specialized extensions for novel displays and new sources of data. The community-defined CF conventions for gridded data are understood by the software, so compliant data can be imported directly into the application. This presentation will show examples of how McIDAS-V is used in 3-dimensional data analysis, display and evaluation.

  4. NetCDF-CF-OPeNDAP: Standards for ocean data interoperability and object lessons for community data standards processes

    USGS Publications Warehouse

    Hankin, Steven C.; Blower, Jon D.; Carval, Thierry; Casey, Kenneth S.; Donlon, Craig; Lauret, Olivier; Loubrieu, Thomas; Srinivasan, Ashwanth; Trinanes, Joaquin; Godøy, Øystein; Mendelssohn, Roy; Signell, Richard P.; de La Beaujardiere, Jeff; Cornillon, Peter; Blanc, Frederique; Rew, Russ; Harlan, Jack; Hall, Julie; Harrison, D.E.; Stammer, Detlef

    2010-01-01

    It is generally recognized that meeting society's emerging environmental science and management needs will require the marine data community to provide simpler, more effective and more interoperable access to its data. There is broad agreement, as well, that data standards are the bedrock upon which interoperability will be built. The path that would bring the marine data community to agree upon and utilize such standards, however, is often elusive. In this paper we examine the trio of standards: 1) netCDF files; 2) the Climate and Forecast (CF) metadata convention; and 3) the OPeNDAP data access protocol. These standards taken together have brought our community a high level of interoperability for "gridded" data such as model outputs, satellite products and climatological analyses, and they are gaining rapid acceptance for ocean observations. We will provide an overview of the scope of the contribution that has been made. We then step back from the information technology considerations to examine the community or "social" process by which the successes were achieved. We contrast this path with the one by which the World Meteorological Organization (WMO) has advanced the Global Telecommunications System (GTS): netCDF/CF/OPeNDAP exemplifies a "bottom up" standards process, whereas GTS is "top down". Both of these standards are tales of success at achieving specific purposes, yet each is hampered by technical limitations. These limitations sometimes lead to controversy over whether alternative technological directions should be pursued. Finally we draw general conclusions regarding the factors that affect the success of a standards development effort - the likelihood that an IT standard will meet its design goals and will achieve community-wide acceptance. We believe that a higher level of thoughtful awareness by the scientists, program managers and technology experts of the vital role of standards and the merits of alternative standards processes can help us as a community to reach our interoperability goals faster.

  5. OntoCheck: verifying ontology naming conventions and metadata completeness in Protégé 4

    PubMed Central

    2012-01-01

    Background Although policy providers have outlined minimal metadata guidelines and naming conventions, ontologies of today still display inter- and intra-ontology heterogeneities in class labelling schemes and metadata completeness. This fact is at least partially due to missing or inappropriate tools. Software support can ease this situation and contribute to overall ontology consistency and quality by helping to enforce such conventions. Objective We provide a plugin for the Protégé Ontology editor to allow for easy checks on compliance towards ontology naming conventions and metadata completeness, as well as curation in case of found violations. Implementation In a requirement analysis, derived from a prior standardization approach carried out within the OBO Foundry, we investigate the needed capabilities for software tools to check, curate and maintain class naming conventions. A Protégé tab plugin was implemented accordingly using the Protégé 4.1 libraries. The plugin was tested on six different ontologies. Based on these test results, the plugin could be refined, also by the integration of new functionalities. Results The new Protégé plugin, OntoCheck, allows for ontology tests to be carried out on OWL ontologies. In particular the OntoCheck plugin helps to clean up an ontology with regard to lexical heterogeneity, i.e. enforcing naming conventions and metadata completeness, meeting most of the requirements outlined for such a tool. Found test violations can be corrected to foster consistency in entity naming and meta-annotation within an artefact. Once specified, check constraints like name patterns can be stored and exchanged for later re-use. Here we describe a first version of the software, illustrate its capabilities and use within running ontology development efforts and briefly outline improvements resulting from its application. Further, we discuss OntoChecks capabilities in the context of related tools and highlight potential future expansions. Conclusions The OntoCheck plugin facilitates labelling error detection and curation, contributing to lexical quality assurance in OWL ontologies. Ultimately, we hope this Protégé extension will ease ontology alignments as well as lexical post-processing of annotated data and hence can increase overall secondary data usage by humans and computers. PMID:23046606

  6. Unleashing Geophysics Data with Modern Formats and Services

    NASA Astrophysics Data System (ADS)

    Ip, Alex; Brodie, Ross C.; Druken, Kelsey; Bastrakova, Irina; Evans, Ben; Kemp, Carina; Richardson, Murray; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    Geoscience Australia (GA) is the national steward of large volumes of geophysical data extending over the entire Australasian region and spanning many decades. The volume and variety of data which must be managed, coupled with the increasing need to support machine-to-machine data access, mean that the old "click-and-ship" model delivering data as downloadable files for local analysis is rapidly becoming unviable - a "big data" problem not unique to geophysics. The Australian Government, through the Research Data Services (RDS) Project, recently funded the Australian National Computational Infrastructure (NCI) to organize a wide range of Earth Systems data from diverse collections including geoscience, geophysics, environment, climate, weather, and water resources onto a single High Performance Data (HPD) Node. This platform, which now contains over 10 petabytes of data, is called the National Environmental Research Data Interoperability Platform (NERDIP), and is designed to facilitate broad user access, maximise reuse, and enable integration. GA has contributed several hundred terabytes of geophysical data to the NERDIP. Historically, geophysical datasets have been stored in a range of formats, with metadata of varying quality and accessibility, and without standardised vocabularies. This has made it extremely difficult to aggregate original data from multiple surveys (particularly un-gridded geophysics point/line data) into standard formats suited to High Performance Computing (HPC) environments. To address this, it was decided to use the NERDIP-preferred Hierarchical Data Format (HDF) 5, which is a proven, standard, open, self-describing and high-performance format supported by extensive software tools, libraries and data services. The Network Common Data Form (NetCDF) 4 API facilitates the use of data in HDF5, whilst the NetCDF Climate & Forecasting conventions (NetCDF-CF) further constrain NetCDF4/HDF5 data so as to provide greater inherent interoperability. The first geophysical data collection selected for transformation by GA was Airborne ElectroMagnetics (AEM) data which was held in proprietary-format files, with associated ISO 19115 metadata held in a separate relational database. Existing NetCDF-CF metadata profiles were enhanced to cover AEM and other geophysical data types, and work is underway to formalise the new geophysics vocabulary as a proposed extension to the Climate & Forecasting conventions. The richness and flexibility of HDF5's internal indexing mechanisms has allowed lossless restructuring of the AEM data for efficient storage, subsetting and access via either the NetCDF4/HDF5 APIs or Open-source Project for a Network Data Access Protocol (OPeNDAP) data services. This approach not only supports large-scale HPC processing, but also interactive access to a wide range of geophysical data in user-friendly environments such as iPython notebooks and more sophisticated cloud-enabled portals such as the Virtual Geophysics Laboratory (VGL). As multidimensional AEM datasets are relatively complex compared to other geophysical data types, the general approach employed in this project for modernizing AEM data is likely to be applicable to other geophysics data types. When combined with the use of standards-based data services and APIs, a coordinated, systematic modernisation will result in vastly improved accessibility to, and usability of, geophysical data in a wide range of computational environments both within and beyond the geophysics community.

  7. A NEW METHOD OF SWEAT TESTING: THE CF QUANTUM® SWEAT TEST

    PubMed Central

    Rock, Michael J.; Makholm, Linda; Eickhoff, Jens

    2015-01-01

    Background Conventional methods of sweat testing are time consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test. This study tests the diagnostic accuracy and analytic validity of the CFQT. Methods Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and Bland Altman plot. Sensitivity and specificity were calculated as well as the means and coefficient of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference of the CFQT and conventional sweat testing. Results The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97–0.99). The sensitivity and specificity of the CFQT in diagnosing CF was 100% (95% confidence interval: 94–100%) and 96% (95% confidence interval: 89–99%), respectively. In one center in this three center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher in the CFQT method (16.5%) compared to conventional sweat testing (3.8%)(p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. Conclusions The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. PMID:24862724

  8. Maternal cfDNA screening for Down syndrome--a cost sensitivity analysis.

    PubMed

    Cuckle, Howard; Benn, Peter; Pergament, Eugene

    2013-07-01

    This study aimed to determine the principal factors contributing to the cost of avoiding a birth with Down syndrome by using cell-free DNA (cfDNA) to replace conventional screening. A range of unit costs were assigned to each item in the screening process. Detection rates were estimated by meta-analysis and modeling. The marginal cost associated with the detection of additional cases using cfDNA was estimated from the difference in average costs divided by the difference in detection. The main factor was the unit cost of cfDNA testing. For example, replacing a combined test costing $150 with 3% false-positive rate and invasive testing at $1000, by cfDNA tests at $2000, $1500, $1000, and $500, the marginal cost is $8.0, $5.8, $3.6, and $1.4m, respectively. Costs were lower when replacing a quadruple test and higher for a 5% false-positive rate, but the relative importance of cfDNA unit cost was unchanged. A contingent policy whereby 10% to 20% women were selected for cfDNA testing by conventional screening was considerably more cost-efficient. Costs were sensitive to cfDNA uptake. Universal cfDNA screening for Down syndrome will only become affordable by public health purchasers if costs fall substantially. Until this happens, the contingent use of cfDNA is recommended. © 2013 John Wiley & Sons, Ltd.
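
    In symbols (notation ours, not the authors'), the marginal cost defined in the abstract is the ratio of the change in average cost to the change in detection:

        \[
          \text{marginal cost per additional case detected} =
          \frac{\bar{C}_{\mathrm{cfDNA}} - \bar{C}_{\mathrm{conv}}}
               {D_{\mathrm{cfDNA}} - D_{\mathrm{conv}}},
        \]

    where $\bar{C}$ denotes the average screening cost per pregnancy and $D$ the number of Down syndrome cases detected under each policy.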

  9. Urinary cell-free DNA is a versatile analyte for monitoring infections of the urinary tract.

    PubMed

    Burnham, Philip; Dadhania, Darshana; Heyang, Michael; Chen, Fanny; Westblade, Lars F; Suthanthiran, Manikkam; Lee, John Richard; De Vlaminck, Iwijn

    2018-06-20

    Urinary tract infections are one of the most common infections in humans. Here we tested the utility of urinary cell-free DNA (cfDNA) to comprehensively monitor host and pathogen dynamics in bacterial and viral urinary tract infections. We isolated cfDNA from 141 urine samples from a cohort of 82 kidney transplant recipients and performed next-generation sequencing. We found that urinary cfDNA is highly informative about bacterial and viral composition of the microbiome, antimicrobial susceptibility, bacterial growth dynamics, kidney allograft injury, and host response to infection. These different layers of information are accessible from a single assay and individually agree with corresponding clinical tests based on quantitative PCR, conventional bacterial culture, and urinalysis. In addition, cfDNA reveals the frequent occurrence of pathologies that remain undiagnosed with conventional diagnostic protocols. Our work identifies urinary cfDNA as a highly versatile analyte to monitor infections of the urinary tract.

  10. Extending netCDF and CF conventions to support enhanced Earth Observation Ontology services: the Prod-Trees project

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Valentin, Bernard; Koubarakis, Manolis; Nativi, Stefano

    2013-04-01

    Access to Earth Observation products remains far from straightforward for end users in most domains. Semantically-enabled search engines, generally accessible through Web portals, have been developed. They allow searching for products by selecting application-specific terms and specifying basic geographical and temporal filtering criteria. Although this mostly suits the needs of the general public, the scientific communities require more advanced and controlled means to find products. Ranges of validity, traceability (e.g. origin, applied algorithms), accuracy, and uncertainty are concepts that are typically taken into account in research activities. The Prod-Trees (Enriching Earth Observation Ontology Services using Product Trees) project will enhance the CF-netCDF product format and vocabulary to allow storing metadata that better describe the products, and in particular EO products. The project will bring a standardized solution that permits annotating EO products in such a manner that official and third-party software libraries and tools will be able to search for products using advanced tags and controlled parameter names. Annotated EO products will be automatically supported by all compatible software. Because the entire product information will come from the annotations and the standards, there will be no need to integrate extra components and data structures that have not been standardized. In the course of the project, the most important and popular open-source software libraries and tools will be extended to support the proposed extensions of CF-netCDF. The results will be provided back to the respective owners and maintainers to ensure the best dissemination and adoption of the extended format. The project, funded by ESA, started in December 2012 and will end in May 2014. It is coordinated by Space Applications Services, and the Consortium includes CNR-IIA and the National and Kapodistrian University of Athens. The first activities included the elicitation of user requirements in order to identify gaps in the current CF and netCDF specifications with respect to extended support for the discovery of EO data. To this end, a Validation Group has been established, including members from organizations actively using the netCDF and CF standards. A questionnaire has been prepared and submitted to the Validation Group; it was designed to be completed online but also to guide interviews. The presentation will focus on the project objectives, the first achievements (with particular reference to the results of the requirements analysis), and future plans.

  11. Efficacy and toxicity profiles of two chemoradiotherapies for stage II laryngeal cancer - a comparison between late course accelerated hyperfractionation (LCAHF) and conventional fractionation (CF).

    PubMed

    Okazaki, Eiichiro; Matsushita, Naoki; Tashiro, Mari; Shimatani, Yasuhiko; Ishii, Kentaro; Hosono, Masako; Oishi, Masahiro; Teranishi, Yuichi; Iguchi, Hiroyoshi; Miki, Yukio

    2017-08-01

    To evaluate the treatment results of late course accelerated hyperfractionation (LCAHF) compared with conventional fractionation (CF) for stage II laryngeal cancer. Fifty-nine consecutive patients treated for stage II laryngeal cancer were retrospectively reviewed. Thirty-two patients underwent LCAHF, twice-daily fractions during the latter half with a total dose of 69 Gy. Twenty-seven patients received CF of 70 Gy. The local control rates (LCRs), overall survival (OS), and disease-specific survival (DSS) at 5 years were 80.6%, 74.0%, and 90.4%, respectively, after LCAHF and 64.7%, 68.2%, and 90.5%, respectively, after CF. There were no significant differences in LCR, OS, and DSS (p = .11, 0.68, and 0.69, respectively). In a small number of patients with supraglottic cancer, LCAHF was associated with a significantly higher LCR at 5 years compared with CF (100% vs. 41.7%; p = .02). This is the first report that compared the results of LCAHF and CF for stage II laryngeal cancer. We could not find significant differences in LCR, DSS, and OS rates between LCAHF and CF groups. Although in a small number of patients with supraglottic cancer, LCAHF may improve the LCR compared with CF.

  12. JADDS - towards a tailored global atmospheric composition data service for CAMS forecasts and reanalysis

    NASA Astrophysics Data System (ADS)

    Stein, Olaf; Schultz, Martin G.; Rambadt, Michael; Saini, Rajveer; Hoffmann, Lars; Mallmann, Daniel

    2017-04-01

    Global model data of atmospheric composition produced by the Copernicus Atmospheric Monitoring Service (CAMS) has been collected since 2010 at FZ Jülich and serves as boundary conditions for Regional Air Quality (RAQ) modellers world-wide. RAQ models need time-resolved meteorological as well as chemical lateral boundary conditions for their individual model domains. While the meteorological data usually come from well-established global forecast systems, the chemical boundary conditions are not always well defined. In the past, many models used 'climatic' boundary conditions for the tracer concentrations, which can lead to significant concentration biases, particularly for tracers with longer lifetimes, which can be transported over long distances (e.g. over the whole northern hemisphere) by the mean wind. The Copernicus approach utilizes extensive near-realtime data assimilation of atmospheric composition data observed from space, which gives additional reliability to the global modelling data and is well received by the RAQ communities. An existing Web Coverage Service (WCS) for sharing these individually tailored model results is currently being re-engineered to make use of a modern, scalable database technology in order to improve performance, enhance flexibility, and allow the operation of catalogue services. The new Jülich Atmospheric Data Distributions Server (JADDS) adheres to the Web Coverage Service WCS 2.0 standard as defined by the Open Geospatial Consortium OGC. This enables user groups to flexibly define the datasets they need by selecting a subset of chemical species or restricting geographical boundaries or the length of the time series. The data is made available in the form of different catalogues stored locally on our server. In addition, the Jülich OWS Interface (JOIN) provides interoperable web services allowing for easy download and visualization of datasets delivered from WCS servers via the internet. We will present the prototype JADDS server and address the major issues identified when relocating large four-dimensional datasets into a RASDAMAN raster array database. So far, RASDAMAN support for data in netCDF format is limited with respect to metadata related to variables and axes. For community-wide accepted solutions, selected data coverages shall result in downloadable netCDF files including metadata complying with the netCDF CF Metadata Conventions standard (http://cfconventions.org/). This can be achieved by adding custom metadata elements for RASDAMAN bands (model levels) on data ingestion. Furthermore, an optimization strategy for ingestion of several TB of 4D model output data will be outlined.
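
    For orientation, a hedged example of a WCS 2.0 GetCoverage request of the general kind such a server handles is sketched below in Python; the endpoint, coverage identifier and axis labels are hypothetical and differ between deployments.

        # Hedged sketch of a WCS 2.0 GetCoverage request of the general kind such
        # a server handles; the endpoint, coverage identifier and axis labels are
        # hypothetical and differ between deployments.
        import requests

        params = {
            "service": "WCS",
            "version": "2.0.1",
            "request": "GetCoverage",
            "coverageId": "CAMS_O3_surface",              # hypothetical coverage
            "subset": ["Lat(30,70)", "Long(-20,40)",
                       'ansi("2017-01-01T00:00:00Z","2017-01-07T00:00:00Z")'],
            "format": "application/netcdf",
        }
        r = requests.get("https://example.org/rasdaman/ows", params=params, timeout=120)
        with open("cams_o3_subset.nc", "wb") as f:
            f.write(r.content)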

  13. Pharmacokinetics and Pharmacodynamics of Aerosolized Antibacterial Agents in Chronically Infected Cystic Fibrosis Patients

    PubMed Central

    2014-01-01

    SUMMARY Bacteria adapt to growth in lungs of patients with cystic fibrosis (CF) by selection of heterogeneously resistant variants that are not detected by conventional susceptibility testing but are selected for rapidly during antibacterial treatment. Therefore, total bacterial counts and antibiotic susceptibilities are misleading indicators of infection and are not helpful as guides for therapy decisions or efficacy endpoints. High drug concentrations delivered by aerosol may maximize efficacy, as decreased drug susceptibilities of the pathogens are compensated for by high target site concentrations. However, reductions of the bacterial load in sputum and improvements in lung function were within the same ranges following aerosolized and conventional therapies. Furthermore, the use of conventional pharmacokinetic/pharmacodynamic (PK/PD) surrogates correlating pharmacokinetics in serum with clinical cure and presumed or proven eradication of the pathogen as a basis for PK/PD investigations in CF patients is irrelevant, as minimization of systemic exposure is one of the main objectives of aerosolized therapy; in addition, bacterial pathogens cannot be eradicated, and chronic infection cannot be cured. Consequently, conventional PK/PD surrogates are not applicable to CF patients. It is nonetheless obvious that systemic exposure of patients, with all its sequelae, is minimized and that the burden of oral treatment for CF patients suffering from chronic infections is reduced. PMID:25278574

  14. Comparison of first-tier cell-free DNA screening for common aneuploidies with conventional publically funded screening.

    PubMed

    Langlois, Sylvie; Johnson, JoAnn; Audibert, François; Gekas, Jean; Forest, Jean-Claude; Caron, André; Harrington, Keli; Pastuck, Melanie; Meddour, Hasna; Tétu, Amélie; Little, Julian; Rousseau, François

    2017-12-01

    This study evaluates the impact of offering cell-free DNA (cfDNA) screening as a first-tier test for trisomies 21 and 18. This is a prospective study of pregnant women undergoing conventional prenatal screening who were offered cfDNA screening in the first trimester with clinical outcomes obtained on all pregnancies. A total of 1198 pregnant women were recruited. The detection rate of trisomy 21 with standard screening was 83% with a false positive rate (FPR) of 5.5% compared with 100% detection and 0% FPR for cfDNA screening. The FPR of cfDNA screening for trisomies 18 and 13 was 0.09% for each. Two percent of women underwent an invasive diagnostic procedure based on screening or ultrasound findings; without the cfDNA screening, it could have been as high as 6.8%. Amongst the 640 women with negative cfDNA results and a nuchal translucency (NT) ultrasound, only 3 had an NT greater or equal to 3.5 mm: one had a normal outcome and two lost their pregnancy before 20 weeks. cfDNA screening has the potential to be a highly effective first-tier screening approach leading to a significant reduction of invasive diagnostic procedures. For women with a negative cfDNA screening result, NT measurement has limited clinical utility. © 2017 John Wiley & Sons, Ltd.

  15. A new method of sweat testing: the CF Quantum®sweat test.

    PubMed

    Rock, Michael J; Makholm, Linda; Eickhoff, Jens

    2014-09-01

    Conventional methods of sweat testing are time consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test. This study tests the diagnostic accuracy and analytic validity of the CFQT. Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and Bland-Altman plot. Sensitivity and specificity were calculated as well as the means and coefficient of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference of the CFQT and conventional sweat testing. The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97-0.99). The sensitivity and specificity of the CFQT in diagnosing CF was 100% (95% confidence interval: 94-100%) and 96% (95% confidence interval: 89-99%), respectively. In one center in this three center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher in the CFQT method (16.5%) compared to conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. Copyright © 2014 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  16. Comparison of conventional filtering and independent component analysis for artifact reduction in simultaneous gastric EMG and magnetogastrography from porcines.

    PubMed

    Irimia, Andrei; Richards, William O; Bradshaw, L Alan

    2009-11-01

    In this study, we perform a comparative study of independent component analysis (ICA) and conventional filtering (CF) for the purpose of artifact reduction from simultaneous gastric EMG and magnetogastrography (MGG). EMG/MGG data were acquired from ten anesthetized pigs by obtaining simultaneous recordings using serosal electrodes (EMG) as well as with a superconducting quantum interference device biomagnetometer (MGG). The analysis of MGG waveforms using ICA and CF indicates that ICA is superior to the CF method in its ability to extract respiration and cardiac artifacts from MGG recordings. A signal frequency analysis of ICA- and CF-processed data was also undertaken using waterfall plots, and it was determined that the two methods produce qualitatively comparable results. Through the use of simultaneous EMG/MGG, we were able to demonstrate the accuracy and trustworthiness of our results by comparison and cross-validation within the framework of a porcine model.
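
    A generic sketch of ICA-based artifact reduction (not the authors' pipeline) is given below using scikit-learn's FastICA on synthetic multichannel data: decompose the recordings, zero out components judged to be artifacts, and reconstruct the channels.

        # Generic sketch of ICA-based artifact reduction (not the authors'
        # pipeline): decompose multichannel recordings, zero out components
        # judged to be artifacts, and reconstruct the channels. Data are synthetic.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(1)
        recordings = rng.normal(size=(3000, 8))        # stand-in for 8 MGG channels

        ica = FastICA(n_components=8, random_state=0)
        sources = ica.fit_transform(recordings)        # estimated independent components

        # Suppose inspection of waveforms/spectra flags components 0 and 3 as
        # cardiac and respiration artifacts (indices are illustrative only).
        sources[:, [0, 3]] = 0.0
        cleaned = ica.inverse_transform(sources)       # back to channel space
        print(cleaned.shape)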

  17. A randomized controlled trial of conventional fraction and late course accelerated hyperfraction three-dimensional conformal radiotherapy for esophageal cancer.

    PubMed

    Wang, Jian-Hua; Lu, Xu-Jing; Zhou, Jian; Wang, Feng

    2012-01-01

    We compared the curative and side-effects in esophageal carcinoma treated by conventional fraction (CF) and late course accelerated hyperfraction (LCAF) three-dimensional conformal radiotherapy. Ninety-eight patients were randomly assigned to two different radiotherapy model groups. Fifty patients were treated using CF three-dimensional conformal radiotherapy at a total dose of 60-68 Gy; 2 Gy/F; 5 fractions/week (median 64 Gy), 48 patients were treated with LCAF (First CF-treated at the dose 40 Gy. Later, LCAF-treated 1.5 Gy/F; 2 fractions/day; 21-27 Gy; a total dose of 61-67 Gy; median 64 Gy). The data showed that the 1-, 2- and 3-year-survival rates in LCAF group were 79.2, 56.3, and 43.8%, compared to 74, 54, and 36% in CF group (P = 0.476). The 1-, 2- and 3-year-local control rates in LCAF group were 81.3, 62.5, and 50%, compared to 78, 58, and 42% in CF group (P = 0.454). In CF group, the incidence of radiation-induced esophagitis was lower than that in LCAF group (72 vs. 93.8%; P = 0.008) and there was no significant difference between rates of radiation-induced pneumonitis in CF and LCAF groups (10 vs. 6.25%; P = 0.498). It was concluded that the 1-, 2- and 3-year-local control and survival rates of esophageal carcinoma patients treated with LCAF were slightly better than CF radiotherapy; however, the radiation side-effects in LCAF group were greater than those in CF group.

  18. Enhancement of ionization efficiency of mass spectrometric analysis from non-electrospray ionization friendly solvents with conventional and novel ionization techniques.

    PubMed

    Jiang, Ping; Lucy, Charles A

    2015-10-15

    Electrospray ionization mass spectrometry (ESI-MS) has significantly impacted the analysis of complex biological and petroleum samples. However, ESI-MS has limited ionization efficiency for samples in low-dielectric, low-polarity solvents. Addition of a make-up solvent through a T union, or of an electrospray solvent through continuous flow extractive desorption electrospray ionization (CF-EDESI), enables ionization of analytes in non-ESI-friendly solvents. A conventional make-up solvent addition setup was used, and a CF-EDESI source was built, for ionization of nitrogen-containing standards in hexane or hexane/isopropanol. Factors affecting the performance of both sources were investigated and optimized. Both make-up solvent addition and CF-EDESI improve the ionization efficiency for heteroatom compounds in non-ESI-friendly solvents. Make-up solvent addition provides higher ionization efficiency than CF-EDESI. Neither make-up solvent addition nor CF-EDESI eliminates ionization suppression of nitrogen-containing compounds caused by compounds of the same chemical class. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Developing Cyberinfrastructure Tools and Services for Metadata Quality Evaluation

    NASA Astrophysics Data System (ADS)

    Mecum, B.; Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Metadata and data quality are at the core of reusable and reproducible science. While great progress has been made over the years, much of the metadata collected only addresses data discovery, covering concepts such as titles and keywords. Improving metadata beyond the discoverability plateau means documenting detailed concepts within the data such as sampling protocols, instrumentation used, and variables measured. Given that metadata commonly do not describe their data at this level, how might we improve the state of things? Giving scientists and data managers easy-to-use metadata quality evaluation tools that draw on community-driven recommendations is the key to producing high-quality metadata. To achieve this goal, we created a set of cyberinfrastructure tools and services that integrate with existing metadata and data curation workflows, which can be used to improve metadata and data quality across the sciences. These tools work across metadata dialects (e.g., ISO 19115, FGDC, EML) and can be used to assess aspects of quality beyond what is internal to the metadata, such as the congruence between the metadata and the data it describes. The system makes use of a user-friendly mechanism for expressing a suite of checks as code in popular data science programming languages such as Python and R. This reduces the burden on scientists and data managers to learn yet another language. We demonstrated these services and tools in three ways. First, we evaluated a large corpus of datasets in the DataONE federation of data repositories against a metadata recommendation modeled after existing recommendations such as the LTER best practices and the Attribute Convention for Dataset Discovery (ACDD). Second, we showed how this service can be used to display metadata and data quality information to data producers during the data submission and metadata creation process, and to data consumers through data catalog search and access tools. Third, we showed how the centrally deployed DataONE quality service can achieve major efficiency gains by allowing member repositories to customize and use recommendations that fit their specific needs without having to create de novo infrastructure at their site.
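
    A minimal sketch, assuming a simple dictionary-based record structure, of how such checks might be expressed as code; the check names, record fields, and scoring below are illustrative and are not the DataONE quality service's actual API.

      # Illustrative metadata-quality checks in the spirit described above.
      # The record structure and check definitions are assumptions for this sketch.
      from dataclasses import dataclass
      from typing import Callable, Dict, List

      @dataclass
      class CheckResult:
          name: str
          passed: bool
          message: str

      def has_title(record: Dict) -> CheckResult:
          ok = bool(record.get("title", "").strip())
          return CheckResult("has_title", ok, "title present" if ok else "title missing")

      def abstract_is_descriptive(record: Dict, min_words: int = 40) -> CheckResult:
          words = len(record.get("abstract", "").split())
          ok = words >= min_words
          return CheckResult("abstract_is_descriptive", ok,
                             f"abstract has {words} words (minimum {min_words})")

      def run_suite(record: Dict, checks: List[Callable[[Dict], CheckResult]]) -> float:
          results = [check(record) for check in checks]
          for r in results:
              print(f"[{'PASS' if r.passed else 'FAIL'}] {r.name}: {r.message}")
          return sum(r.passed for r in results) / len(results)   # fraction passed

      record = {"title": "Sea surface temperature, 2010-2015",
                "abstract": "Short abstract."}
      score = run_suite(record, [has_title, abstract_is_descriptive])
      print(f"quality score: {score:.2f}")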

  20. NetCDF-U - Uncertainty conventions for netCDF datasets

    NASA Astrophysics Data System (ADS)

    Bigagli, Lorenzo; Nativi, Stefano; Domenico, Ben

    2013-04-01

    To facilitate the automated processing of uncertain data (e.g. uncertainty propagation in modeling applications), we have proposed a set of conventions for expressing uncertainty information within the netCDF data model and format: the NetCDF Uncertainty Conventions (NetCDF-U). From a theoretical perspective, it can be said that no dataset is a perfect representation of the reality it purports to represent. Inevitably, errors arise from the observation process, including the sensor system and subsequent processing, differences in scales of phenomena and the spatial support of the observation mechanism, and lack of knowledge about the detailed conversion between the measured quantity and the target variable. This means that, in principle, all data should be treated as uncertain. The most natural representation of an uncertain quantity is in terms of random variables, with a probabilistic approach. However, it must be acknowledged that almost all existing data resources are not treated in this way. Most datasets come simply as a series of values, often without any uncertainty information. If uncertainty information is present, then it is typically within the metadata, as a data quality element. This is typically a global (dataset-wide) representation of uncertainty, often derived through some form of validation process. Typically, it is a statistical measure of spread, for example the standard deviation of the residuals. The introduction of a mechanism by which such descriptions of uncertainty can be integrated into existing geospatial applications is considered a practical step towards a more accurate modeling of our uncertain understanding of any natural process. Given the generality and flexibility of the netCDF data model, conventions on naming, syntax, and semantics have been adopted by several communities of practice, as a means of improving data interoperability. Some of the existing conventions include provisions on uncertain elements and concepts, but, to our knowledge, no general convention on the encoding of uncertainty has been proposed to date. In particular, the netCDF Climate and Forecast Conventions (NetCDF-CF), a de-facto standard for a large amount of data in Fluid Earth Sciences, mention the issue and provide limited support for uncertainty representation. NetCDF-U is designed to be fully compatible with NetCDF-CF, where possible adopting the same mechanisms (e.g. using the same attribute names with compatible semantics). The rationale for this is that a probabilistic description of scientific quantities is a crosscutting aspect, which may be modularized (note that a netCDF dataset may be compliant with more than one convention). The scope of NetCDF-U is to extend and qualify the netCDF classic data model (also known as netCDF3), to capture the uncertainty related to geospatial information encoded in that format. In the future, a netCDF4 approach for uncertainty encoding will be investigated. The NetCDF-U Conventions have the following rationale: • Compatibility with netCDF-CF Conventions 1.5. • Human-readability of the structure of conforming datasets. • Minimal difference between certain/agnostic and uncertain representations of data (e.g. with respect to dataset structure). NetCDF-U is based on a generic mechanism for annotating netCDF data variables with probability theory semantics. The Uncertainty Markup Language (UncertML) 2.0 is used as a controlled conceptual model and vocabulary for NetCDF-U annotations. The proposed mechanism anticipates generalized support for semantic annotations in netCDF. NetCDF-U defines syntactical conventions for encoding samples, summary statistics, and distributions, along with mechanisms for expressing dependency relationships among variables. The conventions were accepted as an Open Geospatial Consortium (OGC) Discussion Paper (OGC 11-163); related discussions are conducted on a public forum hosted by the OGC. NetCDF-U may have implications for future work directed at communicating geospatial data provenance and uncertainty in contexts other than netCDF. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248488.
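
    The general shape of such an annotation can be sketched with the netCDF4-python library: a data variable is linked to a standard-deviation variable, which in turn carries an UncertML-style URI identifying the statistic. The attribute names used here (the CF ancillary_variables attribute and a ref attribute holding the URI) are illustrative assumptions, not a verbatim rendering of the NetCDF-U specification.

      # Sketch of annotating a netCDF variable with uncertainty information in the
      # general style described above; attribute names are illustrative, not a
      # verbatim rendering of the NetCDF-U convention.
      import numpy as np
      from netCDF4 import Dataset

      with Dataset("example_uncertain.nc", "w") as nc:
          nc.createDimension("time", 4)

          temp = nc.createVariable("temperature", "f4", ("time",))
          temp.units = "K"
          temp.long_name = "near-surface air temperature (mean)"
          # CF-style link from the data variable to its uncertainty variable
          temp.ancillary_variables = "temperature_sd"
          temp[:] = np.array([281.2, 282.0, 280.7, 283.1], dtype="f4")

          sd = nc.createVariable("temperature_sd", "f4", ("time",))
          sd.units = "K"
          sd.long_name = "standard deviation of near-surface air temperature"
          # Hypothetical UncertML-style semantic annotation of the statistic
          sd.ref = "http://www.uncertml.org/statistics/standard-deviation"
          sd[:] = np.array([0.4, 0.3, 0.5, 0.4], dtype="f4")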

  1. Mutation-based detection and monitoring of cell-free tumor DNA in peripheral blood of cancer patients.

    PubMed

    Benesova, L; Belsanova, B; Suchanek, S; Kopeckova, M; Minarikova, P; Lipska, L; Levy, M; Visokai, V; Zavoral, M; Minarik, M

    2013-02-15

    Prognosis of solid cancers is generally more favorable if the disease is treated early and efficiently. A key to long cancer survival lies in radical surgical therapy directed at the primary tumor, followed by early detection of possible progression, with swift application of subsequent therapeutic intervention reducing the risk of disease generalization. The conventional follow-up care is based on regular observation of tumor markers in combination with computed tomography/endoscopic ultrasound/magnetic resonance/positron emission tomography imaging to monitor potential tumor progression. A recent development in methodologies allowing screening for the presence of cell-free DNA (cfDNA) brings a new viable tool to the early detection and management of major cancers. It is believed that cfDNA is released from tumors primarily due to necrotization, whereas the origin of nontumorous cfDNA is mostly apoptotic. The process of cfDNA detection starts with proper collection and treatment of blood and isolation and storage of blood plasma. The next important steps include cfDNA extraction from plasma and its detection and/or quantification. To distinguish tumor cfDNA from nontumorous cfDNA, specific somatic DNA mutations, previously localized in the primary tumor tissue, are identified in the extracted cfDNA. Apart from conventional mutation detection approaches, several dedicated techniques have been presented to detect low levels of cfDNA in an excess of nontumorous (nonmutated) DNA, including real-time polymerase chain reaction (PCR), "BEAMing" (beads, emulsion, amplification, and magnetics), and denaturing capillary electrophoresis. Techniques to facilitate mutant detection, such as mutant-enriched PCR and COLD-PCR (coamplification at lower denaturation temperature PCR), are also applicable. Finally, a number of newly developed miniaturized approaches, such as single-molecule sequencing, are promising for the future. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. USGIN ISO metadata profile

    NASA Astrophysics Data System (ADS)

    Richard, S. M.

    2011-12-01

    The USGIN project has drafted and is using a specification for use of ISO 19115/19119/19139 metadata, recommendations for simple metadata content, and a proposal for a URI scheme to identify resources using resolvable http URIs (see http://lab.usgin.org/usgin-profiles). The principal target use case is a catalog in which resources can be registered and described by data providers for discovery by users. We are currently using the ESRI Geoportal (Open Source), with configuration files for the USGIN profile. The metadata offered by the catalog must provide sufficient content to guide search engines to locate requested resources, to describe the resource content, provenance, and quality so users can determine if the resource will serve for intended usage, and finally to enable human users and software clients to obtain or access the resource. In order to achieve an operational federated catalog system, provisions in the ISO specification must be restricted and usage clarified to reduce the heterogeneity of 'standard' metadata and service implementations such that a single client can search against different catalogs, and the metadata returned by catalogs can be parsed reliably to locate required information. Usage of the complex ISO 19139 XML schema allows for a great deal of structured metadata content, but the heterogeneity in approaches to content encoding has hampered development of sophisticated client software that can take advantage of the rich metadata; the lack of such clients in turn reduces motivation for metadata producers to produce content-rich metadata. If the only significant use of the detailed, structured metadata is to format into text for people to read, then the detailed information could be put in free text elements and be just as useful. In order for complex metadata encoding and content to be useful, there must be clear and unambiguous conventions on the encoding that are utilized by the community that wishes to take advantage of advanced metadata content. The use cases for the detailed content must be well understood, and the degree of metadata complexity should be determined by requirements for those use cases. The ISO standard provides sufficient flexibility that relatively simple metadata records can be created that will serve for text-indexed search/discovery, resource evaluation by a user reading text content from the metadata, and access to the resource via http, ftp, or well-known service protocols (e.g. Thredds; OGC WMS, WFS, WCS).
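
    To illustrate why pinning down the encoding matters for client software, the sketch below extracts the title and abstract from an ISO 19139 record with fixed XPath expressions; the namespaces are the usual ISO/TC 211 ones, but the sample record and element paths reflect one common encoding choice assumed for this example.

      # Sketch: extract title and abstract from an ISO 19139 record with fixed XPaths.
      # A client can only rely on such paths if the profile pins down the encoding.
      import xml.etree.ElementTree as ET

      NS = {"gmd": "http://www.isotc211.org/2005/gmd",
            "gco": "http://www.isotc211.org/2005/gco"}

      sample = """<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                                   xmlns:gco="http://www.isotc211.org/2005/gco">
        <gmd:identificationInfo><gmd:MD_DataIdentification>
          <gmd:citation><gmd:CI_Citation>
            <gmd:title><gco:CharacterString>Geothermal well temperatures</gco:CharacterString></gmd:title>
          </gmd:CI_Citation></gmd:citation>
          <gmd:abstract><gco:CharacterString>Bottom-hole temperatures for example wells.</gco:CharacterString></gmd:abstract>
        </gmd:MD_DataIdentification></gmd:identificationInfo>
      </gmd:MD_Metadata>"""

      root = ET.fromstring(sample)
      title = root.find("gmd:identificationInfo/gmd:MD_DataIdentification/"
                        "gmd:citation/gmd:CI_Citation/gmd:title/gco:CharacterString", NS)
      abstract = root.find("gmd:identificationInfo/gmd:MD_DataIdentification/"
                           "gmd:abstract/gco:CharacterString", NS)
      print("title:   ", title.text)
      print("abstract:", abstract.text)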

  3. ODISEES: A New Paradigm in Data Access

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Little, M. M.; Kusterer, J.

    2013-12-01

    As part of its ongoing efforts to improve access to data, the Atmospheric Science Data Center has developed a high-precision Earth Science domain ontology (the 'ES Ontology') implemented in a graph database ('the Semantic Metadata Repository') that is used to store detailed, semantically-enhanced, parameter-level metadata for ASDC data products. The ES Ontology provides the semantic infrastructure needed to drive the ASDC's Ontology-Driven Interactive Search Environment for Earth Science ('ODISEES'), a data discovery and access tool, and will support additional data services such as analytics and visualization. The ES ontology is designed on the premise that naming conventions alone are not adequate to provide the information needed by prospective data consumers to assess the suitability of a given dataset for their research requirements; nor are current metadata conventions adequate to support seamless machine-to-machine interactions between file servers and end-user applications. Data consumers need information not only about what two data elements have in common, but also about how they are different. End-user applications need consistent, detailed metadata to support real-time data interoperability. The ES ontology is a highly precise, bottom-up, queriable model of the Earth Science domain that focuses on critical details about the measurable phenomena, instrument techniques, data processing methods, and data file structures. Earth Science parameters are described in detail in the ES Ontology and mapped to the corresponding variables that occur in ASDC datasets. Variables are in turn mapped to well-annotated representations of the datasets that they occur in, the instrument(s) used to create them, the instrument platforms, the processing methods, etc., creating a linked-data structure that allows both human and machine users to access a wealth of information critical to understanding and manipulating the data. The mappings are recorded in the Semantic Metadata Repository as RDF-triples. An off-the-shelf Ontology Development Environment and a custom Metadata Conversion Tool comprise a human-machine/machine-machine hybrid tool that partially automates the creation of metadata as RDF-triples by interfacing with existing metadata repositories and providing a user interface that solicits input from a human user, when needed. RDF-triples are pushed to the Ontology Development Environment, where a reasoning engine executes a series of inference rules whose antecedent conditions can be satisfied by the initial set of RDF-triples, thereby generating the additional detailed metadata that is missing in existing repositories. A SPARQL Endpoint, a web-based query service and a Graphical User Interface allow prospective data consumers - even those with no familiarity with NASA data products - to search the metadata repository to find and order data products that meet their exact specifications. A web-based API will provide an interface for machine-to-machine transactions.
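
    The variable-to-parameter-to-instrument linking described above can be pictured with rdflib: a few RDF triples relate a dataset variable to a measured parameter and an instrument, and a SPARQL query walks those links. All URIs, class names, and property names below are hypothetical stand-ins, not the actual ES Ontology vocabulary.

      # Sketch of the variable-to-parameter-to-instrument linking idea with rdflib.
      # All URIs and property names are hypothetical stand-ins for the real ontology.
      from rdflib import Graph, Namespace, Literal, RDF

      EX = Namespace("http://example.org/es-ontology#")   # hypothetical namespace
      g = Graph()

      var = EX.toa_sw_flux
      g.add((var, RDF.type, EX.Variable))
      g.add((var, EX.representsParameter, EX.ShortwaveFlux))
      g.add((var, EX.occursInDataset, EX.ExampleRadiationDataset))
      g.add((var, EX.measuredBy, EX.ExampleRadiometer))
      g.add((EX.ExampleRadiometer, EX.onPlatform, Literal("ExampleSatellite")))

      # A user (or client application) asks: which datasets contain a shortwave
      # flux variable, and which instrument produced them?
      query = """
      PREFIX ex: <http://example.org/es-ontology#>
      SELECT ?dataset ?instrument WHERE {
        ?v ex:representsParameter ex:ShortwaveFlux ;
           ex:occursInDataset ?dataset ;
           ex:measuredBy ?instrument .
      }
      """
      for dataset, instrument in g.query(query):
          print(dataset, instrument)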

  4. Improving a Complement-fixation Test for Equine Herpesvirus Type-1 by Pretreating Sera with Potassium Periodate to Reduce Non-specific Hemolysis

    PubMed Central

    BANNAI, Hiroshi; NEMOTO, Manabu; TSUJIMURA, Koji; YAMANAKA, Takashi; KONDO, Takashi; MATSUMURA, Tomio

    2013-01-01

    Non-specific hemolysis has often been observed during complement-fixation (CF) tests for equine herpesvirus type-1 (EHV-1), even when the sera have virus-specific CF antibodies. This phenomenon has also been reported in CF tests for various infectious diseases of swine. We found that the sera from 22 of 85 field horses (25.9%) showed non-specific hemolysis during conventional CF testing for EHV-1. Because pretreatment of swine sera with potassium periodate (KIO4) improves the CF test for swine influenza, we applied this method to horse sera. As we expected, horse sera treated with KIO4 did not show non-specific hemolysis in the EHV-1 CF test, and precise determination of titers was achieved. PMID:24834005

  5. Cloud-Enabled Climate Analytics-as-a-Service using Reanalysis data: A case study.

    NASA Astrophysics Data System (ADS)

    Nadeau, D.; Duffy, D.; Schnase, J. L.; McInerney, M.; Tamkin, G.; Potter, G. L.; Thompson, J. H.

    2014-12-01

    The NASA Center for Climate Simulation (NCCS) maintains advanced data capabilities and facilities that allow researchers to access the enormous volume of data generated by weather and climate models. The NASA Climate Model Data Service (CDS) and the NCCS are merging their efforts to provide Climate Analytics-as-a-Service for the comparative study of the major reanalysis projects: ECMWF ERA-Interim, NASA/GMAO MERRA, NOAA/NCEP CFSR, NOAA/ESRL 20CR, JMA JRA25, and JRA55. These reanalyses have been repackaged to the netCDF4 file format following the CMIP5 Climate and Forecast (CF) metadata convention prior to being sequenced into the Hadoop Distributed File System (HDFS). A small set of operations that represent a common starting point in many analysis workflows was then created: min, max, sum, count, variance, and average. In this example, reanalysis data exploration was performed with Hadoop MapReduce, and accessibility was achieved using the Climate Data Service (CDS) application programming interface (API) created at NCCS. This API provides a uniform treatment of large amounts of data. In this case study, we limited our exploration to two variables (temperature and precipitation), three operations (min, max, and avg), and 30 years of reanalysis data for three regions of the world: global, polar, and subtropical.
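
    As a rough, single-node stand-in for the operations described above, the snippet below computes the min, max, and mean of a CF-style temperature variable over a latitude band with netCDF4 and numpy; in the system itself these operations run as Hadoop MapReduce jobs behind the CDS API, and the file and variable names here are assumptions.

      # Single-node sketch of the min/max/avg operations described above; the real
      # service runs them as MapReduce jobs behind the CDS API. File and variable
      # names are assumptions.
      import numpy as np
      from netCDF4 import Dataset

      def region_stats(path, varname, lat_min, lat_max):
          """Min, max, and mean of a CF (time, lat, lon) variable over a latitude band."""
          with Dataset(path) as nc:
              lats = nc.variables["lat"][:]
              band = (lats >= lat_min) & (lats <= lat_max)
              data = nc.variables[varname][:, band, :]    # (time, lat, lon) layout assumed
              return float(data.min()), float(data.max()), float(data.mean())

      # Example call for a "polar" band on a hypothetical reanalysis file:
      # tmin, tmax, tavg = region_stats("example_reanalysis_t2m.nc", "t2m", 60.0, 90.0)
      # print(tmin, tmax, tavg)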

  6. Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases

    PubMed Central

    Dinu, Valentin; Nadkarni, Prakash

    2007-01-01

    Purpose To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
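
    A minimal sketch of the EAV pattern using Python's built-in sqlite3: each fact is stored as an (entity, attribute, value) row rather than as a column, so sparse and heterogeneous attributes need no schema changes. The table layout and attribute names are illustrative, not taken from any particular production system.

      # Minimal EAV sketch with sqlite3: sparse facts stored as rows, not columns.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE attribute (id INTEGER PRIMARY KEY, name TEXT, datatype TEXT);
      CREATE TABLE eav (
          entity_id INTEGER,            -- e.g. a patient or lab-test instance
          attribute_id INTEGER REFERENCES attribute(id),
          value TEXT                    -- stored as text; cast using attribute.datatype
      );
      """)
      conn.executemany("INSERT INTO attribute VALUES (?, ?, ?)",
                       [(1, "serum_sodium_mmol_l", "float"),
                        (2, "smoking_status", "string")])
      conn.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                       [(101, 1, "139"), (101, 2, "never"), (102, 1, "142")])

      # Query: all recorded attributes for entity 101. The metadata table carries
      # the design burden, which is the trade-off the abstract describes.
      rows = conn.execute("""
          SELECT a.name, e.value FROM eav e JOIN attribute a ON a.id = e.attribute_id
          WHERE e.entity_id = 101
      """).fetchall()
      print(rows)   # [('serum_sodium_mmol_l', '139'), ('smoking_status', 'never')]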

  7. The XML Metadata Editor of GFZ Data Services

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Elger, Kirsten; Tesei, Telemaco; Trippanera, Daniele

    2017-04-01

    Following the FAIR data principles, research data should be Findable, Accessible, Interoperable and Reusable. Publishing data under these principles requires assigning persistent identifiers to the data and generating rich machine-actionable metadata. To increase interoperability, metadata should use shared vocabularies and crosslink the newly published (meta)data with related material. However, structured metadata formats tend to be complex and are not intended to be generated by individual scientists. Software solutions are needed that support scientists in providing metadata describing their data. To facilitate data publication activities of 'GFZ Data Services', we programmed an XML metadata editor that assists scientists in creating metadata in different schemata popular in the earth sciences (ISO19115, DIF, DataCite), while remaining usable by and understandable for scientists. Emphasis is placed on removing barriers: in particular, the editor is publicly available on the internet without registration [1], and scientists are not asked to provide information that can be generated automatically (e.g. the URL of a specific licence or the contact information of the metadata distributor). Metadata are stored in browser cookies and a copy can be saved to the local hard disk. To improve usability, form fields are translated into the scientists' language; e.g., 'creators' of the DataCite schema are called 'authors'. To assist in filling in the form, we make use of drop-down menus for small vocabulary lists and offer a search facility for large thesauri. Explanations of form fields and definitions of vocabulary terms are provided in pop-up windows, and full documentation is available for download via the help menu. In addition, multiple geospatial references can be entered via an interactive mapping tool, which helps to minimize problems with different conventions for providing latitudes and longitudes. Currently, we are extending the metadata editor so it can be reused to generate the discovery and contextual metadata developed by the 'Multi-scale Laboratories' Thematic Core Service of the European Plate Observing System (EPOS-IP). The editor will be used to build a common repository of a large variety of geological and geophysical datasets produced by multidisciplinary laboratories throughout Europe, thus contributing a significant step toward the integration and accessibility of earth science data. This presentation will introduce the metadata editor and show the adjustments made for EPOS-IP. [1] http://dataservices.gfz-potsdam.de/panmetaworks/metaedit
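
    A toy sketch of the kind of record such an editor can emit: a minimal DataCite-style XML document assembled with Python's ElementTree from a few scientist-facing fields ('authors' rather than 'creators'). The element subset and the example DOI are illustrative assumptions; a real record must satisfy the full DataCite schema.

      # Toy sketch: build a minimal DataCite-style record from scientist-facing fields.
      # The element subset is illustrative; a real record must satisfy the full schema.
      import xml.etree.ElementTree as ET

      NS = "http://datacite.org/schema/kernel-4"
      ET.register_namespace("", NS)

      def minimal_datacite(doi, authors, title, publisher, year):
          res = ET.Element(f"{{{NS}}}resource")
          ident = ET.SubElement(res, f"{{{NS}}}identifier", identifierType="DOI")
          ident.text = doi
          creators = ET.SubElement(res, f"{{{NS}}}creators")
          for name in authors:                               # form label says "authors",
              c = ET.SubElement(creators, f"{{{NS}}}creator")   # schema says "creators"
              ET.SubElement(c, f"{{{NS}}}creatorName").text = name
          titles = ET.SubElement(res, f"{{{NS}}}titles")
          ET.SubElement(titles, f"{{{NS}}}title").text = title
          ET.SubElement(res, f"{{{NS}}}publisher").text = publisher
          ET.SubElement(res, f"{{{NS}}}publicationYear").text = str(year)
          return ET.tostring(res, encoding="unicode")

      # Hypothetical example record:
      print(minimal_datacite("10.9999/EXAMPLE.2017.001",
                             ["Doe, Jane", "Roe, Richard"],
                             "Friction experiments on fault gouge",
                             "GFZ Data Services", 2017))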

  8. OntoFire: an ontology-based geo-portal for wildfires

    NASA Astrophysics Data System (ADS)

    Kalabokidis, K.; Athanasis, N.; Vaitis, M.

    2011-12-01

    With the proliferation of geospatial technologies on the Internet, the role of geo-portals (i.e. gateways to Spatial Data Infrastructures) in the area of wildfire management emerges. However, keyword-based techniques often frustrate users looking for data of interest in geo-portal environments, while little attention has been paid to shifting from conventional keyword-based to navigation-based mechanisms. The presented OntoFire system is an ontology-based geo-portal about wildfires. Through the proposed navigation mechanisms, relationships between the data can be discovered that would otherwise not be apparent when using conventional querying techniques alone. End users can use the browsing interface to find resources of interest via the navigation mechanisms provided. Data providers can use the publishing interface to submit new metadata or to modify or remove metadata in the catalogue. The proposed approach can improve the discovery of valuable information that is necessary to set priorities for disaster mitigation and prevention strategies. OntoFire aspires to be a focal point of integration and management of a very large amount of information, contributing in this way to the dissemination of knowledge and to the preparedness of the operational stakeholders.

  9. Control vocabulary software designed for CMIP6

    NASA Astrophysics Data System (ADS)

    Nadeau, D.; Taylor, K. E.; Williams, D. N.; Ames, S.

    2016-12-01

    The Coupled Model Intercomparison Project Phase 6 (CMIP6) coordinates a number of intercomparison activities and includes many more experiments than its predecessor, CMIP5. In order to organize and facilitate use of the complex collection of expected CMIP6 model output, a standard set of descriptive information has been defined, which must be stored along with the data. This standard information enables automated machine interpretation of the contents of all model output files. The standard metadata is stored in compliance with the Climate and Forecast (CF) standard, which ensures that it can be interpreted and visualized by many standard software packages. Additional attributes (not standardized by CF) are required by CMIP6 to enhance identification of models and experiments, and to provide additional information critical for interpreting the model results. To ensure that CMIP6 data complies with the standards, a python program called "PrePARE" (Pre-Publication Attribute Reviewer for the ESGF) has been developed to check the model output prior to its publication and release for analysis. If, for example, a required attribute is missing or incorrect (e.g., not included in the reference CMIP6 controlled vocabularies), then PrePARE will prevent publication. In some circumstances, missing attributes can be created or incorrect attributes can be replaced automatically by PrePARE, and the program will warn users about the changes that have been made. PrePARE provides a final check on model output, ensuring a baseline conformity across the output of all CMIP6 models that will facilitate analysis by climate scientists. PrePARE is flexible and can be easily modified for use by similar projects that have a well-defined set of metadata and controlled vocabularies.
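
    The checking step can be pictured with a few lines of netCDF4-python that compare a file's global attributes against a controlled vocabulary and flag problems before publication. The attribute names and allowed values below are a small assumed subset, and this sketch is not the actual PrePARE code.

      # Illustrative attribute check in the spirit described above (not PrePARE itself).
      # The required attributes and allowed values are a small, assumed subset.
      from netCDF4 import Dataset

      CONTROLLED_VOCAB = {
          "activity_id": {"CMIP", "ScenarioMIP"},
          "frequency": {"mon", "day", "6hr"},
          "grid_label": {"gn", "gr", "gr1"},
      }

      def check_global_attributes(path):
          problems = []
          with Dataset(path) as nc:
              for attr, allowed in CONTROLLED_VOCAB.items():
                  value = getattr(nc, attr, None)
                  if value is None:
                      problems.append(f"missing required attribute: {attr}")
                  elif value not in allowed:
                      problems.append(f"{attr}='{value}' not in controlled vocabulary")
          return problems

      # Hypothetical usage on a CMIP6-style output file:
      # issues = check_global_attributes("tas_Amon_example_historical_r1i1p1f1_gn.nc")
      # if issues:
      #     print("publication blocked:", *issues, sep="\n  ")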

  10. Best Practices for International Collaboration and Applications of Interoperability within a NASA Data Center

    NASA Astrophysics Data System (ADS)

    Moroni, D. F.; Armstrong, E. M.; Tauer, E.; Hausman, J.; Huang, T.; Thompson, C. K.; Chung, N.

    2013-12-01

    The Physical Oceanographic Distributed Active Archive Center (PO.DAAC) is one of 12 data centers sponsored by NASA's Earth Science Data and Information System (ESDIS) project. The PO.DAAC is tasked with archival and distribution of NASA Earth science missions specific to physical oceanography, many of which have interdisciplinary applications for weather forecasting/monitoring, ocean biology, ocean modeling, and climate studies. PO.DAAC has a 20-year history of cross-project and international collaborations with partners in Europe, Japan, Australia, and the UK. Domestically, the PO.DAAC has successfully established lasting partnerships with non-NASA institutions and projects including the National Oceanic and Atmospheric Administration (NOAA), United States Navy, Remote Sensing Systems, and Unidata. A key component of these partnerships is PO.DAAC's direct involvement with international working groups and science teams, such as the Group for High Resolution Sea Surface Temperature (GHRSST), International Ocean Vector Winds Science Team (IOVWST), Ocean Surface Topography Science Team (OSTST), and the Committee on Earth Observing Satellites (CEOS). To help bolster new and existing collaborations, the PO.DAAC has established a standardized approach to its internal Data Management and Archiving System (DMAS), utilizing a Data Dictionary to provide the baseline standard for entry and capture of dataset and granule metadata. Furthermore, the PO.DAAC has established an end-to-end Dataset Lifecycle Policy, built upon both internal and external recommendations of best practices toward data stewardship. Together, DMAS, the Data Dictionary, and the Dataset Lifecycle Policy provide the infrastructure to enable standardized data and metadata to be fully ingested and harvested to facilitate interoperability and compatibility across data access protocols, tools, and services. The Dataset Lifecycle Policy provides the checks and balances to help ensure all incoming HDF and netCDF-based datasets meet minimum compliance requirements with the Lawrence Livermore National Laboratory's actively maintained Climate and Forecast (CF) conventions, with additional goals toward metadata standards provided by the Attribute Convention for Dataset Discovery (ACDD), the International Organization for Standardization (ISO) 19100-series, and the Federal Geographic Data Committee (FGDC). By default, DMAS ensures all datasets are compliant with NASA's Global Change Master Directory (GCMD) and NASA's Reverb data discovery clearinghouse (also known as ECHO). For data access, PO.DAAC offers several widely-used technologies, including File Transfer Protocol (FTP), Open-source Project for a Network Data Access Protocol (OPeNDAP), and Thematic Realtime Environmental Distributed Data Services (THREDDS). These access technologies are available directly to users or through PO.DAAC's web interfaces, specifically the High-level Tool for Interactive Data Extraction (HiTIDE), Live Access Server (LAS), and PO.DAAC's set of search, image, and Consolidated Web Services (CWS). Lastly, PO.DAAC's newly introduced, standards-based CWS provide singular endpoints for search, imaging, and extraction capabilities across L2/L3/L4 datasets. Altogether, these tools, services and policies serve to provide flexible, interoperable functionality for both users and data providers.

  11. Evaluation of biodegradable plastics for rubber seedling applications

    NASA Astrophysics Data System (ADS)

    Mansor, Mohd Khairulniza; Dayang Habibah A. I., H.; Kamal, Mazlina Mustafa

    2015-08-01

    The main negative consequence of conventional plastics in agriculture relates to handling the waste plastic and the associated environmental impact. Hence, different types of potentially biodegradable plastics used for nursery applications were evaluated for their mechanical and water absorption properties and by Fourier transform infrared (FTIR) spectroscopy. Supplied samples from different companies were designated SF, CF, and CO. Most of the polybags exhibited mechanical properties quite similar to those of the conventional plastic (LDPE polybag). The CO polybag, which is based on PVA, however, had substantially higher tensile strength and water absorption. The FTIR study revealed that the characteristic absorbances of the conventional plastic and of the SF, CF, and CO biodegradable polybags are associated with polyethylene, poly(butylene adipate-co-terephthalate) (PBAT), polyethylene, and polyvinyl alcohol (PVA) structures, respectively.

  12. Contact-force guided single-catheter approach for pulmonary vein isolation: Feasibility, outcomes, and cost-effectiveness.

    PubMed

    Pambrun, Thomas; Combes, Stéphane; Sousa, Pedro; Bloa, Mathieu Le; El Bouazzaoui, Rim; Grand-Larrieu, Delphine; Thompson, Nathaniel; Martin, Ruairidh; Combes, Nicolas; Boveda, Serge; Haïssaguerre, Michel; Albenque, Jean-Paul

    2017-03-01

    For conventional ablation of paroxysmal atrial fibrillation (AF), an ablation catheter in conjunction with a circular mapping catheter (CMC) is typically used for pulmonary vein isolation (PVI). The purpose of this study was to evaluate an approach for PVI with a single contact-force (CF) ablation catheter in terms of procedural reliability, outcomes, and cost-effectiveness. One hundred consecutive patients with paroxysmal AF were included in the study. Fifty patients (study group) underwent a CF-guided single-catheter approach, whereby PVI was demonstrated when sequential pacing at 9 equidistant points within the lesion set (carina included) failed to capture the left atrium. For confirmation, PVI was verified with a CMC. In comparison, 50 patients (control group) underwent a conventional PVI ablation guided by a CMC. Procedure time (101 ± 17 minutes vs 107 ± 15 minutes, P = .11), ablation time (24.2 ± 7.1 minutes vs 22.6 ± 8.8 minutes, P = .37), fluoroscopy time (5.6 ± 2.2 minutes vs 8.3 ± 3.4 minutes, P = .09), and applied CF (17.8 ± 2.6 g vs 18 ± 2.8 g, P = .72) did not reach statistical difference between the study and control groups. CF-guided single-catheter ablation achieved successful PVI in 98% of the study group and a 31% reduction in cost. At 1-year follow-up, sinus rhythm maintenance rate was similar in both groups (86% vs 84%, P = .78). In paroxysmal AF, a CF-guided single-catheter technique is an effective method for PVI, yielding substantial cost savings and clinical results similar to a conventional approach. Copyright © 2016 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  13. Conventional, Bayesian, and Modified Prony's methods for characterizing fast and slow waves in equine cancellous bone

    PubMed Central

    Groopman, Amber M.; Katz, Jonathan I.; Holland, Mark R.; Fujita, Fuminori; Matsukawa, Mami; Mizuno, Katsunori; Wear, Keith A.; Miller, James G.

    2015-01-01

    Conventional, Bayesian, and the modified least-squares Prony's plus curve-fitting (MLSP + CF) methods were applied to data acquired using 1 MHz center frequency, broadband transducers on a single equine cancellous bone specimen that was systematically shortened from 11.8 mm down to 0.5 mm for a total of 24 sample thicknesses. Due to overlapping fast and slow waves, conventional analysis methods were restricted to data from sample thicknesses ranging from 11.8 mm to 6.0 mm. In contrast, Bayesian and MLSP + CF methods successfully separated fast and slow waves and provided reliable estimates of the ultrasonic properties of fast and slow waves for sample thicknesses ranging from 11.8 mm down to 3.5 mm. Comparisons of the three methods were carried out for phase velocity at the center frequency and the slope of the attenuation coefficient for the fast and slow waves. Good agreement among the three methods was also observed for average signal loss at the center frequency. The Bayesian and MLSP + CF approaches were able to separate the fast and slow waves and provide good estimates of the fast and slow wave properties even when the two wave modes overlapped in both time and frequency domains making conventional analysis methods unreliable. PMID:26328678
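
    As a generic illustration of the model-fitting idea behind such approaches (and not the authors' algorithm), the sketch below fits a sum of two damped cosines to a synthetic signal with scipy, recovering the frequencies of two overlapping wave modes.

      # Generic illustration of separating two overlapping wave modes by fitting a
      # parametric model (sum of two damped cosines); not the study's algorithm.
      import numpy as np
      from scipy.optimize import curve_fit

      def two_modes(t, a1, d1, f1, p1, a2, d2, f2, p2):
          return (a1 * np.exp(-d1 * t) * np.cos(2 * np.pi * f1 * t + p1)
                  + a2 * np.exp(-d2 * t) * np.cos(2 * np.pi * f2 * t + p2))

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 20e-6, 2000)                  # 20 microseconds, assumed scale
      truth = (1.0, 2.0e5, 1.0e6, 0.3, 0.5, 1.0e5, 0.7e6, -0.4)
      signal = two_modes(t, *truth) + 0.02 * rng.standard_normal(t.size)

      guess = (0.8, 1.5e5, 1.1e6, 0.0, 0.4, 0.8e5, 0.6e6, 0.0)
      params, _ = curve_fit(two_modes, t, signal, p0=guess, maxfev=20000)
      print("fitted frequencies (MHz):", params[2] / 1e6, params[6] / 1e6)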

  14. The PDS4 Data Dictionary Tool - Metadata Design for Data Preparers

    NASA Astrophysics Data System (ADS)

    Raugh, A.; Hughes, J. S.

    2017-12-01

    One of the major design goals of the PDS4 development effort was to create an extendable Information Model (IM) for the archive, and to allow mission data designers/preparers to create extensions for metadata definitions specific to their own contexts. This capability is critical for the Planetary Data System - an archive that deals with a data collection that is diverse along virtually every conceivable axis. Amid such diversity in the data itself, it is in the best interests of the PDS archive and its users that all extensions to the IM follow the same design techniques, conventions, and restrictions as the core implementation itself. But it is unrealistic to expect mission data designers to acquire expertise in information modeling, model-driven design, ontology, schema formulation, and PDS4 design conventions and philosophy in order to define their own metadata. To bridge that expertise gap and bring the power of information modeling to the data label designer, the PDS Engineering Node has developed the data dictionary creation tool known as "LDDTool". This tool incorporates the same software used to maintain and extend the core IM, packaged with an interface that enables a developer to create his extension to the IM using the same, standards-based metadata framework PDS itself uses. Through this interface, the novice dictionary developer has immediate access to the common set of data types and unit classes for defining attributes, and a straight-forward method for constructing classes. The more experienced developer, using the same tool, has access to more sophisticated modeling methods like abstraction and extension, and can define context-specific validation rules. We present the key features of the PDS Local Data Dictionary Tool, which both supports the development of extensions to the PDS4 IM, and ensures their compatibility with the IM.

  15. [Effect of the same amount of faba bean fresh straw returning with different ratios of chemical fertilizer on single cropping late rice].

    PubMed

    Wang, Jian-hong; Zhang, Xian; Cao, Kai; Hua, Jin-wei

    2015-05-01

    A field experiment was conducted on paddy soil derived from alluvial materials at Bihu Town, Lishui City, Zhejiang Province, China, to explore the effects of combined application of faba bean fresh straw and different rates of chemical fertilizer on nutrient uptake, nutrient use efficiencies, and yields of single cropping late rice, and to determine the optimal rate of chemical fertilizer under application of faba bean fresh straw at the rate of 15 t · hm(-2) (GM15), from April to December 2012. The experiment consisted of 7 treatments: CK (no fertilizers), CF (conventional chemical fertilizer rate), and combined application of 15 t · hm(-2) of faba bean fresh straw with 0%, 20%, 40%, 60% and 80% of the conventional chemical fertilizer rate. The results showed that the highest total uptake amounts of N, P and K by the aboveground part were obtained from the treatments of GM15 + 60% CF and GM15 + 80% CF, but the highest nutrient agronomy use efficiencies of N, P and K in rice grains were obtained from the treatments of GM15 + 60% CF and GM15 + 40% CF. The agronomy use efficiencies and physiological use efficiencies of N, P, and K were significantly correlated with rice grain yields, so they could be used for an accurate comprehensive evaluation of fertilizer efficiencies of N, P, and K. Compared with the no-fertilizer treatment, the 100% CF treatment and the combined applications of faba bean fresh straw with different rates of chemical fertilizer increased rice grain yields by 25.0% and 6.1%-29.2%, respectively. In the faba bean-single cropping late rice system, returning 15 t · hm(-2) of faba bean fresh straw to the paddy field did not result in stunted rice seedlings. From the perspective of improving fertilizer use efficiency and reducing environmental risk, the optimum rate of chemical fertilizer was 60% of the conventional rate when 15 t · hm(-2) of faba bean fresh straw was applied.

  16. Omics Metadata Management Software (OMMS).

    PubMed

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov.

  17. Omics Metadata Management Software (OMMS)

    PubMed Central

    Perez-Arriaga, Martha O; Wilson, Susan; Williams, Kelly P; Schoeniger, Joseph; Waymire, Russel L; Powell, Amy Jo

    2015-01-01

    Next-generation sequencing projects have underappreciated information management tasks requiring detailed attention to specimen curation, nucleic acid sample preparation and sequence production methods required for downstream data processing, comparison, interpretation, sharing and reuse. The few existing metadata management tools for genome-based studies provide weak curatorial frameworks for experimentalists to store and manage idiosyncratic, project-specific information, typically offering no automation supporting unified naming and numbering conventions for sequencing production environments that routinely deal with hundreds, if not thousands of samples at a time. Moreover, existing tools are not readily interfaced with bioinformatics executables (e.g., BLAST, Bowtie2, custom pipelines). Our application, the Omics Metadata Management Software (OMMS), answers both needs, empowering experimentalists to generate intuitive, consistent metadata, and perform analyses and information management tasks via an intuitive web-based interface. Several use cases with short-read sequence datasets are provided to validate installation and integrated function, and suggest possible methodological road maps for prospective users. Provided examples highlight possible OMMS workflows for metadata curation, multistep analyses, and results management and downloading. The OMMS can be implemented as a stand-alone package for individual laboratories, or can be configured for web-based deployment supporting geographically dispersed projects. The OMMS was developed using an open-source software base, is flexible, extensible and easily installed and executed. The OMMS can be obtained at http://omms.sandia.gov. PMID:26124554

  18. Long-term mortality from cardiac causes after adjuvant hypofractionated vs. conventional radiotherapy for localized left-sided breast cancer.

    PubMed

    Chan, Elisa K; Woods, Ryan; Virani, Sean; Speers, Caroline; Wai, Elaine S; Nichol, Alan; McBride, Mary L; Tyldesley, Scott

    2015-01-01

    Ongoing concern remains regarding cardiac injury with hypofractionated whole breast/chest-wall radiotherapy (HF-WBI) compared to conventional radiotherapy (CF-WBI) in left-sided breast cancer patients. The purpose was to determine if cardiac mortality increases with HF-WBI relative to CF-WBI. Between 1990 and 1998, 5334 women with early-stage breast cancer received post-operative radiotherapy to the breast/chest wall alone. A population-based database recorded baseline patient, tumor and treatment factors. Baseline cardiovascular risk factors were identified from hospital administrative records. A propensity-score model balanced risk factors between radiotherapy groups. Cause of death was coded as breast cancer, cardiac or other cause. Cumulative mortality from each cause after radiotherapy was estimated using a competing risk approach. For left-sided cases, median follow-up was 14.2 years. 485 women received CF-WBI, 2221 women received HF-WBI. There was no difference in 15-year mortality from cardiac causes: 4.8% with HF-WBI and 4.2% with CF-WBI (p=0.74), even after propensity-score adjustment (p=0.45). There was no difference in breast cancer mortality or other cause mortality. For right-sided cases, there was no difference in mortality for the three causes of death. At 15-years follow-up, cardiac mortality is not statistically different among left-sided breast cancer patients treated with HF-WBI or CF-WBI. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Production of bovine cloned embryos with donor cells frozen at a slow cooling rate in a conventional freezer (-20 °C)

    USGS Publications Warehouse

    Chacon, L.; Gomez, M.C.; Jenkins, J.A.; Leibo, S.P.; Wirtu, G.; Dresser, B.L.; Pope, C.E.

    2009-01-01

    Usually, fibroblasts are frozen in dimethyl sulphoxide (DMSO, 10% v/v) at a cooling rate of 1 °C/min in a low-temperature (-80 °C) freezer (LTF) before storage in liquid nitrogen (LN2); however, an LTF is not always available. The purpose of the present study was to evaluate apoptosis and viability of bovine fibroblasts frozen in an LTF or a conventional freezer (CF; -20 °C) and their subsequent ability to develop to the blastocyst stage after fusion with enucleated bovine oocytes. Percentages of live cells frozen in the LTF (49.5%) and CF (50.6%) were similar, but significantly lower than the non-frozen control (88%). In both CF and LTF, percentages of live apoptotic cells exposed to LN2 after freezing were lower (4% and 5%, respectively) than in unexposed cells (10% and 18%, respectively). Cells frozen in a CF had fewer cell doublings/24 h (0.45) and required more days (9.1) to reach 100% confluence at the first passage (P) after thawing and plating than cells frozen in an LTF (0.96 and 4.0 days, respectively). Hypoploidy at P12 was higher than at P4 in cells frozen in either a CF (37.5% vs. 19.2%) or an LTF (30.0% vs. 15.4%). A second-generation cryo-solution reduced the incidence of necrosis (29.4%) at 0 h after thawing compared with a first-generation cryo-solution (DMEM + DMSO, 60.2%). The percentage of apoptosis in live cells was affected by cooling rate (CF = 1.9% vs. LTF = 0.7%). Development of bovine cloned embryos to the blastocyst stage was not affected by cooling rate or freezer type. © 2009 Cambridge University Press.

  20. The applicability of PEEK-based abutment screws.

    PubMed

    Schwitalla, Andreas Dominik; Abou-Emara, Mohamed; Zimmermann, Tycho; Spintig, Tobias; Beuer, Florian; Lackmann, Justus; Müller, Wolf-Dieter

    2016-10-01

    The high-performance polymer PEEK (poly-ether-ether-ketone) is more and more being used in the field of dentistry, mainly for removable and fixed prostheses. In cases of screw-retained implant-supported reconstructions of PEEK, an abutment screw made of PEEK might be advantageous over a conventional metal screw due to its similar elasticity. Also, in case of abutment screw fracture, a PEEK screw could be removed more easily. M1.6 abutment screws of four different PEEK compounds were subjected to tensile tests to set their maximum tensile strengths in relation to an equivalent stress of 186 MPa, which is caused by a tightening torque of 15 Ncm. Two screw types were manufactured via injection molding and contained 15% short carbon fibers (sCF-15) and 40% (sCF-40), respectively. Two screw types were manufactured via milling and contained 20% TiO2 powder (TiO2-20) and >50% parallel-oriented continuous carbon fibers (cCF-50). A conventional abutment screw of Ti6Al4V (Ti; CAMLOG® abutment screw, CAMLOG, Wimsheim, Germany) served as control. The maximum tensile strength was 76.08 ± 5.50 MPa for TiO2-20, 152.67 ± 15.83 MPa for sCF-15, 157.29 ± 20.11 MPa for sCF-40 and 191.69 ± 36.33 MPa for cCF-50. The maximum tensile strength of the Ti screws amounted to 1196.29 ± 21.4 MPa. The results of the TiO2-20 and the Ti screws were significantly different from the results of the other samples, respectively. For the manufacturing of PEEK abutment screws, PEEK reinforced by >50% continuous carbon fibers would be the material of choice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Constrictive Bronchiolitis in Cystic Fibrosis Adolescents with Refractory Pulmonary Decline.

    PubMed

    Harris, William T; Boyd, J Todd; McPhail, Gary L; Brody, Alan S; Szczesniak, Rhonda D; Korbee, Leslie L; Baker, Michael L; Clancy, John P

    2016-12-01

    Refractory lung function decline in association with recurrent pulmonary exacerbations is a common, yet poorly explained finding in cystic fibrosis (CF). To investigate the histopathologic mechanisms of pulmonary deterioration during adolescence and early adulthood, we reviewed clinically-indicated lung biopsy specimens obtained during a period of persistent decline. To determine if peribronchiolar remodeling is prominent in lung biopsy specimens obtained in adolescents with CF refractory to conventional therapy. Six adolescents with CF (mean age, 16.2 y; mean FEV1, 52% predicted at biopsy) with significant pulmonary deterioration over 12-24 months (mean FEV1 decline of 14% predicted/year) despite aggressive intervention underwent computed tomography imaging and ultimately lung biopsy to aid clinical management. In addition to routine clinical evaluation, histopathologic investigation included staining for transforming growth factor-β (TGF-β, a genetic modifier of CF lung disease), collagen deposition (a marker of fibrosis), elastin (to evaluate for bronchiectasis), and α-smooth muscle actin (to identify myofibroblasts). All computed tomography scans demonstrated a mix of bronchiectasis and hyperinflation that was variable across lung regions and within patients. Lung biopsy revealed significant peribronchiolar remodeling, particularly in patients with more advanced disease, with near complete obliteration of the peribronchiolar lumen (constrictive bronchiolitis). Myofibroblast differentiation (a TGF-β-dependent process) was prominent in specimens with significant airway remodeling. Constrictive bronchiolitis is widely present in the lung tissue of adolescents with CF with advanced disease and may contribute to impaired lung function that is refractory to conventional therapy (antibiotics, antiinflammatories, and mucolytics). TGF-β-dependent myofibroblast differentiation is prominent in areas of active fibrogenesis and may foster small airway remodeling in CF lung disease.

  2. Multifunctional Hybrid Carbon Nanotube/Carbon Fiber Polymer Composites

    NASA Technical Reports Server (NTRS)

    Kang, Jin Ho; Cano, Roberto J.; Ratcliffe, James G.; Luong, Hoa; Grimsley, Brian W.; Siochi, Emilie J.

    2016-01-01

    For aircraft primary structures, carbon fiber reinforced polymer (CFRP) composites possess many advantages over conventional aluminum alloys due to their light weight, higher strength- and stiffness-to-weight ratios, and low life-cycle maintenance costs. However, the relatively low electrical and thermal conductivities of CFRP composites fail to provide structural safety in certain operational conditions such as lightning strikes. Despite several attempts to solve these issues with the addition of carbon nanotubes (CNT) into polymer matrices, and/or by interleaving CNT sheets between conventional carbon fiber (CF) composite layers, interfacial problems still exist between CNTs (or CF) and the resin. In this study, hybrid CNT/CF polymer composites were fabricated by interleaving layers of CNT sheets with Hexcel® IM7/8852 prepreg. Resin concentrations from 1 wt% to 50 wt% were used to infuse the CNT sheets prior to composite fabrication. The interlaminar properties of the resulting hybrid composites were characterized by mode I and II fracture toughness testing (double cantilever beam and end-notched flexure tests). Fractographic analysis was performed to study the effect of resin concentration. In addition, multi-directional physical properties, such as the thermal conductivity of the orthotropic hybrid polymer composite, were evaluated. Interleaving CNT sheets significantly improved the in-plane (axial and perpendicular to the direction of CF alignment) thermal conductivity of the hybrid composite laminates by 50-400%.

  3. Rescuing Seasat-A from 1980

    NASA Astrophysics Data System (ADS)

    Hausman, J.; Sanchez, A.; Armstrong, E. M.

    2014-12-01

    Seasat-A was NASA's first ocean observing satellite mission. It launched in June 1978 and operated continuously until it suffered a power failure 106 days later. It contained an altimeter (ALT), scatterometer (SASS), SAR, microwave radiometer (SMMR), and a visible/infrared radiometer (VIRR). These instruments allowed Seasat to measure sea surface height, ocean winds and both brightness and sea surface temperatures. The data, except for the SAR, are archived at PO.DAAC. Since these are the only oceanographic satellite data available for this early period of remote sensing, their importance has grown for use in climate studies. Even though the datasets were digitized from the original tapes, the Seasat data have since been maintained in the same flat binary format technology of 1980 in which the data were first distributed. In 2013 PO.DAAC began a project to reformat the original data into a user-friendly, modern and maintainable format consistent with the netCDF data model and the Climate and Forecast (CF) and Attribute Convention for Dataset Discovery (ACDD) metadata standards. A significant benefit of this data format is improved interoperability with tools and web services such as OPeNDAP, THREDDS, and various subsetting software, such as PO.DAAC's HiTIDE. Additionally, application of such metadata standards provides an opportunity to correctly document the data at the granule level. The first step in the conversion process involved going through the original documentation to understand the source binary data format. Documentation was found for processing levels 1 and 2 for ALT, SASS and SMMR. Software readers were then written for each of the datasets using Matlab, followed by regression tests performed on the newly output data in order to demonstrate that the readers were correctly interpreting the source data. Next, writers were created to convert the data into the updated format. The reformatted data were also regression tested and science validated to ensure that the data were not corrupted during the reformatting process. The resulting modernized Seasat datasets will be made available iteratively by instrument and processing level on PO.DAAC's web portal http://podaac.jpl.nasa.gov, anonymous ftp site, ftp://podaac.jpl.nasa.gov/allData/seasat and other web services.
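
    The reformatting step can be sketched in a few lines of Python: parse one record type from the flat binary with numpy, then write netCDF variables with CF units and standard_name attributes and ACDD-style global attributes. The record layout, field names, and attribute values below are placeholders, not the actual Seasat ALT format.

      # Sketch of the flat-binary-to-netCDF reformatting idea; the record layout and
      # attribute values are placeholders, not the actual Seasat ALT format.
      import numpy as np
      from netCDF4 import Dataset

      # Hypothetical fixed-width record: time (float64), lat, lon, ssh (float32)
      record_dtype = np.dtype([("time", ">f8"), ("lat", ">f4"),
                               ("lon", ">f4"), ("ssh", ">f4")])

      def convert(binary_path, nc_path):
          records = np.fromfile(binary_path, dtype=record_dtype)
          with Dataset(nc_path, "w") as nc:
              nc.Conventions = "CF-1.6, ACDD-1.3"
              nc.title = "Seasat-A altimeter sea surface height (reformatted)"
              nc.summary = "Flat binary ALT records converted to netCDF."
              nc.createDimension("time", len(records))
              for name, units, std_name in [
                      ("time", "seconds since 1978-01-01", "time"),
                      ("lat", "degrees_north", "latitude"),
                      ("lon", "degrees_east", "longitude"),
                      ("ssh", "m", "sea_surface_height_above_geoid")]:
                  v = nc.createVariable(name, "f8", ("time",))
                  v.units = units
                  v.standard_name = std_name
                  v[:] = records[name].astype("f8")

      # Hypothetical usage:
      # convert("seasat_alt_rev0123.bin", "seasat_alt_rev0123.nc")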

  4. Rate constants for CF3 + H2 → CF3H + H and CF3H + H → CF3 + H2 reactions in the temperature range 1100-1600 K.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hranisavljevic, J.; Michael, V.; Chemistry

    1998-09-24

    The shock tube technique coupled with H-atom atomic resonance absorption spectrometry has been used to study the reactions (1) CF3 + H2 → CF3H + H and (2) CF3H + H → CF3 + H2 over the temperature ranges 1168-1673 K and 1111-1550 K, respectively. The results can be represented by the Arrhenius expressions k1 = 2.56 × 10^-11 exp(-8549 K/T) and k2 = 6.13 × 10^-11 exp(-7364 K/T), both in cm^3 molecule^-1 s^-1. Equilibrium constants were calculated from the two Arrhenius expressions in the overlapping temperature range, and good agreement was obtained with the literature values. The rate constants for reaction 2 were converted into rate constants for reaction 1 using literature equilibrium constants. These data are indistinguishable from direct k1 measurements, and an Arrhenius fit for the joint set is k1 = 1.88 × 10^-11 exp(-8185 K/T) cm^3 molecule^-1 s^-1. The CF3 + H2 → CF3H + H reaction was further modeled using conventional transition-state theory, which included ab initio electronic structure determinations of reactants, transition state, and products.
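
    The reported Arrhenius fits are straightforward to evaluate numerically; the short sketch below computes k1(T), k2(T), and their ratio (an estimate of the equilibrium constant for reaction 1) at a few temperatures inside the overlapping range, using the expressions quoted above.

      # Evaluate the Arrhenius fits quoted above and their ratio k1/k2, which
      # approximates the equilibrium constant of reaction 1 in the overlap range.
      import numpy as np

      def k1(T):   # CF3 + H2 -> CF3H + H, cm^3 molecule^-1 s^-1
          return 2.56e-11 * np.exp(-8549.0 / T)

      def k2(T):   # CF3H + H -> CF3 + H2, cm^3 molecule^-1 s^-1
          return 6.13e-11 * np.exp(-7364.0 / T)

      for T in (1200.0, 1400.0, 1600.0):
          print(f"T={T:.0f} K  k1={k1(T):.3e}  k2={k2(T):.3e}  K_eq≈{k1(T)/k2(T):.3f}")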

  5. Reactive ion etching effects on carbon-doped Ge2Sb2Te5 phase change material in CF4/Ar plasma

    NASA Astrophysics Data System (ADS)

    Shen, Lanlan; Song, Sannian; Song, Zhitang; Li, Le; Guo, Tianqi; Liu, Bo; Wu, Liangcai; Cheng, Yan; Feng, Songlin

    2016-10-01

    Recently, carbon-doped Ge2Sb2Te5 (CGST) has been shown to be a highly promising material for future phase change memory technology. In this article, reactive ion etching (RIE) of phase change material CGST films is studied using a CF4/Ar gas mixture. The effects of gas-mixing ratio, RF power, and gas pressure on the etch rate, etch profile and roughness of the CGST film are investigated. Conventional phase change material Ge2Sb2Te5 (GST) films are studied simultaneously for comparison. Compared with the GST film, 10% more CF4 is needed for a high etch rate, and 10% less CF4 for good anisotropy, of CGST because of greater fluorocarbon polymer deposition during CF4 etching. The trends of etch rate and roughness of CGST with varying RF power and chamber pressure are similar to those of GST. Furthermore, the etch rate of CGST saturates more readily when higher RF power is applied.

  6. Thermal Properties of Hybrid Carbon Nanotube/Carbon Fiber Polymer

    NASA Technical Reports Server (NTRS)

    Kang, Jin Ho; Cano, Roberto J.; Luong, Hoa; Ratcliffe, James G.; Grimsley, Brian W.; Siochi, Emilie J.

    2016-01-01

    Carbon fiber reinforced polymer (CFRP) composites possess many advantages over conventional aluminum alloys for aircraft structures: light weight, higher strength- and stiffness-to-weight ratios, and low life-cycle maintenance costs. However, the relatively low thermal and electrical conductivities of CFRP composites are insufficient to ensure structural safety under certain operational conditions, such as lightning strikes. One possible solution to these issues is to interleave carbon nanotube (CNT) sheets between conventional carbon fiber (CF) composite layers. However, the thermal and electrical properties of orthotropic hybrid CNT/CF composites are not yet fully understood. In this study, hybrid CNT/CF polymer composites were fabricated by interleaving layers of CNT sheets with Hexcel (Registered Trademark) IM7/8852 prepreg. The CNT sheets were infused with a 5% solution of a compatible epoxy resin prior to composite fabrication. Orthotropic thermal and electrical conductivities of the hybrid polymer composites were evaluated. The interleaved CNT sheets improved the in-plane thermal conductivity of the hybrid composite laminates by about 400% and the electrical conductivity by about 3 orders of magnitude.

  7. Dye Wastewater Cleanup by Graphene Composite Paper for Tailorable Supercapacitors.

    PubMed

    Yu, Dandan; Wang, Hua; Yang, Jie; Niu, Zhiqiang; Lu, Huiting; Yang, Yun; Cheng, Liwei; Guo, Lin

    2017-06-28

    Currently, the energy crisis and environmental pollution are two critical challenges confronting humanity, and developing smart strategies that address both issues simultaneously is significant. Among the main contributors to water pollution, several kinds of organic dyes with intrinsic redox functional groups, such as phenothiazine derivatives, anthraquinone, and indigoid dyes, are potential candidates to replace conventional pseudocapacitive materials. In this work, three typical organic dyes can be efficiently removed by a facile adsorption procedure using reduced graphene oxide coated cellulose fiber (rGO@CF) paper. Flexible supercapacitors based on dye/rGO@CF electrodes exhibit excellent electrochemical performance, superior to or comparable with that of devices based on conventional pseudocapacitive materials, presenting a promising new type of electrode material. Moreover, benefiting from the high flexibility and considerable mechanical strength of the graphene composite paper, the operating potential and capacitance of the devices can be easily adjusted by tailoring the hybrid electrodes into different specific shapes followed by rational integration. The smart design of these dye/rGO@CF paper based electrodes shows that energy storage and environmental remediation can be achieved simultaneously.

  8. Creep-Fatigue Interaction and Cyclic Strain Analysis in P92 Steel Based on Test

    NASA Astrophysics Data System (ADS)

    Ji, Dongmei; Zhang, Lai-Chang; Ren, Jianxing; Wang, Dexian

    2015-04-01

    This work focused on creep-fatigue interaction and cyclic strain analysis in high-chromium ferritic P92 steel, based on load-controlled creep-fatigue (CF) tests and a conventional creep test at 873 K. Mechanical testing shows that the cyclic load inhibits the propagation of creep damage in the P92 steel and that CF interaction becomes more severe as the holding period duration and stress ratio decrease. These results are also verified by the analysis of cyclic strain. The fatigue lifetime decreases with increasing holding period duration, but it does not decrease much with increasing stress ratio, especially under long holding period durations. The cyclic strains (i.e., the strain range and creep strain) of the CF tests consist of three stages, as in conventional creep behavior. Microscopic fracture surface observations show two different kinds of voids at the fracture surfaces, with Laves phase precipitates at the bottom of the voids.

  9. Architecture of the local spatial data infrastructure for regional climate change research

    NASA Astrophysics Data System (ADS)

    Titov, Alexander; Gordov, Evgeny

    2013-04-01

    Georeferenced datasets (meteorological databases, modeling and reanalysis results, etc.) are actively used in modeling and analysis of climate change at various spatial and temporal scales. Owing to the inherent heterogeneity of environmental datasets, as well as their size, which can reach tens of terabytes for a single dataset, studies of climate and environmental change require special software support based on the SDI approach. A dedicated architecture of a local spatial data infrastructure aimed at regional climate change analysis using modern web mapping technologies is presented. The geoportal is a key element of any SDI, allowing searching of geoinformation resources (datasets and services) using metadata catalogs, producing geospatial data selections by their parameters (data access functionality), and managing services and applications for cartographical visualization. It should be noted that, for objective reasons such as large dataset volumes, the complexity of the data models used, and syntactic and semantic differences between datasets, the development of environmental geodata access, processing, and visualization services is quite a complex task. Those circumstances were taken into account while developing the architecture of the local spatial data infrastructure as a universal framework providing geodata services. The architecture presented therefore includes: 1. A model for storing large sets of regional georeferenced data that is effective in terms of search, access, retrieval, and subsequent statistical processing, and that allows frequently used values (such as monthly and annual climate change indices) to be stored, thus providing different temporal views of the datasets. 2. A general architecture of the corresponding software components handling geospatial datasets within the storage model. 3. A metadata catalog describing in detail, using the ISO 19115 and CF-convention standards, the datasets used in climate research, as a basic element of the spatial data infrastructure, published according to the OGC CSW (Catalog Service for the Web) specification. 4. Computational and mapping web services for working with geospatial datasets based on the OWS (OGC Web Services) standards: WMS, WFS, WPS. 5. A geoportal as a key element of the thematic regional spatial data infrastructure, also providing a software framework for dedicated web application development. To realize web mapping services, GeoServer software is used, since it provides a native WPS implementation as a separate software module. To provide geospatial metadata services, the GeoNetwork Opensource (http://geonetwork-opensource.org) product is planned to be used, since it supports the ISO 19115/ISO 19119/ISO 19139 metadata standards as well as the ISO CSW 2.0 profile for both client and server. To implement thematic applications based on geospatial web services within the framework of the local SDI geoportal, the following open source software has been selected: 1. The OpenLayers JavaScript library, providing basic web mapping functionality for a thin client such as a web browser. 2. The GeoExt/ExtJS JavaScript libraries for building client-side web applications working with geodata services. The web interface developed will be similar to the interfaces of popular desktop GIS applications such as uDig, QuantumGIS, etc. The work is partially supported by RF Ministry of Education and Science grant 8345, SB RAS Program VIII.80.2.1 and IP 131.
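
    Point 3 of the architecture above publishes the ISO 19115/CF metadata catalog through an OGC CSW interface. As a rough sketch of how a client might query such a catalog, the snippet below uses the OWSLib Python library against a hypothetical CSW endpoint; the URL and search term are placeholders, not part of the system described in the abstract.

```python
# Sketch of a CSW client query against a hypothetical geoportal catalog endpoint.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("http://example.org/geonetwork/srv/eng/csw")  # placeholder URL

# Full-text search over the catalog for climate-related dataset records.
query = PropertyIsLike("csw:AnyText", "%climate%")
csw.getrecords2(constraints=[query], maxrecords=10)

for ident, record in csw.records.items():
    print(ident, "-", record.title)
```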

  10. Rate constants for CF3 + H2 → CF3H + H and CF3H + H → CF3 + H2 reactions in the temperature range 1100-1600 K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hranisavljevic, J.; Michael, J.V.

    1998-09-24

    The shock tube technique coupled with H-atom atomic resonance absorption spectrometry has been used to study the reactions (1) CF3 + H2 → CF3H + H and (2) CF3H + H → CF3 + H2 over the temperature ranges 1168-1673 K and 1111-1550 K, respectively. The results can be represented by the Arrhenius expressions k1 = 2.56 × 10^-11 exp(-8549 K/T) and k2 = 6.13 × 10^-11 exp(-7364 K/T), both in cm^3 molecule^-1 s^-1. Equilibrium constants were calculated from the two Arrhenius expressions in the overlapping temperature range, and good agreement was obtained with the literature values. The rate constants for reaction 2 were converted into rate constants for reaction 1 using literature equilibrium constants. These data are indistinguishable from direct k1 measurements, and an Arrhenius fit for the joint set is k1 = 1.88 × 10^-11 exp(-8185 K/T) cm^3 molecule^-1 s^-1. The CF3 + H2 → CF3H + H reaction was further modeled using conventional transition-state theory, which included ab initio electronic structure determinations of reactants, transition state, and products.

  11. A Study of Cognitive Linguistic Structure Based on the Four Conditions of the Mulamadhyamakakarika

    ERIC Educational Resources Information Center

    You, Hee Jong

    2013-01-01

    The main purpose of this study is to depict Nagarjuna's implication of how he redefined the Four Conditions ("catvarah pratyaya") as the cognitive linguistic structure by allocating 32 functional metadata throughout the texts of the Mulamadhyamakakarika (MMK). Following subtle traces of "lokasamvrtisatya" (the conventional truth) in…

  12. Patient Preferences and Physician Practice Patterns Regarding Breast Radiotherapy

    DTIC Science & Technology

    2011-01-01

    breast irradiation (HF-WBI) 62%, partial breast irradiation (PBI) 28%, and conventionally fractionated whole breast irradiation (CF-WBI) 10%. By comparison, 82% of physicians use CF-WBI for more than 2/3 of women and 56% never use HF-WBI. With respect to PBI, 62% of women preferred three-dimensional (3D)-PBI and 38% favor brachytherapy-PBI, whereas 36% of physicians offer 3D-PBI and 66% offer brachytherapy-PBI. 70% of women prefer once-daily

  13. Carbon footprint of conventional and organic beef production systems: An Italian case study.

    PubMed

    Buratti, C; Fantozzi, F; Barbanera, M; Lascaro, E; Chiorri, M; Cecchini, L

    2017-01-15

    Beef cattle production is a widespread activity in the Italian agricultural sector and has an important impact on the environment and resource consumption. Carbon footprint evaluation is thus necessary to assess the contributions of the different stages and the possible improvements of the production chain. In this study, two typical Italian beef production systems, a conventional one and an organic one, are investigated in order to evaluate the greenhouse gas emissions from "cradle to farm gate" by a Life Cycle Assessment (LCA) approach; the carbon footprint (CF) per 1 kg of live weight meat is calculated. The contributions from feed production, enteric fermentation, and manure management are taken into account in order to compare the life cycles of the two productions; the carbon balance in soil is also evaluated, in order to verify the impact in a life cycle perspective. The results of the CF calculation for the two farms show that the organic system (24.62 kg CO2eq/kg live weight) produces more GHG emissions than the conventional one (18.21 kg CO2eq/kg live weight) and that enteric fermentation is the heaviest contribution, accounting for 50-54% of the global CF value. Improvements to the production chain could be realized through accurate feeding strategies, in order to reduce methane emissions from the enteric digestion of cattle. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Effects of sintering additives on the microstructural and mechanical properties of the ion-irradiated SiCf/SiC

    NASA Astrophysics Data System (ADS)

    Fitriani, Pipit; Sharma, Amit Siddharth; Yoon, Dang-Hyok

    2018-05-01

    SiCf/SiC composites containing three different types of sintering additives, viz. Sc-nitrate, Al2O3-Sc2O3, and Al2O3-Y2O3, were subjected to ion irradiation using 0.2 MeV H+ ions with a fluence of 3 × 10^20 ions/m^2 at room temperature. Although all composites showed volumetric swelling upon ion irradiation, SiCf/SiC with Sc-nitrate showed the smallest change, followed by those with the Al2O3-Sc2O3 and Al2O3-Y2O3 additives. In particular, SiCf/SiC containing the conventional Al2O3-Y2O3 additive revealed significant microstructural changes, such as surface roughening and the formation of cracks and voids, resulting in reduced fiber pullout upon irradiation. On the other hand, the SiCf/SiC with Sc-nitrate showed the highest resistance against ion irradiation without showing any macroscopic changes in surface morphology and mechanical strength, indicating the importance of the sintering additive in NITE-based SiCf/SiC for nuclear structural applications.

  15. New approach for cystic fibrosis diagnosis based on chloride/potassium ratio analyzed in non-invasively obtained skin-wipe sweat samples by capillary electrophoresis with contactless conductometric detection.

    PubMed

    Ďurč, Pavol; Foret, František; Pokojová, Eva; Homola, Lukáš; Skřičková, Jana; Herout, Vladimír; Dastych, Milan; Vinohradská, Hana; Kubáň, Petr

    2017-05-01

    A new approach for sweat analysis used in cystic fibrosis (CF) diagnosis is proposed. It consists of noninvasive skin-wipe sampling followed by analysis of target ions using capillary electrophoresis with contactless conductometric detection (C4D). The skin-wipe sampling consists of wiping a defined skin area with a precleaned cotton swab moistened with 100 μL deionized water. The skin-wipe sample is then extracted for 3 min into 400 μL deionized water, and the extract is analyzed directly. The developed sampling method is cheap, simple, fast, and painless, and can replace the conventional pilocarpine-induced sweat chloride test commonly applied in CF diagnosis. The ionic content of the aqueous skin-wipe extract is analyzed simultaneously by capillary electrophoresis with contactless conductometric detection using a double opposite-end injection. A 20 mmol/L L-histidine/2-(N-morpholino)ethanesulfonic acid and 2 mmol/L 18-crown-6 electrolyte at pH 6 can separate all the major ions in less than 7 min. Skin-wipe sample extracts from 30 study participants (ten adult patients with CF, 25-50 years old; ten pediatric patients with CF, 1-15 years old; and ten healthy control individuals, 1-18 years old) were obtained and analyzed. Among the analyzed ions, a significant difference in chloride and potassium concentrations was found between the CF patients and the healthy controls. We propose the use of the Cl-/K+ ratio rather than the absolute Cl- concentration, with a cutoff value of 4 in skin-wipe sample extracts, as an alternative to conventional sweat chloride analysis. The proposed Cl-/K+ ion ratio proved to be a more reliable indicator, is independent of the patient's age, and allows better differentiation between non-CF individuals and CF patients with intermediate values on the Cl- sweat test. Graphical abstract: New approach for cystic fibrosis diagnosis based on skin-wipe sampling of the forearm and analysis of ionic content (Cl-/K+ ratio) in skin-wipe extracts by capillary electrophoresis with contactless conductometric detection.
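
    The proposed decision rule (a Cl-/K+ ratio cutoff of 4) is simple enough to express directly. The sketch below applies it to hypothetical extract concentrations; the numbers and units are placeholders, not values from the study, and the ratio itself is unit-independent as long as both concentrations are expressed consistently.

```python
# Apply the proposed Cl-/K+ cutoff (ratio > 4 suggestive of CF) to hypothetical
# skin-wipe extract concentrations; only the ratio matters, not the units.
def cl_k_ratio(chloride: float, potassium: float) -> float:
    return chloride / potassium

def suggests_cf(chloride: float, potassium: float, cutoff: float = 4.0) -> bool:
    return cl_k_ratio(chloride, potassium) > cutoff

# Hypothetical example concentrations (not data from the study)
for cl, k in [(8.0, 1.5), (3.0, 1.2)]:
    print(f"Cl- = {cl}, K+ = {k}, ratio = {cl_k_ratio(cl, k):.2f}, CF suspected: {suggests_cf(cl, k)}")
```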

  16. Adjuvant Hypofractionated Versus Conventional Whole Breast Radiation Therapy for Early-Stage Breast Cancer: Long-Term Hospital-Related Morbidity From Cardiac Causes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Elisa K.; Woods, Ryan; McBride, Mary L.

    Purpose: The risk of cardiac injury with hypofractionated whole-breast/chest wall radiation therapy (HF-WBI) compared with conventional whole-breast/chest wall radiation therapy (CF-WBI) in women with left-sided breast cancer remains a concern. The purpose of this study was to determine if there is an increase in hospital-related morbidity from cardiac causes with HF-WBI relative to CF-WBI. Methods and Materials: Between 1990 and 1998, 5334 women ≤80 years of age with early-stage breast cancer were treated with postoperative radiation therapy to the breast or chest wall alone. A population-based database recorded baseline patient, tumor, and treatment factors. Hospital administrative records identified baseline cardiac risk factors and other comorbidities. Factors between radiation therapy groups were balanced using a propensity-score model. The first event of a hospital admission for cardiac causes after radiation therapy was determined from hospitalization records. Ten- and 15-year cumulative hospital-related cardiac morbidity after radiation therapy was estimated for left- and right-sided cases using a competing risk approach. Results: The median follow-up was 13.2 years. For left-sided cases, 485 women were treated with CF-WBI, and 2221 women were treated with HF-WBI. Mastectomy was more common in the HF-WBI group, whereas boost was more common in the CF-WBI group. The CF-WBI group had a higher prevalence of diabetes. The 15-year cumulative hospital-related morbidity from cardiac causes (95% confidence interval) was not different between the 2 radiation therapy regimens after propensity-score adjustment: 21% (19-22) with HF-WBI and 21% (17-25) with CF-WBI (P=.93). For right-sided cases, the 15-year cumulative hospital-related morbidity from cardiac causes was also similar between the radiation therapy groups (P=.76). Conclusions: There is no difference in morbidity leading to hospitalization from cardiac causes among women with left-sided early-stage breast cancer treated with HF-WBI or CF-WBI at 15-year follow-up.

  17. Lessons in weather data interoperability: the National Mesonet Program

    NASA Astrophysics Data System (ADS)

    Evans, J. D.; Werner, B.; Cogar, C.; Heppner, P.

    2015-12-01

    The National Mesonet Program (NMP) links local, state, and regional surface weather observation networks (a.k.a. mesonets) to enhance the prediction of high-impact, local-scale weather events. A consortium of 23 (and counting) private firms, state agencies, and universities provides near-real-time observations from over 7,000 fixed weather stations, and over 1,000 vehicle-mounted sensors, every 15 minutes or less, together with the detailed sensor and station metadata required for effective forecasts and decision-making. In order to integrate these weather observations across the United States, and to provide full details about sensors, stations, and observations, the NMP has defined a set of conventions for observational data and sensor metadata. These conventions address the needs of users with limited bandwidth and computing resources, while also anticipating a growing variety of sensors and observations. For disseminating weather observation data, the NMP currently employs a simple ASCII format derived from the Integrated Ocean Observing System. This simplifies data ingest into common desktop software, and parsing by simple scripts; and it directly supports basic readings of temperature, pressure, etc. By extending the format to vector-valued observations, it can also convey readings taken at different altitudes (e.g., wind speed) or depths (e.g., soil moisture). Extending beyond these observations to fit a greater variety of sensors (solar irradiation, sodar, radar, lidar) may require further extensions, or a move to more complex formats (e.g., based on XML or JSON). We will discuss the tradeoffs of various conventions for different users and use cases. To convey sensor and station metadata, the NMP uses a convention known as Starfish Fungus Language (*FL), derived from the Open Geospatial Consortium's SensorML standard. *FL separates static and dynamic elements of a sensor description, allowing for relatively compact expressions that reference a library of shared definitions (e.g., sensor manufacturer's specifications) alongside time-varying and site-specific details (slope/aspect, calibration, etc.). We will discuss the tradeoffs of *FL, SensorML, and alternatives for conveying sensor details to various users and uses.
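
    The NMP exchange format itself is not reproduced in this abstract, so the sketch below only illustrates the general idea of a simple delimited ASCII observation record with vector-valued fields (readings at several heights). The column layout, station identifier, and values are entirely hypothetical and are not the real NMP or IOOS format.

```python
# Hypothetical delimited observation record: station, ISO time, variable name,
# then comma-separated heights (m) and matching values. NOT the real NMP format.
record = "KSTN01|2015-07-01T12:15:00Z|wind_speed|2,10,50|3.1,4.8,6.2"

def parse_observation(line: str) -> dict:
    station, when, variable, heights, values = line.split("|")
    return {
        "station": station,
        "time": when,
        "variable": variable,
        # Pair each height with its reading to represent a vector-valued observation.
        "profile": list(zip((float(h) for h in heights.split(",")),
                            (float(v) for v in values.split(",")))),
    }

print(parse_observation(record))
```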

  18. Direct sampling of cystic fibrosis lungs indicates that DNA-based analyses of upper-airway specimens can misrepresent lung microbiota.

    PubMed

    Goddard, Amanda F; Staudinger, Benjamin J; Dowd, Scot E; Joshi-Datar, Amruta; Wolcott, Randall D; Aitken, Moira L; Fligner, Corinne L; Singh, Pradeep K

    2012-08-21

    Recent work using culture-independent methods suggests that the lungs of cystic fibrosis (CF) patients harbor a vast array of bacteria not conventionally implicated in CF lung disease. However, sampling lung secretions in living subjects requires that expectorated specimens or collection devices pass through the oropharynx. Thus, contamination could confound results. Here, we compared culture-independent analyses of throat and sputum specimens to samples directly obtained from the lungs at the time of transplantation. We found that CF lungs with advanced disease contained relatively homogenous populations of typical CF pathogens. In contrast, upper-airway specimens from the same subjects contained higher levels of microbial diversity and organisms not typically considered CF pathogens. Furthermore, sputum exhibited day-to-day variation in the abundance of nontypical organisms, even in the absence of clinical changes. These findings suggest that oropharyngeal contamination could limit the accuracy of DNA-based measurements on upper-airway specimens. This work highlights the importance of sampling procedures for microbiome studies and suggests that methods that account for contamination are needed when DNA-based methods are used on clinical specimens.

  19. A Monte Carlo simulation and setup optimization of output efficiency to PGNAA thermal neutron using 252Cf neutrons

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Zhao; Tuo, Xian-Guo

    2014-07-01

    We present the design and optimization of a prompt γ-ray neutron activation analysis (PGNAA) thermal neutron output setup based on Monte Carlo simulations using the MCNP5 computer code. In these simulations, the moderator materials, reflector materials, and structure of the PGNAA thermal neutron output setup based on a 252Cf source are optimized. The simulation results reveal that a thin layer of paraffin combined with a thick layer of heavy water provides the best moderation of the 252Cf neutron spectrum. Our new design shows significantly improved thermal neutron flux and flux rate, which are increased by factors of 3.02 and 3.27, respectively, compared with the conventional neutron source design.

  20. Investigation of optimal acquisition time of myocardial perfusion scintigraphy using cardiac focusing-collimator

    NASA Astrophysics Data System (ADS)

    Niwa, Arisa; Abe, Shinji; Fujita, Naotoshi; Kono, Hidetaka; Odagawa, Tetsuro; Fujita, Yusuke; Tsuchiya, Saki; Kato, Katsuhiko

    2015-03-01

    Recently, myocardial perfusion SPECT imaging acquired using a cardiac focusing collimator (CF) has been developed in the field of nuclear cardiology. Previously, we investigated the basic characteristics of the CF using physical phantoms. This study aimed to determine the acquisition time for the CF that enables acquisition of SPECT images equivalent to those acquired by the conventional method in 201TlCl myocardial perfusion SPECT. In this study, a Siemens Symbia T6 was used with a torso phantom equipped with cardiac, pulmonary, and hepatic components. 201TlCl solution was filled into the left ventricular (LV) myocardium and liver. Each of the CF, the low energy high resolution collimator (LEHR), and the low medium energy general purpose collimator (LMEGP) was set on the SPECT equipment. Data acquisitions were made at various acquisition times, with the center of the phantom regarded as the center of the heart for the CF. Acquired data were reconstructed, and polar maps were created from the reconstructed images. The coefficient of variation (CV) was calculated from the mean counts and their standard deviations determined on the polar maps. When the CF was used, CV was lower at longer acquisition times. The CV calculated from the polar maps acquired using the CF at an acquisition time of 2.83 min was equivalent to the CV calculated from those acquired using the LEHR over a 180° acquisition range at an acquisition time of 20 min.
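
    The figure of merit used above is the coefficient of variation of the polar-map counts, i.e. the standard deviation divided by the mean. A minimal sketch of that calculation follows; the count values are made up for illustration and are not data from the study.

```python
# Coefficient of variation (CV = standard deviation / mean) of polar-map counts.
import numpy as np

def coefficient_of_variation(counts: np.ndarray) -> float:
    return float(np.std(counts) / np.mean(counts))

# Hypothetical segment counts from a polar map (not data from the study)
counts = np.array([102.0, 95.0, 110.0, 98.0, 105.0])
print(f"CV = {coefficient_of_variation(counts):.3f}")
```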

  1. The 3849+10 kb C->T mutation in a 21-year-old patient with cystic fibrosis.

    PubMed

    Kaplan, D M; Niv, A; Aviram, M; Parvari, R; Leiberman, A; Fliss, D M

    1996-12-01

    Cystic fibrosis (CF) is the most common lethal inherited disease in the white population. It is characterized by exocrine gland epithelial dysfunction, which leads to pulmonary and pancreatic insufficiency. Since the cloning of the CF gene in 1989 and the identification of the most common CF mutation (delta F508), more than 400 different mutations have been described. These mutations appear to contribute to the heterogeneity of the CF phenotype and several reports have speculated on the relationship between the most common CF mutations and the patient's clinical status. We report the case of a 21-year-old woman with longstanding chronic pansinusitis, nasal polyposis, chronic cough and severe nasal crusting. During a period of five years she had been followed by her otolaryngologist and pediatric pulmonologist. Sweat tests performed at the age of 17 and 18 were within normal limits and she underwent repeated conventional sinonasal procedures, with no improvement in her clinical status. On her present admission, sweat tests showed a 70 meq/l chloride concentration. The diagnosis of CF was then confirmed by DNA analysis and the patient was found to carry the 3849+10 kb C->T mutation. The early detection of this newly recognized form of CF in adults as well as in children presenting with sinonasal symptoms is critical for life expectancy and quality of life.

  2. Biomechanical fatigue analysis of an advanced new carbon fiber/flax/epoxy plate for bone fracture repair using conventional fatigue tests and thermography.

    PubMed

    Bagheri, Zahra S; El Sawi, Ihab; Bougherara, Habiba; Zdero, Radovan

    2014-07-01

    The current study is part of an ongoing research program to develop an advanced new carbon fiber/flax/epoxy (CF/flax/epoxy) hybrid composite with a “sandwich structure” as a substitute for metallic materials for orthopedic long bone fracture plate applications. The purpose of this study was to assess the fatigue properties of this composite, since cyclic loading is one of the main types of loads carried by a femur fracture plate during normal daily activities. Conventional fatigue testing, thermographic analysis, and scanning electron microscopy (SEM) were used to analyze the damage progress that occurred during fatigue loading. Fatigue strength obtained using thermography analysis (51% of ultimate tensile strength) was confirmed using the conventional fatigue test (50–55% of ultimate tensile strength). The dynamic modulus (E⁎) was found to stay almost constant at 47 GPa versus the number of cycles, which can be related to the contribution of both flax/epoxy and CF/epoxy laminae to the stiffness of the composite. SEM images showed solid bonding at the CF/epoxy and flax/epoxy laminae, with a crack density of only 0.48% for the plate loaded for 2 million cycles. The current composite plate showed much higher fatigue strength than the main loads experienced by a typical patient during cyclic activities; thus, it may be a potential candidate for bone fracture plate applications. Moreover, the fatigue strength from thermographic analysis was the same as that obtained by the conventional fatigue tests, thus demonstrating its potential use as an alternate tool to rapidly evaluate fatigue strength of composite biomaterials.

  3. Biomechanical fatigue analysis of an advanced new carbon fiber/flax/epoxy plate for bone fracture repair using conventional fatigue tests and thermography.

    PubMed

    Bagheri, Zahra S; El Sawi, Ihab; Bougherara, Habiba; Zdero, Radovan

    2014-07-01

    The current study is part of an ongoing research program to develop an advanced new carbon fiber/flax/epoxy (CF/flax/epoxy) hybrid composite with a "sandwich structure" as a substitute for metallic materials for orthopedic long bone fracture plate applications. The purpose of this study was to assess the fatigue properties of this composite, since cyclic loading is one of the main types of loads carried by a femur fracture plate during normal daily activities. Conventional fatigue testing, thermographic analysis, and scanning electron microscopy (SEM) were used to analyze the damage progress that occurred during fatigue loading. Fatigue strength obtained using thermography analysis (51% of ultimate tensile strength) was confirmed using the conventional fatigue test (50-55% of ultimate tensile strength). The dynamic modulus (E(⁎)) was found to stay almost constant at 47 GPa versus the number of cycles, which can be related to the contribution of both flax/epoxy and CF/epoxy laminae to the stiffness of the composite. SEM images showed solid bonding at the CF/epoxy and flax/epoxy laminae, with a crack density of only 0.48% for the plate loaded for 2 million cycles. The current composite plate showed much higher fatigue strength than the main loads experienced by a typical patient during cyclic activities; thus, it may be a potential candidate for bone fracture plate applications. Moreover, the fatigue strength from thermographic analysis was the same as that obtained by the conventional fatigue tests, thus demonstrating its potential use as an alternate tool to rapidly evaluate fatigue strength of composite biomaterials. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. The application of PA/CF in stab resistance body armor

    NASA Astrophysics Data System (ADS)

    Yuan, M. Q.; Liu, Y.; Gong, Z.; Qian, X. M.

    2017-06-01

    Stab resistance body armor (SRBA) is essential defensive equipment to protect the human body against injuries from stabbing. Conventional SRBAs have low wearing frequency because they are heavy and poor in flexibility. This paper designed a structured stab-resistant plate modeled on crocodile armor and manufactured it using laser sintering (LS), a 3D printing technology. CF (carbon fiber) was applied to enhance the stab resistance properties of the SRBA. The effects of the material and structure were analysed through stab resistance property tests based on the national standard GA68-2008. It is found that the stab resistance of flat plates sintered from PA powder and from PA/CF is weaker than that of the structured plate. The penetration depth of the PA/CF structured plate is 2 mm less than that of the pure PA structured plate, a significant improvement. The SEM observations confirmed that the addition of the CF largely improved the stab resistance of the plate. Moreover, using the PA/CF structured plate to produce stab resistance body armor would result in a weight reduction of about 30-40% compared to existing SRBAs made of metal plates, which could largely reduce the wearer's physical burden and improve the wearing frequency.

  5. An analysis on how switching to a more balanced and naturally improved milk would affect consumer health and the environment.

    PubMed

    Roibás, Laura; Martínez, Ismael; Goris, Alfonso; Barreiro, Rocío; Hospido, Almudena

    2016-10-01

    This study compares a premium brand of UHT milk, Unicla, characterised by an improved nutritional composition, to conventional milk in terms of health effects and environmental impacts. Unlike enriched milks, in which nutrients are added to the final product, Unicla is obtained naturally by improving the diet of the dairy cows. Health effects have been analysed based on literature findings, while the environmental analysis focused on those spheres of the environment where milk is expected to cause the highest impacts, and thus carbon (CF) and water footprints (WF) have been determined. Five final products have been compared: 3 conventional (skimmed, semi-skimmed, whole) and 2 Unicla (skimmed, semi-skimmed) milks. As the functional unit, one litre of packaged UHT milk entering the regional distribution centre has been chosen. The improved composition of Unicla milk is expected to decrease the risk of cardiovascular disease and to protect consumers against oxidative damage, among other health benefits. Concerning the environmental aspect, the CFs of Unicla products are, on average, 10% lower than those of their conventional equivalents, mainly due to the lower enteric emissions caused by the Unicla diet. No significant differences were found between the WF of Unicla and conventional milk. Raw milk is the main contributor to both footprints (on average, 83.2 and 84.3% of the total CF of Unicla and conventional milk, respectively, and 99.9% of the WF). The results have been compared to those found in the literature, and a sensitivity analysis has been performed to verify their robustness. The study concludes that switching to healthier milk compositions can help slow global warming without contributing to other environmental issues such as water scarcity. The results should encourage other milk companies to commit to the development of healthier, less environmentally damaging products, and also stimulate consumers to choose them. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Integrating Databases with Maps: The Delivery of Cultural Data through TimeMap.

    ERIC Educational Resources Information Center

    Johnson, Ian

    TimeMap is a unique integration of database management, metadata and interactive maps, designed to contextualise and deliver cultural data through maps. TimeMap extends conventional maps with the time dimension, creating and animating maps "on-the-fly"; delivers them as a kiosk application or embedded in Web pages; links flexibly to…

  7. Multiplex real time PCR panels to identify fourteen colonization factors of enterotoxigenic Escherichia coli (ETEC).

    PubMed

    Liu, Jie; Silapong, Sasikorn; Jeanwattanalert, Pimmada; Lertsehtakarn, Paphavee; Bodhidatta, Ladaporn; Swierczewski, Brett; Mason, Carl; McVeigh, Annette L; Savarino, Stephen J; Nshama, Rosemary; Mduma, Esto; Maro, Athanasia; Zhang, Jixian; Gratz, Jean; Houpt, Eric R

    2017-01-01

    Enterotoxigenic Escherichia coli (ETEC) is a leading cause of childhood diarrhea in low-income countries and in travelers to those areas. Inactivated enterotoxins and colonization factors (CFs) are leading vaccine candidates; therefore, it is important to determine the prevailing CF types in different geographic locations and populations. Here we developed real-time PCR (qPCR) assays for 14 colonization factors, including the common vaccine targets. These assays, along with three enterotoxin targets (STh, STp, and LT), were formulated into three 5-plex qPCR panels, and validated on 120 ETEC isolates and 74 E. coli colony pools. The overall sensitivity and specificity were 99% (199/202) and 99% (2497/2514), respectively, compared to the CF results obtained with conventional PCR. Amplicon sequencing of discrepant samples revealed that the qPCR was 100% accurate. qPCR panels were also performed on nucleic acid extracted from stool and compared to the results of the ETEC isolates or E. coli colony pools cultured from them. 95% (105/110) of the CF detections in the cultures were confirmed in the stool. Additionally, direct testing of stool yielded 30 more CF detections. Among 74 randomly selected E. coli colony pools with paired stool, at least one CF was detected in 63% (32/51) of the colony pools while at least one CF was detected in 78% (47/60) of the stool samples (P = NS). We conclude that these ETEC CF assays can be used on both cultures and stool samples to facilitate better understanding of CF distribution for ETEC epidemiology and vaccine development.
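
    The reported sensitivity and specificity are simple ratios of concordant detections to totals; the short sketch below reproduces that arithmetic using only the counts quoted in the abstract.

```python
# Sensitivity and specificity as quoted in the abstract (qPCR vs conventional PCR).
def proportion(numerator: int, denominator: int) -> float:
    return numerator / denominator

sensitivity = proportion(199, 202)    # concordant positives / all conventional-PCR positives
specificity = proportion(2497, 2514)  # concordant negatives / all conventional-PCR negatives

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```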

  8. Resistive switching mechanism in the one diode-one resistor memory based on p+-Si/n-ZnO heterostructure revealed by in-situ TEM

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Zhu, Liang; Li, Xiaomei; Xu, Zhi; Wang, Wenlong; Bai, Xuedong

    2017-03-01

    One diode-one resistor (1D1R) memory is an effective architecture to suppress crosstalk interference, enabling crossbar network integration of resistive random access memory (RRAM). Herein, we designed a p+-Si/n-ZnO heterostructure with 1D1R function. Compared with conventional multilayer 1D1R devices, the structure and fabrication technique can be largely simplified. Real-time imaging of the formation/rupture process of the conductive filament (CF) by in-situ transmission electron microscopy (TEM) demonstrated the resistive switching (RS) mechanism. Meanwhile, we observed that the formed CF is confined to the region outside the depletion region of the Si/ZnO pn junction, and that the formation of the CF does not degrade the diode performance, which allows the coexistence of RS and rectifying behaviors, revealing the 1D1R switching model. Furthermore, in-situ TEM characterization confirmed that the CF consists of oxygen vacancies.

  9. High-efficiency l-lactic acid production by Rhizopus oryzae using a novel modified one-step fermentation strategy.

    PubMed

    Fu, Yong-Qian; Yin, Long-Fei; Zhu, Hua-Yue; Jiang, Ru

    2016-10-01

    In this study, lactic acid fermentation by Rhizopus oryzae was investigated using two different fermentation strategies: one-step fermentation (OSF) and conventional fermentation (CF). Compared to CF, OSF reduced delays in the production process and increased the production of lactic acid. However, the qp was significantly lower than during CF. Based on analysis of μ, qs, and qp, a novel modified OSF strategy was proposed. This strategy aimed to achieve a high final concentration of lactic acid and a high qp by R. oryzae. With this strategy, the maximum lactic acid concentration and productivity of the lactic acid production stage reached 158 g/l and 5.45 g/(l·h), which were 177% and 366% higher, respectively, than the best results from CF. Importantly, the qp and yield did not decrease. This strategy is a convenient and economical method for l-lactic acid fermentation by R. oryzae. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Harmonising and semantically linking key variables from in-situ observing networks of an Integrated Atlantic Ocean Observing System, AtlantOS

    NASA Astrophysics Data System (ADS)

    Darroch, Louise; Buck, Justin

    2017-04-01

    Atlantic Ocean observation is currently undertaken through loosely-coordinated in-situ observing networks, satellite observations, and data management arrangements at regional, national and international scales. The EU Horizon 2020 AtlantOS project aims to deliver an advanced framework for the development of an Integrated Atlantic Ocean Observing System that strengthens the Global Ocean Observing System (GOOS) and contributes to the aims of the Galway Statement on Atlantic Ocean Cooperation. One goal is to ensure that data from different and diverse in-situ observing networks are readily accessible and useable to a wider community, including the international ocean science community and other stakeholders in this field. To help achieve this goal, the British Oceanographic Data Centre (BODC) produced a parameter matrix to harmonise data exchange, data flow and data integration for the key variables acquired by multiple in-situ AtlantOS observing networks such as Argo, Seafloor Mapping and OceanSITES. Our solution used semantic linking of controlled vocabularies and metadata for parameters that were "mappable" to existing EU and international standard vocabularies. An AtlantOS Essential Variables list of terms (aggregated level), based on Global Climate Observing System (GCOS) Essential Climate Variables (ECVs), GOOS Essential Ocean Variables (EOVs) and other key network variables, was defined and published on the Natural Environment Research Council (NERC) Vocabulary Server (version 2.0) as collection A05 (http://vocab.nerc.ac.uk/collection/A05/current/). This new vocabulary was semantically linked to standardised metadata for observed properties and units that had been validated by the AtlantOS community: SeaDataNet parameters (P01), Climate and Forecast (CF) Standard Names (P07) and SeaDataNet units (P06). Observed properties were mapped to biological entities using the internationally assured AphiaID identifiers from the World Register of Marine Species (WoRMS), http://www.marinespecies.org/aphia.php?p=webservice. The AtlantOS parameter matrix offers a way to harmonise the globally important variables (such as ECVs and EOVs) from in-situ observing networks that use different flavours of exchange formats based on SeaDataNet and CF parameter metadata. It also offers a way to standardise data in the wider Integrated Ocean Observing System. It uses sustainable and trusted standardised vocabularies that are governed by internationally renowned and long-standing organisations and is interoperable through the use of persistent resource identifiers, such as URNs and PURLs. It is the first step towards integrating and serving data in a variety of international exchange formats using application programming interfaces (APIs), improving both data discoverability and utility for users.
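
    The A05 collection URL quoted above is hosted on the NERC Vocabulary Server, so a client can retrieve the vocabulary directly. The minimal sketch below assumes the server honours HTTP content negotiation for RDF/XML; that assumption, and the choice of the requests library, come from this example rather than from the abstract.

```python
# Sketch: fetch the AtlantOS Essential Variables collection (A05) from the
# NERC Vocabulary Server. Content negotiation for RDF/XML is assumed here.
import requests

url = "http://vocab.nerc.ac.uk/collection/A05/current/"
response = requests.get(url, headers={"Accept": "application/rdf+xml"}, timeout=30)
response.raise_for_status()

print(response.headers.get("Content-Type"))
print(response.text[:500])  # first few hundred characters of the returned document
```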

  11. Motility and Adhesiveness in Human Neutrophils

    PubMed Central

    Smith, C. Wayne; Hollers, James C.; Patrick, Richard A.; Hassett, Clare

    1979-01-01

    Human peripheral blood neutrophils (PMN) obtained from healthy adults were examined in vitro with techniques adapted to assess the effects of chemotactic factors (CF) on cellular configuration and adhesiveness. The results were compared with those obtained using certain conventional techniques for assessing chemotaxis and chemokinesis. Exposure of PMN to N-formyl-l-methionyl-l-phenylalanine (f-Met-Phe), zymosan-activated serum, bacterial chemotactic factor, or a low molecular weight chemotactic factor from activated serum (C5a) in the absence of a gradient resulted in a change in cellular shape from a spherical to a polarized configuration in a high percentage of cells. This occurred rapidly in suspension, under conditions designed to exclude a role for cell adhesiveness, and was reversible upon removal of the CF. Restimulation of cells with the CF resulted in reappearance of the polarized configuration to the same extent as on initial stimulation with one exception: f-Met-Phe pretreated cells failed to respond to f-Met-Phe, though they responded fully to the other CF. Each CF caused a significant increase in PMN attachment to protein-coated glass. This enhanced adhesiveness was not reversible upon removal of the CF when the cells were treated under conditions shown to produce chemotactic deactivation. Cells treated under these conditions also exhibited significantly reduced motility on glass and in micropore filters in the absence of a gradient of CF. Bacterial chemotactic factor, even at high concentrations, failed to produce deactivation and did not cause a sustained enhancement of adhesiveness. PMID: 372238

  12. Independent associations of physical activity and cardiovascular fitness with cardiovascular risk in adults.

    PubMed

    Ekblom-Bak, Elin; Hellénius, Mai-Lis; Ekblom, Orjan; Engström, Lars-Magnus; Ekblom, Björn

    2010-04-01

    Uncertainty still exists as to whether physical activity (PA) and cardiovascular fitness (CF) contribute separately to cardiovascular disease (CVD) risk. This study examined the associations of PA and CF with individual as well as clustered CVD risk factors. Cross-sectional. Seven hundred and eighty-one men and 890 women, aged 20-65 years, from two random population-based samples of Swedish women and men were included. PA was assessed by questionnaire and CF was predicted by a submaximal cycle ergometry test. Waist circumference, blood pressure, and fasting levels of blood lipids were assessed and dichotomized by conventional cut-off points. Participants reporting a high PA level benefited from lower triglycerides and atherogenic cholesterol levels, regardless of CF. A higher CF level was, regardless of PA, associated with lower risk for all risk factors. With regard to clustering of risk factors, each higher CF level was associated with a gradually reduced risk, by half or more, independent of PA. Furthermore, being unfit but reporting high PA was associated with a 50% lower risk compared with being unfit and inactive, and high reported PA was associated with an additional risk reduction among fit participants. In addition, an excess risk from the interaction of being neither sufficiently active nor fit was found for waist circumference, triglycerides, and the clustered CVD risk. This study suggests that both PA and CF are independently associated with lower cardiovascular risk, and that both variables should be taken into account when CVD risk is estimated.

  13. Putting User Stories First: Experiences Adapting the Legacy Data Models and Information Architecture at NASA JPL's PO.DAAC to Accommodate the New Information Lifecycle Required by SWOT

    NASA Astrophysics Data System (ADS)

    McGibbney, L. J.; Hausman, J.; Laurencelle, J. C.; Toaz, R., Jr.; McAuley, J.; Freeborn, D. J.; Stoner, C.

    2016-12-01

    The Surface Water & Ocean Topography (SWOT) mission brings together two communities focused on a better understanding of the world's oceans and its terrestrial surface waters. U.S. and French oceanographers and hydrologists and international partners have joined forces to develop this new space mission. At NASA JPL's PO.DAAC, the team is currently engaged in gathering SWOT User Stories (access patterns, metadata requirements, primary and value-added product requirements, data access protocols, etc.) to better inform the adaptive planning of what will be known as the next generation PO.DAAC Information Architecture (IA). The IA effort acknowledges that missions such as SWOT (and NISAR) have few or no precedent in terms of data volume, hot and cold storage, archival, analysis, existing system engineering complexities, etc., and that the only way we can better understand the projected impacts of such requirements is to interface directly with the user community. Additionally, it also acknowledges that collective learning has taken place to understand certain limitations in the existing data models (DM) underlying the existing PO.DAAC Data Management and Archival System. This work documents an evolutionary, use case based, standards driven approach to adapting the legacy DM and accompanying knowledge representation infrastructure at NASA JPL's PO.DAAC to address forthcoming DAAC mission requirements presented by missions such as SWOT. Some of the topics covered in this evolution include, but are not limited to: How we are leveraging lessons learned from the development of existing DM (such as that generated for SMAP) in an attempt to map them to SWOT. What is the governance model for the SWOT IA? What are the 'governing' entities? What is the hierarchy of the 'governed' entities? How are elements grouped? How is the design working group formed? How is model independence maintained, and what choices/requirements do we have for the implementation language? The use of standards such as the CF Conventions, NetCDF, HDF, ISO Metadata, etc. Beyond SWOT: what choices were made such that the new PO.DAAC IA will be flexible enough and adequately designed such that future missions with even more advanced requirements can be accommodated within PO.DAAC?

  14. Accelerated versus conventional fractionated postoperative radiotherapy for advanced head and neck cancer: results of a multicenter Phase III study.

    PubMed

    Sanguineti, Giuseppe; Richetti, Antonella; Bignardi, Mario; Corvo', Renzo; Gabriele, Pietro; Sormani, Maria Pia; Antognoni, Paolo

    2005-03-01

    To determine whether, in the postoperative setting, accelerated fractionation (AF) radiotherapy (RT) yields a superior locoregional control rate compared with conventional fractionation (CF) RT in locally advanced squamous cell carcinomas of the oral cavity, oropharynx, larynx, or hypopharynx. Patients from four institutions with one or more high-risk features (pT4, positive resection margins, pN >1, perineural/lymphovascular invasion, extracapsular extension, subglottic extension) after surgery were randomly assigned to either RT with one daily session of 2 Gy up to 60 Gy in 6 weeks or AF. Accelerated fractionation consisted of a "biphasic concomitant boost" schedule, with the boost delivered during the first and last weeks of treatment, to deliver 64 Gy in 5 weeks. Informed consent was obtained. The primary endpoint of the study was locoregional control. Analysis was on an intention-to-treat basis. From March 1994 to August 2000, 226 patients were randomized. At a median follow-up of 30.6 months (range, 0-110 months), 2-year locoregional control estimates were 80% +/- 4% for CF and 78% +/- 5% for AF (p = 0.52), and 2-year overall survival estimates were 67% +/- 5% for CF and 64% +/- 5% for AF (p = 0.84). The lack of difference in outcome between the two treatment arms was confirmed by multivariate analysis. However, interaction analysis with median values as cut-offs showed a trend for improved locoregional control for those patients who had a delay in starting RT and who were treated with AF compared with those with a similar delay but who were treated with CF (hazard ratio = 0.5, 95% confidence interval 0.2-1.1). Fifty percent of patients treated with AF developed confluent mucositis, compared with only 27% of those treated with CF (p = 0.006). However, mucositis duration was not different between arms. Although preliminary, actuarial Grade 3+ late toxicity estimates at 2 years were 18% +/- 4% and 27% +/- 6% for CF and AF, respectively (p = 0.10). Accelerated fractionation does not seem to be worthwhile for squamous cell carcinoma of the head and neck after resection; however, AF might be an option for patients who delay starting RT.

  15. Highly sensitive detection of ESR1 mutations in cell-free DNA from patients with metastatic breast cancer using molecular barcode sequencing.

    PubMed

    Masunaga, Nanae; Kagara, Naofumi; Motooka, Daisuke; Nakamura, Shota; Miyake, Tomohiro; Tanei, Tomonori; Naoi, Yasuto; Shimoda, Masafumi; Shimazu, Kenzo; Kim, Seung Jin; Noguchi, Shinzaburo

    2018-01-01

    We aimed to develop a highly sensitive method to detect ESR1 mutations in cell-free DNA (cfDNA) using next-generation sequencing with molecular barcode (MB-NGS) targeting the hotspot segment (c.1600-1713). The sensitivity of MB-NGS was tested using serially diluted ESR1 mutant DNA and then cfDNA samples from 34 patients with metastatic breast cancer were analyzed with MB-NGS. The results of MB-NGS were validated in comparison with conventional NGS and droplet digital PCR (ddPCR). MB-NGS showed a higher sensitivity (0.1%) than NGS without barcode (1%) by reducing background errors. Of the cfDNA samples from 34 patients with metastatic breast cancer, NGS without barcode revealed seven mutations in six patients (17.6%) and MB-NGS revealed six additional mutations including three mutations not reported in the COSMIC database of breast cancer, resulting in total 13 ESR1 mutations in ten patients (29.4%). Regarding the three hotspot mutations, all the patients with mutations detected by MB-NGS had identical mutations detected by droplet digital PCR (ddPCR), and mutant allele frequency correlated very well between both (r = 0.850, p < 0.01). Moreover, all the patients without these mutations by MB-NGS were found to have no mutations by ddPCR. In conclusion, MB-NGS could successfully detect ESR1 mutations in cfDNA with a higher sensitivity of 0.1% than conventional NGS and was considered as clinically useful as ddPCR.

  16. Quantification of transplant-derived circulating cell-free DNA in absence of a donor genotype

    PubMed Central

    Kharbanda, Sandhya; Koh, Winston; Martin, Lance R.; Khush, Kiran K.; Valantine, Hannah; Pritchard, Jonathan K.; De Vlaminck, Iwijn

    2017-01-01

    Quantification of cell-free DNA (cfDNA) in circulating blood derived from a transplanted organ is a powerful approach to monitoring post-transplant injury. Genome transplant dynamics (GTD) quantifies donor-derived cfDNA (dd-cfDNA) by taking advantage of single-nucleotide polymorphisms (SNPs) distributed across the genome to discriminate donor and recipient DNA molecules. In its current implementation, GTD requires genotyping of both the transplant recipient and donor. However, in practice, donor genotype information is often unavailable. Here, we address this issue by developing an algorithm that estimates dd-cfDNA levels in the absence of a donor genotype. Our algorithm predicts heart and lung allograft rejection with an accuracy that is similar to conventional GTD. We furthermore refined the algorithm to handle closely related recipients and donors, a scenario that is common in bone marrow and kidney transplantation. We show that it is possible to estimate dd-cfDNA in bone marrow transplant patients that are unrelated or that are siblings of the donors, using a hidden Markov model (HMM) of identity-by-descent (IBD) states along the genome. Last, we demonstrate that comparing dd-cfDNA to the proportion of donor DNA in white blood cells can differentiate between relapse and the onset of graft-versus-host disease (GVHD). These methods alleviate some of the barriers to the implementation of GTD, which will further widen its clinical application. PMID:28771616
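
    The abstract above describes estimating the donor-derived cfDNA fraction from SNP read counts without a donor genotype. The following is only a toy illustration of the underlying signal, not the published GTD algorithm or its HMM refinement: at SNPs where the recipient is homozygous, reads carrying the other allele are attributed to the donor, and if the donor is assumed heterozygous at those sites the donor fraction is roughly twice the mean non-recipient allele fraction. The function name and the read counts are invented for this sketch.

```python
# Toy illustration (NOT the GTD algorithm): estimate the donor cfDNA fraction
# from SNPs where the recipient is homozygous for the reference allele. If the
# donor is assumed heterozygous at these sites, the donor fraction is roughly
# twice the mean non-recipient (alt) allele read fraction.
def estimate_donor_fraction(snp_read_counts):
    """snp_read_counts: list of (ref_reads, alt_reads) at recipient-homozygous SNPs."""
    alt_fractions = [alt / (ref + alt) for ref, alt in snp_read_counts if ref + alt > 0]
    return 2.0 * sum(alt_fractions) / len(alt_fractions)

# Invented read counts for illustration only
counts = [(990, 10), (1980, 20), (495, 5)]
print(f"estimated donor fraction ~ {estimate_donor_fraction(counts):.3f}")
```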

  17. Comparison of Ambient and Atmospheric Pressure Ion Sources for Cystic Fibrosis Exhaled Breath Condensate Ion Mobility-Mass Spectrometry Metabolomics

    NASA Astrophysics Data System (ADS)

    Zang, Xiaoling; Pérez, José J.; Jones, Christina M.; Monge, María Eugenia; McCarty, Nael A.; Stecenko, Arlene A.; Fernández, Facundo M.

    2017-08-01

    Cystic fibrosis (CF) is an autosomal recessive disorder caused by mutations in the gene that encodes the cystic fibrosis transmembrane conductance regulator (CFTR) protein. The vast majority of the mortality is due to progressive lung disease. Targeted and untargeted CF breath metabolomics investigations via exhaled breath condensate (EBC) analyses have the potential to expose metabolic alterations associated with CF pathology and aid in assessing the effectiveness of CF therapies. Here, transmission-mode direct analysis in real time traveling wave ion mobility spectrometry time-of-flight mass spectrometry (TM-DART-TWIMS-TOF MS) was tested as a high-throughput alternative to conventional direct infusion (DI) electrospray ionization (ESI) and atmospheric pressure chemical ionization (APCI) methods, and a critical comparison of the three ionization methods was conducted. EBC was chosen as the noninvasive surrogate for airway sampling over expectorated sputum as EBC can be collected in all CF subjects regardless of age and lung disease severity. When using pooled EBC collected from a healthy control, ESI detected the most metabolites, APCI an order of magnitude less, and TM-DART the least. TM-DART-TWIMS-TOF MS was used to profile metabolites in EBC samples from five healthy controls and four CF patients, finding that a panel of three discriminant EBC metabolites, some of which had been previously detected by other methods, differentiated these two classes with excellent cross-validated accuracy.

  18. Is cystic fibrosis genetic medicine's canary?

    PubMed

    Lindee, Susan; Mueller, Rebecca

    2011-01-01

    In 1989 the gene that causes cystic fibrosis (CF) was identified in a search accompanied by intense anticipation that the gene, once discovered, would lead rapidly to gene therapy. Many hoped that the disease would effectively disappear. Those affected were going to inhale vectors packed with functioning genes, which would go immediately to work in the lungs. It was a bewitching image, repeatedly invoked in both scientific and popular texts. Gene therapy clinical trials were carried out with a range of strategies and occasionally success seemed close, but by 1996 the idea that gene therapy for CF would quickly provide a cure was being abandoned by the communities engaged with treatment and research. While conventional wisdom holds that the death of Jesse Gelsinger in an unrelated gene therapy trial in 1999 produced new skepticism about gene therapy, the CF story suggests a different trajectory, and some different lessons. This article considers the rise and fall of gene therapy for CF and suggests that CF may provide a particularly compelling case study of a failed genomic technology, perhaps even of a medical "canary." The story of CF might be a kind of warning to us that genetic medicine may create as many problems as it solves, and that moving forward constructively with these techniques and practices requires many kinds of right information, not just about biology, but also about values, priorities, market forces, uncertainty, and consumer choice.

  19. Linked Metadata - lightweight semantics for data integration (Invited)

    NASA Astrophysics Data System (ADS)

    Hendler, J. A.

    2013-12-01

    The "Linked Open Data" cloud (http://linkeddata.org) is currently used to show how the linking of datasets, supported by SPARQL endpoints, is creating a growing set of linked data assets. This linked data space has been growing rapidly, and the last version collected is estimated to have had over 35 billion 'triples.' As impressive as this may sound, there is an inherent flaw in the way the linked data story is conceived. The idea is that all of the data is represented in a linked format (generally RDF) and applications will essentially query this cloud and provide mashup capabilities between the various kinds of data that are found. The view of linking in the cloud is fairly simple -links are provided by either shared URIs or by URIs that are asserted to be owl:sameAs. This view of the linking, which primarily focuses on shared objects and subjects in RDF's subject-predicate-object representation, misses a critical aspect of Semantic Web technology. Given triples such as * A:person1 foaf:knows A:person2 * B:person3 foaf:knows B:person4 * C:person5 foaf:name 'John Doe' this view would not consider them linked (barring other assertions) even though they share a common vocabulary. In fact, we get significant clues that there are commonalities in these data items from the shared namespaces and predicates, even if the traditional 'graph' view of RDF doesn't appear to join on these. Thus, it is the linking of the data descriptions, whether as metadata or other vocabularies, that provides the linking in these cases. This observation is crucial to scientific data integration where the size of the datasets, or even the individual relationships within them, can be quite large. (Note that this is not restricted to scientific data - search engines, social networks, and massive multiuser games also create huge amounts of data.) To convert all the triples into RDF and provide individual links is often unnecessary, and is both time and space intensive. Those looking to do on the fly integration may prefer to do more traditional data queries and then convert and link the 'views' returned at retrieval time, providing another means of using the linked data infrastructure without having to convert whole datasets to triples to provide linking. Web companies have been taking advantage of 'lightweight' semantic metadata for search quality and optimization (cf. schema.org), linking networks within and without web sites (cf. Facebook's Open Graph Protocol), and in doing various kinds of advertisement and user modeling across datasets. Scientific metadata, on the other hand, has traditionally been geared at being largescale and highly descriptive, and scientific ontologies have been aimed at high expressivity, essentially providing complex reasoning services rather than the less expressive vocabularies needed for data discovery and simple mappings that can allow humans (or more complex systems) when full scale integration is needed. Although this work is just the beginning for providing integration, as the community creates more and more datasets, discovery of these data resources on the Web becomes a crucial starting place. Simple descriptors, that can be combined with textual fields and/or common community vocabularies, can be a great starting place on bringing scientific data into the Web of Data that is growing in other communities. References: [1] Pouchard, Line C., et al. "A Linked Science investigation: enhancing climate change data discovery with semantic technologies." 
Earth science informatics 6.3 (2013): 175-185.
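
    The point about shared vocabularies can be made concrete with a few lines of RDF handling; the snippet below is an illustration added here, using the rdflib Python library (an assumption, not something referenced in the abstract) and hypothetical example.org URIs modeled on the triples quoted above.

        # Three triples that share no subject or object URIs are still "linked"
        # through a common vocabulary: both foaf:knows and foaf:name come from FOAF.
        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import FOAF

        A = Namespace("http://example.org/A/")
        B = Namespace("http://example.org/B/")
        C = Namespace("http://example.org/C/")

        g = Graph()
        g.add((A.person1, FOAF.knows, A.person2))
        g.add((B.person3, FOAF.knows, B.person4))
        g.add((C.person5, FOAF.name, Literal("John Doe")))

        # A pure graph join finds no shared nodes among the three statements,
        # but grouping predicates by namespace exposes the shared FOAF vocabulary.
        shared_vocab = {p for _, p, _ in g if str(p).startswith(str(FOAF))}
        print(shared_vocab)   # foaf:knows and foaf:name -> a metadata-level link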

  20. Development and Validation of an Ultradeep Next-Generation Sequencing Assay for Testing of Plasma Cell-Free DNA from Patients with Advanced Cancer.

    PubMed

    Janku, Filip; Zhang, Shile; Waters, Jill; Liu, Li; Huang, Helen J; Subbiah, Vivek; Hong, David S; Karp, Daniel D; Fu, Siqing; Cai, Xuyu; Ramzanali, Nishma M; Madwani, Kiran; Cabrilo, Goran; Andrews, Debra L; Zhao, Yue; Javle, Milind; Kopetz, E Scott; Luthra, Rajyalakshmi; Kim, Hyunsung J; Gnerre, Sante; Satya, Ravi Vijaya; Chuang, Han-Yu; Kruglyak, Kristina M; Toung, Jonathan; Zhao, Chen; Shen, Richard; Heymach, John V; Meric-Bernstam, Funda; Mills, Gordon B; Fan, Jian-Bing; Salathia, Neeraj S

    2017-09-15

    Purpose: Tumor-derived cell-free DNA (cfDNA) in plasma can be used for molecular testing and provides an attractive alternative to tumor tissue. Commonly used PCR-based technologies can test for a limited number of alterations at a time. Therefore, novel ultrasensitive technologies capable of testing for a broad spectrum of molecular alterations are needed to further personalize cancer therapy. Experimental Design: We developed a highly sensitive ultradeep next-generation sequencing (NGS) assay using reagents from TruSeqNano library preparation and NexteraRapid Capture target enrichment kits to generate plasma cfDNA sequencing libraries for mutational analysis in 61 cancer-related genes using common bioinformatics tools. The results were retrospectively compared with molecular testing of archival primary or metastatic tumor tissue obtained at different points of clinical care. Results: In a study of 55 patients with advanced cancer, the ultradeep NGS assay detected 82% (complete detection) to 87% (complete and partial detection) of the aberrations identified in discordantly collected corresponding archival tumor tissue. Patients with a low variant allele frequency (VAF) of mutant cfDNA survived longer than those with a high VAF did (P = 0.018). In patients undergoing systemic therapy, radiological response was positively associated with changes in cfDNA VAF (P = 0.02), and compared with unchanged/increased mutant cfDNA VAF, decreased cfDNA VAF was associated with longer time to treatment failure (TTF; P = 0.03). Conclusions: The ultradeep NGS assay has good sensitivity compared with conventional clinical mutation testing of archival specimens. A high VAF in mutant cfDNA corresponded with shorter survival. Changes in VAF of mutated cfDNA were associated with TTF. Clin Cancer Res; 23(18); 5648-56. ©2017 AACR. ©2017 American Association for Cancer Research.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamora, D; Moirano, J; Kanal, K

    Purpose: A fundamental measure performed during an annual physics CT evaluation confirms that the system-displayed CTDIvol nearly matches the independently measured value in phantom. For wide-beam (z-direction) CT scanners, AAPM Report 111 defined an ideal measurement method; however, the method often lacks practicality. The purpose of this preliminary study is to develop a set of conversion factors for a wide-beam CT scanner, relating the CTDIvol measured with a conventional setup (single CTDI phantom) versus the AAPM Report 111 approach (three abutting CTDI phantoms). Methods: For both the body CTDI and head CTDI, two acquisition setups were used: A) conventional single phantom and B) triple phantom. Of primary concern were the larger nominal beam widths for which a standard CTDI phantom setup would not provide adequate scatter conditions. Nominal beam width (160 or 120 mm) and kVp (100, 120, 140) were varied based on the underlying clinical protocol. Exposure measurements were taken using a CT pencil ion chamber in the center and 12 o’clock position, and CTDIvol was calculated with ‘nT’ limited to 100 mm. A conversion factor (CF) was calculated as the ratio of CTDIvol measured in setup B versus setup A. Results: For body CTDI, the CF ranged from 1.04 up to 1.10, indicating a 4–10% difference between usage of one and three phantoms. For a nominal beam width of 160 mm, the CF did vary with selected kVp. For head CTDI at nominal beam widths of 120 and 160 mm, the CF was 1.00 and 1.05, respectively, independent of the kVp used (100, 120, and 140). Conclusions: A clear understanding of the manufacturer method of estimating the displayed CTDIvol is important when interpreting annual test results, as the acquisition setup may lead to an error of up to 10%. With appropriately defined CF, single phantom use is feasible.
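
    As a reading aid, the toy calculation below shows how a conversion factor of this kind is defined and applied; the mGy values are hypothetical placeholders, since the abstract reports only the resulting CF ranges (1.04-1.10 for body, 1.00-1.05 for head).

        def conversion_factor(ctdivol_triple_phantom, ctdivol_single_phantom):
            """CF = CTDIvol with three abutting phantoms (AAPM Report 111 style
            scatter) divided by CTDIvol with a single conventional phantom."""
            return ctdivol_triple_phantom / ctdivol_single_phantom

        def corrected_ctdivol(ctdivol_single_phantom, cf):
            """Apply a previously derived CF to a conventional single-phantom value."""
            return ctdivol_single_phantom * cf

        cf_body_160mm = conversion_factor(16.5, 15.0)            # hypothetical mGy values
        print(round(cf_body_160mm, 2))                            # -> 1.1
        print(round(corrected_ctdivol(12.0, cf_body_160mm), 2))   # -> 13.2 mGy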

  2. Attempt of correlative observation of morphological synaptic connectivity by combining confocal laser-scanning microscope and FIB-SEM for immunohistochemical staining technique.

    PubMed

    Sonomura, Takahiro; Furuta, Takahiro; Nakatani, Ikuko; Yamamoto, Yo; Honma, Satoru; Kaneko, Takeshi

    2014-11-01

    Ten years have passed since the serial block-face scanning electron microscopy (SBF-SEM) method was developed [1]. In this innovative method, samples were automatically sectioned with an ultramicrotome placed inside a scanning electron microscope column, and the block surfaces were imaged one after another by SEM to capture back-scattered electrons. The contrast-inverted images obtained by SBF-SEM were very similar to those acquired using conventional TEM. SBF-SEM has made it easy to acquire transmission electron microscopy (TEM)-like image stacks at the mesoscale, the scale imaged with confocal laser-scanning microscopy (CF-LSM). Furthermore, serial-section SEM has been combined with the focused ion beam (FIB) milling method [2]. FIB-incorporated SEM (FIB-SEM) has enabled the acquisition of three-dimensional images with a higher z-axis resolution compared to ultramicrotome-equipped SEM. We tried immunocytochemistry for FIB-SEM and correlated this immunoreactivity with that in CF-LSM. Dendrites of neurons in the rat neostriatum were visualized using a recombinant viral vector. Moreover, the thalamostriatal afferent terminals were immunolabeled with Cy5 fluorescence for vesicular glutamate transporter 2 (VGluT2). After detection of the sites of terminals apposed to the dendrites by using CF-LSM, GFP and VGluT2 immunoreactivities were further developed for EM by using immunogold/silver enhancement and immunoperoxidase/diaminobenzidine (DAB) methods, respectively. We showed that conventional immunocytochemical staining for TEM was applicable to FIB-SEM. Furthermore, several synaptic contacts, which were thought to exist on the basis of CF-LSM findings, were confirmed with FIB-SEM, revealing the usefulness of the combined method of CF-LSM and FIB-SEM. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. The Planetary Data System (PDS) Data Dictionary Tool (LDDTool)

    NASA Astrophysics Data System (ADS)

    Raugh, Anne C.; Hughes, John S.

    2017-10-01

    One of the major design goals of the PDS4 development effort was to provide an avenue for discipline specialists and large data preparers such as mission archivists to extend the core PDS4 Information Model (IM) to include metadata definitions specific to their own contexts. This capability is critical for the Planetary Data System - an archive that deals with a data collection that is diverse along virtually every conceivable axis. Amid such diversity, it is in the best interests of the PDS archive and its users that all extensions to the core IM follow the same design techniques, conventions, and restrictions as the core implementation itself. Nevertheless, expecting every mission and discipline archivist seeking to define metadata for a new context to acquire expertise in information modeling, model-driven design, ontology, schema formulation, and PDS4 design conventions and philosophy is unrealistic, to say the least. To bridge that expertise gap, the PDS Engineering Node has developed the data dictionary creation tool known as “LDDTool”. This tool incorporates the same software used to maintain and extend the core IM, packaged with an interface that enables a developer to create a contextual information model using the same open, standards-based metadata framework PDS itself uses. Through this interface, the novice dictionary developer has immediate access to the common set of data types and unit classes for defining attributes, and a straightforward method for constructing classes. The more experienced developer, using the same tool, has access to more sophisticated modeling methods like abstraction and extension, and can define very sophisticated validation rules. We present the key features of the PDS Local Data Dictionary Tool, which both supports the development of extensions to the PDS4 IM and ensures their compatibility with the IM.

  4. NASA's Earth Observing Data and Information System - Supporting Interoperability through a Scalable Architecture (Invited)

    NASA Astrophysics Data System (ADS)

    Mitchell, A. E.; Lowe, D. R.; Murphy, K. J.; Ramapriyan, H. K.

    2011-12-01

    Initiated in 1990, NASA's Earth Observing System Data and Information System (EOSDIS) is currently a petabyte-scale archive of data designed to receive, process, distribute and archive several terabytes of science data per day from NASA's Earth science missions. Comprised of 12 discipline specific data centers collocated with centers of science discipline expertise, EOSDIS manages over 6800 data products from many science disciplines and sources. NASA supports global climate change research by providing scalable open application layers to the EOSDIS distributed information framework. This allows many other value-added services to access NASA's vast Earth Science Collection and allows EOSDIS to interoperate with data archives from other domestic and international organizations. EOSDIS is committed to NASA's Data Policy of full and open sharing of Earth science data. As metadata is used in all aspects of NASA's Earth science data lifecycle, EOSDIS provides a spatial and temporal metadata registry and order broker called the EOS Clearing House (ECHO) that allows efficient search and access of cross domain data and services through the Reverb Client and Application Programmer Interfaces (APIs). Another core metadata component of EOSDIS is NASA's Global Change Master Directory (GCMD) which represents more than 25,000 Earth science data set and service descriptions from all over the world, covering subject areas within the Earth and environmental sciences. With inputs from the ECHO, GCMD and Soil Moisture Active Passive (SMAP) mission metadata models, EOSDIS is developing a NASA ISO 19115 Best Practices Convention. Adoption of an international metadata standard enables a far greater level of interoperability among national and international data products. NASA recently concluded a 'Metadata Harmony Study' of EOSDIS metadata capabilities/processes of ECHO and NASA's Global Change Master Directory (GCMD), to evaluate opportunities for improved data access and use, reduce efforts by data providers and improve metadata integrity. The result was a recommendation for EOSDIS to develop a 'Common Metadata Repository (CMR)' to manage the evolution of NASA Earth Science metadata in a unified and consistent way by providing a central storage and access capability that streamlines current workflows while increasing overall data quality and anticipating future capabilities. For applications users interested in monitoring and analyzing a wide variety of natural and man-made phenomena, EOSDIS provides access to near real-time products from the MODIS, OMI, AIRS, and MLS instruments in less than 3 hours from observation. To enable interactive exploration of NASA's Earth imagery, EOSDIS is developing a set of standard services to deliver global, full-resolution satellite imagery in a highly responsive manner. EOSDIS is also playing a lead role in the development of the CEOS WGISS Integrated Catalog (CWIC), which provides search and access to holdings of participating international data providers. EOSDIS provides a platform to expose and share information on NASA Earth science tools and data via Earthdata.nasa.gov while offering a coherent and interoperable system for the NASA Earth Science Data System (ESDS) Program.

  6. Echo™ User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, Dustin Yewell

    Echo™ is a MATLAB-based software package designed for robust and scalable analysis of complex data workflows. An alternative to tedious, error-prone conventional processes, Echo is based on three transformative principles for data analysis: self-describing data, name-based indexing, and dynamic resource allocation. The software takes an object-oriented approach to data analysis, intimately connecting measurement data with associated metadata. Echo operations in an analysis workflow automatically track and merge metadata and computation parameters to provide a complete history of the process used to generate final results, while automated figure and report generation tools eliminate the potential to mislabel those results. History reporting and visualization methods provide straightforward auditability of analysis processes. Furthermore, name-based indexing on metadata greatly improves code readability for analyst collaboration and reduces opportunities for errors to occur. Echo efficiently manages large data sets using a framework that seamlessly allocates resources such that only the necessary computations to produce a given result are executed. Echo provides a versatile and extensible framework, allowing advanced users to add their own tools and data classes tailored to their own specific needs. Applying these transformative principles and powerful features, Echo greatly improves analyst efficiency and quality of results in many application areas.
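
    The following toy Python sketch (not Echo's actual MATLAB API) illustrates two of the principles named above, self-describing data and automatic history tracking; the class and function names are invented for illustration.

        class SelfDescribingSeries:
            """Couples measurement values with metadata and a processing history."""
            def __init__(self, values, **metadata):
                self.values = list(values)
                self.metadata = dict(metadata)     # units, channel name, etc.
                self.history = []                  # provenance of applied operations

            def apply(self, name, func, **params):
                """Apply an operation and record it, with its parameters, in history."""
                out = SelfDescribingSeries(func(self.values, **params), **self.metadata)
                out.history = self.history + [{"operation": name, "params": params}]
                return out

        def scale(values, factor=1.0):
            return [v * factor for v in values]

        accel = SelfDescribingSeries([0.1, 0.4, 0.2], units="g", channel="accel_x")
        accel_ms2 = accel.apply("scale", scale, factor=9.81)
        print(accel_ms2.metadata["channel"], accel_ms2.history)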

  7. Cell-free DNA detected by "liquid biopsy" as a potential prognostic biomarker in early breast cancer.

    PubMed

    Maltoni, Roberta; Casadio, Valentina; Ravaioli, Sara; Foca, Flavia; Tumedei, Maria Maddalena; Salvi, Samanta; Martignano, Filippo; Calistri, Daniele; Rocca, Andrea; Schirone, Alessio; Amadori, Dino; Bravaccini, Sara

    2017-03-07

    As conventional biomarkers for defining breast cancer (BC) subtypes are not always capable of predicting prognosis, the search for new biomarkers that can be easily detected by liquid biopsy is ongoing. It has long been known that cell-free DNA (CF-DNA) could be a promising diagnostic and prognostic marker in different tumor types, although its prognostic value in BC is yet to be confirmed. This retrospective study evaluated the prognostic role of CF-DNA quantity and integrity of HER2, MYC, BCAS1 and PI3KCA, which are frequently altered in BC. We collected 79 serum samples before surgery from women at first diagnosis of BC at Forlì Hospital (Italy) from 2002 to 2010. Twenty-one relapsed and 58 non-relapsed patients were matched by subtype and age. Blood samples were also collected from 10 healthy donors. All samples were analyzed by real-time PCR for CF-DNA quantity and integrity of all oncogenes. Except for MYC, BC patients showed significantly higher median values of CF-DNA quantity (ng) than healthy controls, who had higher DNA integrity and a lower apoptotic index. A difference nearing statistical significance was observed for HER2 short CF-DNA (p = 0.078, AUC value: 0.6305). HER2 short CF-DNA showed an odds ratio of 1.39 for disease recurrence with p = 0.056 (95% CI 0.991-1.973). Our study suggests that CF-DNA detected by liquid biopsy could have great potential in clinical practice once demonstration of its clinical validity and utility has been provided by prospective studies with robust assays.

  8. A practical implementation for a data dictionary in an environment of diverse data sets

    USGS Publications Warehouse

    Sprenger, Karla K.; Larsen, Dana M.

    1993-01-01

    The need for a data dictionary database at the U.S. Geological Survey's EROS Data Center (EDC) was reinforced by the Earth Observing System Data and Information System (EOSDIS) requirement for consistent field definitions of data sets residing at more than one archive center. The EDC requirement addresses the existence of multiple data sets with identical field definitions using various naming conventions. The EDC is developing a data dictionary database to accomplish the following goals: to standardize field names for ease in software development; to facilitate querying and updating of the data; and to generate ad hoc reports. The structure of the EDC electronic data dictionary database supports different metadata systems as well as many different data sets. A series of reports is used to keep consistency among data sets and various metadata systems.

  9. Lazy collaborative filtering for data sets with missing values.

    PubMed

    Ren, Yongli; Li, Gang; Zhang, Jun; Zhou, Wanlei

    2013-12-01

    As one of the biggest challenges in research on recommender systems, the data sparsity issue is mainly caused by the fact that users tend to rate a small proportion of items from the huge number of available items. This issue becomes even more problematic for the neighborhood-based collaborative filtering (CF) methods, as there are even lower numbers of ratings available in the neighborhood of the query item. In this paper, we aim to address the data sparsity issue in the context of neighborhood-based CF. For a given query (user, item), a set of key ratings is first identified by taking the historical information of both the user and the item into account. Then, an auto-adaptive imputation (AutAI) method is proposed to impute the missing values in the set of key ratings. We present a theoretical analysis to show that the proposed imputation method effectively improves the performance of the conventional neighborhood-based CF methods. The experimental results show that our new method of CF with AutAI outperforms six existing recommendation methods in terms of accuracy.
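
    To make the imputation idea concrete, here is a minimal sketch of a neighborhood prediction that falls back to a baseline imputation when a neighbor has not rated the query item; it is a generic illustration, not the paper's AutAI method, and the ratings are made up.

        # Baseline imputation: user mean + item mean - global mean, used to fill a
        # neighbor's missing rating before averaging the neighborhood.
        from statistics import mean

        ratings = {
            "alice": {"item1": 5, "item2": 3},
            "bob":   {"item1": 4, "item3": 2},
            "carol": {"item2": 4, "item3": 5},
            "dave":  {"item2": 2},
        }

        def baseline(user, item):
            all_r = [r for u in ratings.values() for r in u.values()]
            g = mean(all_r)
            u_mean = mean(ratings[user].values()) if ratings[user] else g
            i_ratings = [u[item] for u in ratings.values() if item in u]
            i_mean = mean(i_ratings) if i_ratings else g
            return u_mean + i_mean - g

        def predict(user, item):
            """Average the neighbors' ratings, imputing any that are missing."""
            neighbors = [r.get(item, baseline(other, item))
                         for other, r in ratings.items() if other != user]
            return mean(neighbors)

        print(round(predict("alice", "item3"), 2))   # dave's missing rating is imputed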

  10. Genetically and Phenotypically Distinct Pseudomonas aeruginosa Cystic Fibrosis Isolates Share a Core Proteomic Signature

    PubMed Central

    Penesyan, Anahit; Kumar, Sheemal S.; Kamath, Karthik; Shathili, Abdulrahman M.; Venkatakrishnan, Vignesh; Krisp, Christoph; Packer, Nicolle H.; Molloy, Mark P.; Paulsen, Ian T.

    2015-01-01

    The opportunistic pathogen Pseudomonas aeruginosa is among the main colonizers of the lungs of cystic fibrosis (CF) patients. We have isolated and sequenced several P. aeruginosa isolates from the sputum of CF patients and compared them with each other and with the model strain PAO1. Phenotypic analysis of CF isolates showed significant variability in colonization and virulence-related traits suggesting different strategies for adaptation to the CF lung. Genomic analysis indicated these strains shared a large set of core genes with the standard laboratory strain PAO1, and identified the genetic basis for some of the observed phenotypic differences. Proteomics revealed that in a conventional laboratory medium PAO1 expressed 827 proteins that were absent in the CF isolates while the CF isolates shared a distinctive signature set of 703 proteins not detected in PAO1. PAO1 expressed many transporters for the uptake of organic nutrients and relatively few biosynthetic pathways. Conversely, the CF isolates expressed a narrower range of transporters and a broader set of metabolic pathways for the biosynthesis of amino acids, carbohydrates, nucleotides and polyamines. The proteomic data suggests that in a common laboratory medium PAO1 may transport a diverse set of “ready-made” nutrients from the rich medium, whereas the CF isolates may only utilize a limited number of nutrients from the medium relying mainly on their own metabolism for synthesis of essential nutrients. These variations indicate significant differences between the metabolism and physiology of P. aeruginosa CF isolates and PAO1 that cannot be detected at the genome level alone. The widening gap between the increasing genomic data and the lack of phenotypic data means that researchers are increasingly reliant on extrapolating from genomic comparisons using experimentally characterized model organisms such as PAO1. While comparative genomics can provide valuable information, our data suggests that such extrapolations may be fraught with peril. PMID:26431321

  11. Patterned Roughness for Cross-flow Transition Control at Mach 6

    NASA Astrophysics Data System (ADS)

    Arndt, Alexander; Matlis, Eric; Semper, Michael; Corke, Thomas

    2017-11-01

    Experiments are performed to investigate patterned discrete roughness for transition control on a sharp right-circular cone at an angle of attack at Mach 6.0. The approach to transition control is based on exciting less-amplified (subcritical) stationary cross-flow (CF) modes that suppress the growth of the more-amplified (critical) CF modes, and thereby delay transition. The experiments were performed in the Air Force Academy Ludwieg Tube which is a conventional (noisy) design. The cone model is equipped with a motorized 3-D traversing mechanism that mounts on the support sting. The traversing mechanism held a closely-spaced pair of fast-response total pressure Pitot probes. The model utilized a removable tip to exchange between different tip-roughness conditions. Mean flow distortion x-development indicated that the transition Reynolds number increased by 25% with the addition of the subcritical roughness. The energy in traveling disturbances was centered in the band of most amplified traveling CF modes predicted by linear theory. The spatial pattern in the amplitude of the traveling CF modes indicated a nonlinear (sum and difference) interaction between the stationary and traveling CF modes that might explain differences in the transition Reynolds number between noisy and quiet environments. Air Force Grant FA9550-15-1-0278.

  12. Spectral correction factors for conventional neutron dosemeters used in high-energy neutron environments.

    PubMed

    Lee, K W; Sheu, R J

    2015-04-01

    High-energy neutrons (>10 MeV) contribute substantially to the dose fraction but result in only a small or negligible response in most conventional moderated-type neutron detectors. Neutron dosemeters used for radiation protection purposes are commonly calibrated with (252)Cf neutron sources and are used in various workplaces. A workplace-specific correction factor is suggested. In this study, the effect of the neutron spectrum on the accuracy of dose measurements was investigated. A set of neutron spectra representing various neutron environments was selected to study the dose responses of a series of Bonner spheres, including standard and extended-range spheres. By comparing (252)Cf-calibrated dose responses with reference values based on fluence-to-dose conversion coefficients, this paper presents recommendations for neutron field characterisation and appropriate correction factors for responses of conventional neutron dosemeters used in environments with high-energy neutrons. The correction depends on the estimated percentage of high-energy neutrons in the spectrum or the ratio between the measured responses of two Bonner spheres (the 4P6_8 extended-range sphere versus the 6″ standard sphere). © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
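
    Purely as an illustration of how such a workplace-specific factor would be applied in practice, the sketch below corrects a (252)Cf-calibrated reading using a lookup keyed to the two-sphere response ratio; the table values, functional form, and readings are hypothetical placeholders, not the factors recommended in the paper.

        def correction_from_sphere_ratio(ratio, table):
            """Pick the correction factor whose tabulated sphere-response ratio is
            closest to the measured extended-range/standard-sphere ratio."""
            return min(table, key=lambda entry: abs(entry[0] - ratio))[1]

        hypothetical_table = [   # (sphere response ratio, correction factor)
            (1.0, 1.0),
            (1.5, 1.3),
            (2.0, 1.8),
        ]

        measured_dose = 0.42     # mSv, reading from a 252Cf-calibrated dosemeter
        ratio = 1.6              # measured two-sphere response ratio
        cf = correction_from_sphere_ratio(ratio, hypothetical_table)
        print(round(measured_dose * cf, 3))   # spectrum-corrected dose estimate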

  13. Accurate quantitative CF-LIBS analysis of both major and minor elements in alloys via iterative correction of plasma temperature and spectral intensity

    NASA Astrophysics Data System (ADS)

    Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA

    2018-03-01

    The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgical quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
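
    For context, the sketch below spells out the basic calibration-free LIBS closure relations (a Boltzmann-plot temperature plus normalization of the species concentrations); it deliberately omits the paper's second-order iterative corrections, and the line data, intensities, and partition functions are invented solely for illustration.

        # Assumed line model: I = F * C_s * (g_k * A_ki / U_s(T)) * exp(-E_k/(kB*T))
        import math

        KB_EV = 8.617e-5                              # Boltzmann constant, eV/K

        # species -> (intensity, g_k, A_ki [1/s], E_k [eV]) per line (hypothetical)
        lines = {
            "Cu": [(1.000, 4, 2.0e7, 3.8), (0.083, 2, 1.5e7, 5.1)],
            "Zn": [(0.600, 3, 4.0e7, 4.0), (0.077, 5, 2.5e7, 5.8)],
        }
        partition = {"Cu": 2.0, "Zn": 1.0}            # U_s(T), held constant here

        def boltzmann_fit(species_lines):
            """Least-squares line through (E_k, ln(I / (g_k * A_ki)))."""
            xs = [e for _, _, _, e in species_lines]
            ys = [math.log(i / (g * a)) for i, g, a, _ in species_lines]
            mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
            slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                     / sum((x - mx) ** 2 for x in xs))
            return slope, my - slope * mx             # slope = -1/(kB*T), intercept

        fits = {s: boltzmann_fit(v) for s, v in lines.items()}
        mean_slope = sum(sl for sl, _ in fits.values()) / len(fits)
        T = -1.0 / (KB_EV * mean_slope)               # plasma temperature, K
        raw = {s: partition[s] * math.exp(q) for s, (_, q) in fits.items()}
        total = sum(raw.values())
        print({s: round(c / total, 3) for s, c in raw.items()}, round(T), "K")
        # -> roughly 80% Cu / 20% Zn at about 10,000 K with these invented inputs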

  14. Whey protein processing influences formula-induced gut maturation in preterm pigs.

    PubMed

    Li, Yanqi; Østergaard, Mette V; Jiang, Pingping; Chatterton, Dereck E W; Thymann, Thomas; Kvistgaard, Anne S; Sangild, Per T

    2013-12-01

    Immaturity of the gut predisposes preterm infants to nutritional challenges potentially leading to clinical complications such as necrotizing enterocolitis. Feeding milk formulas is associated with greater risk than fresh colostrum or milk, probably due to loss of bioactive proteins (e.g., immunoglobulins, lactoferrin, insulin-like growth factor, transforming growth factor-β) during industrial processing (e.g., pasteurization, filtration, spray-drying). We hypothesized that the processing method for whey protein concentrate (WPC) would affect gut maturation in formula-fed preterm pigs used as a model for preterm infants. Fifty-five caesarean-delivered preterm pigs were distributed into 4 groups given 1 of 4 isoenergetic diets: formula containing conventional WPC (filtration, multi-pasteurization, standard spray-drying) (CF); formula containing gently treated WPC (reduced filtration and pasteurization, gentle spray-drying) (GF); formula containing minimally treated WPC (rennet precipitation, reduced filtration, heat treatment <40°C, freeze-drying) (MF); and bovine colostrum (used as a positive reference group) (BC). Relative to CF, GF, and MF pigs, BC pigs had greater villus heights, lactose digestion, and absorption and lower gut permeability (P < 0.05). MF and BC pigs had greater plasma citrulline concentrations than CF and GF pigs and intestinal interleukin-8 was lower in BC pigs than in the other groups (P < 0.05). MF pigs had lower concentrations of intestinal claudin-4, cleaved caspase-3, and phosphorylated c-Jun than CF pigs (P < 0.05). The conventional and gently treated WPCs had similar efficacy in stimulating proliferation of porcine intestinal epithelial cells. We conclude that processing of WPC affects intestinal structure, function, and integrity when included in formulas for preterm pigs. Optimization of WPC processing technology may be important to preserve the bioactivity and nutritional value of formulas for sensitive newborns.

  15. Modeling, Fabrication and Characterization of Scalable Electroless Gold Plated Nanostructures for Enhanced Surface Plasmon Resonance

    NASA Astrophysics Data System (ADS)

    Jang, Gyoung Gug

    The scientific and industrial demand for controllable thin gold (Au) film and Au nanostructures is increasing in many fields including opto-electronics, photovoltaics, MEMS devices, diagnostics, bio-molecular sensors, spectro-/microscopic surfaces and probes. In this study, a novel continuous flow electroless (CF-EL) Au plating method is developed to fabricate uniform Au thin films under ambient conditions. The enhanced local mass transfer rate and continuous deposition resulting from CF-EL plating improved the physical uniformity of deposited Au films and thermally transformed nanoparticles (NPs). Au films and NPs exhibited improved optical photoluminescence (PL) and surface plasmon resonance (SPR), respectively, relative to batch immersion EL (BI-EL) plating. Suggested mass transfer models of Au mole deposition are consistent with the optical features of CF-EL and BI-EL films. The prototype CF-EL plating system was upgraded to an automated, scalable CF-EL plating system with real-time transmission UV-vis (T-UV) spectroscopy, which preserves the advantages of CF-EL plating, such as more uniform surface morphology, and overcomes the disadvantages of conventional EL plating, such as the lack of a continuous process and low deposition rates, by providing continuous processing and a controllable deposition rate. Throughout this work, dynamic morphological and chemical transitions during redox-driven self-assembly of Ag and Au films on silica surfaces under kinetic and equilibrium conditions are distinguished by correlating real-time T-UV spectroscopy with X-ray photoelectron spectroscopy (XPS) and scanning electron microscopy (SEM) measurements. The characterization suggests that four previously unrecognized time-dependent physicochemical regimes occur during consecutive EL deposition of silver (Ag) and Au onto tin-sensitized silica surfaces: self-limiting Ag activation; transitory Ag NP formation; transitional Au-Ag alloy formation during galvanic replacement of Ag by Au; and uniform morphology formation under controlled hydraulic conditions. A method to achieve the time-resolved optical profile of EL Au plating was devised and provided a new transitional EL Au film growth model, which validated the mass transfer model prediction of the deposited thickness of ≤100 nm thin films. As part of the project, to validate the mass transfer model, a spectrophotometric method for quantitative analysis of metal ions was developed that improves the limit of detection to a level comparable to conventional instrumental analysis. Overall, the modeling, fabrication and characterization of this novel CF-EL plating method were performed to achieve an ultimate purpose: developing a reliable, inexpensive wet-chemical process for controlled metal thin film and nanostructure fabrication.

  16. Dynamic Non-Hierarchical File Systems for Exascale Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Darrell E.; Miller, Ethan L

    This constitutes the final report for “Dynamic Non-Hierarchical File Systems for Exascale Storage”. The ultimate goal of this project was to improve data management in scientific computing and high-end computing (HEC) applications, and to achieve this goal we proposed: to develop the first, HEC-targeted, file system featuring rich metadata and provenance collection, extreme scalability, and future storage hardware integration as core design goals, and to evaluate and develop a flexible non-hierarchical file system interface suitable for providing more powerful and intuitive data management interfaces to HEC and scientific computing users. Data management is swiftly becoming a serious problem in the scientific community – while copious amounts of data are good for obtaining results, finding the right data is often daunting and sometimes impossible. Scientists participating in a Department of Energy workshop noted that most of their time was spent “...finding, processing, organizing, and moving data and it’s going to get much worse”. Scientists should not be forced to become data mining experts in order to retrieve the data they want, nor should they be expected to remember the naming convention they used several years ago for a set of experiments they now wish to revisit. Ideally, locating the data you need would be as easy as browsing the web. Unfortunately, existing data management approaches are usually based on hierarchical naming, a 40 year-old technology designed to manage thousands of files, not exabytes of data. Today’s systems do not take advantage of the rich array of metadata that current high-end computing (HEC) file systems can gather, including content-based metadata and provenance information. As a result, current metadata search approaches are typically ad hoc and often work by providing a parallel management system to the “main” file system, as is done in Linux (the locate utility), personal computers, and enterprise search appliances. These search applications are often optimized for a single file system, making it difficult to move files and their metadata between file systems. Users have tried to solve this problem in several ways, including the use of separate databases to index file properties, the encoding of file properties into file names, and separately gathering and managing provenance data, but none of these approaches has worked well, either due to limited usefulness or scalability, or both. Our research addressed several key issues: High-performance, real-time metadata harvesting: extracting important attributes from files dynamically and immediately updating indexes used to improve search; Transparent, automatic, and secure provenance capture: recording the data inputs and processing steps used in the production of each file in the system; Scalable indexing: indexes that are optimized for integration with the file system; Dynamic file system structure: our approach provides dynamic directories similar to those in semantic file systems, but these are the native organization rather than a feature grafted onto a conventional system. In addition to these goals, our research effort will include evaluating the impact of new storage technologies on the file system design and performance. In particular, the indexing and metadata harvesting functions can potentially benefit from the performance improvements promised by new storage class memories.
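
    The two file-system ideas most relevant to metadata, harvesting attributes into an index at ingest time and treating a directory as a metadata query, can be sketched in a few lines; the snippet below is a toy illustration written for this summary, not the project's implementation, and the attribute names and file names are invented.

        from collections import defaultdict

        index = defaultdict(set)      # (attribute, value) -> set of file ids

        def harvest(file_id, attributes):
            """Index a file's metadata attributes as soon as the file is ingested."""
            for key, value in attributes.items():
                index[(key, value)].add(file_id)

        def dynamic_directory(**query):
            """Return files matching all query attributes, like a virtual folder."""
            matches = [index.get(item, set()) for item in query.items()]
            return set.intersection(*matches) if matches else set()

        harvest("run_0042.nc", {"experiment": "climate", "variable": "tas", "year": 2001})
        harvest("run_0043.nc", {"experiment": "climate", "variable": "pr", "year": 2001})
        print(dynamic_directory(experiment="climate", year=2001))   # both files
        print(dynamic_directory(variable="tas"))                    # only run_0042.nc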

  17. Impacts of supplementing chemical fertilizers with organic fertilizers manufactured using pig manure as a substrate on the spread of tetracycline resistance genes in soil.

    PubMed

    Kang, Yijun; Hao, Yangyang; Shen, Min; Zhao, Qingxin; Li, Qing; Hu, Jian

    2016-08-01

    Using pig manure (PM) compost as a partial substitute for the conventional chemical fertilizers (CFs) is considered an effective approach in sustainable agricultural systems. This study aimed to analyze the impacts of supplementing CF with organic fertilizers (OFs) manufactured using pig manure as a substrate on the spread of tetracycline resistance genes (TRGs) as well as the community structures and diversities of tetracycline-resistant bacteria (TRB) in bulk and cucumber rhizosphere soils. In this study, three organic fertilizers manufactured using the PM as a substrate, namely fresh PM, common OF, and bio-organic fertilizer (BF), were supplemented with a CF. Composted manures combined with a CF did not significantly increase TRB compared with the CF alone, but PM treatment resulted in the long-term survival of TRB in soil. The use of CF+PM also increased the risk of spreading TRGs in soil. As beneficial microorganisms in BF may function as reservoirs for the spread of antibiotic resistance genes, care should be taken when adding them to the OF matrix. The PM treatment significantly altered the community structures and increased the species diversity of TRB, especially in the rhizosphere soil. BF treatment caused insignificant changes in the community structure of TRB compared with CF treatment, yet it reduced the species diversities of TRB in soil. Thus, the partial use of fresh PM as a substitute for CF could increase the risk of spread of TRGs. Apart from plant growth promotion, BF was a promising fertilizer owing to its potential ability to control TRGs. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
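
    The architecture described above, metadata servers that talk to a shared store only through an abstract key-value interface, can be illustrated with a short sketch; the class and method names below are hypothetical, not the system's actual API, and the in-memory store merely stands in for a shared low-latency persistent store.

        import json
        from abc import ABC, abstractmethod

        class KeyValueMetadataStore(ABC):
            """Minimal key-value interface a shared persistent store would expose."""
            @abstractmethod
            def put(self, key: str, value: bytes) -> None: ...
            @abstractmethod
            def get(self, key: str) -> bytes: ...

        class InMemoryStore(KeyValueMetadataStore):
            def __init__(self):
                self._data = {}
            def put(self, key, value):
                self._data[key] = value
            def get(self, key):
                return self._data[key]

        class MetadataServer:
            """Serializes file metadata to key-value pairs; any server sharing the
            same store can answer a request independently of the others."""
            def __init__(self, store: KeyValueMetadataStore):
                self.store = store
            def set_attrs(self, path, attrs):
                self.store.put("meta:" + path, json.dumps(attrs).encode())
            def get_attrs(self, path):
                return json.loads(self.store.get("meta:" + path))

        shared = InMemoryStore()
        MetadataServer(shared).set_attrs("/data/run1", {"owner": "alice", "size": 4096})
        print(MetadataServer(shared).get_attrs("/data/run1"))   # answered by a second server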

  19. Lessons Learned From 104 Years of Mobile Observatories

    NASA Astrophysics Data System (ADS)

    Miller, S. P.; Clark, P. D.; Neiswender, C.; Raymond, L.; Rioux, M.; Norton, C.; Detrick, R.; Helly, J.; Sutton, D.; Weatherford, J.

    2007-12-01

    As the oceanographic community ventures into a new era of integrated observatories, it may be helpful to look back on the era of "mobile observatories" to see what Cyberinfrastructure lessons might be learned. For example, SIO has been operating research vessels for 104 years, supporting a wide range of disciplines: marine geology and geophysics, physical oceanography, geochemistry, biology, seismology, ecology, fisheries, and acoustics. In the last 6 years progress has been made with diverse data types, formats and media, resulting in a fully-searchable online SIOExplorer Digital Library of more than 800 cruises (http://SIOExplorer.ucsd.edu). Public access to SIOExplorer is considerable, with 795,351 files (206 GB) downloaded last year. During the last 3 years the efforts have been extended to WHOI, with a "Multi-Institution Testbed for Scalable Digital Archiving" funded by the Library of Congress and NSF (IIS 0455998). The project has created a prototype digital library of data from both institutions, including cruises, Alvin submersible dives, and ROVs. In the process, the team encountered technical and cultural issues that will be facing the observatory community in the near future. Technological Lessons Learned: Shipboard data from multiple institutions are extraordinarily diverse, and provide a good training ground for observatories. Data are gathered from a wide range of authorities, laboratories, servers and media, with little documentation. Conflicting versions exist, generated by alternative processes. Domain- and institution-specific issues were addressed during initial staging. Data files were categorized and metadata harvested with automated procedures. With our second-generation approach to staging, we achieve higher levels of automation with greater use of controlled vocabularies. Database and XML- based procedures deal with the diversity of raw metadata values and map them to agreed-upon standard values, in collaboration with the Marine Metadata Interoperability (MMI) community. All objects are tagged with an expert level, thus serving an educational audience, as well as research users. After staging, publication into the digital library is completely automated. The technical challenges have been largely overcome, thanks to a scalable, federated digital library architecture from the San Diego Supercomputer Center, implemented at SIO, WHOI and other sites. The metadata design is flexible, supporting modular blocks of metadata tailored to the needs of instruments, samples, documents, derived products, cruises or dives, as appropriate. Controlled metadata vocabularies, with content and definitions negotiated by all parties, are critical. Metadata may be mapped to required external standards and formats, as needed. Cultural Lessons Learned: The cultural challenges have been more formidable than expected. They became most apparent during attempts to categorize and stage digital data objects across two institutions, each with their own naming conventions and practices, generally undocumented, and evolving across decades. Whether the questions concerned data ownership, collection techniques, data diversity or institutional practices, the solution involved a joint discussion with scientists, data managers, technicians and archivists, working together. Because metadata discussions go on endlessly, significant benefit comes from dictionaries with definitions of all community-authorized metadata values.
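
    The staging step described above, mapping diverse raw metadata values onto agreed-upon standard terms, is easy to picture with a small lookup; the vocabulary entries below are hypothetical examples, not the actual MMI or SIOExplorer terms.

        CONTROLLED_INSTRUMENT_TERMS = {
            "ctd": "CTD",
            "c.t.d.": "CTD",
            "conductivity-temperature-depth": "CTD",
            "multibeam": "Multibeam Echosounder",
            "mb sonar": "Multibeam Echosounder",
        }

        def normalize_instrument(raw_value):
            """Map a raw value to its standard term, or flag it for expert review."""
            key = raw_value.strip().lower()
            return CONTROLLED_INSTRUMENT_TERMS.get(key, "UNMAPPED:" + raw_value)

        for raw in ["CTD", "Multibeam", "Gravimeter"]:
            print(raw, "->", normalize_instrument(raw))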

  20. An integrated content and metadata based retrieval system for art.

    PubMed

    Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James

    2004-03-01

    A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.

  1. Non-invasive measurement of liver and pancreas fibrosis in patients with cystic fibrosis.

    PubMed

    Friedrich-Rust, Mireen; Schlueter, Nina; Smaczny, Christina; Eickmeier, Olaf; Rosewich, Martin; Feifel, Kirstin; Herrmann, Eva; Poynard, Thierry; Gleiber, Wolfgang; Lais, Christoph; Zielen, Stefan; Wagner, Thomas O F; Zeuzem, Stefan; Bojunga, Joerg

    2013-09-01

    Patients with cystic fibrosis (CF) have relevant morbidity and mortality caused by CF-related liver disease. While transient elastography (TE) is an established elastography method in hepatology centers, Acoustic Radiation Force Impulse (ARFI) imaging is a novel ultrasound-based elastography method which is integrated in a conventional ultrasound system. The aim of the present study was to evaluate the prevalence of liver fibrosis in patients with CF using TE, ARFI imaging and fibrosis blood tests. 106 patients with CF were prospectively included in the present study and received ARFI imaging of the left and right liver lobes, ARFI of the pancreas, TE of the liver, and laboratory evaluation. The prevalence of liver fibrosis according to recently published best practice guidelines for CF-related liver disease (CFLD) was 22.6%. The prevalence of significant liver fibrosis assessed by TE, ARFI of the right liver lobe, ARFI of the left liver lobe, Fibrotest, and Fibrotest corrected by haptoglobin was 17%, 24%, 40%, 7%, and 16%, respectively. The best agreement was found for TE, ARFI of the right liver lobe and Fibrotest corrected by haptoglobin. Patients with pancreatic insufficiency had significantly lower pancreas ARFI values than patients without. ARFI imaging and TE seem to be promising non-invasive methods for the detection of liver fibrosis in patients with CF. Copyright © 2013 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  2. Inorganic Nanoparticle-Modified Poly(Phenylene Sulphide)/ Carbon Fiber Laminates: Thermomechanical Behaviour.

    PubMed

    Díez-Pascual, Ana M; Naffakh, Mohammed

    2013-07-26

    Carbon fiber (CF)-reinforced high-temperature thermoplastics such as poly(phenylene sulphide) (PPS) are widely used in structural composites for aerospace and automotive applications. The porosity of CF-reinforced polymers is a very important topic for practical applications since there is a direct correlation between void content and mechanical properties. In this study, inorganic fullerene-like tungsten disulphide (IF-WS₂) lubricant nanoparticles were used to manufacture PPS/IF-WS₂/CF laminates via melt-blending and hot-press processing, and the effect of IF-WS₂ loading on the quality, thermal and mechanical behaviour of the hybrid composites was investigated. The addition of IF-WS₂ improved fiber impregnation, resulting in a lower degree of porosity and increased delamination resistance, compression and flexural properties; their reinforcement effect was greater at temperatures above the glass transition (Tg). IF-WS₂ contents higher than 0.5 wt % increased Tg and the heat deflection temperature while reducing the coefficient of thermal expansion. The multiscale laminates exhibited a higher ignition point and notably reduced peak heat release rate compared to PPS/CF. The coexistence of micro- and nano-scale fillers resulted in synergistic effects that enhanced the stiffness, strength, thermal conductivity and flame retardancy of the matrix. The results presented herein demonstrate that the IF-WS₂ are very promising nanofillers to improve the thermomechanical properties of conventional thermoplastic/CF composites.

  3. Inorganic Nanoparticle-Modified Poly(Phenylene Sulphide)/Carbon Fiber Laminates: Thermomechanical Behaviour

    PubMed Central

    Díez-Pascual, Ana M.; Naffakh, Mohammed

    2013-01-01

    Carbon fiber (CF)-reinforced high-temperature thermoplastics such as poly(phenylene sulphide) (PPS) are widely used in structural composites for aerospace and automotive applications. The porosity of CF-reinforced polymers is a very important topic for practical applications since there is a direct correlation between void content and mechanical properties. In this study, inorganic fullerene-like tungsten disulphide (IF-WS2) lubricant nanoparticles were used to manufacture PPS/IF-WS2/CF laminates via melt-blending and hot-press processing, and the effect of IF-WS2 loading on the quality, thermal and mechanical behaviour of the hybrid composites was investigated. The addition of IF-WS2 improved fiber impregnation, resulting in a lower degree of porosity and increased delamination resistance, compression and flexural properties; their reinforcement effect was greater at temperatures above the glass transition (Tg). IF-WS2 contents higher than 0.5 wt % increased Tg and the heat deflection temperature while reducing the coefficient of thermal expansion. The multiscale laminates exhibited a higher ignition point and notably reduced peak heat release rate compared to PPS/CF. The coexistence of micro- and nano-scale fillers resulted in synergistic effects that enhanced the stiffness, strength, thermal conductivity and flame retardancy of the matrix. The results presented herein demonstrate that the IF-WS2 are very promising nanofillers to improve the thermomechanical properties of conventional thermoplastic/CF composites. PMID:28811429

  4. Momentum and velocity of the ablated material in laser machining of carbon fiber preforms

    NASA Astrophysics Data System (ADS)

    Mucha, P.; Speker, N.; Weber, R.; Graf, T.

    2013-11-01

    The automation of the fabrication of CFRP (carbon-fiber-reinforced plastics) parts demands efficient and low-cost machining technologies. In conventional cutting technologies, tool wear and low process speeds are some of the reasons for high costs. Thus, the use of lasers is an attractive option for cutting CF-preforms. A typical effect degrading the cut quality when laser cutting CF-preforms is a bulged cutting edge. This effect is assumed to be caused by interaction of the fibers with the ablated material, which leaves the kerf at high velocity. Hence, a method for measuring the momentum and the velocity of the vapor is presented in this article. To measure the momentum of the ablated material, the CF-preform is mounted on a precision scale while cutting it with a laser. The direction of the momentum was determined by measuring the momentum parallel and orthogonal to the CF-preform surface. A change of the direction of the momentum with different cutting speeds is assessed at constant laser power. Averaged velocities of the ablation products of up to 300 m/s were determined by measuring the ablated mass and the momentum.
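
    The underlying relation is simply v = p / m; the short example below combines hypothetical parallel and orthogonal momentum components into a resultant, then divides by a hypothetical ablated mass to recover an average vapor velocity of the same order as the values reported above.

        import math

        p_parallel = 1.2e-4      # N*s, momentum component parallel to the surface
        p_orthogonal = 2.1e-4    # N*s, momentum component orthogonal to the surface
        ablated_mass = 1.0e-6    # kg of material removed during the measurement

        p_total = math.hypot(p_parallel, p_orthogonal)
        velocity = p_total / ablated_mass
        angle = math.degrees(math.atan2(p_orthogonal, p_parallel))
        print(f"{velocity:.0f} m/s at {angle:.0f} deg from the surface")   # ~242 m/s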

  5. Plasma treatment of polymer dielectric films to improve capacitive energy storage

    NASA Technical Reports Server (NTRS)

    Yializis, A.; Binder, M.; Mammone, R. J.

    1994-01-01

    Demand for compact instrumentation, portable field equipment, and new electromagnetic weapons is creating a need for new dielectric materials with higher energy storage capabilities. Recognizing the need for higher energy storage capacitors, the Army Research Lab at Fort Monmouth, NJ, initiated a program a year ago to investigate potential methods for increasing the dielectric strength of polyvinylidene difluoride (PVDF) film, which is the highest energy density material commercially available today. Treatment of small-area PVDF films in a CF4/O2 plasma showed that the dielectric strength of PVDF films can be increased by as much as 20 percent when treated in a 96 percent CF4/4 percent O2 plasma. Because the stored energy scales with the square of the breakdown field (1.20 squared equals 1.44), this corresponds to a 44 percent increase in the energy storage of a PVDF capacitor, which is significant considering that the treatment can be implemented in a conventional metallizing chamber with minimum capital investment. The data show that the improved breakdown strength may be unique to PVDF film and the particular CF4/O2 gas mixture, because PVDF film treated with 100 percent CF4, 100 percent O2, Ar gas plasma, and electron irradiation shows no improvement in breakdown strength. Other data presented include dissipation factor, dielectric constant, and surface tension measurements.
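
    The 20-to-44 percent step can be written out as a one-line energy-density relation (a standard dielectric argument, not quoted from the report): for a linear dielectric, the maximum storable energy density scales with the square of the breakdown field,

        u_{\max} = \tfrac{1}{2}\,\varepsilon_0 \varepsilon_r E_{bd}^2
        \quad\Rightarrow\quad
        \frac{u_{\text{treated}}}{u_{\text{untreated}}}
          = \left(\frac{E_{bd,\text{treated}}}{E_{bd,\text{untreated}}}\right)^2
          = (1.20)^2 = 1.44 .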

  6. [Dynamic changes of soil microbial populations and enzyme activities in super-high yielding summer maize farmland soil].

    PubMed

    Hou, Peng; Wang, Yong-jun; Wang, Kong-jun; Yang, Jin-sheng; Li, Deng-hai; Dong, Shu-ting; Liu, Jing-guo

    2008-08-01

    To reveal the characteristics of the dynamic changes of soil microbial populations and enzyme activities in super-high yielding (>15,000 kg x hm(-2)) summer maize farmland soil, a comparative study was conducted in the experimental fields of the National Maize Engineering Research Center (Shandong). On fields with an annual yield of >15,000 kg x hm(-2) in three consecutive years, a plot with a yield of 20,322 kg x hm(-2) (HF) was chosen for comparison with conventional farmland (CF) whose maize yield was 8,920.1 kg x hm(-2). The numbers of bacteria, fungi, and actinomycetes as well as the activities of urease and invertase in the 0-20 cm soil layer were determined. The results showed that during the growth period of maize, the numbers of bacteria, fungi, and actinomycetes in the two farmland soils first increased and then declined. At the later growth stages of maize, the numbers of soil microbes, especially bacteria and actinomycetes, were lower in HF than in CF. At the harvest stage, the ratio of the number of soil bacteria to fungi (B/F) in HF was 2.03 times higher than that at the sowing stage, and 3.02 times higher than that in CF. The B/F ratio in CF differed little between harvest and sowing stages. The soil urease activity in HF was significantly lower than that in CF at the jointing stage, and the invertase activity in HF decreased rapidly after the blooming stage, being significantly lower than that in CF.

  7. Performance and welfare of rabbit does in various caging systems.

    PubMed

    Mikó, A; Matics, Zs; Gerencsér, Zs; Odermatt, M; Radnai, I; Nagy, I; Szendrő, K; Szendrő, Zs

    2014-07-01

    The objective of the study was to compare production and welfare of rabbit does and their kits housed in various types of cages. Female rabbits were randomly allocated to four groups with the following cage types: CN: common wire-mesh flat-deck cage, without footrest; CF: cage similar to the CN but with a plastic footrest; ECWP: enlarged cage with wire-mesh platform; and ECPP: extra enlarged cage with plastic-mesh platform. All does were inseminated on the same day, 11 days after kindling. Reproductive performance was evaluated during the first five consecutive kindlings. Severity of sore hocks was scored at each insemination. Location preference of the does and the platform usage of their kits were evaluated. Kindling rate, litter size (total born, born alive, alive at 21 and 35 days) and kit mortality were not significantly influenced by the cage types. The litter weight at 21 days was higher in ECWP and ECPP cages than in the CF group (3516, 3576 and 3291 g, respectively; P2.5 cm) and 3 to 4 (3=callus opened, cracks present; 4=wounds) were 58%, 60%, 78% and 48%, and 0%, 5%, 0% and 48% in groups ECPP, ECWP, CF and CN, respectively. A higher number of daily nest visits was observed for CF does than for ECWP does (12.5 v. 5.9; P2/day) was higher in the CF group than in the ECWP group (12.1 v. 3.2%; P<0.01). Within the large cages, does were observed on the platform more frequently in ECPP cages than in ECWP cages (56.9% v. 31.7%; P<0.001). Similarly, 2.7% and 0.2% of kits at 21 days of age, and 33.2% and 5.2% of kits at 28 days of age, were found on the platforms of ECPP and ECWP cages, respectively. In conclusion, cages larger than the conventional ones improved kits' weaning weight, plastic footrests and plastic-mesh platforms in conventional and/or large cages reduced sore hock problems, and plastic-mesh platforms were used more by both does and kits than wire-mesh platforms.

  8. Acute and Short-term Toxic Effects of Conventionally Fractionated vs Hypofractionated Whole-Breast Irradiation: A Randomized Clinical Trial.

    PubMed

    Shaitelman, Simona F; Schlembach, Pamela J; Arzu, Isidora; Ballo, Matthew; Bloom, Elizabeth S; Buchholz, Daniel; Chronowski, Gregory M; Dvorak, Tomas; Grade, Emily; Hoffman, Karen E; Kelly, Patrick; Ludwig, Michelle; Perkins, George H; Reed, Valerie; Shah, Shalin; Stauder, Michael C; Strom, Eric A; Tereffe, Welela; Woodward, Wendy A; Ensor, Joe; Baumann, Donald; Thompson, Alastair M; Amaya, Diana; Davis, Tanisha; Guerra, William; Hamblin, Lois; Hortobagyi, Gabriel; Hunt, Kelly K; Buchholz, Thomas A; Smith, Benjamin D

    2015-10-01

    The most appropriate dose fractionation for whole-breast irradiation (WBI) remains uncertain. To assess acute and 6-month toxic effects and quality of life (QOL) with conventionally fractionated WBI (CF-WBI) vs hypofractionated WBI (HF-WBI). Unblinded randomized trial of CF-WBI (n = 149; 50.00 Gy/25 fractions + boost [10.00-14.00 Gy/5-7 fractions]) vs HF-WBI (n = 138; 42.56 Gy/16 fractions + boost [10.00-12.50 Gy/4-5 fractions]) following breast-conserving surgery administered in community-based and academic cancer centers to 287 women 40 years or older with stage 0 to II breast cancer for whom WBI without addition of a third field was recommended; 76% of study participants (n = 217) were overweight or obese. Patients were enrolled from February 2011 through February 2014 and observed for a minimum of 6 months. Administration of CF-WBI or HF-WBI. Physician-reported acute and 6-month toxic effects using National Cancer Institute Common Toxicity Criteria, and patient-reported QOL using the Functional Assessment of Cancer Therapy for Patients with Breast Cancer (FACT-B). All analyses were intention to treat, with outcomes compared using the χ2 test, Cochran-Armitage test, and ordinal logistic regression. Of 287 participants, 149 were randomized to CF-WBI and 138 to HF-WBI. Treatment arms were well matched for baseline characteristics, including FACT-B total score (HF-WBI, 120.1 vs CF-WBI, 118.8; P = .46) and individual QOL items such as somewhat or more lack of energy (HF-WBI, 38% vs CF-WBI, 39%; P = .86) and somewhat or more trouble meeting family needs (HF-WBI, 10% vs CF-WBI, 14%; P = .54). Maximum physician-reported acute dermatitis (36% vs 69%; P < .001), pruritus (54% vs 81%; P < .001), breast pain (55% vs 74%; P = .001), hyperpigmentation (9% vs 20%; P = .002), and fatigue (9% vs 17%; P = .02) during irradiation were lower in patients randomized to HF-WBI. The rate of overall grade 2 or higher acute toxic effects was less with HF-WBI than with CF-WBI (47% vs 78%; P < .001). Six months after irradiation, physicians reported less fatigue in patients randomized to HF-WBI (0% vs 6%; P = .01), and patients randomized to HF-WBI reported less lack of energy (23% vs 39%; P < .001) and less trouble meeting family needs (3% vs 9%; P = .01). Multivariable regression confirmed the superiority of HF-WBI in terms of patient-reported lack of energy (odds ratio [OR], 0.39; 95% CI, 0.24-0.63) and trouble meeting family needs (OR, 0.34; 95% CI, 0.16-0.75). Treatment with HF-WBI appears to yield lower rates of acute toxic effects than CF-WBI as well as less fatigue and less trouble meeting family needs 6 months after completing radiation therapy. These findings should be communicated to patients as part of shared decision making. clinicaltrials.gov Identifier: NCT01266642.

  9. Nutrient source and tillage influences on nitrogen availability in a Southern Piedmont corn cropping system

    USDA-ARS?s Scientific Manuscript database

    Combinations of conservation tillage and poultry litter (PL) can increase crop production in southeastern USA soils compared to conventional tillage (CT) and chemical fertilizer (CF). The reason for the beneficial response is usually attributed to improved water and nutrient availability. We evaluat...

  10. The AMMA database

    NASA Astrophysics Data System (ADS)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, intensive use of satellite data and diverse modelling studies. The AMMA database therefore aims to store a large amount and wide variety of data, and to provide the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations, processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This chart only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Collaboration between data producers and users, and mention of the AMMA project in any publication, are also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres through a single web portal. This website is composed of different modules: - Registration: forms to register and to read and sign the data use chart when a user visits for the first time; - Data access interface: a user-friendly tool for building a data extraction request by selecting criteria such as location, time and parameters; the request can concern local, satellite and model data; - Documentation: a catalogue of all the available data and their metadata. These tools have been developed using standard, free languages and software: - a Linux system with an Apache web server and a Tomcat application server; - J2EE tools: the JSF and Struts frameworks, Hibernate; - relational database management systems: PostgreSQL and MySQL; - an OpenLDAP directory. In order to facilitate access to the data by African scientists, the complete system has been mirrored at the AGRHYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through either of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.
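
    As an illustration of how such CF-convention NetCDF products can be consumed downstream, the sketch below opens a gridded file with the netCDF4 Python library and lists its CF attributes; the file and variable names are hypothetical, not actual AMMA product names.

        # Minimal sketch: inspect a CF-convention NetCDF product on a regular
        # latitude/longitude grid. File and variable names are hypothetical.
        from netCDF4 import Dataset

        with Dataset("amma_satellite_product.nc") as nc:      # hypothetical file
            print("Conventions:", getattr(nc, "Conventions", "not set"))
            for name, var in nc.variables.items():
                units = getattr(var, "units", "")
                std_name = getattr(var, "standard_name", "")
                print(f"{name}: shape={var.shape} units={units} standard_name={std_name}")
            # Gridded field and its coordinate axes (variable names assumed):
            # field = nc.variables["precipitation_flux"][:]
            # lat, lon = nc.variables["lat"][:], nc.variables["lon"][:]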

  11. Metadata for Web Resources: How Metadata Works on the Web.

    ERIC Educational Resources Information Center

    Dillon, Martin

    This paper discusses bibliographic control of knowledge resources on the World Wide Web. The first section sets the context of the inquiry. The second section covers the following topics related to metadata: (1) definitions of metadata, including metadata as tags and as descriptors; (2) metadata on the Web, including general metadata systems,…

  12. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    ERIC Educational Resources Information Center

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  13. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations.

    PubMed

    Martínez-Romero, Marcos; O'Connor, Martin J; Shankar, Ravi D; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L; Gevaert, Olivier; Graybeal, John; Musen, Mark A

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository.

  14. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations

    PubMed Central

    Martínez-Romero, Marcos; O’Connor, Martin J.; Shankar, Ravi D.; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L.; Gevaert, Olivier; Graybeal, John; Musen, Mark A.

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository. PMID:29854196

  15. Reverse line blot hybridisation screening of Pseudallescheria/Scedosporium species in patients with cystic fibrosis.

    PubMed

    Lu, Q; van den Ende, A H G Gerrits; de Hoog, G S; Li, R; Accoceberry, I; Durand-Joly, I; Bouchara, J-P; Hernandez, F; Delhaes, L

    2011-10-01

    The PCR-RLB (reverse line blot hybridisation) assay was applied as a molecular technique for the detection of members of Pseudallescheria and Scedosporium in sputum from patients with cystic fibrosis (CF). Fifty-nine sputum samples were collected from 52 CF patients and analysed by culture and PCR-RLB. Conventional and semi-selective culture yielded five positive samples, but the PCR-RLB hybridisation assay permitted the detection of members of Pseudallescheria/Scedosporium in 32 out of 52 patients (61.5%). In total, PCR-RLB yielded 47 positives. Pseudallescheria apiosperma was detected in 20 samples, while Pseudallescheria boydii and Scedosporium aurantiacum were detected in 17 and eight samples, respectively. Six samples gave a positive reaction with two distinct species-specific probes and one sample with three probes. In conclusion, the PCR-RLB assay described in this study allows the detection of Scedosporium spp. in CF sputum samples and the identification of Pseudallescheria apiosperma, P. boydii, S. aurantiacum, Scedosporium prolificans and Pseudallescheria minutispora. © 2011 Blackwell Verlag GmbH.

  16. Harvesting NASA's Common Metadata Repository (CMR)

    NASA Technical Reports Server (NTRS)

    Shum, Dana; Durbin, Chris; Norton, James; Mitchell, Andrew

    2017-01-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers, along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers robust temporal, spatial and keyword search functionality to the general public and the international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge it into their own existing metadata repositories. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner can consume only certain metadata formats, the best practices also include guidance on retrieving the metadata in the desired format using CMR's Unified Metadata Model translation software.
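
    As a rough illustration of the harvesting pattern described above, the sketch below pages through the public CMR collection search endpoint and requests records in a chosen format; the endpoint layout, format extensions and paging parameters reflect the commonly documented CMR search interface and should be read as assumptions, and the provider id is hypothetical.

        # Sketch of harvesting collection metadata from the CMR search API.
        # Endpoint, format extension and paging parameters are assumptions based
        # on the publicly documented interface; the provider id is hypothetical.
        import requests

        CMR_SEARCH = "https://cmr.earthdata.nasa.gov/search/collections"

        def harvest_collections(provider, fmt="umm_json", page_size=100, max_pages=5):
            """Yield pages of collection metadata for one provider."""
            for page in range(1, max_pages + 1):
                resp = requests.get(
                    f"{CMR_SEARCH}.{fmt}",      # e.g. .umm_json, .dif10, .iso19115
                    params={"provider": provider, "page_size": page_size, "page_num": page},
                    timeout=30,
                )
                resp.raise_for_status()
                yield resp.text                 # real code should stop on an empty page

        # for page in harvest_collections("EXAMPLE_PROVIDER"):
        #     merge_into_partner_repository(page)   # partner-specific step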

  17. Harvesting NASA's Common Metadata Repository

    NASA Astrophysics Data System (ADS)

    Shum, D.; Mitchell, A. E.; Durbin, C.; Norton, J.

    2017-12-01

    As part of NASA's Earth Observing System Data and Information System (EOSDIS), the Common Metadata Repository (CMR) stores metadata for over 30,000 datasets from both NASA and international providers, along with over 300M granules. This metadata enables sub-second discovery and facilitates data access. While the CMR offers robust temporal, spatial and keyword search functionality to the general public and the international community, it is sometimes more desirable for international partners to harvest the CMR metadata and merge it into their own existing metadata repositories. This poster will focus on best practices to follow when harvesting CMR metadata to ensure that any changes made to the CMR can also be updated in a partner's own repository. Additionally, since each partner can consume only certain metadata formats, the best practices also include guidance on retrieving the metadata in the desired format using CMR's Unified Metadata Model translation software.

  18. Novel Li[(CF3SO2)(n-C4F9SO2)N]-Based Polymer Electrolytes for Solid-State Lithium Batteries with Superior Electrochemical Performance.

    PubMed

    Ma, Qiang; Qi, Xingguo; Tong, Bo; Zheng, Yuheng; Feng, Wenfang; Nie, Jin; Hu, Yong-Sheng; Li, Hong; Huang, Xuejie; Chen, Liquan; Zhou, Zhibin

    2016-11-02

    Solid polymer electrolytes (SPEs) would be promising candidates for application in high-energy rechargeable lithium (Li) batteries to replace the conventional organic liquid electrolytes, in terms of enhanced safety and excellent design flexibility. Herein, we first report novel perfluorinated sulfonimide salt-based SPEs, composed of lithium (trifluoromethanesulfonyl)(n-nonafluorobutanesulfonyl)imide (Li[(CF3SO2)(n-C4F9SO2)N], LiTNFSI) and poly(ethylene oxide) (PEO), which exhibit relatively efficient ionic conductivity (e.g., 1.04 × 10⁻⁴ S cm⁻¹ at 60 °C and 3.69 × 10⁻⁴ S cm⁻¹ at 90 °C) and sufficient thermal stability (>350 °C) for rechargeable Li batteries. More importantly, the LiTNFSI-based SPEs not only deliver excellent interfacial compatibility with electrodes (e.g., Li-metal anode, LiFePO4 and sulfur composite cathodes), but also afford good cycling performance for the Li|LiFePO4 (>300 cycles at 1C) and Li-S cells (>500 cycles at 0.5C), in comparison with the conventional LiTFSI (Li[(CF3SO2)2N])-based SPEs. The interfacial impedance and morphology of the cycled Li-metal electrodes are also comparatively analyzed by electrochemical impedance spectra and scanning electron microscopy, respectively. These results indicate that the LiTNFSI-based SPEs would be potential alternatives for application in high-energy solid-state Li batteries.

  19. Simplified Metadata Curation via the Metadata Management Tool

    NASA Astrophysics Data System (ADS)

    Shum, D.; Pilone, D.

    2015-12-01

    The Metadata Management Tool (MMT) is the newest capability developed as part of NASA Earth Observing System Data and Information System's (EOSDIS) efforts to simplify metadata creation and improve metadata quality. The MMT was developed via an agile methodology, taking into account inputs from GCMD's science coordinators and other end-users. In its initial release, the MMT uses the Unified Metadata Model for Collections (UMM-C) to allow metadata providers to easily create and update collection records in the ISO-19115 format. Through a simplified UI experience, metadata curators can create and edit collections without full knowledge of the NASA Best Practices implementation of ISO-19115 format, while still generating compliant metadata. More experienced users are also able to access raw metadata to build more complex records as needed. In future releases, the MMT will build upon recent work done in the community to assess metadata quality and compliance with a variety of standards through application of metadata rubrics. The tool will provide users with clear guidance as to how to easily change their metadata in order to improve their quality and compliance. Through these features, the MMT allows data providers to create and maintain compliant and high quality metadata in a short amount of time.

  20. Enriched Video Semantic Metadata: Authorization, Integration, and Presentation.

    ERIC Educational Resources Information Center

    Mu, Xiangming; Marchionini, Gary

    2003-01-01

    Presents an enriched video metadata framework including video authorization using the Video Annotation and Summarization Tool (VAST)-a video metadata authorization system that integrates both semantic and visual metadata-- metadata integration, and user level applications. Results demonstrated that the enriched metadata were seamlessly…

  1. Non-fluoroscopic navigation systems for radiofrequency catheter ablation for supraventricular tachycardia reduce ionising radiation exposure.

    PubMed

    See, Jason; Amora, Jonah L; Lee, Sheldon; Lim, Paul; Teo, Wee Siong; Tan, Boon Yew; Ho, Kah Leng; Lee, Chee Wan; Ching, Chi-Keong

    2016-07-01

    The use of non-fluoroscopic systems (NFS) to guide radiofrequency catheter ablation (RFCA) for the treatment of supraventricular tachycardia (SVT) is associated with lower radiation exposure. This study aimed to determine if NFS reduces fluoroscopy time, radiation dose and procedure time. We prospectively enrolled patients undergoing RFCA for SVT. NFS included EnSiteTM NavXTM or CARTO® mapping. We compared procedure and fluoroscopy times, and radiation exposure between NFS and conventional fluoroscopy (CF) cohorts. Procedural success, complications and one-year success rates were reported. A total of 200 patients over 27 months were included and RFCA was guided by NFS for 79 patients; those with atrioventricular nodal reentrant tachycardia (AVNRT), left-sided atrioventricular reentrant tachycardia (AVRT) and right-sided AVRT were included (n = 101, 63 and 36, respectively). Fluoroscopy times were significantly lower with NFS than with CF (10.8 ± 11.1 minutes vs. 32.0 ± 27.5 minutes; p < 0.001). The mean fluoroscopic dose area product was also significantly reduced with NFS (NFS: 5,382 ± 5,768 mGy*cm2 vs. CF: 21,070 ± 23,311 mGy*cm2; p < 0.001) for all SVT subtypes. There was no significant reduction in procedure time, except for left-sided AVRT ablation (NFS: 79.2 minutes vs. CF: 116.4 minutes; p = 0.001). Procedural success rates were comparable (NFS: 97.5% vs. CF: 98.3%) and at one-year follow-up there was no significant difference in recurrence rates (NFS: 5.2% vs. CF: 4.2%). No clinically significant complications were observed in either group. The use of NFS for RFCA for SVT is safe, with significantly reduced radiation dose and fluoroscopy time. Copyright © Singapore Medical Association.

  2. A feasibility study of ²⁵²Cf neutron brachytherapy, cisplatin + 5-FU chemo-adjuvant and accelerated hyperfractionated radiotherapy for advanced cervical cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murayama, Y.; Wierzbicki, J.; Bowen, M.G.

    The purpose was to evaluate the feasibility and toxicity of ²⁵²Cf neutron brachytherapy combined with hyperaccelerated chemoradiotherapy for Stage III and IV cervical cancers. Eleven patients with advanced Stage IIIB-IVA cervical cancers were treated with ²⁵²Cf neutron brachytherapy in an up-front schedule followed by cisplatin (CDDP; 50 mg/m²) chemotherapy and hyperfractionated accelerated (1.2 Gy bid) radiotherapy given concurrently with intravenous infusion of 5-Fluorouracil (5-FU) (1000 mg/m²/day × 4 days) in weeks 1 and 4, with conventional radiation in weeks 2, 3, 5, and 6. The total dose at a paracervical point A isodose surface was 80-85 Gy-eq by external and intracavitary therapy and 60 Gy at the pelvic sidewalls. Patients tolerated the protocol well. There was 91% compliance with the chemotherapy and full compliance with the ²⁵²Cf brachytherapy and the external beam radiotherapy. There were no problems with acute chemotherapy or radiation toxicity. One patient developed a rectovaginal fistula (Grade 3-4 RTOG criteria) but no other patients developed significant late cystitis, proctitis or enteritis. A complete response (CR) was observed in all cases. With mean follow-up of 26 months, local control has been achieved with 90% actuarial 3-year survival with no evidence of disease (NED). ²⁵²Cf neutrons can be combined with cisplatin and 5-FU infusion chemotherapy plus hyperaccelerated chemoradiotherapy without unusual side effects or toxicity and with a high local response and tumor control rate. Further study of ²⁵²Cf neutron-chemoradiotherapy for advanced and bulky cervical cancer is indicated. The authors found chemotherapy was more effective with the improved local tumor control. 18 refs., 2 tabs.

  3. Growth retardation and reduced growth hormone secretion in cystic fibrosis. Clinical observations from three CF centers.

    PubMed

    Ciro, D'Orazio; Padoan, Rita; Blau, Hannah; Marostica, Anna; Fuoti, Maurizio; Volpi, Sonia; Pilotta, Alba; Meyerovitch, Joseph; Sher, Daniel; Assael, Baroukh M

    2013-03-01

    Growth delay in cystic fibrosis is frequent and is usually the result of several interacting causes. It most often derives from severe respiratory impairment and severe malabsorption. There are however patients whose clinical condition is not severe enough to be held accountable for this phenomenon. We aimed at describing patients who showed growth delay, who were not affected by severe pulmonary disease or malabsorption and who, when tested, showed a reduced GH secretion after stimulation with conventional agents. We noticed a disproportionately large prevalence of growth hormone (GH) release deficit (GHRD) in pediatric cystic fibrosis (CF) patients. We examined all patients under our care in the period 2006-11, who were older than 5 and younger than 16 years old. We focussed on those who fell below the 3rd height percentile, or whose growth during the previous 18 months faltered by >2SD, and who did not present clinical conditions that could reasonably explain their failure to thrive. These patients were subjected to standard GH provocative tests. Out of 285 who matched the age criterion, 33 patients also matched the height percentile criterion. While 15/33 suffered clinical conditions that could reasonably explain their failure to thrive, 18/33 underwent GH release provocative tests and 12/18 showed a release deficit. We conclude that impaired GH secretion is more frequent among CF patients compared to the prevalence of GH deficiency in the general population and that GH release impairment may be an independent cause of growth delay in CF. Our findings are in agreement with recent studies that have described low GH levels in CF piglets and in neonates with CF [1]. Copyright © 2012 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.

  4. A data delivery system for IMOS, the Australian Integrated Marine Observing System

    NASA Astrophysics Data System (ADS)

    Proctor, R.; Roberts, K.; Ward, B. J.

    2010-09-01

    The Integrated Marine Observing System (IMOS, www.imos.org.au), an AUD 150 m, 7-year project (2007-2013), is a distributed set of equipment and data-information services which, among many applications, collectively contribute to meeting the needs of marine climate research in Australia. The observing system provides data in the open oceans around Australia out to a few thousand kilometres as well as in the coastal oceans, through 11 facilities which effectively observe and measure the 4-dimensional ocean variability and the physical and biological response of coastal and shelf seas around Australia. Through a national science rationale IMOS is organized as five regional nodes (Western Australia - WAIMOS, South Australia - SAIMOS, Tasmania - TASIMOS, New South Wales - NSWIMOS and Queensland - QIMOS) surrounded by an oceanic node (Blue Water and Climate). Operationally IMOS is organized as 11 facilities (Argo Australia, Ships of Opportunity, Southern Ocean Automated Time Series Observations, Australian National Facility for Ocean Gliders, Autonomous Underwater Vehicle Facility, Australian National Mooring Network, Australian Coastal Ocean Radar Network, Australian Acoustic Tagging and Monitoring System, Facility for Automated Intelligent Monitoring of Marine Systems, eMarine Information Infrastructure and Satellite Remote Sensing) delivering data. IMOS data are freely available to the public. The data, a combination of near real-time and delayed mode, are made available to researchers through the electronic Marine Information Infrastructure (eMII). eMII utilises the Australian Academic Research Network (AARNET) to support a distributed database on OPeNDAP/THREDDS servers hosted by regional computing centres. IMOS instruments are described through the OGC SensorML specification and, wherever possible, data are provided in CF-compliant netCDF format. Metadata conforming to the ISO 19115 standard are automatically harvested from the netCDF files, and the metadata records are catalogued in the OGC GeoNetwork Metadata Entry and Search Tool (MEST). Data discovery, access and download occur via web services through the IMOS Ocean Portal (http://imos.aodn.org.au), and tools for the display and integration of near real-time data are in development.
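
    The netCDF-to-ISO 19115 harvesting step suggests a pattern like the sketch below, which pulls discovery-level global attributes from a CF-compliant file into a simple record; the attribute names are illustrative and not the exact IMOS vocabulary.

        # Sketch: collect discovery metadata from the global attributes of a
        # CF-compliant netCDF file, as a stand-in for the netCDF -> ISO 19115
        # harvesting described above. Attribute names are illustrative.
        from netCDF4 import Dataset

        WANTED = ["title", "abstract", "institution", "Conventions",
                  "geospatial_lat_min", "geospatial_lat_max",
                  "geospatial_lon_min", "geospatial_lon_max",
                  "time_coverage_start", "time_coverage_end"]

        def harvest_global_attrs(path):
            """Return the discovery-level attributes found in the file."""
            with Dataset(path) as nc:
                return {name: getattr(nc, name) for name in WANTED if hasattr(nc, name)}

        # record = harvest_global_attrs("IMOS_mooring_example.nc")  # hypothetical file
        # ...feed `record` into an ISO 19115 template / GeoNetwork here...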

  5. Pedicle screw anchorage of carbon fiber-reinforced PEEK screws under cyclic loading.

    PubMed

    Lindtner, Richard A; Schmid, Rene; Nydegger, Thomas; Konschake, Marko; Schmoelz, Werner

    2018-03-01

    Pedicle screw loosening is a common and significant complication after posterior spinal instrumentation, particularly in osteoporosis. Radiolucent carbon fiber-reinforced polyetheretherketone (CF/PEEK) pedicle screws have been developed recently to overcome drawbacks of conventional metallic screws, such as metal-induced imaging artifacts and interference with postoperative radiotherapy. Beyond radiolucency, CF/PEEK may also be advantageous over standard titanium in terms of pedicle screw loosening due to its unique material properties. However, screw anchorage and loosening of CF/PEEK pedicle screws have not been evaluated yet. The aim of this biomechanical study therefore was to evaluate whether the use of this alternative nonmetallic pedicle screw material affects screw loosening. The hypotheses tested were that (1) nonmetallic CF/PEEK pedicle screws resist an equal or higher number of load cycles until loosening than standard titanium screws and that (2) PMMA cement augmentation further increases the number of load cycles until loosening of CF/PEEK screws. In the first part of the study, left and right pedicles of ten cadaveric lumbar vertebrae (BMD 70.8 mg/cm³ ± 14.5) were randomly instrumented with either CF/PEEK or standard titanium pedicle screws. In the second part, left and right pedicles of ten vertebrae (BMD 56.3 mg/cm³ ± 15.8) were randomly instrumented with either PMMA-augmented or nonaugmented CF/PEEK pedicle screws. Each pedicle screw was subjected to cyclic cranio-caudal loading (initial load ranging from -50 N to +50 N) with stepwise increasing compressive loads (5 N every 100 cycles) until loosening or a maximum of 10,000 cycles. Angular screw motion ("screw toggling") within the vertebra was measured with a 3D motion analysis system every 100 cycles and by stress fluoroscopy every 500 cycles. The nonmetallic CF/PEEK pedicle screws resisted a similar number of load cycles until loosening as the contralateral standard titanium screws (3701 ± 1228 vs. 3751 ± 1614 load cycles, p = 0.89). PMMA cement augmentation of CF/PEEK pedicle screws furthermore significantly increased the mean number of load cycles until loosening by 1.63-fold (5100 ± 1933 in augmented vs. 3130 ± 2132 in nonaugmented CF/PEEK screws, p = 0.015). In addition, angular screw motion assessed by stress fluoroscopy was significantly smaller in augmented than in nonaugmented CF/PEEK screws before as well as after failure. Using nonmetallic CF/PEEK instead of standard titanium as pedicle screw material did not affect screw loosening in the chosen test setup, whereas cement augmentation enhanced screw anchorage of CF/PEEK screws. While comparable to titanium screws in terms of screw loosening, radiolucent CF/PEEK pedicle screws offer the significant advantage of not interfering with postoperative imaging and radiotherapy. These slides can be retrieved under Electronic Supplementary Material.

  6. Towards efficient data exchange and sharing for big-data driven materials science: metadata and data formats

    NASA Astrophysics Data System (ADS)

    Ghiringhelli, Luca M.; Carbogno, Christian; Levchenko, Sergey; Mohamed, Fawzi; Huhs, Georg; Lüders, Martin; Oliveira, Micael; Scheffler, Matthias

    2017-11-01

    With big-data driven materials research, the new paradigm of materials science, sharing and wide accessibility of data are becoming crucial aspects. Obviously, a prerequisite for data exchange and big-data analytics is standardization, which means using consistent and unique conventions for, e.g., units, zero base lines, and file formats. There are two main strategies to achieve this goal. One accepts the heterogeneous nature of the community, which comprises scientists from physics, chemistry, bio-physics, and materials science, by complying with the diverse ecosystem of computer codes and thus develops "converters" for the input and output files of all important codes. These converters then translate the data of each code into a standardized, code-independent format. The other strategy is to provide standardized open libraries that code developers can adopt for shaping their inputs, outputs, and restart files directly into the same code-independent format. In this perspective paper, we present both strategies and argue that they can and should be regarded as complementary, if not synergistic. The formats and conventions presented here were agreed upon by two teams, the Electronic Structure Library (ESL) of the European Center for Atomic and Molecular Computations (CECAM) and the NOvel MAterials Discovery (NOMAD) Laboratory, a European Centre of Excellence (CoE). A key element of this work is the definition of hierarchical metadata describing state-of-the-art electronic-structure calculations.
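
    A toy version of the first, converter-based strategy might look like the sketch below, which maps one code's native output keys onto shared, code-independent metadata keys; all names and keys are invented for illustration and do not reproduce the NOMAD metadata schema.

        # Toy converter: translate one code's native output keys into
        # code-independent metadata keys. All names are invented for
        # illustration and are not the NOMAD metadata definitions.
        NATIVE_TO_COMMON = {
            "etot": "total_energy",        # hypothetical native key -> common key
            "nkpt": "number_of_k_points",
            "xcfun": "xc_functional_name",
        }

        def to_common_format(native_output: dict) -> dict:
            """Map a code-specific result dict onto shared metadata keys."""
            return {common: native_output[native]
                    for native, common in NATIVE_TO_COMMON.items()
                    if native in native_output}

        # to_common_format({"etot": -123.4, "nkpt": 64})
        # -> {"total_energy": -123.4, "number_of_k_points": 64}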

  7. Hemoglobin A1c levels and aortic arterial stiffness: the Cardiometabolic Risk in Chinese (CRC) study.

    PubMed

    Liang, Jun; Zhou, Na; Teng, Fei; Zou, Caiyan; Xue, Ying; Yang, Manqing; Song, Huaidong; Qi, Lu

    2012-01-01

    The American Diabetes Association (ADA) recently published new clinical guidelines in which hemoglobin A1c (HbA1c) was recommended as a diagnostic test for diabetes. The present study investigated the association between HbA1c and cardiovascular risk, and compared it with the associations for fasting glucose and the 2-hour oral glucose tolerance test (2 h OGTT). The study sample was drawn from a community-based health examination survey in central China. Carotid-to-femoral pulse wave velocity (cfPWV) and HbA1c were measured in 5,098 men and women. After adjustment for age, sex, and BMI, the levels of HbA1c were significantly associated with an increasing trend of cfPWV in a dose-dependent fashion (P for trend <0.0001). The associations remained significant after further adjustment for blood pressure, heart rate, and lipids (P = 0.004), and the difference in cfPWV between the highest and the lowest quintiles of HbA1c was 0.31 m/s. Fasting glucose and 2 h OGTT were not associated with cfPWV in the multivariate analyses. HbA1c showed additive effects with fasting glucose or 2 h OGTT on cfPWV. In addition, age and blood pressure significantly modified the associations between HbA1c and cfPWV (P for interaction <0.0001 for age and P = 0.019 for blood pressure). The associations were stronger in subjects who were older (≥60 y; P for trend = 0.004) and had higher blood pressure (≥120 mmHg systolic/80 mmHg diastolic; P for trend = 0.028) than in those who were younger and had lower blood pressure (P for trend >0.05). HbA1c was related to high cfPWV, independent of conventional cardiovascular risk factors. Older age and high blood pressure might amplify the adverse effects of HbA1c on cardiovascular risk.

  8. From conventionally fractionated radiation therapy to hyperfractionated radiation therapy alone and with concurrent chemotherapy in patients with early-stage nonsmall cell lung cancer.

    PubMed

    Jeremić, Branislav; Milicić, Biljana

    2008-02-15

    The authors' single-institution experience in patients with early-stage (I and II) nonsmall cell lung cancer (NSCLC) who were treated between 1980 and 1998 with either conventionally fractionated (CF) radiation therapy (RT), hyperfractionated (HFX) RT, or HFX RT with concurrent paclitaxel/carboplatin (HFX RT-Pac/C) was reviewed. Seventy-eight patients received 60 grays (Gy) in 30 daily fractions (CF), 116 patients received 69.6 Gy (1.2 Gy twice daily), and 56 patients received 67.6 Gy (1.3 Gy twice daily) with concurrent, low-dose, daily carboplatin (25 mg/m2) and paclitaxel (10 mg/m2). Biologically equivalent doses for the 3 groups were 72 Gy, 78 Gy, and 76 Gy, respectively, for acute effects (alpha/beta = 10 Gy) and 120 Gy, 111 Gy, and 111 Gy, respectively, for late effects (alpha/beta = 2 Gy). For all 250 patients, the median overall survival was 27 months, the median cause-specific survival was 27 months, the median local progression-free survival was 32 months, and the median distant metastasis-free survival was not reached; the respective 5-year survival rates were 27%, 32%, 45%, and 68%. CF achieved significantly inferior survival compared with either HFX RT alone or HFX RT-Pac/C (P = .0332 and P = .0013, respectively), and no difference was observed between the 2 HFX RT regimens (P = .1934). Only acute high-grade hematologic toxicity (grade >or=3) was more frequent with HFX RT-Pac/C than with either RT-alone regimen, whereas other toxicities were similar between the 3 treatment groups. HFX RT with or without concurrent chemotherapy may be better than CF in patients with early-stage NSCLC. The role of chemotherapy deserves further investigation, because the group that received chemotherapy in the current study had a higher incidence of acute high-grade hematologic toxicity. Cancer 2008. (c) 2008 American Cancer Society.

  9. Combination of Alpha-Melanocyte Stimulating Hormone with Conventional Antibiotics against Methicillin Resistant Staphylococcus aureus

    PubMed Central

    Singh, Madhuri; Gadepalli, Ravisekhar; Dhawan, Benu; Mukhopadhyay, Kasturi

    2013-01-01

    Our previous studies revealed that alpha-melanocyte stimulating hormone (α-MSH) is strongly active against Staphylococcus aureus (S. aureus) including methicillin resistant S. aureus (MRSA). Killing due to α-MSH occurred by perturbation of the bacterial membrane. In the present study, we investigated the in vitro synergistic potential of α-MSH with five selected conventional antibiotics viz., oxacillin (OX), ciprofloxacin (CF), tetracycline (TC), gentamicin (GM) and rifampicin (RF) against a clinical MRSA strain which carried a type III staphylococcal cassette chromosome mec (SCCmec) element and belonged to the sequence type (ST) 239. The strain was found to be highly resistant to OX (minimum inhibitory concentration (MIC) = 1024 µg/ml) as well as to other selected antimicrobial agents including α-MSH. The possibility of the existence of intracellular target sites of α-MSH was evaluated by examining the DNA, RNA and protein synthesis pathways. We observed a synergistic potential of α-MSH with GM, CF and TC. Remarkably, the supplementation of α-MSH with GM, CF and TC resulted in ≥64-, 8- and 4-fold reductions in their minimum bactericidal concentrations (MBCs), respectively. Apart from membrane perturbation, in this study we found that α-MSH inhibited ∼53% and ∼47% DNA and protein synthesis, respectively, but not RNA synthesis. Thus, the mechanistic analogy between α-MSH and CF or GM or TC appears to be the reason for the observed synergy between them. In contrast, α-MSH did not act synergistically with RF which may be due to its inability to inhibit RNA synthesis (<10%). Nevertheless, the combination of α-MSH with RF and OX showed an enhanced killing by ∼45% and ∼70%, respectively, perhaps due to the membrane disrupting properties of α-MSH. The synergistic activity of α-MSH with antibiotics is encouraging, and promises to restore the lost potency of discarded antibiotics. PMID:24040081

  10. Assessing Metadata Quality of a Federally Sponsored Health Data Repository.

    PubMed

    Marc, David T; Beattie, James; Herasevich, Vitaly; Gatewood, Laël; Zhang, Rui

    2016-01-01

    The U.S. Federal Government developed HealthData.gov to disseminate healthcare datasets to the public. Metadata is provided for each dataset and is the sole source of information for finding and retrieving data. This study employed automated quality assessments of the HealthData.gov metadata published from 2012 to 2014 to measure completeness, accuracy, and consistency in applying standards. The results demonstrated that metadata published in earlier years had lower completeness, accuracy, and consistency. Also, metadata that underwent modifications following their original creation were of higher quality. HealthData.gov did not uniformly apply the Dublin Core Metadata Initiative standard, a widely accepted metadata standard, to its metadata. These findings suggested that the HealthData.gov metadata suffered from quality issues, particularly related to information that was not frequently updated. The results supported the need for policies to standardize metadata and contributed to the development of automated measures of metadata quality.

  11. Assessing Metadata Quality of a Federally Sponsored Health Data Repository

    PubMed Central

    Marc, David T.; Beattie, James; Herasevich, Vitaly; Gatewood, Laël; Zhang, Rui

    2016-01-01

    The U.S. Federal Government developed HealthData.gov to disseminate healthcare datasets to the public. Metadata is provided for each dataset and is the sole source of information for finding and retrieving data. This study employed automated quality assessments of the HealthData.gov metadata published from 2012 to 2014 to measure completeness, accuracy, and consistency in applying standards. The results demonstrated that metadata published in earlier years had lower completeness, accuracy, and consistency. Also, metadata that underwent modifications following their original creation were of higher quality. HealthData.gov did not uniformly apply the Dublin Core Metadata Initiative standard, a widely accepted metadata standard, to its metadata. These findings suggested that the HealthData.gov metadata suffered from quality issues, particularly related to information that was not frequently updated. The results supported the need for policies to standardize metadata and contributed to the development of automated measures of metadata quality. PMID:28269883

  12. FIR: An Effective Scheme for Extracting Useful Metadata from Social Media.

    PubMed

    Chen, Long-Sheng; Lin, Zue-Cheng; Chang, Jing-Rong

    2015-11-01

    Recently, the use of social media for health information exchange has been expanding among patients, physicians, and other health care professionals. In medical areas, social media allows non-experts to access, interpret, and generate medical information for their own care and the care of others. Researchers have paid much attention to social media in medical education, patient-pharmacist communication, adverse drug reaction detection, the impact of social media on medicine and healthcare, and so on. However, relatively few papers discuss how to effectively extract useful knowledge from the huge volume of textual comments in social media. Therefore, this study proposes a Fuzzy adaptive resonance theory network based Information Retrieval (FIR) scheme that combines a Fuzzy adaptive resonance theory (ART) network, Latent Semantic Indexing (LSI), and association rule (AR) discovery to extract knowledge from social media. In the FIR scheme, a Fuzzy ART network is first employed to segment comments. Next, for each customer segment, the LSI technique is used to retrieve important keywords. Then, in order to make the extracted keywords understandable, association rule mining is applied to organize the extracted keywords into metadata. These extracted voices of customers can be transformed into design needs using Quality Function Deployment (QFD) for further decision making. Unlike conventional information retrieval techniques, which acquire too many keywords to get at the key points, the FIR scheme can extract understandable metadata from social media.
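
    The LSI keyword-retrieval step can be approximated with off-the-shelf tools; the sketch below runs TF-IDF plus truncated SVD from scikit-learn over a few toy comments, and is only a generic stand-in for the FIR pipeline (the Fuzzy ART segmentation and association rule mining steps are omitted).

        # Generic stand-in for the LSI keyword-retrieval step: TF-IDF + truncated
        # SVD over a handful of toy comments. Not the authors' implementation.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD

        comments = [
            "battery drains too fast and phone gets hot",     # toy comments
            "love the camera but battery life is poor",
            "screen is bright, camera quality is excellent",
        ]

        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(comments)

        svd = TruncatedSVD(n_components=2, random_state=0)
        svd.fit(X)

        terms = vec.get_feature_names_out()
        for i, component in enumerate(svd.components_):
            top = component.argsort()[::-1][:3]               # strongest terms per topic
            print(f"topic {i}:", [terms[j] for j in top])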

  13. Weekly observations of online survey metadata obtained through home computer use allow for detection of changes in everyday cognition before transition to mild cognitive impairment.

    PubMed

    Seelye, Adriana; Mattek, Nora; Sharma, Nicole; Riley, Thomas; Austin, Johanna; Wild, Katherine; Dodge, Hiroko H; Lore, Emily; Kaye, Jeffrey

    2018-02-01

    Subtle changes in instrumental activities of daily living often accompany the onset of mild cognitive impairment (MCI) but are difficult to measure using conventional tests. Weekly online survey metadata metrics, annual neuropsychological tests, and an instrumental activity of daily living questionnaire were examined in 110 healthy older adults with intact cognition (mean age = 85 years) followed up for up to 3.6 years; 29 transitioned to MCI during study follow-up. In the baseline period, incident MCI participants completed their weekly surveys 1.4 hours later in the day than stable cognitively intact participants, P = .03, d = 0.47. Significant associations were found between earlier survey start time of day and higher memory (r = -0.34; P < .001) and visuospatial test scores (r = -0.37; P < .0001). Longitudinally, incident MCI participants showed an increase in survey completion time by 3 seconds per month for more than the year before diagnosis compared with stable cognitively intact participants (β = 0.12, SE = 0.04, t = 2.8; P = .006). Weekly online survey metadata allowed for detection of changes in everyday cognition before transition to MCI. Published by Elsevier Inc.

  14. User's guide for mapIMG 3--Map image re-projection software package

    USGS Publications Warehouse

    Finn, Michael P.; Mattli, David M.

    2012-01-01

    Version 0.0 (1995), Dan Steinwand, U.S. Geological Survey (USGS)/Earth Resources Observation Systems (EROS) Data Center (EDC)--Version 0.0 was a command line version for UNIX that required four arguments: the input metadata, the output metadata, the input data file, and the output destination path. Version 1.0 (2003), Stephen Posch and Michael P. Finn, USGS/Mid-Continent Mapping Center (MCMC)--Version 1.0 added a GUI interface that was built using the Qt library for cross platform development. Version 1.01 (2004), Jason Trent and Michael P. Finn, USGS/MCMC--Version 1.01 suggested bounds for the parameters of each projection. Support was added for larger input files, storage of the last used input and output folders, and for TIFF/GeoTIFF input images. Version 2.0 (2005), Robert Buehler, Jason Trent, and Michael P. Finn, USGS/National Geospatial Technical Operations Center (NGTOC)--Version 2.0 added resampling methods (Mean, Mode, Min, Max, and Sum), updated the GUI design, and added the viewer/pre-viewer. The metadata style was changed to XML and a new naming convention was adopted. Version 3.0 (2009), David Mattli and Michael P. Finn, USGS/Center of Excellence for Geospatial Information Science (CEGIS)--Version 3.0 brings optimized resampling methods, an updated GUI, support for less-than-global datasets, and UTM support; the whole codebase was ported to Qt4.

  15. Partnerships To Mine Unexploited Sources of Metadata.

    ERIC Educational Resources Information Center

    Reynolds, Regina Romano

    This paper discusses the metadata created for other purposes as a potential source of bibliographic data. The first section addresses collecting metadata by means of templates, including the Nordic Metadata Project's Dublin Core Metadata Template. The second section considers potential partnerships for re-purposing metadata for bibliographic use,…

  16. Multiplex KRASG12/G13 mutation testing of unamplified cell-free DNA from the plasma of patients with advanced cancers using droplet digital polymerase chain reaction.

    PubMed

    Janku, F; Huang, H J; Fujii, T; Shelton, D N; Madwani, K; Fu, S; Tsimberidou, A M; Piha-Paul, S A; Wheler, J J; Zinner, R G; Naing, A; Hong, D S; Karp, D D; Cabrilo, G; Kopetz, E S; Subbiah, V; Luthra, R; Kee, B K; Eng, C; Morris, V K; Karlin-Neumann, G A; Meric-Bernstam, F

    2017-03-01

    Cell-free DNA (cfDNA) from plasma offers easily obtainable material for KRAS mutation analysis. Novel, multiplex, and accurate diagnostic systems using small amounts of DNA are needed to further the use of plasma cfDNA testing in personalized therapy. Samples of 16 ng of unamplified plasma cfDNA from 121 patients with diverse progressing advanced cancers were tested with a KRASG12/G13 multiplex assay to detect the seven most common mutations in the hotspot of exon 2 using droplet digital polymerase chain reaction (ddPCR). The results were retrospectively compared to mutation analysis of archival primary or metastatic tumor tissue obtained at different points of clinical care. Eighty-eight patients (73%) had KRASG12/G13 mutations in archival tumor specimens collected on average 18.5 months before plasma analysis, and 78 patients (64%) had KRASG12/G13 mutations in plasma cfDNA samples. The two methods had initial overall agreement in 103 (85%) patients (kappa, 0.66; ddPCR sensitivity, 84%; ddPCR specificity, 88%). Of the 18 discordant cases, 12 (67%) were resolved by increasing the amount of cfDNA, using mutation-specific probes, or re-testing the tumor tissue, yielding overall agreement in 115 patients (95%; kappa 0.87; ddPCR sensitivity, 96%; ddPCR specificity, 94%). The presence of ≥ 6.2% of KRASG12/G13 cfDNA in the wild-type background was associated with shorter survival (P = 0.001). Multiplex detection of KRASG12/G13 mutations in a small amount of unamplified plasma cfDNA using ddPCR has good sensitivity and specificity and good concordance with conventional clinical mutation testing of archival specimens. A higher percentage of mutant KRASG12/G13 in cfDNA corresponded with shorter survival. © The Author 2016. Published by Oxford University Press on behalf of the European Society for Medical Oncology. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  17. A Window to the World: Lessons Learned from NASA's Collaborative Metadata Curation Effort

    NASA Astrophysics Data System (ADS)

    Bugbee, K.; Dixon, V.; Baynes, K.; Shum, D.; le Roux, J.; Ramachandran, R.

    2017-12-01

    Well-written descriptive metadata adds value to data by making it easier to discover and increases data use by providing context and indicating appropriate use. While many data centers acknowledge the importance of correct, consistent and complete metadata, allocating resources to curate existing metadata is often difficult. To lower resource costs, many data centers seek guidance on best practices for curating metadata but struggle to identify those recommendations. In order to assist data centers in curating metadata and to develop best practices for creating and maintaining metadata, NASA has formed a collaborative effort to improve the Earth Observing System Data and Information System (EOSDIS) metadata in the Common Metadata Repository (CMR). This effort has taken significant steps in building consensus around metadata curation best practices. However, it has also revealed gaps in EOSDIS enterprise policies and procedures within the core metadata curation task. This presentation will explore the mechanisms used for building consensus on metadata curation, the gaps identified in policies and procedures, the lessons learned from collaborating with both the data centers and the metadata curation teams, and the proposed next steps.

  18. Influence of Fluidized Bed Quenching on the Mechanical Properties and Quality Index of T6 Tempered B319.2-Type Aluminum Alloys

    NASA Astrophysics Data System (ADS)

    Ragab, Kh. A.; Samuel, A. M.; Al-Ahmari, A. M. A.; Samuel, F. H.; Doty, H. W.

    2013-11-01

    The current study aimed to investigate the effect of fluidized sand bed (FB) quenching on the mechanical performance of B319.2 aluminum cast alloys. Traditional water and conventional hot air (CF) quenching media were used to establish a relevant comparison with FB quenching. Quality charts were generated using two models of quality indices to support the selection of material conditions. The use of an FB for the direct quenching-aging treatment of B319.2 casting alloys yields greater UTS and YS values compared to conventional furnace quenched alloys. The strength values of T6 tempered B319 alloys are greater when quenched in water than when quenched in an FB or a CF. For the same aging conditions (170 °C/4 h), fluidized bed quenched-aged 319 alloys show nearly the same or better strength values than those quenched in water and then aged in a CF or an FB. Based on the quality charts developed for alloys subjected to different quenching media, higher quality index values are obtained by conventional furnace quenched-aged T6-tempered B319 alloys. The modification factor has the most significant effect on the quality results of the alloys investigated, for all heat treatment cycles, compared to the other metallurgical parameters. The results for alloys subjected to multi-temperature aging cycles reveal, however, that the optimum strength properties of B319.2 alloys are obtained by applying multi-temperature aging cycles, for example 240 °C/2 h followed by 170 °C/8 h, rather than T6 aging treatments. The regression models indicate that the mean quality values of B319 alloys are highly quench sensitive due to the formation of a larger percentage of clusters in Al-Si-Cu-Mg alloys. These clusters act as heterogeneous nucleation sites for precipitation and enhance the aging process.

  19. dREL: a relational expression language for dictionary methods.

    PubMed

    Spadaccini, Nick; Castleden, Ian R; du Boulay, Doug; Hall, Sydney R

    2012-08-27

    The provision of precise metadata is an important but largely underrated challenge for modern science [Nature 2009, 461, 145]. We describe here a dictionary methods language, dREL, that has been designed to enable complex data relationships to be expressed as formulaic scripts in data dictionaries written in DDLm [Spadaccini and Hall J. Chem. Inf. Model. 2012 doi:10.1021/ci300075z]. dREL describes data relationships in a simple but powerful canonical form that is easy to read and understand and can be executed computationally to evaluate or validate data. The execution of dREL expressions is not a substitute for traditional scientific computation; its purpose is to provide precise data dependency information for domain-specific definitions and a means of cross-validating data. Some scientific fields apply conventional programming languages to methods scripts, but these tend to inhibit both dictionary development and accessibility. dREL removes the programming barrier and encourages the production of the metadata needed for seamless data archiving and exchange in science.

  20. Evaluating and Evolving Metadata in Multiple Dialects

    NASA Astrophysics Data System (ADS)

    Kozimor, J.; Habermann, T.; Powers, L. A.; Gordon, S.

    2016-12-01

    Despite many long-term homogenization efforts, communities continue to develop focused metadata standards along with related recommendations and (typically) XML representations (aka dialects) for sharing metadata content. Different representations easily become obstacles to sharing information because each representation generally requires a set of tools and skills that are designed, built, and maintained specifically for that representation. In contrast, community recommendations are generally described, at least initially, at a more conceptual level and are more easily shared. For example, most communities agree that dataset titles should be included in metadata records although they write the titles in different ways. This situation has led to the development of metadata repositories that can ingest and output metadata in multiple dialects. As an operational example, the NASA Common Metadata Repository (CMR) includes three different metadata dialects (DIF, ECHO, and ISO 19115-2). These systems raise a new question for metadata providers: if I have a choice of metadata dialects, which should I use and how do I make that decision? We have developed a collection of metadata evaluation tools that can be used to evaluate metadata records in many dialects for completeness with respect to recommendations from many organizations and communities. We have applied these tools to over 8000 collection and granule metadata records in four different dialects. This large collection of identical content in multiple dialects enables us to address questions about metadata and dialect evolution and to answer those questions quantitatively. We will describe those tools and results from evaluating the NASA CMR metadata collection.
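
    As a rough illustration of the kind of dialect-aware completeness check described above, the following minimal Python sketch maps one conceptual field ("title") onto hypothetical paths in three dialects; the field paths and the recommendation list are illustrative placeholders, not the actual DIF/ECHO/ISO rules or the authors' evaluation tools.

    ```python
    # Minimal sketch of dialect-aware completeness scoring (illustrative only).
    DIALECT_EXTRACTORS = {
        "DIF":  {"title": lambda rec: rec.get("Entry_Title")},
        "ECHO": {"title": lambda rec: rec.get("Collection", {}).get("LongName")},
        "ISO":  {"title": lambda rec: rec.get("identificationInfo", {}).get("title")},
    }

    RECOMMENDED_FIELDS = ["title"]  # a community recommendation at the conceptual level

    def completeness(record, dialect):
        """Return the fraction of recommended conceptual fields populated in a record."""
        extractors = DIALECT_EXTRACTORS[dialect]
        present = sum(
            1 for field in RECOMMENDED_FIELDS
            if field in extractors and extractors[field](record)
        )
        return present / len(RECOMMENDED_FIELDS)

    dif_record = {"Entry_Title": "Sea Surface Temperature, Level 3"}
    print(completeness(dif_record, "DIF"))  # 1.0: the title concept is populated
    ```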

  1. EOS ODL Metadata On-line Viewer

    NASA Astrophysics Data System (ADS)

    Yang, J.; Rabi, M.; Bane, B.; Ullman, R.

    2002-12-01

    We have recently developed and deployed an EOS ODL metadata on-line viewer. The EOS ODL metadata viewer is a web server that takes 1) an EOS metadata file in Object Description Language (ODL) and 2) parameters, such as which metadata to view and what style of display to use, and returns an HTML or XML document displaying the requested metadata in the requested style. This tool was developed to address widespread complaints from the science community that EOS Data and Information System (EOSDIS) metadata files in ODL are difficult to read; it allows users to upload an ODL metadata file and view it in different styles using a web browser. Users can choose to view all of the metadata or only part of it, such as Collection metadata, Granule metadata, or Unsupported Metadata. Choices of display styles include 1) Web: a mouseable display with tabs and turn-down menus, 2) Outline: formatted and colored text, suitable for printing, 3) Generic: simple indented text, a direct representation of the underlying ODL metadata, and 4) None: no stylesheet is applied and the XML generated by the converter is returned directly. Not all display styles are implemented for all the metadata choices. For example, the Web style is only implemented for Collection and Granule metadata groups with known attribute fields, but not for Unsupported, Other, and All metadata. The overall strategy of the ODL viewer is to transform an ODL metadata file into viewable HTML in two steps. The first step is to convert the ODL metadata file to XML using a Java-based parser/translator called ODL2XML. The second step is to transform the XML to HTML using stylesheets. Both operations are done on the server side. This allows considerable flexibility in the final result and is highly portable across platforms. Perl CGI behind the Apache web server is used to run the Java ODL2XML converter and then pass the results through an XSLT processor. The EOS ODL viewer can be accessed from either a PC or a Mac using Internet Explorer 5.0+ or Netscape 4.7+.
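
    To make the two-step transformation concrete, here is a toy Python sketch of the first step (ODL to XML). It is not the Java ODL2XML parser described above; it only handles flat ATTRIBUTE = value lines, but it shows the shape of the conversion before an XSLT stylesheet would render the XML as HTML.

    ```python
    # Toy illustration of the ODL -> XML step; real ODL has nested GROUP/OBJECT
    # structure that this sketch deliberately ignores.
    import xml.etree.ElementTree as ET

    def odl_to_xml(odl_text):
        root = ET.Element("metadata")
        for line in odl_text.splitlines():
            line = line.strip()
            if "=" in line and not line.startswith("END"):
                name, value = (part.strip() for part in line.split("=", 1))
                ET.SubElement(root, name).text = value.strip('"')
        return ET.tostring(root, encoding="unicode")

    sample = 'SHORTNAME = "EXAMPLEPRODUCT"\nVERSIONID = 5\nEND'
    print(odl_to_xml(sample))
    # The second step would apply an XSLT stylesheet to this XML to produce HTML.
    ```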

  2. ISO 19115 Experiences in NASA's Earth Observing System (EOS) ClearingHOuse (ECHO)

    NASA Astrophysics Data System (ADS)

    Cechini, M. F.; Mitchell, A.

    2011-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing earth science data. As science research and the data gathered increase in complexity, so do the complexity and importance of descriptive metadata. To meet these growing needs, the required metadata models utilize richer and more mature metadata attributes. Categorizing, standardizing, and promulgating these metadata models to a politically, geographically, and scientifically diverse community is a difficult process. An integral component of metadata management within NASA's Earth Observing System Data and Information System (EOSDIS) is the Earth Observing System (EOS) ClearingHOuse (ECHO). ECHO is the core metadata repository for the EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. ECHO has undertaken an internal restructuring to meet the changing needs of scientists, the consistent advancement in technology, and the advent of new standards such as ISO 19115. These improvements were based on the following tenets for data discovery and retrieval: + There exists a set of 'core' metadata fields recommended for data discovery. + There exists a set of users who will require the entire metadata record for advanced analysis. + There exists a set of users who will require a 'core' set of metadata fields for discovery only. + There will never be a cessation of new formats or a total retirement of all old formats. + Users should be presented metadata in a consistent format of their choosing. In order to address the previously listed items, ECHO's new metadata processing paradigm utilizes the following approach: + Identify a cross-format set of 'core' metadata fields necessary for discovery. + Implement format-specific indexers to extract the 'core' metadata fields into an optimized query capability. + Archive the original metadata in its entirety for presentation to users requiring the full record. + Provide on-demand translation of 'core' metadata to any supported result format. Lessons learned by the ECHO team while implementing its new metadata approach to support usage of the ISO 19115 standard will be presented. These lessons learned highlight some discovered strengths and weaknesses in the ISO 19115 standard as it is introduced to an existing metadata processing system.
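
    A minimal Python sketch of the "format-specific indexer" tenet follows; the core field names and the ISO-like lookup paths are illustrative assumptions, not ECHO's actual schema.

    ```python
    # Sketch: reduce a full metadata record to a set of 'core' discovery fields,
    # while keeping the original record intact for users who need everything.
    CORE_FIELDS = ("title", "temporal_extent", "spatial_extent")

    def index_record(raw_record, extractor):
        """Extract core fields for discovery and archive the full record alongside."""
        core = {field: extractor(raw_record, field) for field in CORE_FIELDS}
        return {"core": core, "original": raw_record}

    def iso_like_extractor(record, field):
        # Hypothetical lookups into an ISO 19115-like nested structure.
        paths = {
            "title": ("identificationInfo", "citation", "title"),
            "temporal_extent": ("identificationInfo", "extent", "temporal"),
            "spatial_extent": ("identificationInfo", "extent", "geographic"),
        }
        node = record
        for key in paths[field]:
            node = node.get(key, {}) if isinstance(node, dict) else {}
        return node or None

    record = {"identificationInfo": {"citation": {"title": "Example Collection"},
                                     "extent": {"temporal": "2001/2010",
                                                "geographic": [-180, -90, 180, 90]}}}
    print(index_record(record, iso_like_extractor)["core"])
    ```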

  3. Creating context for the experiment record. User-defined metadata: investigations into metadata usage in the LabTrove ELN.

    PubMed

    Willoughby, Cerys; Bird, Colin L; Coles, Simon J; Frey, Jeremy G

    2014-12-22

    The drive toward more transparency in research, the growing willingness to make data openly available, and the reuse of data to maximize the return on research investment all increase the importance of being able to find information and make links to the underlying data. The use of metadata in Electronic Laboratory Notebooks (ELNs) to curate experiment data is an essential ingredient for facilitating discovery. The University of Southampton has developed a Web browser-based ELN that enables users to add their own metadata to notebook entries. A survey of these notebooks was completed to assess user behavior and patterns of metadata usage within ELNs, while user perceptions and expectations were gathered through interviews and user-testing activities within the community. The findings indicate that while some groups are comfortable with metadata and are able to design a metadata structure that works effectively, many users make little attempt to use it, thereby endangering their ability to recover data in the future; feedback from the user community further indicated that many users adopt a "minimum required" approach to metadata. To investigate whether the patterns of metadata use in LabTrove were unusual, a series of surveys was undertaken to investigate metadata usage in a variety of platforms supporting user-defined metadata. These surveys also provided the opportunity to investigate whether interface designs in these other environments might inform strategies for encouraging metadata creation and more effective use of metadata in LabTrove.

  4. ASDC Collaborations and Processes to Ensure Quality Metadata and Consistent Data Availability

    NASA Astrophysics Data System (ADS)

    Trapasso, T. J.

    2017-12-01

    With the introduction of new tools, faster computing, and less expensive storage, increased volumes of data are expected to be managed with existing or fewer resources. Metadata management is becoming a heightened challenge with the increase in data volume, resulting in more metadata records needing to be curated for each product. To address metadata availability and completeness, NASA ESDIS has taken significant strides with the creation of the Unified Metadata Model (UMM) and the Common Metadata Repository (CMR). The UMM helps address hurdles presented by the increasing number of metadata dialects, and the CMR provides a primary repository for metadata so that required metadata fields can be served through a growing number of tools and services. However, metadata quality remains an issue, as metadata is not always intuitive to the end user. In response to these challenges, the NASA Atmospheric Science Data Center (ASDC) created the Collaboratory for quAlity Metadata Preservation (CAMP) and defined the Product Lifecycle Process (PLP) to work congruently. CAMP is unique in that it provides science team members with a UI for directly supplying metadata that is complete, compliant, and accurate for their data products. This replaces back-and-forth communication that often results in misinterpreted metadata. Upon review by ASDC staff, metadata is submitted to CMR for broader distribution through Earthdata. Further, approval of science team metadata in CAMP automatically triggers the ASDC PLP workflow to ensure appropriate services are applied throughout the product lifecycle. This presentation will review the design elements of CAMP and PLP as well as demonstrate interfaces to each. It will show the benefits that CAMP and PLP provide to the ASDC and that could potentially extend to additional NASA Earth Science Data and Information System (ESDIS) Distributed Active Archive Centers (DAACs).

  5. A polarimetric scattering database for non-spherical ice particles at microwave wavelengths

    NASA Astrophysics Data System (ADS)

    Lu, Yinghui; Jiang, Zhiyuan; Aydin, Kultegin; Verlinde, Johannes; Clothiaux, Eugene E.; Botta, Giovanni

    2016-10-01

    The atmospheric science community has entered a period in which electromagnetic scattering properties at microwave frequencies of realistically constructed ice particles are necessary for making progress on a number of fronts. One front includes retrieval of ice-particle properties and signatures from ground-based, airborne, and satellite-based radar and radiometer observations. Another front is evaluation of model microphysics by application of forward operators to their outputs and comparison to observations during case study periods. Yet a third front is data assimilation, where again forward operators are applied to databases of ice-particle scattering properties and the results compared to observations, with their differences leading to corrections of the model state. Over the past decade investigators have developed databases of ice-particle scattering properties at microwave frequencies and made them openly available. Motivated by and complementing these earlier efforts, a database containing polarimetric single-scattering properties of various types of ice particles at millimeter to centimeter wavelengths is presented. While the database presented here contains only single-scattering properties of ice particles in a fixed orientation, ice-particle scattering properties are computed for many different directions of the radiation incident on them. These results are useful for understanding the dependence of ice-particle scattering properties on ice-particle orientation with respect to the incident radiation. For ice particles that are small compared to the wavelength, the number of incident directions of the radiation is sufficient to compute reasonable estimates of their (randomly) orientation-averaged scattering properties. This database is complementary to earlier ones in that it contains complete (polarimetric) scattering property information for each ice particle - 44 plates, 30 columns, 405 branched planar crystals, 660 aggregates, and 640 conical graupel - and direction of incident radiation but is limited to four frequencies (X-, Ku-, Ka-, and W-bands), does not include temperature dependencies of the single-scattering properties, and does not include scattering properties averaged over randomly oriented ice particles. Rules for constructing the morphologies of ice particles from one database to the next often differ; consequently, analyses that incorporate all of the different databases will contain the most variability, while illuminating important differences between them. Publication of this database is in support of future analyses of this nature and comes with the hope that doing so helps contribute to the development of a database standard for ice-particle scattering properties, like the NetCDF (Network Common Data Form) CF (Climate and Forecast) or NetCDF CF/Radial metadata conventions.
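
    As a hedged illustration of how CF-style attributes could document one quantity in such a database, the following netCDF4 sketch writes a backscattering cross-section variable with units and long_name attributes; the variable names, dimensions, frequencies, and values are placeholders and do not reflect this database's actual layout.

    ```python
    # Sketch: a CF-style annotated variable in a netCDF file (illustrative only).
    from netCDF4 import Dataset
    import numpy as np

    with Dataset("scattering_example.nc", "w") as nc:
        nc.Conventions = "CF-1.6"
        nc.createDimension("particle", 3)
        nc.createDimension("frequency", 4)

        freq = nc.createVariable("frequency", "f4", ("frequency",))
        freq.units = "GHz"
        freq.long_name = "radar frequency"
        freq[:] = [9.6, 13.6, 35.6, 94.0]  # roughly X-, Ku-, Ka-, and W-band

        sigma = nc.createVariable("backscatter_cross_section", "f4",
                                  ("particle", "frequency"))
        sigma.units = "m2"
        sigma.long_name = "backscattering cross section at fixed orientation"
        sigma[:] = np.zeros((3, 4))  # placeholder values
    ```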

  6. Metadata squared: enhancing its usability for volunteered geographic information and the GeoWeb

    USGS Publications Warehouse

    Poore, Barbara S.; Wolf, Eric B.; Sui, Daniel Z.; Elwood, Sarah; Goodchild, Michael F.

    2013-01-01

    The Internet has brought many changes to the way geographic information is created and shared. One aspect that has not changed is metadata. Static spatial data quality descriptions were standardized in the mid-1990s and cannot accommodate the current climate of data creation where nonexperts are using mobile phones and other location-based devices on a continuous basis to contribute data to Internet mapping platforms. The usability of standard geospatial metadata is being questioned by academics and neogeographers alike. This chapter analyzes current discussions of metadata to demonstrate how the media shift that is occurring has affected requirements for metadata. Two case studies of metadata use are presented—online sharing of environmental information through a regional spatial data infrastructure in the early 2000s, and new types of metadata that are being used today in OpenStreetMap, a map of the world created entirely by volunteers. Changes in metadata requirements are examined for usability, the ease with which metadata supports coproduction of data by communities of users, how metadata enhances findability, and how the relationship between metadata and data has changed. We argue that traditional metadata associated with spatial data infrastructures is inadequate and suggest several research avenues to make this type of metadata more interactive and effective in the GeoWeb.

  7. Evolutions in Metadata Quality

    NASA Astrophysics Data System (ADS)

    Gilman, J.

    2016-12-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This talk will cover how we encourage metadata authors to improve the metadata through the use of integrated rubrics of metadata quality and outreach efforts. In addition we'll demonstrate Humanizers, a technique for dealing with the symptoms of metadata issues. Humanizers allow CMR administrators to identify specific metadata issues that are fixed at runtime when the data is indexed. An example Humanizer is the aliasing of processing level "Level 1" to "1" to improve consistency across collections. The CMR currently indexes 35K collections and 300M granules.
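
    A minimal sketch of the humanizer idea, assuming a simple rule table applied at index time (only the "Level 1" to "1" alias comes from the abstract; the field name and rule format are illustrative):

    ```python
    # Humanizer sketch: alias rules applied when a record is indexed, so the stored
    # metadata is untouched but search sees consistent values.
    HUMANIZERS = [
        {"field": "processing_level", "match": "Level 1", "replace_with": "1"},
    ]

    def humanize(record):
        """Return a copy of the record with alias rules applied for indexing."""
        indexed = dict(record)
        for rule in HUMANIZERS:
            if indexed.get(rule["field"]) == rule["match"]:
                indexed[rule["field"]] = rule["replace_with"]
        return indexed

    print(humanize({"short_name": "EXAMPLE", "processing_level": "Level 1"}))
    # {'short_name': 'EXAMPLE', 'processing_level': '1'}
    ```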

  8. Differences in health related quality of life in the randomised ARTSCAN study; accelerated vs. conventional radiotherapy for head and neck cancer. A five year follow up.

    PubMed

    Nyqvist, Johanna; Fransson, Per; Laurell, Göran; Hammerlid, Eva; Kjellén, Elisabeth; Franzén, Lars; Söderström, Karin; Wickart-Johansson, Gun; Friesland, Signe; Sjödin, Helena; Brun, Eva; Ask, Anders; Nilsson, Per; Ekberg, Lars; Björk-Eriksson, Thomas; Nyman, Jan; Lödén, Britta; Lewin, Freddi; Reizenstein, Johan; Lundin, Erik; Zackrisson, Björn

    2016-02-01

    Health related quality of life (HRQoL) was assessed in the randomised, prospective ARTSCAN study comparing conventional radiotherapy (CF) with accelerated radiotherapy (AF) for head and neck cancer. A total of 750 patients with squamous cell carcinoma (of any grade and stage) in the oral cavity, oro- or hypopharynx, or larynx (except T1-2, N0 glottic carcinoma) without distant metastases were randomised to either conventional fractionation (2 Gy/day, 5 days/week in 49 days, total dose 68 Gy) or accelerated fractionation (1.1+2.0 Gy/day, 5 days/week in 35 days, total dose 68 Gy). HRQoL was assessed with EORTC QLQ-C30, QLQ-H&N35 and HADS at baseline, at the end of radiotherapy (eRT), and at 3 and 6 months and 1, 2 and 5 years after the start of treatment. The AF group reported significantly lower HRQoL at eRT and at 3 months for most symptoms, scales and functions. Few significant differences were noted between the groups at 6 months and 5 years. Scores related to functional oral intake never reached baseline. In comparison to CF, AF has a stronger adverse effect on HRQoL in the acute phase. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS-IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403.

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Metadata Means Communication: The Challenges of Producing Useful Metadata

    NASA Astrophysics Data System (ADS)

    Edwards, P. N.; Batcheller, A. L.

    2010-12-01

    Metadata are increasingly perceived as an important component of data sharing systems. For instance, metadata accompanying atmospheric model output may indicate the grid size, grid type, and parameter settings used in the model configuration. We conducted a case study of a data portal in the atmospheric sciences using in-depth interviews, document review, and observation. Our analysis revealed a number of challenges in producing useful metadata. First, creating and managing metadata required considerable effort and expertise, yet responsibility for these tasks was ill-defined and diffused among many individuals, leading to errors, failure to capture metadata, and uncertainty about the quality of the primary data. Second, metadata ended up stored in many different forms and software tools, making it hard to manage versions and transfer between formats. Third, the exact meanings of metadata categories remained unsettled and misunderstood even among a small community of domain experts -- an effect we expect to be exacerbated when scientists from other disciplines wish to use these data. In practice, we found that metadata problems due to these obstacles are often overcome through informal, personal communication, such as conversations or email. We conclude that metadata serve to communicate the context of data production from the people who produce data to those who wish to use it. Thus while formal metadata systems are often public, critical elements of metadata (those embodied in informal communication) may never be recorded. Therefore, efforts to increase data sharing should include ways to facilitate inter-investigator communication. Instead of tackling metadata challenges only on the formal level, we can improve data usability for broader communities by better supporting metadata communication.

  11. Inheritance rules for Hierarchical Metadata Based on ISO 19115

    NASA Astrophysics Data System (ADS)

    Zabala, A.; Masó, J.; Pons, X.

    2012-04-01

    ISO 19115 has mainly been used to describe metadata for datasets and services. Furthermore, the ISO 19115 standard (as well as the new draft ISO 19115-1) includes a conceptual model that allows metadata to be described at different levels of granularity, structured in hierarchical levels, both in aggregated resources (particularly series and datasets) and in more disaggregated resources such as types of entities (feature type), types of attributes (attribute type), entities (feature instances), and attributes (attribute instances). In theory, applying a complete metadata structure to all hierarchical levels, from the whole series down to an individual feature attribute, is possible, but storing all metadata at all levels is completely impractical. An inheritance mechanism is needed to store each piece of metadata and quality information at the optimum hierarchical level and to allow easy and efficient documentation of metadata, both in an Earth observation scenario such as multi-satellite, multiband imagery and in a complex vector topographic map that includes several feature types separated into layers (e.g., administrative limits, contour lines, building polygons, road lines, etc.). Moreover, because maps are traditionally split into tiles for handling at detailed scales, or because of satellite acquisition characteristics, each of the previous thematic layers (e.g., 1:5000 roads for a country) or bands (the Landsat-5 TM cover of the Earth) is divided into several parts (sheets or scenes, respectively). According to the hierarchy in ISO 19115, the definition of general metadata can be supplemented by spatially specific metadata that, when required, either inherits from or overrides the general case (G.1.3). Annex H of the standard states that only metadata exceptions are defined at lower levels, so it is not necessary to generate the full registry of metadata for each level but only to link particular values to the general values that they inherit. Conceptually the metadata registry is complete for each hierarchical level, but at the implementation level most metadata elements are not stored at both levels, only at the more generic one. This communication defines a metadata system that covers four levels, describes which metadata have to support series-layer inheritance and in which way, and explains how hierarchical levels are defined and stored. Metadata elements are classified according to the type of inheritance between products, series, tiles, and datasets. It explains the metadata element classification and exemplifies it using core metadata elements. The communication also presents a metadata viewer and editing tool that uses the described model to propagate metadata elements and to show the user a complete set of metadata for each level in a transparent way. This tool is integrated in the MiraMon GIS software.
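
    The exception-plus-inheritance idea can be sketched in a few lines of Python: each level stores only its own overrides, and a full metadata view is resolved by merging up the hierarchy. The element names below are illustrative, not the ISO 19115 element set.

    ```python
    # Sketch: resolve a tile's full metadata by inheriting from its parents.
    def resolve_metadata(level, parents):
        """Merge a chain of metadata dicts, with the most specific level winning."""
        resolved = {}
        for node in parents + [level]:      # general first, specific last
            resolved.update(node)           # exceptions override inherited values
        return resolved

    series  = {"spatial_resolution": "30 m", "license": "open", "lineage": "Landsat-5 TM"}
    dataset = {"temporal_extent": "2001-07"}            # inherits everything else
    tile    = {"bounding_box": [0.0, 40.0, 1.0, 41.0]}  # sheet-specific exception

    print(resolve_metadata(tile, [series, dataset]))
    ```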

  12. The role of metadata in managing large environmental science datasets. Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melton, R.B.; DeVaney, D.M.; French, J. C.

    1995-06-01

    The purpose of this workshop was to bring together computer science researchers and environmental sciences data management practitioners to consider the role of metadata in managing large environmental sciences datasets. The objectives included: establishing a common definition of metadata; identifying categories of metadata; defining problems in managing metadata; and defining problems related to linking metadata with primary data.

  13. By, With, and Through: The Theory and Practice of Special Operations Capacity-Building

    DTIC Science & Technology

    2014-12-01

    [Indexed excerpt consists of report front-matter fragments: table-of-contents entries ("The Colombian Military"; "SOCSOUTH on the Ground in Colombia") and an acronym list including CCOPE (Colombian Joint Special Operations Command, Comando Conjunto de Operaciones Especiales), CERTE (Colombian Army Tactical Retraining Center), CF (conventional forces), CMO (civil-military operations), CMSE (civil-military support element), and CNP (Colombian National Police).]

  14. The Development of Complexity, Accuracy and Fluency in the Written Production of L2 French

    ERIC Educational Resources Information Center

    Gunnarsson, Cecilia

    2012-01-01

    The present longitudinal case study investigated the development of fluency, complexity and accuracy--and the possible relationships between them--in the written production of L2 French. We assessed fluency and complexity in five intermediate learners by means of conventional indicators for written L2 (cf. Wolfe-Quintero et al. 1998), while…

  15. Building Format-Agnostic Metadata Repositories

    NASA Astrophysics Data System (ADS)

    Cechini, M.; Pilone, D.

    2010-12-01

    This presentation will discuss the problems that surround persisting and discovering metadata in multiple formats; a set of tenets that must be addressed in a solution; and NASA’s Earth Observing System (EOS) ClearingHOuse’s (ECHO) proposed approach. In order to facilitate cross-discipline data analysis, Earth Scientists will potentially interact with more than one data source. The most common data discovery paradigm relies on services and/or applications facilitating the discovery and presentation of metadata. What may not be common, however, is the format in which the metadata are expressed. As the number of sources and datasets utilized for research increases, it becomes more likely that a researcher will encounter conflicting metadata formats. Metadata repositories, such as the EOS ClearingHOuse (ECHO), along with data centers, must identify ways to address this issue. In order to define the solution to this problem, the following tenets are identified: - There exists a set of ‘core’ metadata fields recommended for data discovery. - There exists a set of users who will require the entire metadata record for advanced analysis. - There exists a set of users who will require a ‘core’ set of metadata fields for discovery only. - There will never be a cessation of new formats or a total retirement of all old formats. - Users should be presented metadata in a consistent format. ECHO has undertaken an effort to transform its metadata ingest and discovery services in order to support the growing set of metadata formats. In order to address the previously listed items, ECHO’s new metadata processing paradigm utilizes the following approach: - Identify a cross-format set of ‘core’ metadata fields necessary for discovery. - Implement format-specific indexers to extract the ‘core’ metadata fields into an optimized query capability. - Archive the original metadata in its entirety for presentation to users requiring the full record. - Provide on-demand translation of ‘core’ metadata to any supported result format. With this approach, the Earth Scientist is provided with a consistent data representation as they interact with a variety of datasets that utilize multiple metadata formats, allowing them to focus their efforts on their more critical research activities.
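
    A small Python sketch of the "on-demand translation" step described above: once the core fields have been extracted, a result can be rendered in whichever supported format a user requests. The writers below are toy stand-ins, not ECHO's actual result formats.

    ```python
    # Sketch: render the same core metadata in different result formats on demand.
    import json
    import xml.etree.ElementTree as ET

    def to_json(core):
        return json.dumps(core)

    def to_xml(core):
        root = ET.Element("result")
        for name, value in core.items():
            ET.SubElement(root, name).text = str(value)
        return ET.tostring(root, encoding="unicode")

    RESULT_WRITERS = {"json": to_json, "xml": to_xml}

    core = {"title": "Example granule", "start_time": "2010-01-01T00:00:00Z"}
    print(RESULT_WRITERS["xml"](core))
    ```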

  16. Making Metadata Better with CMR and MMT

    NASA Technical Reports Server (NTRS)

    Gilman, Jason Arthur; Shum, Dana

    2016-01-01

    Ensuring complete, consistent, and high-quality metadata is a challenge for metadata providers and curators. The CMR and MMT systems provide providers and curators with options to build in metadata quality from the start and to assess and improve the quality of already existing metadata.

  17. Individualized supervised resistance training during nebulization in adults with cystic fibrosis.

    PubMed

    Shaw, Ina; Kinsey, Janine E; Richards, Roxanne; Shaw, Brandon S

    2016-01-01

    Since dyspnea limits exercise adherence and intensity in cystic fibrosis (CF) patients, engaging in resistance training (RT), which causes less dyspnea than other exercise modalities, while using nebulizers could not only overcome this barrier, but also enhance long-term adaptations to treatment. The objective of this study was to examine the effects of RT during nebulization on spirometry, anthropometry, chest wall excursion, respiratory muscle strength and health-related quality of life (HRQOL). Fourteen male and female CF patients were assigned to a four-week, 20-minute, 5-day per week proof-of-concept RT group (RTG) (n=7) or a non-exercising control group (CON) (n=7), with 3 CON patients later dropping out of the study. Patients performed whole-body exercises for 3 sets of 10 repetitions using resistance bands, since such bands have previously demonstrated a greater effect on functional exercise capacity than conventional RT in lung patients. The RTG displayed significant (p≤0.05) increases in FEV1, FEV1/FVC, latissimus dorsi strength, pectoralis major (clavicular portion) strength, pectoralis major (sternocostal portion) strength, and the emotional and digestion HRQOL domains, while showing decreases in left pectoralis minor strength and the social, body image and respiration HRQOL domains. This small-scale proof-of-concept investigation demonstrates the multiple and simultaneous benefits of RT during nebulization in CF patients. The improvements in pulmonary measures are particularly promising, especially since this study only made use of a four-week experimental period. This study provides an important alternative, time-saving treatment for the CF patient that does not add to the treatment burden of CF patients.

  18. Use of hypofractionated post-mastectomy radiotherapy reduces health costs by over $2000 per patient: An Australian perspective.

    PubMed

    Mortimer, Joshua W; McLachlan, Craig S; Hansen, Carmen J; Assareh, Hassan; Last, Andrew; McKay, Michael J; Shakespeare, Thomas P

    2016-02-01

    The most recent clinical practice guidelines released by Cancer Australia draw attention to unanswered questions concerning the health economic considerations associated with hypofractionated radiotherapy. This study aimed to quantify and compare the healthcare costs at a regional Australian radiotherapy institute with respect to conventionally fractionated post-mastectomy radiotherapy (Cf-PMRT) versus hypofractionated post-mastectomy radiotherapy (Hf-PMRT) administration. Medical records of 196 patients treated with post-mastectomy radiotherapy at the NSW North Coast Cancer Institute from February 2008 to June 2014 were retrospectively reviewed. Australian Medicare item numbers billed for patients receiving either Cf-PMRT of 50 Gy in 25 daily fractions or Hf-PMRT of 40.05 Gy in 15 daily fractions were calculated. Decision tree analysis was used to model costs. Independent-samples t-tests and Mann-Whitney U-tests were used to compare crude average costs for Cf-PMRT and Hf-PMRT and determine which treatment components accounted for any differences. Hf-PMRT, with or without irradiation to the regional lymph nodes, was associated with significantly reduced Medicare costs ($5613 AUD per patient for Hf-PMRT vs $8272 AUD per patient for Cf-PMRT; P < 0.001). Savings associated with Hf-PMRT ranged from $1353 (22.1%) for patients receiving no regional irradiation to $2898 (32.0%) for patients receiving both axillary and supraclavicular therapy. Hf-PMRT results in a significant reduction in the financial costs associated with treating breast cancer patients in a regional Australian setting when compared with Cf-PMRT. © 2015 The Royal Australian and New Zealand College of Radiologists.

  19. Evolution in Metadata Quality: Common Metadata Repository's Role in NASA Curation Efforts

    NASA Technical Reports Server (NTRS)

    Gilman, Jason; Shum, Dana; Baynes, Katie

    2016-01-01

    Metadata Quality is one of the chief drivers of discovery and use of NASA EOSDIS (Earth Observing System Data and Information System) data. Issues with metadata such as lack of completeness, inconsistency, and use of legacy terms directly hinder data use. As the central metadata repository for NASA Earth Science data, the Common Metadata Repository (CMR) has a responsibility to its users to ensure the quality of CMR search results. This poster covers how we use humanizers, a technique for dealing with the symptoms of metadata issues, as well as our plans for future metadata validation enhancements. The CMR currently indexes 35K collections and 300M granules.

  20. Describing environmental public health data: implementing a descriptive metadata standard on the environmental public health tracking network.

    PubMed

    Patridge, Jeff; Namulanda, Gonza

    2008-01-01

    The Environmental Public Health Tracking (EPHT) Network provides an opportunity to bring together diverse environmental and health effects data by integrating local, state, and national databases of environmental hazards, environmental exposures, and health effects. To help users locate data on the EPHT Network, the network will utilize descriptive metadata that provide critical information as to the purpose, location, content, and source of these data. Since 2003, the Centers for Disease Control and Prevention's EPHT Metadata Subgroup has been working to initiate the creation and use of descriptive metadata. Efforts undertaken by the group include the adoption of a metadata standard, creation of an EPHT-specific metadata profile, development of an open-source metadata creation tool, and promotion of the creation of descriptive metadata by changing the perception of metadata in the public health culture.

  1. Metadata: Standards for Retrieving WWW Documents (and Other Digitized and Non-Digitized Resources)

    NASA Astrophysics Data System (ADS)

    Rusch-Feja, Diann

    The use of metadata for indexing digitized and non-digitized resources for resource discovery in a networked environment is being increasingly implemented all over the world. Greater precision is achieved using metadata than by relying on universal search engines; furthermore, metadata can be used as a filtering mechanism for search results. An overview of various metadata sets is given, followed by a more focused presentation of Dublin Core Metadata, including examples of sub-elements and qualifiers. In particular, the use of the Dublin Core Relation element provides connections between the metadata of various related electronic resources, as well as the metadata for physical, non-digitized resources. This facilitates more comprehensive search results without losing precision and brings together different genres of information which would otherwise be searchable only in separate databases. Furthermore, the advantages of Dublin Core Metadata in comparison with library cataloging and the use of universal search engines are discussed briefly, followed by a listing of types of implementation of Dublin Core Metadata.

  2. Department of the Interior metadata implementation guide—Framework for developing the metadata component for data resource management

    USGS Publications Warehouse

    Obuch, Raymond C.; Carlino, Jennifer; Zhang, Lin; Blythe, Jonathan; Dietrich, Christopher; Hawkinson, Christine

    2018-04-12

    The Department of the Interior (DOI) is a Federal agency with over 90,000 employees across 10 bureaus and 8 agency offices. Its primary mission is to protect and manage the Nation’s natural resources and cultural heritage; provide scientific and other information about those resources; and honor its trust responsibilities or special commitments to American Indians, Alaska Natives, and affiliated island communities. Data and information are critical in day-to-day operational decision making and scientific research. DOI is committed to creating, documenting, managing, and sharing high-quality data and metadata in and across its various programs that support its mission. Documenting data through metadata is essential in realizing the value of data as an enterprise asset. The completeness, consistency, and timeliness of metadata affect users’ ability to search for and discover the most relevant data for the intended purpose, and facilitate the interoperability and usability of these data among DOI bureaus and offices. Fully documented metadata describe data usability, quality, accuracy, provenance, and meaning. Across DOI, there are different maturity levels and phases of information and metadata management implementations. The Department has organized a committee consisting of bureau-level points of contact to collaborate on the development of more consistent, standardized, and more effective metadata management practices and guidance to support this shared mission and the information needs of the Department. DOI’s metadata implementation plans establish key roles and responsibilities associated with metadata management processes, procedures, and a series of actions defined in three major metadata implementation phases: (1) Getting Started—Planning Phase, (2) Implementing and Maintaining Operational Metadata Management Phase, and (3) the Next Steps towards Improving Metadata Management Phase. DOI’s phased approach for metadata management addresses some of the major data and metadata management challenges that exist across the diverse missions of the bureaus and offices. All employees who create, modify, or use data are involved with data and metadata management. Identifying, establishing, and formalizing the roles and responsibilities associated with metadata management are key to institutionalizing a framework of best practices, methodologies, processes, and common approaches throughout all levels of the organization; these are the foundation for effective data resource management. For executives and managers, metadata management strengthens their overarching views of data assets, holdings, and data interoperability, and clarifies how metadata management can help accelerate compliance with multiple policy mandates. For employees, data stewards, and data professionals, formalized metadata management will help with the consistency of definitions and approaches addressing data discoverability, data quality, and data lineage. In addition to data professionals and others associated with information technology, data stewards and program subject matter experts take on important metadata management roles and responsibilities as data flow through their respective business and science-related workflows. The responsibilities of establishing, practicing, and governing the actions associated with their specific metadata management roles are critical to successful metadata implementation.

  3. Making Interoperability Easier with the NASA Metadata Management Tool

    NASA Astrophysics Data System (ADS)

    Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.

    2016-12-01

    ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because it can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool to develop ISO-compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO 19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection-level metadata by evaluating it against a variety of metadata standards. The tool provides users with clear guidance as to how to change their metadata in order to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that can be cleanly mapped to ISO 19115. This allows metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end-user tests and reviews to continually refine the tool, the model, and the ISO mappings. This process is allowing for continual improvement and evolution to meet the community's needs.

  4. Pulmonary disease in cystic fibrosis: assessment with chest CT at chest radiography dose levels.

    PubMed

    Ernst, Caroline W; Basten, Ines A; Ilsen, Bart; Buls, Nico; Van Gompel, Gert; De Wachter, Elke; Nieboer, Koenraad H; Verhelle, Filip; Malfroot, Anne; Coomans, Danny; De Maeseneer, Michel; de Mey, Johan

    2014-11-01

    To investigate a computed tomographic (CT) protocol with iterative reconstruction at conventional radiography dose levels for the assessment of structural lung abnormalities in patients with cystic fibrosis (CF). In this institutional review board-approved study, 38 patients with CF (age range, 6-58 years; 21 patients <18 years and 17 patients >18 years) underwent investigative CT (at minimal exposure settings combined with iterative reconstruction) as a replacement of yearly follow-up posteroanterior chest radiography. Verbal informed consent was obtained from all patients or their parents. CT images were randomized and rated independently by two radiologists with use of the Bhalla scoring system. In addition, mosaic perfusion was evaluated. As reference, the previous available conventional chest CT scan was used. Differences in Bhalla scores were assessed with the χ(2) test and intraclass correlation coefficients (ICCs). Radiation doses for CT and radiography were assessed for adults (>18 years) and children (<18 years) separately by using technical dose descriptors and estimated effective dose. Differences in dose were assessed with the Mann-Whitney U test. The median effective dose for the investigative protocol was 0.04 mSv (95% confidence interval [CI]: 0.034 mSv, 0.10 mSv) for children and 0.05 mSv (95% CI: 0.04 mSv, 0.08 mSv) for adults. These doses were much lower than those with conventional CT (median: 0.52 mSv [95% CI: 0.31 mSv, 3.90 mSv] for children and 1.12 mSv [95% CI: 0.57 mSv, 3.15 mSv] for adults) and of the same order of magnitude as those for conventional radiography (median: 0.012 mSv [95% CI: 0.006 mSv, 0.022 mSv] for children and 0.012 mSv [95% CI: 0.005 mSv, 0.031 mSv] for adults). All images were rated at least as diagnostically acceptable. Very good agreement was found in overall Bhalla score (ICC, 0.96) with regard to the severity of bronchiectasis (ICC, 0.87) and sacculations and abscesses (ICC, 0.84). Interobserver agreement was excellent (ICC, 0.86-1). For patients with CF, a dedicated chest CT protocol can replace the two yearly follow-up chest radiographic examinations without major dose penalty and with similar diagnostic quality compared with conventional CT.

  5. GraphMeta: Managing HPC Rich Metadata in Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Dong; Chen, Yong; Carns, Philip

    High-performance computing (HPC) systems face increasingly critical metadata management challenges, especially in the approaching exascale era. These challenges arise not only from exploding metadata volumes, but also from increasingly diverse metadata, which contains data provenance and arbitrary user-defined attributes in addition to traditional POSIX metadata. This ‘rich’ metadata is becoming critical to supporting advanced data management functionality such as data auditing and validation. In our prior work, we identified a graph-based model as a promising solution to uniformly manage HPC rich metadata due to its flexibility and generality. However, at the same time, graph-based HPC rich metadata management also introduces significant challenges to the underlying infrastructure. In this study, we first identify the challenges on the underlying infrastructure to support scalable, high-performance rich metadata management. Based on that, we introduce GraphMeta, a graph-based engine designed for this use case. It achieves performance scalability by introducing a new graph partitioning algorithm and a write-optimal storage engine. We evaluate GraphMeta under both synthetic and real HPC metadata workloads, compare it with other approaches, and demonstrate its advantages in terms of efficiency and usability for rich metadata management in HPC systems.
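
    As a hedged sketch of the graph model for rich metadata (the data model only, not GraphMeta's partitioning algorithm or storage engine), files, executions, and users can be represented as nodes with provenance and ownership as attributed edges, for example using networkx:

    ```python
    # Sketch: rich metadata (provenance, ownership, user-defined attributes) as a
    # property graph; node and edge names are illustrative placeholders.
    import networkx as nx

    g = nx.DiGraph()
    g.add_node("sim_output.nc", kind="file", size_bytes=2_000_000)
    g.add_node("run_1234", kind="execution", host="compute-007")
    g.add_node("alice", kind="user")

    g.add_edge("run_1234", "sim_output.nc", relation="generated")
    g.add_edge("alice", "run_1234", relation="started")
    g.add_edge("sim_output.nc", "alice", relation="owned_by")

    # Provenance query: which execution produced this file?
    producers = [u for u, v, d in g.in_edges("sim_output.nc", data=True)
                 if d["relation"] == "generated"]
    print(producers)  # ['run_1234']
    ```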

  6. A novel sidestream ultrasonic flow sensor for multiple breath washout in children.

    PubMed

    Fuchs, Susanne I; Sturz, J; Junge, S; Ballmann, M; Gappa, M

    2008-08-01

    Inert gas multiple breath washout (MBW) for measuring Lung Clearance Index using mass spectrometry and 4% sulfur hexafluoride (SF(6)) as the tracer gas has been shown to be sensitive for detecting early Cystic Fibrosis (CF) lung disease. However, mass spectrometry requires bulky equipment and is expensive to buy and maintain. A novel sidestream ultrasonic device may overcome this problem. The aims of this study were to assess the feasibility and clinical validity of measuring lung volume (functional residual capacity, FRC) and the LCI using the sidestream ultrasonic flow sensor in children and adolescents with CF in relation to spirometry and plain chest radiographs. MBW using the sidestream ultrasonic device and conventional spirometry were performed in 26 patients with CF and 22 healthy controls. In the controls (4.7-17.7 years) LCI was similar to that reported using mass spectrometry (mean (SD) 6.7 (0.5)). LCI was elevated in 77% of the CF children (6.8-18.9 years), whereas spirometry was abnormal in only 38.5%, 61.5%, and 26.9% for FEV(1), MEF(25), and FEV(1)/FVC, respectively. This was more marked in children <10 years. LCI correlated with the Crispin-Norman score, whereas FEV(1) did not. Sidestream ultrasonic MBW is a valid and simple alternative to mass spectrometry for assessing ventilation homogeneity in children. (c) 2008 Wiley-Liss, Inc.

  7. Long-term outcomes of late course accelerated hyper-fractionated radiotherapy for localized esophageal carcinoma in Mainland China: a meta-analysis.

    PubMed

    Zhang, Y W; Chen, L; Bai, Y; Zheng, X

    2011-09-01

    Published data on the long-term survival results of patients with localized esophageal carcinoma receiving late course accelerated hyper-fractionated radiotherapy (LCAF RT) versus conventional fractionated radiotherapy (CF RT) are inconclusive. In order to derive a more precise estimation of the both treatment-regimes, a meta-analysis based on systematic review of published articles was performed. A meta-analysis was performed using trials identified through Pubmed and Chinese national knowledge infrastructure. Results in 5-year survival and 5-year local control were collected from randomized trials comparing LCAF RT with CF RT. Review Manager (The Cochrane Collaboration, Oxford, England) and Stata software (Stata Corporation, College Station, TX, USA) were used for data management. A total of 11 trials were involved in this analysis with 572 cases and 567 controls. Our results showed that LCAF RT, compared with CF RT, significantly improved the 5-year survival (odds ratio [OR]= 2.93, 95% confidence interval [CI]: 2.15-4.00, P < 0.00001) and 5-year local control (OR = 3.96, 95% CI: 2.91-5.38, P < 0.00001). LCAF RT was more therapeutically beneficial than CF RT in the localized esophageal carcinoma. © 2011 Copyright the Authors. Journal compilation © 2011, Wiley Periodicals, Inc. and the International Society for Diseases of the Esophagus.

  8. Metabolonote: A Wiki-Based Database for Managing Hierarchical Metadata of Metabolome Analyses

    PubMed Central

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics – technology for comprehensive detection of small molecules in an organism – lags behind the other “omics” in terms of publication and dissemination of experimental data. Among the reasons for this are difficulty precisely recording information about complicated analytical experiments (metadata), existence of various databases with their own metadata descriptions, and low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called “Togo Metabolome Data” (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers’ understanding and use of data but also submitters’ motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/. PMID:25905099

  9. Metabolonote: a wiki-based database for managing hierarchical metadata of metabolome analyses.

    PubMed

    Ara, Takeshi; Enomoto, Mitsuo; Arita, Masanori; Ikeda, Chiaki; Kera, Kota; Yamada, Manabu; Nishioka, Takaaki; Ikeda, Tasuku; Nihei, Yoshito; Shibata, Daisuke; Kanaya, Shigehiko; Sakurai, Nozomu

    2015-01-01

    Metabolomics - technology for comprehensive detection of small molecules in an organism - lags behind the other "omics" in terms of publication and dissemination of experimental data. Among the reasons for this are difficulty precisely recording information about complicated analytical experiments (metadata), existence of various databases with their own metadata descriptions, and low reusability of the published data, resulting in submitters (the researchers who generate the data) being insufficiently motivated. To tackle these issues, we developed Metabolonote, a Semantic MediaWiki-based database designed specifically for managing metabolomic metadata. We also defined a metadata and data description format, called "Togo Metabolome Data" (TogoMD), with an ID system that is required for unique access to each level of the tree-structured metadata such as study purpose, sample, analytical method, and data analysis. Separation of the management of metadata from that of data and permission to attach related information to the metadata provide advantages for submitters, readers, and database developers. The metadata are enriched with information such as links to comparable data, thereby functioning as a hub of related data resources. They also enhance not only readers' understanding and use of data but also submitters' motivation to publish the data. The metadata are computationally shared among other systems via APIs, which facilitate the construction of novel databases by database developers. A permission system that allows publication of immature metadata and feedback from readers also helps submitters to improve their metadata. Hence, this aspect of Metabolonote, as a metadata preparation tool, is complementary to high-quality and persistent data repositories such as MetaboLights. A total of 808 metadata for analyzed data obtained from 35 biological species are published currently. Metabolonote and related tools are available free of cost at http://metabolonote.kazusa.or.jp/.

  10. Metadata (MD)

    Treesearch

    Robert E. Keane

    2006-01-01

    The Metadata (MD) table in the FIREMON database is used to record any information about the sampling strategy or data collected using the FIREMON sampling procedures. The MD method records metadata pertaining to a group of FIREMON plots, such as all plots in a specific FIREMON project. FIREMON plots are linked to metadata using a unique metadata identifier that is...

  11. Interoperability Using Lightweight Metadata Standards: Service & Data Casting, OpenSearch, OPM Provenance, and Shared SciFlo Workflows

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2011-12-01

    Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEASURES grant we are producing a water vapor climatology from the A-Train instruments, stratified by the Cloudsat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDR's) requires a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, Open Provenance Model, service & data casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDR's with full traceability. The capabilities and services provided include: - Discovery of the collections by keyword search, exposed using OpenSearch protocol; - Space/time query across the CDR's granules and all of the input datasets via OpenSearch; - User-level configuration of the production workflows so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets; - Efficient data merging using on-the-fly OPeNDAP variable slicing & spatial subsetting of data out of input netCDF and HDF files (without moving the entire files); - Self-documenting CDR's published in a highly usable netCDF4 format with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, & links to OPM provenance; - Recording of processing provenance and data lineage into a query-able provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine; - Open Publishing of all of the workflows used to generate products as machine-callable REST web services, using the capabilities of the SciFlo workflow engine; - Advertising of the metadata (e.g. physical variables provided, space/time bounding box, etc.) for our prepared datasets as "datacasts" using the Atom feed format; - Publishing of all datasets via our "DataDrop" service, which exploits the WebDAV protocol to enable scientists to access remote data directories as local files on their laptops; - Rich "web browse" of the CDR's with full metadata and the provenance trail one click away; - Advertising of all services as Google-discoverable "service casts" using the Atom format. The presentation will describe our use of the interoperable protocols and demonstrate the capabilities and service GUI's.
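
    To illustrate the space/time queries mentioned above, the following Python sketch builds a granule search URL in the style of the OpenSearch Geo and Time extensions; the endpoint URL is a hypothetical placeholder, since a real service advertises its own query template in an OpenSearch description document.

    ```python
    # Sketch: constructing an OpenSearch-style space/time query URL (illustrative).
    from urllib.parse import urlencode

    def build_opensearch_url(base, keyword, bbox, start, end):
        params = {
            "q": keyword,
            "geo:box": ",".join(str(v) for v in bbox),   # west,south,east,north
            "time:start": start,
            "time:end": end,
        }
        return base + "?" + urlencode(params, safe=":,")

    url = build_opensearch_url(
        "https://example.org/opensearch/granules",   # hypothetical endpoint
        "water vapor",
        (-125.0, 30.0, -110.0, 45.0),
        "2007-01-01T00:00:00Z",
        "2007-01-31T23:59:59Z",
    )
    print(url)
    ```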

  12. Documentation Resources on the ESIP Wiki

    NASA Technical Reports Server (NTRS)

    Habermann, Ted; Kozimor, John; Gordon, Sean

    2017-01-01

    The ESIP community includes data providers and users that communicate with one another through datasets and metadata that describe them. Improving this communication depends on consistent high-quality metadata. The ESIP Documentation Cluster and the wiki play an important central role in facilitating this communication. We will describe and demonstrate sections of the wiki that provide information about metadata concept definitions, metadata recommendation, metadata dialects, and guidance pages. We will also describe and demonstrate the ISO Explorer, a tool that the community is developing to help metadata creators.

  13. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2015-04-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space-efficient to store the deidentified studies in MSD format as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.
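
    The speed-up comes from never touching bulk pixel data when only the headers are needed. The idea can be illustrated, by analogy, with the pydicom library's option to stop reading a conventional single-frame file before its pixel data; this is not the MSD format itself, only a sketch of metadata-only access, and the file path is hypothetical.

      # Sketch: read and edit DICOM header attributes without loading pixel data.
      # This illustrates metadata-only access, not the MSD format described above.
      import pydicom

      ds = pydicom.dcmread("study/slice_0001.dcm", stop_before_pixels=True)  # hypothetical path
      print(ds.StudyInstanceUID, ds.Modality)
      ds.PatientName = "ANONYMOUS"      # simple deidentification-style edits
      ds.PatientID = "000000"
      # Writing the edited header back would require handling the (unread) pixel
      # data separately, which is exactly what a metadata/bulk-data split avoids.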

  14. Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.

    PubMed

    Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C

    2018-01-17

    Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata is essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats, and the second is the file format and metadata model developed for the Digital Imaging and Communication in Medicine (DICOM) standard. DICOM would appear to provide an advantage over consumer image file formats for metadata, as it includes all the patient, study, and technical metadata necessary to use images clinically, whereas consumer image file formats include only technical metadata and need to be used in conjunction with another actor (for example, an electronic medical record) to supply the patient and study metadata. The use of DICOM may have some ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.

  15. Fast processing of digital imaging and communications in medicine (DICOM) metadata using multiseries DICOM format

    PubMed Central

    Ismail, Mahmoud; Philbin, James

    2015-01-01

    The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space-efficient to store the deidentified studies in MSD format as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata. PMID:26158117

  16. ISO, FGDC, DIF and Dublin Core - Making Sense of Metadata Standards for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Jones, P. R.; Ritchey, N. A.; Peng, G.; Toner, V. A.; Brown, H.

    2014-12-01

    Metadata standards provide common definitions of metadata fields for information exchange across user communities. Despite the broad adoption of metadata standards for Earth science data, there are still heterogeneous and incompatible representations of information due to differences between the many standards in use and how each standard is applied. Federal agencies are required to manage and publish metadata in different metadata standards and formats for various data catalogs. In 2014, the NOAA National Climatic Data Center (NCDC) managed metadata for its scientific datasets in ISO 19115-2 in XML, GCMD Directory Interchange Format (DIF) in XML, DataCite Schema in XML, Dublin Core in XML, and Data Catalog Vocabulary (DCAT) in JSON, with more standards and profiles of standards planned. Of these standards, the ISO 19115-series metadata is the most complete and feature-rich, and for this reason it is used by NCDC as the source for the other metadata standards. We will discuss the capabilities of metadata standards and how these standards are being implemented to document datasets. Successful implementations include developing translations and displays using XSLTs, creating links to related data and resources, documenting dataset lineage, and establishing best practices. Benefits, gaps, and challenges will be highlighted with suggestions for improved approaches to metadata storage and maintenance.
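
    Because the ISO 19115 records serve as the source from which the other representations are derived, the translations mentioned above are typically produced by applying XSLT stylesheets. The snippet below is a minimal sketch of applying such a stylesheet with the lxml library; the stylesheet and input file names are placeholders, not NCDC's actual files.

      # Sketch: derive a Dublin Core record from an ISO 19115-2 source record by
      # applying an XSLT stylesheet (file names here are placeholders).
      from lxml import etree

      stylesheet = etree.parse("iso19115_to_dublincore.xsl")
      transform = etree.XSLT(stylesheet)
      iso_record = etree.parse("dataset_iso19115-2.xml")
      dc_record = transform(iso_record)
      print(str(dc_record))       # serialized Dublin Core XML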

  17. From the inside-out: Retrospectives on a metadata improvement process to advance the discoverability of NASA's Earth science data

    NASA Astrophysics Data System (ADS)

    Hernández, B. E.; Bugbee, K.; le Roux, J.; Beaty, T.; Hansen, M.; Staton, P.; Sisco, A. W.

    2017-12-01

    Earth observation (EO) data collected as part of NASA's Earth Observing System Data and Information System (EOSDIS) is now searchable via the Common Metadata Repository (CMR). The Analysis and Review of CMR (ARC) Team at Marshall Space Flight Center has been tasked with reviewing all NASA metadata records in the CMR (approximately 7,000 records). Each collection-level record and its constituent granule-level metadata are reviewed for both completeness and compliance with the CMR's set of metadata standards, as specified in the Unified Metadata Model (UMM). NASA's Distributed Active Archive Centers (DAACs) have been harmonizing priority metadata records within the context of the inter-agency federal Big Earth Data Initiative (BEDI), which seeks to improve the discoverability, accessibility, and usability of EO data. Thus, the first phase of this project constitutes reviewing BEDI metadata records, while the second phase will constitute reviewing the remaining non-BEDI records in CMR. This presentation will discuss the ARC team's findings in terms of the overall quality of BEDI records across all DAACs as well as compliance with UMM standards. For instance, only a fifth of the collection-level metadata fields needed correction, compared to a quarter of the granule-level fields. It should be noted that the degree to which DAACs' metadata did not comply with the UMM standards may reflect multiple factors, such as recent changes in the UMM standards and the use of different metadata formats (e.g. DIF 10, ECHO 10, ISO 19115-1) across the DAACs. Insights, constructive criticism, and lessons learned from this metadata review process will be contributed by both ORNL and SEDAC. Further inquiry along such lines may yield insights that improve the metadata curation process moving forward. In terms of the broader implications for metadata compliance with the UMM standards, this research has shown that a large proportion of the prioritized collections have already been made compliant, although the process of improving metadata quality is ongoing and iterative. Further research is also warranted into whether or not the gains in metadata quality are also driving gains in data use.

  18. Improvement of mechanical strength of sintered Mo alloyed steel by optimization of sintering and cold-forging processes with densification

    NASA Astrophysics Data System (ADS)

    Kamakoshi, Y.; Shohji, I.; Inoue, Y.; Fukuda, S.

    2017-10-01

    Powder metallurgy (P/M) materials are expected to see wider use in the automotive industry. However, because sintered materials made from P/M powders contain many pores and voids, their mechanical properties are inferior to those of conventional wrought materials. Densification is an effective way to improve the mechanical properties of sintered materials. The aim of this study is to improve the mechanical strength of sintered Mo-alloyed steel by optimizing the conditions of the sintering and cold-forging processes. Mo-alloyed steel powder was compacted and then pre-sintered (PS) in a vacuum sintering furnace. Subsequently, cold forging (CF) by a backward extrusion method was applied to the pre-sintered specimen, and the cold-forged specimen was heat treated by carburizing, quenching and tempering (CQT). The mechanical properties were then investigated. It was found that the density of the PS specimen must exceed 7.4 Mg/m3 for the specimen to be strengthened by heat treatment after CF. Furthermore, the density and microstructure of the PS specimen are the most important factors in producing a high-density, high-strength material by CF. At a CF load of 1200 kN, the maximum density ratio reached approximately 99% when a PS specimen with suitable density and microstructure was used. At a CF load of 900 kN, although the density ratio was high (more than 97.8%), the transverse rupture strength decreased sharply: densification produced high shear stress and stress concentration in the surface layer, and microcracks formed through damage to the inter-particle sintered connections of the surface layer. In contrast, at a CF load of 1200 kN, sufficient plastic flow produced ultra-densification of the surface layer; such well-compressed specimens regenerated the sintered connections during high-temperature heat treatment, and a high-strength densified material was obtained. These processes can be applied to near-net-shape manufacturing without surface machining.

  19. A study on characteristic of different sample pretreatment methods to evaluate the entrapment efficiency of liposomes.

    PubMed

    Ran, Congcong; Chen, Dan; Xu, Meng; Du, Chaohui; Li, Qinglian; Jiang, Ye

    2016-08-15

    To examine how sample pretreatment methods affect the evaluation of the entrapment efficiency (EE) of liposomes, four different methods were adopted in the experiment: size-exclusion chromatography (SEC), solid-phase extraction (SPE), centrifugation ultrafiltration (CF-UF) and hollow fiber centrifugal ultrafiltration (HF-CF-UF). Amphotericin B (AmB), which can self-associate to form aggregates in water, was adopted as the model drug. The characterization results of the four methods were found to differ considerably: the EE of the liposomes was about 93% by SEC, only 5-13% by SPE using C18 or HLB columns, approximately 100% by CF-UF, and nearly 99.0% by HF-CF-UF. This paper further examined the reasons for these differences among the four methods. Conventional SEC may distort the authentic EE of liposomes, mainly because small liposomes or excessive water are employed as eluent. For SPE, cholesterol on the liposome surface can interact with the stationary phase, making liposomes hard to elute with water and increasing the risk of liposome leakage. For CF-UF, concentration polarization is a main limitation that hinders unentrapped drug from passing through the membrane, leaving unentrapped drug undetectable. HF-CF-UF could truly reflect the EE of liposomes when the concentration of unentrapped AmB was lower than 25.0 μg/mL; at higher concentrations, AmB aggregates could be trapped by the hollow fiber. From the above analysis, each method has its own characteristics, and this study provides a reasonable guideline for choosing a method to characterize the EE of liposomes.

  20. Rapid point-of-care testing for epidermal growth factor receptor gene mutations in patients with lung cancer using cell-free DNA from cytology specimen supernatants.

    PubMed

    Asaka, Shiho; Yoshizawa, Akihiko; Saito, Kazusa; Kobayashi, Yukihiro; Yamamoto, Hiroshi; Negishi, Tatsuya; Nakata, Rie; Matsuda, Kazuyuki; Yamaguchi, Akemi; Honda, Takayuki

    2018-06-01

    Epidermal growth factor receptor (EGFR) mutations are associated with responses to EGFR tyrosine kinase inhibitors (EGFR-TKIs) in non-small-cell lung cancer (NSCLC). Our previous study revealed a rapid point-of-care system for detecting EGFR mutations. This system analyzes cell pellets from cytology specimens using droplet-polymerase chain reaction (d-PCR), and has a reaction time of 10 min. The present study aimed to validate the performance of the EGFR d-PCR assay using cell-free DNA (cfDNA) from supernatants obtained from cytology specimens. Assay results from cfDNA supernatant analyses were compared with those from cell pellets for 90 patients who were clinically diagnosed with, or suspected of having, lung cancer (80 bronchial lavage fluid samples, nine pleural effusion samples and one spinal fluid sample). EGFR mutations were identified in 12 and 15 cases using cfDNA supernatants and cell pellets, respectively. The concordance rates between cfDNA-supernatant and cell‑pellet assay results were 96.7% [kappa coefficient (K)=0.87], 98.9% (K=0.94), 98.9% (K=0.79) and 98.9% (K=0.79) for total EGFR mutations, L858R, E746_A750del and T790M, respectively. All 15 patients with EGFR mutation-positive results, as determined by EGFR d-PCR assay using cfDNA supernatants or cell pellets, also displayed positive results by conventional EGFR assays using tumor tissue or cytology specimens. Notably, EGFR mutations were even detected in five cfDNA supernatants for which the cytological diagnoses of the corresponding cell pellets were 'suspicious for malignancy', 'atypical' or 'negative for malignancy.' In conclusion, this rapid point-of-care system may be considered a promising novel screening method that may enable patients with NSCLC to receive EGFR-TKI therapy more rapidly, whilst also reserving cell pellets for additional morphological and molecular analyses.

  1. Coordination polymer flexibility leads to polymorphism and enables a crystalline solid-vapour reaction: a multi-technique mechanistic study.

    PubMed

    Vitórica-Yrezábal, Iñigo J; Libri, Stefano; Loader, Jason R; Mínguez Espallargas, Guillermo; Hippler, Michael; Fletcher, Ashleigh J; Thompson, Stephen P; Warren, John E; Musumeci, Daniele; Ward, Michael D; Brammer, Lee

    2015-06-08

    Despite an absence of conventional porosity, the 1D coordination polymer [Ag4(O2C(CF2)2CF3)4(TMP)3] (1; TMP = tetramethylpyrazine) can absorb small alcohols from the vapour phase, which insert into Ag-O bonds to yield coordination polymers [Ag4(O2C(CF2)2CF3)4(TMP)3(ROH)2] (1-ROH; R = Me, Et, iPr). The reactions are reversible single-crystal-to-single-crystal transformations. Vapour-solid equilibria have been examined by gas-phase IR spectroscopy (K = 5.68(9)×10⁻⁵ (MeOH), 9.5(3)×10⁻⁶ (EtOH), 6.14(5)×10⁻⁵ (iPrOH) at 295 K, 1 bar). Thermal analyses (TGA, DSC) have enabled quantitative comparison of two-step reactions 1-ROH→1→2, in which 2 is the 2D coordination polymer [Ag4(O2C(CF2)2CF3)4(TMP)2] formed by loss of TMP ligands exclusively from singly-bridging sites. Four polymorphic forms of 1 (1-A(LT), 1-A(HT), 1-B(LT) and 1-B(HT); HT = high temperature, LT = low temperature) have been identified crystallographically. In situ powder X-ray diffraction (PXRD) studies of the 1-ROH→1→2 transformations indicate the role of the HT polymorphs in these reactions. The structural relationship between polymorphs, involving changes in conformation of perfluoroalkyl chains and a change in orientation of entire polymers (A versus B forms), suggests a mechanism for the observed reactions and a pathway for guest transport within the fluorous layers. Consistent with this pathway, optical microscopy and AFM studies on single crystals of 1-MeOH/1-A(HT) show that cracks parallel to the layers of interdigitated perfluoroalkyl chains develop during the MeOH release/uptake process.

  2. Perception of first respiratory infection with Pseudomonas aeruginosa by people with cystic fibrosis and those close to them: an online qualitative study.

    PubMed

    Palser, Sally C; Rayner, Oliver C; Leighton, Paul A; Smyth, Alan R

    2016-12-28

    People with cystic fibrosis (CF) are susceptible to respiratory infection with Pseudomonas aeruginosa (PA), which may become chronic if initial eradication fails. Environmental acquisition and person-to-person transmission can occur. Respiratory PA infection is associated with increased mortality and more hospitalisations. This may cause patients and families anxiety and lead them to adopt preventive measures which may be ineffectual and intrusive. It is not possible to hold a conventional focus group to explore these issues because people with CF cannot meet together due to the risk of cross-infection. The aim of this study was to explore the perceptions of first respiratory infection with PA in people with CF and those close to them. We designed an online survey to maximise accessibility and avoid the risk of cross-infection. This established the respondent's relationship with CF, asked 3 open questions about perceptions of PA and a final question about the prioritisation of research. Responses were analysed using a structured, iterative process: we identified keywords, analysed these in context and derived key themes. Promotion through social media allowed respondents from any country to participate. Participants were people with CF and those close to them. Responses were received from 393 people, including 266 parents and 97 people with CF. The key themes were the emotional burden of PA (fear in particular), the burden of treatment that PA entails, and the need for accurate knowledge about PA. Lack of knowledge and the health beliefs of individuals may promote fear of infection and inappropriate avoidance measures. Uncertainty about the implications of PA infection and the treatment required may cause anxiety. Healthcare professionals should provide clear information about how PA might be acquired and the treatment necessary, making clear the limitations of current understanding and acknowledging health beliefs.

  3. Rice Performance and Water Use Efficiency under Plastic Mulching with Drip Irrigation

    PubMed Central

    He, Haibing; Ma, Fuyu; Yang, Ru; Chen, Lin; Jia, Biao; Cui, Jing; Fan, Hua; Wang, Xin; Li, Li

    2013-01-01

    Plastic mulching with drip irrigation is a new water-saving rice cultivation technology, but little is known about its productivity and water-saving capacity. This study aimed to assess the production potential, performance, and water use efficiency (WUE) of rice under plastic mulching with drip irrigation. Field experiments were conducted over 2 years with two rice cultivars under different cultivation systems: conventional flooding (CF), non-flooded irrigation incorporating plastic mulching with furrow irrigation (FIM), non-mulching with furrow irrigation (FIN), and plastic mulching with drip irrigation (DI). Compared with the CF treatment, grain yields were reduced by 31.76–52.19% under the DI treatment, by 57.16–61.02% under the FIM treatment, and by 74.40–75.73% under the FIN treatment; these reductions mainly arose from source limitation, especially low dry matter accumulation during post-anthesis, under non-flooded irrigation. WUE was the highest in the DI treatment, being 1.52–2.12 times higher than with the CF treatment, 1.35–1.89 times higher than with the FIM treatment, and 2.37–3.78 times higher than with the FIN treatment. The yield contribution from tillers (YCFT) was 50.65–62.47% for the CF treatment and 12.07–20.62% for the non-flooded irrigation treatments. These low YCFT values were attributed to the poor performance of tiller panicles rather than to the total tiller number. Under non-flooded irrigation, root length was significantly reduced and more roots were distributed in deep soil layers compared with the CF treatment; the DI treatment had more roots in the topsoil layer than the FIM and FIN treatments. The experiment demonstrates that, compared with the CF treatment, the DI treatment has greater water-saving capacity and smaller yield and economic benefit gaps than the FIM and FIN treatments, and would therefore be a better water-saving technology in areas of water scarcity. PMID:24340087

  4. Forum Guide to Metadata: The Meaning behind Education Data. NFES 2009-805

    ERIC Educational Resources Information Center

    National Forum on Education Statistics, 2009

    2009-01-01

    The purpose of this guide is to empower people to more effectively use data as information. To accomplish this, the publication explains what metadata are; why metadata are critical to the development of sound education data systems; what components comprise a metadata system; what value metadata bring to data management and use; and how to…

  5. Metadata Effectiveness in Internet Discovery: An Analysis of Digital Collection Metadata Elements and Internet Search Engine Keywords

    ERIC Educational Resources Information Center

    Yang, Le

    2016-01-01

    This study analyzed digital item metadata and keywords from Internet search engines to learn what metadata elements actually facilitate discovery of digital collections through Internet keyword searching and how significantly each metadata element affects the discovery of items in a digital repository. The study found that keywords from Internet…

  6. A novel framework for assessing metadata quality in epidemiological and public health research settings

    PubMed Central

    McMahon, Christiana; Denaxas, Spiros

    2016-01-01

    Metadata are critical in epidemiological and public health research. However, a lack of biomedical metadata quality frameworks and limited awareness of the implications of poor-quality metadata render data analyses problematic. In this study, we created and evaluated a novel framework to assess the metadata quality of epidemiological and public health research datasets. We performed a literature review and surveyed stakeholders to enhance our understanding of biomedical metadata quality assessment. The review identified 11 studies and nine quality dimensions, none of which were specifically aimed at biomedical metadata. 96 individuals completed the survey; of those who submitted data, most assessed metadata quality only sometimes, and eight did not at all. Our framework has four sections: a) general information; b) tools and technologies; c) usability; and d) management and curation. We evaluated the framework using three test cases and sought expert feedback. The framework can assess biomedical metadata quality systematically and robustly. PMID:27570670

  7. A novel framework for assessing metadata quality in epidemiological and public health research settings.

    PubMed

    McMahon, Christiana; Denaxas, Spiros

    2016-01-01

    Metadata are critical in epidemiological and public health research. However, a lack of biomedical metadata quality frameworks and limited awareness of the implications of poor-quality metadata render data analyses problematic. In this study, we created and evaluated a novel framework to assess the metadata quality of epidemiological and public health research datasets. We performed a literature review and surveyed stakeholders to enhance our understanding of biomedical metadata quality assessment. The review identified 11 studies and nine quality dimensions, none of which were specifically aimed at biomedical metadata. 96 individuals completed the survey; of those who submitted data, most assessed metadata quality only sometimes, and eight did not at all. Our framework has four sections: a) general information; b) tools and technologies; c) usability; and d) management and curation. We evaluated the framework using three test cases and sought expert feedback. The framework can assess biomedical metadata quality systematically and robustly.

  8. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    NASA Astrophysics Data System (ADS)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential for identifying how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as the cruise ID, name of the vessel and research theme, and diving information such as the dive number, name of the submersible and position of the diving point. They are submitted by the chief scientists of research cruises in Microsoft Excel spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via the "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, because administrators enter metadata into each website manually. The other is that data types and metadata representations differ between websites. To solve those problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, an Enterprise Application Integration (EAI) software package, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web-based interface by a metadata editor in CMO as needed. Then daily differential uptake of metadata from the XML database to the databases of several distribution websites is automatically processed using a convertor defined by the EAI software. Currently, CMO is available for three distribution websites: "Deep Sea Floor Rock Sample Database GANSEKI", "Marine Biological Sample Database", and "JAMSTEC E-library of Deep-sea Images". CMO is planned to provide "JAMSTEC Data Site for Research Cruises" with metadata in the future.

  9. Towards Data Value-Level Metadata for Clinical Studies.

    PubMed

    Zozus, Meredith Nahm; Bonner, Joseph

    2017-01-01

    While several standards for metadata describing clinical studies exist, comprehensive metadata to support traceability of data from clinical studies has not been articulated. We examine uses of metadata in clinical studies. We examine and enumerate seven sources of data value-level metadata in clinical studies inclusive of research designs across the spectrum of the National Institutes of Health definition of clinical research. The sources of metadata inform categorization in terms of metadata describing the origin of a data value, the definition of a data value, and operations to which the data value was subjected. The latter is further categorized into information about changes to a data value, movement of a data value, retrieval of a data value, and data quality checks, constraints or assessments to which the data value was subjected. The implications of tracking and managing data value-level metadata are explored.
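
    The three categories described above (the origin of a value, its definition, and the operations applied to it) can be pictured with a small sketch. The field names and example values below are invented for illustration; they are not taken from the paper.

      # Sketch: a single data value carrying value-level metadata in three
      # categories (origin, definition, operations). Names/values are illustrative.
      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class ValueMetadata:
          origin: str                                   # where the value came from
          definition: str                               # what the value means
          operations: List[str] = field(default_factory=list)   # changes, moves, QC checks

      @dataclass
      class DataValue:
          value: float
          meta: ValueMetadata

      bp = DataValue(142.0, ValueMetadata(
          origin="case report form page 12, site 03",
          definition="systolic blood pressure (mmHg), seated"))
      bp.meta.operations.append("range check 60-250 mmHg: passed")
      bp.meta.operations.append("exported to analysis dataset on 2016-11-02")
      print(bp.value, bp.meta.operations)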

  10. Managing Complex Change in Clinical Study Metadata

    PubMed Central

    Brandt, Cynthia A.; Gadagkar, Rohit; Rodriguez, Cesar; Nadkarni, Prakash M.

    2004-01-01

    In highly functional metadata-driven software, the interrelationships within the metadata become complex, and maintenance becomes challenging. We describe an approach to metadata management that uses a knowledge-base subschema to store centralized information about metadata dependencies and use cases involving specific types of metadata modification. Our system borrows ideas from production-rule systems in that some of this information is a high-level specification that is interpreted and executed dynamically by a middleware engine. Our approach is implemented in TrialDB, a generic clinical study data management system. We review approaches that have been used for metadata management in other contexts and describe the features, capabilities, and limitations of our system. PMID:15187070

  11. A Digital Broadcast Item (DBI) enabling metadata repository for digital, interactive television (digiTV) feedback channel networks

    NASA Astrophysics Data System (ADS)

    Lugmayr, Artur R.; Mailaparampil, Anurag; Tico, Florina; Kalli, Seppo; Creutzburg, Reiner

    2003-01-01

    Digital television (digiTV) is an additional multimedia environment, where metadata is one key element for the description of arbitrary content. This implies adequate structures for content description, which is provided by XML metadata schemes (e.g. MPEG-7, MPEG-21). Content and metadata management is the task of a multimedia repository, from which digiTV clients - equipped with an Internet connection - can access rich additional multimedia types over an "All-HTTP" protocol layer. Within this research work, we focus on conceptual design issues of a metadata repository for the storage of metadata, accessible from the feedback channel of a local set-top box. Our concept describes the whole heterogeneous life-cycle chain of XML metadata from the service provider to the digiTV equipment, device independent representation of content, accessing and querying the metadata repository, management of metadata related to digiTV, and interconnection of basic system components (http front-end, relational database system, and servlet container). We present our conceptual test configuration of a metadata repository that is aimed at a real-world deployment, done within the scope of the future interaction (fiTV) project at the Digital Media Institute (DMI) Tampere (www.futureinteraction.tv).

  12. Metazen – metadata capture for metagenomes

    PubMed Central

    2014-01-01

    Background As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusions Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility. PMID:25780508

  13. Metazen - metadata capture for metagenomes.

    PubMed

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; Glass, Elizabeth; Wilke, Andreas; Meyer, Folker

    2014-01-01

    As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. Unfortunately, these tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  14. Improving Access to NASA Earth Science Data through Collaborative Metadata Curation

    NASA Astrophysics Data System (ADS)

    Sisco, A. W.; Bugbee, K.; Shum, D.; Baynes, K.; Dixon, V.; Ramachandran, R.

    2017-12-01

    The NASA-developed Common Metadata Repository (CMR) is a high-performance metadata system that currently catalogs over 375 million Earth science metadata records. It serves as the authoritative metadata management system of NASA's Earth Observing System Data and Information System (EOSDIS), enabling NASA Earth science data to be discovered and accessed by a worldwide user community. The size of the EOSDIS data archive is steadily increasing, and the ability to manage and query this archive depends on the input of high quality metadata to the CMR. Metadata that does not provide adequate descriptive information diminishes the CMR's ability to effectively find and serve data to users. To address this issue, an innovative and collaborative review process is underway to systematically improve the completeness, consistency, and accuracy of metadata for approximately 7,000 data sets archived by NASA's twelve EOSDIS data centers, or Distributed Active Archive Centers (DAACs). The process involves automated and manual metadata assessment of both collection and granule records by a team of Earth science data specialists at NASA Marshall Space Flight Center. The team communicates results to DAAC personnel, who then make revisions and reingest improved metadata into the CMR. Implementation of this process relies on a network of interdisciplinary collaborators leveraging a variety of communication platforms and long-range planning strategies. Curating metadata at this scale and resolving metadata issues through community consensus improves the CMR's ability to serve current and future users and also introduces best practices for stewarding the next generation of Earth Observing System data. This presentation will detail the metadata curation process, its outcomes thus far, and also share the status of ongoing curation activities.

  15. CMR Metadata Curation

    NASA Technical Reports Server (NTRS)

    Shum, Dana; Bugbee, Kaylin

    2017-01-01

    This talk explains the ongoing metadata curation activities in the Common Metadata Repository. It explores tools that exist today which are useful for building quality metadata and also opens up the floor for discussions on other potentially useful tools.

  16. Could solutions low in glucose degradation products preserve residual renal function in incident peritoneal dialysis patients? A 1-year multicenter prospective randomized controlled trial (Balnet Study).

    PubMed

    Kim, Sung Gyun; Kim, Sejoong; Hwang, Young-Hwan; Kim, Kiwon; Oh, Ji Eun; Chung, Wookyung; Oh, Kook-Hwan; Kim, Hyung Jik; Ahn, Curie

    2008-06-01

    In vitro studies of peritoneal dialysis (PD) solutions demonstrated that a lactate-buffered fluid with neutral pH and low glucose degradation products (LF) has better biocompatibility than a conventional acidic lactate-buffered fluid (CF). However, few clinical trials have evaluated the long-term benefit of the biocompatible solution on residual renal function (RRF). To compare LF with CF, we performed a prospective, randomized study with patients starting PD. After a 1-month run-in period, 91 new PD patients were randomized for 12 months of treatment with either LF (Balance: Fresenius Medical Care, Bad Homburg, Germany; n = 48) or CF (Stay Safe: Fresenius; n = 43). We measured RRF, acid-base balance, peritoneal equilibration test, and adequacy of dialysis every 6 months after the run-in period. After 12 months of treatment, the residual glomerular filtration rate (GFR) in patients using LF tended to be higher than that of patients on CF (p = 0.057 by repeated-measures analysis of variance). We observed a significant difference in the changes of residual GFR between the two groups (p = 0.009), a difference that was especially marked in the subgroup whose baseline residual GFR was more than 2 mL/min/1.73 m2. In addition, serum total CO2 levels were higher (p = 0.001) and the serum anion gap was lower (p = 0.019) in the LF group. We observed no differences between groups for Kt/V, C-reactive protein, or normalized protein equivalent of nitrogen appearance. In incident PD patients with significant residual GFR, LF may better preserve RRF over a 12-month treatment period. Additionally, pH-neutral PD fluid may improve acid-base balance as compared with CF.

  17. Single-cell high resolution melting analysis: A novel, generic, pre-implantation genetic diagnosis (PGD) method applied to cystic fibrosis (HRMA CF-PGD).

    PubMed

    Destouni, A; Poulou, M; Kakourou, G; Vrettou, C; Tzetis, M; Traeger-Synodinos, J; Kitsiou-Tzeli, S

    2016-03-01

    Institutions offering CF-PGD face the challenge of developing and optimizing single-cell genotyping protocols that must cover the extremely heterogeneous CF mutation spectrum. Here we report the development and successful clinical application of a generic CF-PGD protocol that facilitates direct detection of any CFTR nucleotide variation(s) by HRMA and simultaneous confirmation of diagnosis through haplotype analysis. A multiplex PCR was optimized supporting co-amplification of any CFTR exon region, along with 6 closely linked STRs. Single-cell genotypes were established through HRM analysis following melting of the 2nd-round PCR products and were confirmed by STR haplotype analysis of the 1st-round PCR products. The protocol was validated pre-clinically by testing 208 single lymphocytes isolated from whole blood samples from 4 validation family trios. Fifteen PGD cycles were performed and 103 embryos were biopsied. In the 15 clinical PGD cycles, genotypes were achieved in 88/93 (94.6%) embryo biopsy samples, of which 57/88 (64.8%) were deemed genetically suitable for embryo transfer. Amplification failed at all loci for 10/103 blastomeres biopsied from poor quality embryos. Six clinical pregnancies were achieved (2 twin, 4 singletons). PGD genotypes were confirmed following conventional amniocentesis or chorionic villus sampling in all achieved pregnancies. The single-cell HRMA CF-PGD protocol described herein is a flexible, generic, low-cost and robust genotyping method which facilitates the analysis of any CFTR genotype combination. Single-cell HRMA can also be beneficial in other clinical settings, for example the detection of single nucleotide variants in single cells derived from clinical tumor samples.

  18. The Effect of Comparatively-Framed versus Similarity-Framed E-Cigarette and Snus Print Ads on Young Adults’ Ad and Product Perceptions

    PubMed Central

    Banerjee, Smita C.; Greene, Kathryn; Li, Yuelin; Ostroff, Jamie S.

    2016-01-01

    Objectives This study examined the effects of comparative-framing [C-F; ads highlighting differences between the advertised product and conventional cigarettes and/or smokeless tobacco products] versus similarity-framing (S-F; ads highlighting congruence with conventional cigarettes and/or smokeless tobacco products) in e-cigarette and snus ads on young adult smokers' and non-smokers' ad- and product-related perceptions. Methods One thousand fifty-one (1,051) young adults (18–24 years; 76% women; 50% smokers) from existing consumer panels were recruited in a within-subjects quasi-experiment. Each participant viewed 4 online advertisements, varied by tobacco product type (e-cigarette or snus) and ad framing (C-F or S-F). The dependent measures for this study were ad-related (ad perceptions, ad credibility) and product-related perceptions (absolute and comparative risk perceptions, product appeal, and product use intentions). Results Former and current smokers rated C-F ads as more persuasive than S-F ads, as evidenced by favorable ad perceptions and high product use intentions. Former and current smokers also rated e-cigarette ads with more favorable ad perceptions, low absolute and comparative risk perceptions, high product appeal, and high product use intentions as compared to snus ads. However, the effect sizes of the significant differences are less than 0.2, indicating a small magnitude of difference between the study variables. Conclusions Unless the FDA regulates e-cig and snus advertising, there is a potential of decreasing risk perceptions and increasing use of e-cigs among young adults. Further research on implicit/explicit comparative claims in e-cigarette and snus advertisements that encourage risk misperceptions is recommended. PMID:28042597

  19. The Effect of Comparatively-Framed versus Similarity-Framed E-Cigarette and Snus Print Ads on Young Adults' Ad and Product Perceptions.

    PubMed

    Banerjee, Smita C; Greene, Kathryn; Li, Yuelin; Ostroff, Jamie S

    2016-07-01

    This study examined the effects of comparative-framing [C-F; ads highlighting differences between the advertised product and conventional cigarettes and/or smokeless tobacco products] versus similarity-framing (S-F; ads highlighting congruence with conventional cigarettes and/or smokeless tobacco products) in e-cigarette and snus ads on young adult smokers' and non-smokers' ad- and product-related perceptions. One thousand fifty-one (1,051) young adults (18-24 years; 76% women; 50% smokers) from existing consumer panels were recruited in a within-subjects quasi-experiment. Each participant viewed 4 online advertisements, varied by tobacco product type (e-cigarette or snus) and ad framing (C-F or S-F). The dependent measures for this study were ad-related (ad perceptions, ad credibility) and product-related perceptions (absolute and comparative risk perceptions, product appeal, and product use intentions). Former and current smokers rated C-F ads as more persuasive than S-F ads, as evidenced by favorable ad perceptions and high product use intentions. Former and current smokers also rated e-cigarette ads with more favorable ad perceptions, low absolute and comparative risk perceptions, high product appeal, and high product use intentions as compared to snus ads. However, the effect sizes of the significant differences are less than 0.2, indicating a small magnitude of difference between the study variables. Unless the FDA regulates e-cig and snus advertising, there is a potential of decreasing risk perceptions and increasing use of e-cigs among young adults. Further research on implicit/explicit comparative claims in e-cigarette and snus advertisements that encourage risk misperceptions is recommended.

  20. Statistical characteristics of climbing fiber spikes necessary for efficient cerebellar learning.

    PubMed

    Kuroda, S; Yamamoto, K; Miyamoto, H; Doya, K; Kawato, M

    2001-03-01

    Mean firing rates (MFRs), with analogue values, have thus far been used as information carriers of neurons in most brain theories of learning. However, neurons transmit signals by spikes, which are discrete events. The climbing fibers (CFs), which are known to be essential for cerebellar motor learning, fire at ultra-low rates (around 1 Hz), and it is not yet understood theoretically how high-frequency information can be conveyed and how learning of smooth and fast movements can be achieved. Here we address whether cerebellar learning can be achieved by CF spikes instead of the conventional MFR in an eye movement task, such as the ocular following response (OFR), and in an arm movement task. There are two major afferents into cerebellar Purkinje cells: the parallel fiber (PF) and the CF, and the synaptic weights between PFs and Purkinje cells have been shown to be modulated by the stimulation of both types of fiber. The modulation of the synaptic weights is regulated by cerebellar synaptic plasticity. In this study we simulated cerebellar learning using CF signals as spikes instead of the conventional MFR. To generate the spikes we used the following four spike generation models: (1) a Poisson model, in which the spike interval probability follows a Poisson distribution; (2) a gamma model, in which the spike interval probability follows the gamma distribution; (3) a max model, in which a spike is generated when a synaptic input reaches its maximum; and (4) a threshold model, in which a spike is generated when the input crosses a certain small threshold. We found that, in an OFR task with a constant visual velocity, learning was successful with the stochastic models, such as the Poisson and gamma models, but not with the deterministic models, such as the max and threshold models. In an OFR task with a stepwise velocity change and in an arm movement task, learning could be achieved only in the Poisson model. In addition, for efficient cerebellar learning, the distribution of CF spike-occurrence times after stimulus onset must capture at least the first, second and third moments of the temporal distribution of error signals.
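
    The difference between the stochastic spike generation models is easiest to see in code. The sketch below generates spike trains at a CF-like mean rate of about 1 Hz, one train with exponential inter-spike intervals (a homogeneous Poisson process) and one with gamma-distributed intervals; it is only an illustration of the two stochastic models, not the authors' simulation code.

      # Sketch: Poisson and gamma spike trains at a CF-like mean rate (~1 Hz).
      import numpy as np

      rng = np.random.default_rng(0)

      def poisson_spikes(rate_hz, duration_s):
          """Spike times from a homogeneous Poisson process (exponential ISIs)."""
          isis = rng.exponential(1.0 / rate_hz, size=int(3 * rate_hz * duration_s) + 10)
          t = np.cumsum(isis)
          return t[t < duration_s]

      def gamma_spikes(rate_hz, duration_s, shape=2.0):
          """Spike times with gamma-distributed ISIs (same mean rate, more regular)."""
          isis = rng.gamma(shape, 1.0 / (rate_hz * shape),
                           size=int(3 * rate_hz * duration_s) + 10)
          t = np.cumsum(isis)
          return t[t < duration_s]

      print(len(poisson_spikes(1.0, 600)), len(gamma_spikes(1.0, 600)))  # ~600 spikes each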

  1. Development of ITSASGIS-5D: seeking interoperability between Marine GIS layers and scientific multidimensional data using open source tools and OGC services for multidisciplinary research.

    NASA Astrophysics Data System (ADS)

    Sagarminaga, Y.; Galparsoro, I.; Reig, R.; Sánchez, J. A.

    2012-04-01

    Since 2000, an intense effort has been made in AZTI's Marine Research Division to set up a data management system that could gather all the marine datasets being produced by different in-house research projects. For that purpose, a corporate GIS was designed that included a data and metadata repository, a database, a layer catalog & search application and an internet map viewer. Several layers, mostly dealing with physical, chemical and biological in-situ sampling, and basic and thematic cartography including bathymetry, geomorphology, different species habitat maps, and human pressure and activity maps, were successfully gathered in this system. Very soon, it was realised that new marine technologies yielding continuous multidimensional data, sometimes called FES (Fluid Earth System) data, were difficult to handle in this structure. The data affected mainly included numerical oceanographic and meteorological models, remote sensing data, coastal RADAR data, and some in-situ observational systems such as CTD casts and moored or Lagrangian buoys. A management system for gridded multidimensional data was developed using standardized formats (netCDF using the CF conventions) and tools such as the THREDDS catalog (UNIDATA/UCAR), providing web services such as OPeNDAP, NCSS, and WCS, as well as the ncWMS service developed by the Reading e-Science Centre. At present, a system (ITSASGIS-5D) is being developed, based on OGC standards and open-source tools, to allow interoperability between all the data types mentioned before. This system includes, on the server side, PostgreSQL/PostGIS databases and GeoServer for GIS layers, and THREDDS/OPeNDAP and ncWMS services for FES gridded data. Moreover, an on-line client is being developed to allow joint access, user configuration, data visualisation & query and data distribution. This client uses the MapFish, ExtJS - GeoExt, and OpenLayers libraries. Through this presentation the elements of the first released version of this system will be described and demonstrated, together with the new topics to be developed in future versions, which include, among others, the integration of GeoNetwork libraries and tools for both FES and GIS metadata management, and the use of the new OGC Sensor Observation Service (SOS) to integrate non-gridded multidimensional data such as time series, depth profiles or trajectories provided by different observational systems. The final aim of this approach is to contribute to the multidisciplinary access and use of marine data for management and research activities, and to facilitate the implementation of integrated ecosystem-based approaches in the fields of fisheries advice and management, marine spatial planning, or the implementation of European policies such as the Water Framework Directive, the Marine Strategy Framework Directive or the Habitat Framework Directive.
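
    Once gridded FES data are exposed through a THREDDS catalog, they can be read remotely without downloading whole files. The sketch below opens a hypothetical OPeNDAP endpoint with the xarray library; the URL and variable name are placeholders, not AZTI's actual services.

      # Sketch: lazy, server-side access to a CF-convention netCDF dataset served
      # over OPeNDAP by a THREDDS server (URL and names are placeholders).
      import xarray as xr

      url = "http://example.org/thredds/dodsC/ocean/model_forecast.nc"
      ds = xr.open_dataset(url)                     # CF metadata let xarray decode coords
      sst = ds["sea_surface_temperature"].sel(
          time="2012-04-01", method="nearest")      # only this slice is transferred
      print(float(sst.mean()))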

  2. Linking netCDF Data with the Semantic Web - Enhancing Data Discovery Across Domains

    NASA Astrophysics Data System (ADS)

    Biard, J. C.; Yu, J.; Hedley, M.; Cox, S. J. D.; Leadbetter, A.; Car, N. J.; Druken, K. A.; Nativi, S.; Davis, E.

    2016-12-01

    Geophysical data communities are publishing large quantities of data across a wide variety of scientific domains which overlap more and more. Whilst netCDF is a common format for many of these communities, it is only one of a large number of data storage and transfer formats. One of the major challenges ahead is finding ways to leverage these diverse data sets to advance our understanding of complex problems. We describe a methodology, called netCDF-LD (netCDF Linked Data), for incorporating Resource Description Framework (RDF) triples into netCDF files. NetCDF-LD explicitly connects the contents of netCDF files - both data and metadata - with external web-based resources, including vocabularies, standards definitions, and data collections, and through them, a whole host of related information. This approach also preserves and enhances the self-describing essence of the netCDF format and its metadata, whilst addressing the challenge of integrating various conventions into files. We present a case study illustrating how reasoning over RDF graphs can empower researchers to discover datasets across domain boundaries.
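
    The flavour of the approach can be sketched with the rdflib library: metadata attached to a netCDF variable becomes RDF triples whose objects are web-resolvable URIs. The namespaces and property choices below are illustrative only; they are not the actual netCDF-LD encoding rules.

      # Sketch: express a variable's metadata as RDF triples that point to
      # external web resources (URIs below are illustrative, not the real spec).
      from rdflib import Graph, Literal, Namespace, URIRef

      DATA = Namespace("http://example.org/dataset/tas_day_model1/")
      DCT = Namespace("http://purl.org/dc/terms/")

      g = Graph()
      var = DATA["tas"]
      g.add((var, DCT["conformsTo"],
             URIRef("http://example.org/vocab/standard_name/air_temperature")))
      g.add((var, DCT["description"], Literal("near-surface air temperature")))
      print(g.serialize(format="turtle"))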

  3. On-the-fly form generation and on-line metadata configuration--a clinical data management Web infrastructure in Java.

    PubMed

    Beck, Peter; Truskaller, Thomas; Rakovac, Ivo; Cadonna, Bruno; Pieber, Thomas R

    2006-01-01

    In this paper we describe an approach to building a web-based clinical data management infrastructure on top of an entity-attribute-value (EAV) database which provides for flexible definition and extension of clinical data sets as well as efficient data handling and high-performance query execution. A "mixed" EAV implementation provides a flexible and configurable data repository and at the same time utilizes the performance advantages of conventional database tables for rarely changing data structures. A dynamically configurable data dictionary contains further information for data validation. The online user interface can also be assembled dynamically. A data transfer object which encapsulates data together with all required metadata is populated by the backend and used directly to dynamically render frontend forms and handle incoming data. The "mixed" EAV model enables flexible definition and modification of clinical data sets while reducing the performance drawbacks of pure EAV implementations to a minimum. The system is currently in use in an electronic patient record with a focus on flexibility, and in a quality management application (www.healthgate.at) with high performance requirements.
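
    A very small sketch of the "mixed" idea: stable core fields live in a conventional table, while freely extensible clinical attributes go into an entity-attribute-value table and are pivoted back into a record for a dynamically generated form. The table and column names are invented for illustration, not taken from the system described above.

      # Sketch: a conventional table for stable data plus an EAV table for
      # extensible attributes, pivoted back into one record (names illustrative).
      import sqlite3

      con = sqlite3.connect(":memory:")
      cur = con.cursor()
      cur.execute("CREATE TABLE patient (id INTEGER PRIMARY KEY, code TEXT, birth_year INTEGER)")
      cur.execute("""CREATE TABLE observation (
          patient_id INTEGER, attribute TEXT, value TEXT,
          FOREIGN KEY (patient_id) REFERENCES patient(id))""")

      cur.execute("INSERT INTO patient VALUES (1, 'P-001', 1970)")
      cur.executemany("INSERT INTO observation VALUES (?, ?, ?)",
                      [(1, "hba1c_percent", "6.8"), (1, "smoking_status", "former")])

      # Pivot the EAV rows back into a record for display in a generated form.
      cur.execute("SELECT attribute, value FROM observation WHERE patient_id = ?", (1,))
      print(dict(cur.fetchall()))
      con.close()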

  4. Creating Access Points to Instrument-Based Atmospheric Data: Perspectives from the ARM Metadata Manager

    NASA Astrophysics Data System (ADS)

    Troyan, D.

    2016-12-01

    The Atmospheric Radiation Measurement (ARM) program has been collecting data from instruments in diverse climate regions for nearly twenty-five years. These data are made available to all interested parties at no cost via specially designed tools found on the ARM website (www.arm.gov). Metadata is created and applied to the various datastreams to facilitate information retrieval using the ARM website, the ARM Data Discovery Tool, and data quality reporting tools. Over the last year, the Metadata Manager - a relatively new position within the ARM program - created two documents that summarize the state of ARM metadata processes: ARM Metadata Workflow, and ARM Metadata Standards. These documents serve as guides to the creation and management of ARM metadata. With many of ARM's data functions spread around the Department of Energy national laboratory complex and with many of the original architects of the metadata structure no longer working for ARM, there is increased importance on using these documents to resolve issues from data flow bottlenecks and inaccurate metadata to improving data discovery and organizing web pages. This presentation will provide some examples from the workflow and standards documents. The examples will illustrate the complexity of the ARM metadata processes and the efficiency by which the metadata team works towards achieving the goal of providing access to data collected under the auspices of the ARM program.

  5. Efficient processing of MPEG-21 metadata in the binary domain

    NASA Astrophysics Data System (ADS)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across different communities, and plenty of commercial and open-source tools for processing and transforming it are available on the market. However, all of these tools have one thing in common: they operate on plain-text-encoded metadata, which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such metadata, encoded using MPEG's Binary Format for Metadata (BiM), without additional en-/decoding overhead, i.e., within the binary domain. To this end, we have developed an event-based push parser for BiM-encoded metadata which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.
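
    To make the binary-domain idea concrete, the sketch below implements a toy event-based push parser over a tag-length-value byte stream, with a transformation rule keyed on numeric tag codes rather than string comparisons. This is not MPEG's BiM format; the tag codes and layout are invented for the example.

```python
# Toy illustration of transforming binary-encoded metadata without decoding it
# to text: an event-based push parser over a tag-length-value stream whose
# rules are keyed on numeric tag codes. Not the actual BiM format.
import struct
from io import BytesIO

TAG_TITLE, TAG_DURATION = 0x01, 0x02

def encode(fields):
    """fields: list of (tag, payload bytes) -> binary stream."""
    out = bytearray()
    for tag, payload in fields:
        out += struct.pack(">BH", tag, len(payload)) + payload
    return bytes(out)

def push_parse(stream, handlers):
    """Fire one event per element; handlers may rewrite or drop the element."""
    out = bytearray()
    while True:
        header = stream.read(3)
        if not header:
            break
        tag, length = struct.unpack(">BH", header)
        payload = stream.read(length)
        tag, payload = handlers.get(tag, lambda t, p: (t, p))(tag, payload)
        if tag is not None:                       # None = drop the element
            out += struct.pack(">BH", tag, len(payload)) + payload
    return bytes(out)

# Transformation rule operating on the binary payload: halve durations.
def halve_duration(tag, payload):
    (seconds,) = struct.unpack(">I", payload)
    return tag, struct.pack(">I", seconds // 2)

binary = encode([(TAG_TITLE, b"clip-42"), (TAG_DURATION, struct.pack(">I", 120))])
transformed = push_parse(BytesIO(binary), {TAG_DURATION: halve_duration})
print(transformed.hex())
```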

  6. A model for enhancing Internet medical document retrieval with "medical core metadata".

    PubMed

    Malet, G; Munoz, F; Appleyard, R; Hersh, W

    1999-01-01

    Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and MEDLINE-type content descriptions. The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines.
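
    A hedged sketch of what embedding such a metadata set in a web document could look like, using Dublin Core element names plus a MeSH subject heading; the specific elements and values are illustrative rather than the authors' exact tag set.

```python
# Generate Dublin Core style <meta> tags for a medical document, with a MeSH
# subject heading. Element choices and values are illustrative assumptions.
from xml.sax.saxutils import quoteattr

record = {
    "DC.title": "Management of Type 2 Diabetes",
    "DC.creator": "Example Author, MD",
    "DC.subject": "Diabetes Mellitus, Type 2",      # MeSH heading
    "DC.type": "clinical practice guideline",
    "DC.date": "1999-01-01",
}

meta_tags = "\n".join(
    f'<meta name={quoteattr(name)} content={quoteattr(value)} scheme="MeSH">'
    if name == "DC.subject"
    else f'<meta name={quoteattr(name)} content={quoteattr(value)}>'
    for name, value in record.items()
)
print(meta_tags)   # paste into the <head> of an HTML document for indexing
```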

  7. Taming Big Data Variety in the Earth Observing System Data and Information System

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Walter, Jeff

    2015-01-01

    Although the volume of the remote sensing data managed by the Earth Observing System Data and Information System is formidable, an oft-overlooked challenge is the variety of data. The diversity in satellite instruments, science disciplines and user communities drives cost as much as or more than the data volume. Several strategies are used to tame this variety: data allocation to distinct centers of expertise; a common metadata repository for discovery; data format standards and conventions; and services that further abstract the variations in data.

  8. The New Online Metadata Editor for Generating Structured Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Shrestha, B.; Palanisamy, G.; Hook, L.; Killeffer, T.; Boden, T.; Cook, R. B.; Zolly, L.; Hutchison, V.; Frame, M. T.; Cialella, A. T.; Lazer, K.

    2014-12-01

    Nobody is better suited to "describe" data than the scientist who created it. This "description" of the data is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes the metadata portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about the research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [1] [2]. Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. How is OME helping big data centers like the ORNL DAAC? The ORNL DAAC is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers managed by the ESDIS Project. The ORNL DAAC archives data produced by NASA's Terrestrial Ecology Program. The DAAC provides data and information relevant to biogeochemical dynamics, ecological data, and environmental processes, critical for understanding the dynamics relating to the biological components of the Earth's environment. The data produced, archived, and analyzed are typically at the scale of multiple petabytes, which makes discoverability of the data very challenging. Without proper metadata associated with the data, it is difficult to find the data you are looking for and equally difficult to use and understand the data. OME will allow data centers like the ORNL DAAC to produce meaningful, high-quality, standards-based, descriptive information about their data products, in turn helping with data discoverability and interoperability. References: [1] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94. [2] Wilson, Bruce E., et al. "Mercury Toolset for Spatiotemporal Metadata." NASA Technical Reports Server (NTRS) (2010).
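
    The sketch below shows the kind of structured XML record a web form such as OME might emit, covering project, parameter, time period, and location; the element names are illustrative, not the actual OME or FGDC schema.

```python
# Build a small XML metadata record of the kind a metadata-editor form might
# produce. Element names and values are illustrative assumptions.
import xml.etree.ElementTree as ET

def build_metadata_record(project, parameter, start, end, bbox):
    root = ET.Element("metadata")
    ET.SubElement(root, "project").text = project
    ET.SubElement(root, "parameter").text = parameter
    period = ET.SubElement(root, "time_period")
    ET.SubElement(period, "start").text = start
    ET.SubElement(period, "end").text = end
    loc = ET.SubElement(root, "bounding_box")
    for name, value in zip(("west", "east", "south", "north"), bbox):
        ET.SubElement(loc, name).text = str(value)
    return ET.tostring(root, encoding="unicode")

print(build_metadata_record(
    "Example Terrestrial Ecology Study", "soil_respiration",
    "2012-01-01", "2013-12-31", (-105.1, -104.8, 39.9, 40.2)))
```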

  9. Metadata Wizard: an easy-to-use tool for creating FGDC-CSDGM metadata for geospatial datasets in ESRI ArcGIS Desktop

    USGS Publications Warehouse

    Ignizio, Drew A.; O'Donnell, Michael S.; Talbert, Colin B.

    2014-01-01

    Creating compliant metadata for scientific data products is mandated for all federal Geographic Information Systems professionals and is a best practice for members of the geospatial data community. However, the complexity of the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, the limited availability of easy-to-use tools, and recent changes in the ESRI software environment continue to make metadata creation a challenge. Staff at the U.S. Geological Survey Fort Collins Science Center have developed a Python toolbox for ESRI ArcDesktop to facilitate a semi-automated workflow to create and update metadata records in ESRI's 10.x software. The U.S. Geological Survey Metadata Wizard tool automatically populates several metadata elements: the spatial reference, spatial extent, geospatial presentation format, vector feature count or raster column/row count, native system/processing environment, and the metadata creation date. Once the software auto-populates these elements, users can easily add attribute definitions and other relevant information in a simple Graphical User Interface. The tool, which offers a simple design free of esoteric metadata language, has the potential to save many government and non-government organizations a significant amount of time and cost by facilitating the development of metadata compliant with the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata for ESRI software users. A working version of the tool is now available for ESRI ArcDesktop, versions 10.0, 10.1, and 10.2 (downloadable at http://www.sciencebase.gov/metadatawizard).

  10. The Ophidia framework: toward cloud-based data analytics for climate change

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni

    2015-04-01

    The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model, and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), and large-scale data analytics on CMIP5 data in NetCDF format, compliant with the Climate and Forecast (CF) convention (ExArch). Two use cases regarding the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects will be presented during the talk. In the former case (EUBrazilCC) the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment to study how climate change affects biodiversity. In the latter (OFIDIA) the data analytics framework is being exploited to provide operational support for processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among other things, parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. Moreover, a cloud-based release tested with OpenNebula is also available and running in the private cloud infrastructure of the CMCC Supercomputing Centre.

  11. Improving Scientific Metadata Interoperability And Data Discoverability using OAI-PMH

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James M.; Wilson, Bruce E.

    2010-12-01

    While general-purpose search engines (such as Google or Bing) are useful for finding many things on the Internet, they are often of limited usefulness for locating Earth Science data relevant (for example) to a specific spatiotemporal extent. By contrast, tools that search repositories of structured metadata can locate relevant datasets with fairly high precision, but the search is limited to that particular repository. Federated searches (such as Z39.50) have been used, but can be slow and the comprehensiveness can be limited by downtime in any search partner. An alternative approach to improve comprehensiveness is for a repository to harvest metadata from other repositories, possibly with limits based on subject matter or access permissions. Searches through harvested metadata can be extremely responsive, and the search tool can be customized with semantic augmentation appropriate to the community of practice being served. However, there are a number of different protocols for harvesting metadata, with some challenges for ensuring that updates are propagated and for collaborations with repositories using differing metadata standards. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a standard that is seeing increased use as a means for exchanging structured metadata. OAI-PMH implementations must support Dublin Core as a metadata standard, with other metadata formats as optional. We have developed tools which enable our structured search tool (Mercury; http://mercury.ornl.gov) to consume metadata from OAI-PMH services in any of the metadata formats we support (Dublin Core, Darwin Core, FGDC CSDGM, GCMD DIF, EML, and ISO 19115/19137). We are also making ORNL DAAC metadata available through OAI-PMH for other metadata tools to utilize, such as the NASA Global Change Master Directory (GCMD). This paper describes Mercury capabilities with multiple metadata formats, in general, and, more specifically, the results of our OAI-PMH implementations and the lessons learned. References: [1] R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. [2] R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics DOI: 10.1007/s12145-010-0073-0, (2010). [3] Devarakonda, R.; Palanisamy, G.; Green, J.; Wilson, B. E. "Mercury: An Example of Effective Software Reuse for Metadata Management Data Discovery and Access", Eos Trans. AGU, 89(53), Fall Meet. Suppl., IN11A-1019 (2008).
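
    As a minimal sketch of the harvesting pattern described here, the following code pages through an OAI-PMH ListRecords response with resumption tokens and pulls out Dublin Core titles. The endpoint URL is a placeholder; the verbs, the oai_dc prefix, and the namespaces come from the OAI-PMH protocol itself.

```python
# Harvest Dublin Core records over OAI-PMH with resumption-token paging.
# The endpoint is a placeholder; only standard protocol elements are used.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url, metadata_prefix="oai_dc"):
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = base_url + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            tree = ET.fromstring(resp.read())
        for record in tree.iter(OAI + "record"):
            titles = [t.text for t in record.iter(DC + "title")]
            yield titles[0] if titles else None
        token = tree.find(f".//{OAI}resumptionToken")
        if token is None or not (token.text or "").strip():
            break                              # no more pages
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# Example (placeholder endpoint):
# for title in harvest("https://example.org/oai"):
#     print(title)
```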

  12. Towards the Goal of Modular Climate Data Services: An Overview of NCPP Applications and Software

    NASA Astrophysics Data System (ADS)

    Koziol, B. W.; Cinquini, L.; Treshansky, A.; Murphy, S.; DeLuca, C.

    2013-12-01

    In August 2013, the National Climate Predictions and Projections Platform (NCPP) organized a workshop focusing on the quantitative evaluation of downscaled climate data products (QED-2013). The QED-2013 workshop focused on real-world application problems drawn from several sectors (e.g. hydrology, ecology, environmental health, agriculture), and required that downscaled data products be dynamically accessed, generated, manipulated, annotated, and evaluated. The cyberinfrastructure elements that were integrated to support the workshop included (1) a wiki-based project hosting environment (Earth System CoG) with an interface to data services provided by an Earth System Grid Federation (ESGF) data node; (2) metadata tools provided by the Earth System Documentation (ES-DOC) collaboration; and (3) a Python-based library, OpenClimateGIS (OCGIS), for subsetting and converting NetCDF-based climate data to GIS and tabular formats. Collectively, this toolset represents a first deployment of a 'ClimateTranslator' that enables users to access, interpret, and apply climate information at local and regional scales. This presentation will provide an overview of these components, how they were used in the workshop, and a discussion of current and potential integration. The long-term strategy for this software stack is to offer the suite of services described on a customizable, per-project basis. Additional detail on the three components is below. (1) Earth System CoG is a web-based collaboration environment that integrates data discovery and access services with tools for supporting governance and the organization of information. QED-2013 utilized these capabilities to share with workshop participants a suite of downscaled datasets, associated images derived from those datasets, and metadata files describing the downscaling techniques involved. The collaboration side of CoG was used for workshop organization, discussion, and results. (2) The ES-DOC Questionnaire, Viewer, and Comparator are web-based tools for the creation and use of model and experiment documentation. Workshop participants used the Questionnaire to generate metadata on regional downscaling models and statistical downscaling methods, and the Viewer to display the results. A prototype Comparator was available to compare properties across dynamically downscaled models. (3) OCGIS is a Python (v2.7) package designed for geospatial manipulation, subsetting, computation, and translation of Climate and Forecast (CF)-compliant climate datasets - either stored in local NetCDF files, or files served through THREDDS data servers.
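
    A rough sketch of the kind of spatiotemporal subsetting OCGIS performs on CF-compliant NetCDF, written here with plain netCDF4 and numpy rather than the OCGIS API; the file name, variable names, and bounding box are hypothetical.

```python
# Subset a CF-style NetCDF variable by a lon/lat bounding box. Illustrative
# only; assumes a (time, lat, lon) variable and 1-D coordinate variables.
import numpy as np
from netCDF4 import Dataset, num2date

def subset(path, var_name, lon_bounds, lat_bounds):
    with Dataset(path) as nc:
        lon = nc.variables["lon"][:]
        lat = nc.variables["lat"][:]
        lon_idx = np.where((lon >= lon_bounds[0]) & (lon <= lon_bounds[1]))[0]
        lat_idx = np.where((lat >= lat_bounds[0]) & (lat <= lat_bounds[1]))[0]
        var = nc.variables[var_name]
        data = var[:, lat_idx.min():lat_idx.max() + 1,
                      lon_idx.min():lon_idx.max() + 1]   # time, lat, lon
        time_var = nc.variables["time"]
        times = num2date(time_var[:], time_var.units)
    return times, data

# Hypothetical usage:
# times, tas = subset("downscaled_tas.nc", "tas",
#                     lon_bounds=(-110, -100), lat_bounds=(35, 45))
```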

  13. An Approach to Information Management for AIR7000 with Metadata and Ontologies

    DTIC Science & Technology

    2009-10-01

    metadata. We then propose an approach based on Semantic Technologies including the Resource Description Framework (RDF) and Upper Ontologies, for the...mandating specific metadata schemas can result in interoperability problems. For example, many standards within the ADO mandate the use of XML for metadata...such problems, we propose an architecture in which different metadata schemes can interoperate. By using RDF (Resource Description Framework) as a

  14. Making Interoperability Easier with NASA's Metadata Management Tool (MMT)

    NASA Technical Reports Server (NTRS)

    Shum, Dana; Reese, Mark; Pilone, Dan; Baynes, Katie

    2016-01-01

    While the ISO-19115 collection-level metadata format meets many users' needs for interoperable metadata, it can be cumbersome to create correctly. Through the MMT's simple UI experience, metadata curators can create and edit collections which are compliant with ISO-19115 without full knowledge of the NASA Best Practices implementation of the ISO-19115 format. Users are guided through the metadata creation process by a forms-based editor, complete with field information, validation hints and picklists. Once a record is completed, users can download the metadata in any of the supported formats with just two clicks.

  15. Predicting structured metadata from unstructured metadata.

    PubMed

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data, or data about the data, defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving the quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation (LDA) model to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms are most accurately predicted using the TF-IDF approach, followed by LDA, with both outperforming the majority-vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. © The Author(s) 2016. Published by Oxford University Press.
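
    A minimal scikit-learn sketch of the two approaches described above, assuming toy records and labels: predicting a structured element from free-text metadata with (a) TF-IDF features and (b) LDA topics as a lower-dimensional representation.

```python
# Predict a structured metadata element from unstructured metadata text using
# (a) TF-IDF features and (b) LDA topic features. Toy records are invented.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "total RNA extracted from liver tissue hybridized to expression array",
    "genomic DNA from tumor biopsy sequenced on high throughput platform",
    "RNA from cultured cells profiled on expression microarray",
    "DNA methylation assayed from blood samples",
]
labels = ["RNA", "DNA", "RNA", "DNA"]        # structured element: molecule type

# (a) TF-IDF features + linear classifier
tfidf_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
tfidf_clf.fit(texts, labels)

# (b) LDA topics as a lower-dimensional representation, then the same classifier
lda_clf = make_pipeline(CountVectorizer(),
                        LatentDirichletAllocation(n_components=2, random_state=0),
                        LogisticRegression(max_iter=1000))
lda_clf.fit(texts, labels)

print(tfidf_clf.predict(["RNA from kidney tissue on expression array"]))
print(lda_clf.predict(["genomic DNA sequencing of tumor sample"]))
```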

  16. Metazen – metadata capture for metagenomes

    DOE PAGES

    Bischof, Jared; Harrison, Travis; Paczian, Tobias; ...

    2014-12-08

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  17. Metazen – metadata capture for metagenomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bischof, Jared; Harrison, Travis; Paczian, Tobias

    Background: As the impact and prevalence of large-scale metagenomic surveys grow, so does the acute need for more complete and standards compliant metadata. Metadata (data describing data) provides an essential complement to experimental data, helping to answer questions about its source, mode of collection, and reliability. Metadata collection and interpretation have become vital to the genomics and metagenomics communities, but considerable challenges remain, including exchange, curation, and distribution. Currently, tools are available for capturing basic field metadata during sampling, and for storing, updating and viewing it. These tools are not specifically designed for metagenomic surveys; in particular, they lack the appropriate metadata collection templates, a centralized storage repository, and a unique ID linking system that can be used to easily port complete and compatible metagenomic metadata into widely used assembly and sequence analysis tools. Results: Metazen was developed as a comprehensive framework designed to enable metadata capture for metagenomic sequencing projects. Specifically, Metazen provides a rapid, easy-to-use portal to encourage early deposition of project and sample metadata. Conclusion: Metazen is an interactive tool that aids users in recording their metadata in a complete and valid format. A defined set of mandatory fields captures vital information, while the option to add fields provides flexibility.

  18. Predicting structured metadata from unstructured metadata

    PubMed Central

    Posch, Lisa; Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2016-01-01

    Enormous amounts of biomedical data have been and are being produced by investigators all over the world. However, one crucial and limiting factor in data reuse is accurate, structured and complete description of the data, or data about the data—defined as metadata. We propose a framework to predict structured metadata terms from unstructured metadata for improving the quality and quantity of metadata, using the Gene Expression Omnibus (GEO) microarray database. Our framework consists of classifiers trained using term frequency-inverse document frequency (TF-IDF) features and a second approach based on topics modeled using a Latent Dirichlet Allocation (LDA) model to reduce the dimensionality of the unstructured data. Our results on the GEO database show that structured metadata terms are most accurately predicted using the TF-IDF approach, followed by LDA, with both outperforming the majority vote baseline. While some accuracy is lost by the dimensionality reduction of LDA, the difference is small for elements with few possible values, and there is a large improvement over the majority classifier baseline. Overall this is a promising approach for metadata prediction that is likely to be applicable to other datasets and has implications for researchers interested in biomedical metadata curation and metadata prediction. Database URL: http://www.yeastgenome.org/ PMID:28637268

  19. Enhanced Imaging of Building Interior for Portable MIMO Through-the-wall Radar

    NASA Astrophysics Data System (ADS)

    Song, Yongping; Zhu, Jiahua; Hu, Jun; Jin, Tian; Zhou, Zhimin

    2018-01-01

    Portable multi-input multi-output (MIMO) radar systems are able to image building interiors through aperture synthesis. However, significant grating lobes appear in the directly formed images, which may degrade the imaging quality of other targets and hinder the extraction of detailed information from the imaged scene. In this paper, a two-stage coherence factor (CF) weighting method is proposed to enhance the imaging quality. After obtaining the sub-images of each spatial sampling position using the conventional CF approach, a window function is employed to calculate the proposed "enhanced CF", adaptive to the spatially varying effects behind the wall, for combining these sub-images. A real-data experiment illustrates the better performance of the proposed method in grating-lobe suppression and imaging-quality enhancement compared to the traditional radar imaging approach.
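
    For reference, the conventional coherence factor on which such methods build can be sketched as below; this shows only the standard CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2) weighting of a delay-and-sum image, not the paper's two-stage, spatially adaptive "enhanced CF".

```python
# Conventional coherence factor (CF) weighting for array imaging.
# s has shape (num_channels, num_pixels): per-channel complex back-projected
# contributions to each image pixel. Illustrative, with random data.
import numpy as np

def coherence_factor_image(s):
    """Return the DAS image weighted by CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2)."""
    num_channels = s.shape[0]
    coherent = np.abs(s.sum(axis=0)) ** 2                  # coherent sum power
    incoherent = (np.abs(s) ** 2).sum(axis=0)              # incoherent sum power
    cf = coherent / (num_channels * incoherent + 1e-12)    # lies in [0, 1]
    das = np.abs(s.sum(axis=0))                            # delay-and-sum image
    return cf * das

rng = np.random.default_rng(0)
s = rng.standard_normal((16, 256)) + 1j * rng.standard_normal((16, 256))
print(coherence_factor_image(s).shape)
```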

  20. Reducing Line Edge Roughness in Si and SiN through plasma etch chemistry optimization for photonic waveguide applications

    NASA Astrophysics Data System (ADS)

    Marchack, Nathan; Khater, Marwan; Orcutt, Jason; Chang, Josephine; Holmes, Steven; Barwicz, Tymon; Kamlapurkar, Swetha; Green, William; Engelmann, Sebastian

    2017-03-01

    The LER and LWR of subtractively patterned Si and SiN waveguides were calculated after each step in the process. It was found for Si waveguides that adjusting the ratio of CF4:CHF3 during the hard mask open step produced reductions in LER of 26 and 43% from the initial lithography for isolated waveguides patterned with partial and full etches, respectively. However, for final LER values of 3.0 and 2.5 nm on fully etched Si waveguides, the corresponding optical loss measurements were indistinguishable. For SiN waveguides, introduction of C4H9F to the conventional CF4/CHF3 chemistry was able to reduce the mask height budget by a factor of 5, while reducing LER from the initial lithography by 26%.

  1. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2005-01-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer is different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists we examine the divergent evolution of metadata standards for digital music and digital images and observed that the processes differ in interesting ways according to their intent. Specifically music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry, it is limited to the personal libraries growing on our hard-drives. This bottom-up approach to file management combined with p2p distribution radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images-doing image metadata and file management the MP3 way-and considers the likelihood of success.

  2. Why can't I manage my digital images like MP3s? The evolution and intent of multimedia metadata

    NASA Astrophysics Data System (ADS)

    Goodrum, Abby; Howison, James

    2004-12-01

    This paper considers the deceptively simple question: Why can't digital images be managed in the simple and effective manner in which digital music files are managed? We make the case that the answer is different treatments of metadata in different domains with different goals. A central difference between the two formats stems from the fact that digital music metadata lookup services are collaborative and automate the movement from a digital file to the appropriate metadata, while image metadata services do not. To understand why this difference exists we examine the divergent evolution of metadata standards for digital music and digital images and observed that the processes differ in interesting ways according to their intent. Specifically music metadata was developed primarily for personal file management and community resource sharing, while the focus of image metadata has largely been on information retrieval. We argue that lessons from MP3 metadata can assist individuals facing their growing personal image management challenges. Our focus therefore is not on metadata for cultural heritage institutions or the publishing industry, it is limited to the personal libraries growing on our hard-drives. This bottom-up approach to file management combined with p2p distribution radically altered the music landscape. Might such an approach have a similar impact on image publishing? This paper outlines plans for improving the personal management of digital images-doing image metadata and file management the MP3 way-and considers the likelihood of success.

  3. The Role of Metadata Standards in EOSDIS Search and Retrieval Applications

    NASA Technical Reports Server (NTRS)

    Pfister, Robin

    1999-01-01

    Metadata standards play a critical role in data search and retrieval systems. Metadata tie software to data so the data can be processed, stored, searched, retrieved and distributed. Without metadata these actions are not possible. The process of populating metadata to describe science data is an important service to the end user community, so that a user who is unfamiliar with the data can easily find and learn about a particular dataset before an order decision is made. Once a good set of standards is in place, the accuracy with which data search can be performed depends on the degree to which metadata standards are adhered to during product definition. NASA's Earth Observing System Data and Information System (EOSDIS) provides examples of how metadata standards are used in data search and retrieval.

  4. Perceptual evaluation of color transformed multispectral imagery

    NASA Astrophysics Data System (ADS)

    Toet, Alexander; de Jong, Michael J.; Hogervorst, Maarten A.; Hooge, Ignace T. C.

    2014-04-01

    Color remapping can give multispectral imagery a realistic appearance. We assessed the practical value of this technique in two observer experiments using monochrome intensified (II) and long-wave infrared (IR) imagery, and color daylight (REF) and fused multispectral (CF) imagery. First, we investigated the amount of detail observers perceive in a short timespan. REF and CF imagery yielded the highest precision and recall measures, while II and IR imagery yielded significantly lower values. This suggests that observers have more difficulty in extracting information from monochrome than from color imagery. Next, we measured eye fixations during free image exploration. Although the overall fixation behavior was similar across image modalities, the order in which certain details were fixated varied. Persons and vehicles were typically fixated first in REF, CF, and IR imagery, while they were fixated later in II imagery. In some cases, color remapping II imagery and fusion with IR imagery restored the fixation order of these image details. We conclude that color remapping can yield enhanced scene perception compared to conventional monochrome nighttime imagery, and may be deployed to tune multispectral image representations such that the resulting fixation behavior resembles the fixation behavior corresponding to daylight color imagery.

  5. Construction of three-dimensional graphene interfaces into carbon fiber textiles for increasing deposition of nickel nanoparticles: flexible hierarchical magnetic textile composites for strong electromagnetic shielding

    NASA Astrophysics Data System (ADS)

    Bian, Xing-Ming; Liu, Lin; Li, Hai-Bing; Wang, Chan-Yuan; Xie, Qing; Zhao, Quan-Liang; Bi, Song; Hou, Zhi-Ling

    2017-01-01

    Since manipulating electromagnetic waves with electromagnetically active materials for environmental and electrical engineering is a significant task, here a novel prototype is reported by introducing reduced graphene oxide (RGO) interfaces in carbon fiber (CF) networks for a hierarchical carbon fiber/reduced graphene oxide/nickel (CF-RGO-Ni) composite textile. Upon characterization of the microscopic morphologies and the electrical and magnetic properties, the presence of three-dimensional RGO interfaces and bifunctional nickel nanoparticles substantially influences the related physical properties in the resulting hierarchical composite textiles. Electromagnetic interference (EMI) shielding performance suggests that the hierarchical composite textiles hold a strong shielding effectiveness greater than 61 dB, showing greater advantages than conventional polymeric and foamy shielding composites. As a polymer-free lightweight structure, flexible CF-RGO-Ni composites of all electromagnetically active components offer a unique understanding of the multi-scale and multiple mechanisms of electromagnetic energy consumption. Such a novel prototype of shielding structures, along with convenient technology, highlights a strategy to achieve high-performance EMI shielding, coupled with a universal approach for preparing advanced lightweight composites with graphene interfaces.

  6. Construction of three-dimensional graphene interfaces into carbon fiber textiles for increasing deposition of nickel nanoparticles: flexible hierarchical magnetic textile composites for strong electromagnetic shielding.

    PubMed

    Bian, Xing-Ming; Liu, Lin; Li, Hai-Bing; Wang, Chan-Yuan; Xie, Qing; Zhao, Quan-Liang; Bi, Song; Hou, Zhi-Ling

    2017-01-27

    Since manipulating electromagnetic waves with electromagnetically active materials for environmental and electrical engineering is a significant task, here a novel prototype is reported by introducing reduced graphene oxide (RGO) interfaces in carbon fiber (CF) networks for a hierarchical carbon fiber/reduced graphene oxide/nickel (CF-RGO-Ni) composite textile. Upon characterization of the microscopic morphologies and the electrical and magnetic properties, the presence of three-dimensional RGO interfaces and bifunctional nickel nanoparticles substantially influences the related physical properties in the resulting hierarchical composite textiles. Electromagnetic interference (EMI) shielding performance suggests that the hierarchical composite textiles hold a strong shielding effectiveness greater than 61 dB, showing greater advantages than conventional polymeric and foamy shielding composites. As a polymer-free lightweight structure, flexible CF-RGO-Ni composites of all electromagnetically active components offer a unique understanding of the multi-scale and multiple mechanisms of electromagnetic energy consumption. Such a novel prototype of shielding structures, along with convenient technology, highlights a strategy to achieve high-performance EMI shielding, coupled with a universal approach for preparing advanced lightweight composites with graphene interfaces.

  7. openPDS: protecting the privacy of metadata through SafeAnswers.

    PubMed

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web-searches, is collected and used intensively by organizations and big data researchers. Metadata has however yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research.

  8. openPDS: Protecting the Privacy of Metadata through SafeAnswers

    PubMed Central

    de Montjoye, Yves-Alexandre; Shmueli, Erez; Wang, Samuel S.; Pentland, Alex Sandy

    2014-01-01

    The rise of smartphones and web services made possible the large-scale collection of personal metadata. Information about individuals' location, phone call logs, or web-searches, is collected and used intensively by organizations and big data researchers. Metadata has however yet to realize its full potential. Privacy and legal concerns, as well as the lack of technical solutions for personal metadata management, are preventing metadata from being shared and reconciled under the control of the individual. This lack of access and control is furthermore fueling growing concerns, as it prevents individuals from understanding and managing the risks associated with the collection and use of their data. Our contribution is two-fold: (1) we describe openPDS, a personal metadata management framework that allows individuals to collect, store, and give fine-grained access to their metadata to third parties. It has been implemented in two field studies; (2) we introduce and analyze SafeAnswers, a new and practical way of protecting the privacy of metadata at an individual level. SafeAnswers turns a hard anonymization problem into a more tractable security one. It allows services to ask questions whose answers are calculated against the metadata instead of trying to anonymize individuals' metadata. The dimensionality of the data shared with the services is reduced from high-dimensional metadata to low-dimensional answers that are less likely to be re-identifiable and to contain sensitive information. These answers can then be directly shared individually or in aggregate. openPDS and SafeAnswers provide a new way of dynamically protecting personal metadata, thereby supporting the creation of smart data-driven services and data science research. PMID:25007320

  9. Progress in defining a standard for file-level metadata

    NASA Technical Reports Server (NTRS)

    Williams, Joel; Kobler, Ben

    1996-01-01

    In the following narrative, metadata required to locate a file on tape or collection of tapes will be referred to as file-level metadata. This paper describes the rationale for and the history of the effort to define a standard for this metadata.

  10. Achieving interoperability for metadata registries using comparative object modeling.

    PubMed

    Park, Yu Rang; Kim, Ju Han

    2010-01-01

    Achieving data interoperability between organizations relies upon agreed meaning and representation (metadata) of data. For managing and registering metadata, many organizations have built metadata registries (MDRs) in various domains based on the international standard MDR framework, ISO/IEC 11179. Following this trend, two public MDRs in the biomedical domain have been created, the United States Health Information Knowledgebase (USHIK) and the cancer Data Standards Registry and Repository (caDSR), from the U.S. Department of Health & Human Services and the National Cancer Institute (NCI), respectively. Most MDRs are implemented with indiscriminate extensions to satisfy organization-specific needs and to work around semantic and structural limitations of ISO/IEC 11179. As a result, it is difficult to achieve interoperability among multiple MDRs. In this paper, we propose an integrated metadata object model for achieving interoperability among multiple MDRs. To evaluate this model, we developed an XML Schema Definition (XSD)-based metadata exchange format. We created an XSD-based metadata exporter, supporting both the integrated metadata object model and organization-specific MDR formats.
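
    A minimal sketch of the idea of an integrated metadata object model with an XML exporter: a simplified ISO/IEC 11179-style data element rendered to an exchange format. The class and element names are illustrative, not the actual USHIK/caDSR model or the authors' XSD.

```python
# Simplified metadata object model (data element + value domain) with an XML
# export method. Class and element names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List
import xml.etree.ElementTree as ET

@dataclass
class ValueDomain:
    datatype: str
    permissible_values: List[str] = field(default_factory=list)

@dataclass
class DataElement:
    identifier: str
    definition: str
    value_domain: ValueDomain

    def to_xml(self) -> str:
        root = ET.Element("DataElement", id=self.identifier)
        ET.SubElement(root, "Definition").text = self.definition
        vd = ET.SubElement(root, "ValueDomain", datatype=self.value_domain.datatype)
        for pv in self.value_domain.permissible_values:
            ET.SubElement(vd, "PermissibleValue").text = pv
        return ET.tostring(root, encoding="unicode")

element = DataElement(
    identifier="DE-0001",
    definition="Biological sex of the study participant",
    value_domain=ValueDomain("string", ["female", "male", "unknown"]),
)
print(element.to_xml())
```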

  11. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
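
    A rough sketch of the queueing idea in this record, under assumed names and a simple policy: metadata requests from an interactive queue and a batch queue are merged into a single metadata-server queue, with interactive requests served first.

```python
# Merge interactive and batch metadata requests into one queue by a simple
# priority policy. The policy and class names are hypothetical.
import heapq
import itertools

INTERACTIVE, BATCH = 0, 1     # lower value = higher priority under this policy

class MetadataQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()   # FIFO tie-breaker within a class

    def submit(self, request, client_class):
        heapq.heappush(self._heap, (client_class, next(self._counter), request))

    def next_request(self):
        _, _, request = heapq.heappop(self._heap)
        return request

q = MetadataQueue()
q.submit("stat /scratch/jobs/run_0042", BATCH)
q.submit("ls /home/alice", INTERACTIVE)       # served first despite arriving later
q.submit("mkdir /scratch/jobs/run_0043", BATCH)
print([q.next_request() for _ in range(3)])
```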

  12. Making metadata usable in a multi-national research setting.

    PubMed

    Ellul, Claire; Foord, Joanna; Mooney, John

    2013-11-01

    SECOA (Solutions for Environmental Contrasts in Coastal Areas) is a multi-national research project examining the effects of human mobility on urban settlements in fragile coastal environments. This paper describes the setting up of a SECOA metadata repository for non-specialist researchers such as environmental scientists and tourism experts. Conflicting usability requirements of two groups - metadata creators and metadata users - are identified along with associated limitations of current metadata standards. A description is given of a configurable metadata system designed to grow as the project evolves. This work is of relevance for similar projects such as INSPIRE. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  13. Inter-University Upper Atmosphere Global Observation Network (IUGONET) Metadata Database and Its Interoperability

    NASA Astrophysics Data System (ADS)

    Yatagai, A. I.; Iyemori, T.; Ritschel, B.; Koyama, Y.; Hori, T.; Abe, S.; Tanaka, Y.; Shinbori, A.; Umemura, N.; Sato, Y.; Yagi, M.; Ueno, S.; Hashiguchi, N. O.; Kaneda, N.; Belehaki, A.; Hapgood, M. A.

    2013-12-01

    The IUGONET is a Japanese program to build a metadata database for ground-based observations of the upper atmosphere [1]. The project began in 2009 with five Japanese institutions which archive data observed by radars, magnetometers, photometers, radio telescopes and helioscopes, and so on, at various altitudes from the Earth's surface to the Sun. Systems have been developed to allow searching of the above-described metadata. We have been updating the system and adding new and updated metadata. The IUGONET development team adopted the SPASE metadata model [2] to describe the upper atmosphere data. This model is used as the common metadata format by the virtual observatories for solar-terrestrial physics. It includes metadata referring to each data file (called a 'Granule'), which enable a search for data files as well as data sets. Further details are described in [2] and [3]. Currently, three additional Japanese institutions are being incorporated in IUGONET. Furthermore, metadata of observations of the troposphere, taken at the observatories of the middle and upper atmosphere radar at Shigaraki and the Meteor radar in Indonesia, have been incorporated. These additions will contribute to efficient interdisciplinary scientific research. In the beginning of 2013, the registration of the 'Observatory' and 'Instrument' metadata was completed, which makes it easy to get an overview of the metadata database. The number of registered metadata records as of the end of July totalled 8.8 million, including 793 observatories and 878 instruments. It is important to promote interoperability and/or metadata exchange between the database development groups. A memorandum of agreement has been signed with the European Near-Earth Space Data Infrastructure for e-Science (ESPAS) project, which has similar objectives to IUGONET with regard to a framework for formal collaboration. Furthermore, observations by satellites and the International Space Station are being incorporated with a view to creating and linking metadata databases. The development of effective data systems will contribute to the progress of scientific research on solar terrestrial physics, climate and the geophysical environment. Any kind of cooperation, metadata input and feedback, especially for linkage of the databases, is welcomed. References 1. Hayashi, H. et al., Inter-university Upper Atmosphere Global Observation Network (IUGONET), Data Sci. J., 12, WDS179-184, 2013. 2. King, T. et al., SPASE 2.0: A standard data model for space physics. Earth Sci. Inform. 3, 67-73, 2010, doi:10.1007/s12145-010-0053-4. 3. Hori, T., et al., Development of IUGONET metadata format and metadata management system. J. Space Sci. Info. Jpn., 105-111, 2012. (in Japanese)

  14. Towards Precise Metadata-set for Discovering 3D Geospatial Models in Geo-portals

    NASA Astrophysics Data System (ADS)

    Zamyadi, A.; Pouliot, J.; Bédard, Y.

    2013-09-01

    Accessing 3D geospatial models, eventually at no cost and for unrestricted use, is certainly an important issue as they become popular among participatory communities, consultants, and officials. Various geo-portals, mainly established for 2D resources, have tried to provide access to existing 3D resources such as digital elevation models, LIDAR or classic topographic data. Describing the content of data, metadata is a key component of data discovery in geo-portals. An inventory of seven online geo-portals and commercial catalogues shows that the metadata referring to 3D information is very different from one geo-portal to another as well as for similar 3D resources in the same geo-portal. The inventory considered 971 data resources affiliated with elevation. 51% of them were from three geo-portals running at Canadian federal and municipal levels whose metadata resources did not consider 3D models by any definition. Regarding the remaining 49% which refer to 3D models, different definitions of terms and metadata were found, resulting in confusion and misinterpretation. The overall assessment of these geo-portals clearly shows that the provided metadata do not integrate specific and common information about 3D geospatial models. Accordingly, the main objective of this research is to improve 3D geospatial model discovery in geo-portals by adding a specific metadata-set. Based on the knowledge and current practices on 3D modeling, and 3D data acquisition and management, a set of metadata is proposed to increase its suitability for 3D geospatial models. This metadata-set enables the definition of genuine classes, fields, and code-lists for a 3D metadata profile. The main structure of the proposal contains 21 metadata classes. These classes are classified in three packages: General and Complementary, on contextual and structural information, and Availability, on the transition from storage to delivery format. The proposed metadata-set is compared with the Canadian Geospatial Data Infrastructure (CGDI) metadata, which is an implementation of the North American Profile of ISO-19115. The comparison analyzes the two metadata sets against three simulated scenarios for discovering needed 3D geospatial datasets. Considering specific metadata about 3D geospatial models, the proposed metadata-set has six additional classes on geometric dimension, level of detail, geometric modeling, topology, and appearance information. In addition, classes on data acquisition, preparation, and modeling, and physical availability have been specialized for 3D geospatial models.

  15. Interactive Visualization Systems and Data Integration Methods for Supporting Discovery in Collections of Scientific Information

    DTIC Science & Technology

    2011-05-01

    iTunes illustrate the difference between the centralized approach of digital library systems and the distributed approach of container file formats...metadata in a container file format. Apple’s iTunes uses a centralized metadata approach and allows users to maintain song metadata in a single...one iTunes library to another the metadata must be copied separately or reentered in the new library. This demonstrates the utility of storing metadata

  16. Collaborative Metadata Curation in Support of NASA Earth Science Data Stewardship

    NASA Technical Reports Server (NTRS)

    Sisco, Adam W.; Bugbee, Kaylin; le Roux, Jeanne; Staton, Patrick; Freitag, Brian; Dixon, Valerie

    2018-01-01

    A growing collection of NASA Earth science data is archived and distributed by EOSDIS's 12 Distributed Active Archive Centers (DAACs). Each collection and granule is described by a metadata record housed in the Common Metadata Repository (CMR). Multiple metadata standards are in use, and core elements of each are mapped to and from a common model – the Unified Metadata Model (UMM). This work is carried out by the Analysis and Review of CMR (ARC) team.

  17. Mitogenome metadata: current trends and proposed standards.

    PubMed

    Strohm, Jeff H T; Gwiazdowski, Rodger A; Hanner, Robert

    2016-09-01

    Mitogenome metadata are descriptive terms about the sequence and its specimen description that allow both to be digitally discoverable and interoperable. Here, we review a sampling of mitogenome metadata published in the journal Mitochondrial DNA between 2005 and 2014. Specifically, we have focused on a subset of metadata fields that are available for GenBank records, and specified by the Genomics Standards Consortium (GSC) and other biodiversity metadata standards; and we assessed their presence across three main categories: collection, biological and taxonomic information. To do this we reviewed 146 mitogenome manuscripts, and their associated GenBank records, and scored them for 13 metadata fields. We also explored the potential for mitogenome misidentification using their sequence diversity, and taxonomic metadata on the Barcode of Life Datasystems (BOLD). For this, we focused on all Lepidoptera and Perciformes mitogenomes included in the review, along with additional mitogenome sequence data mined from GenBank. Overall, we found that none of the 146 mitogenome projects provided all the metadata we looked for, and only 17 projects provided at least one category of metadata across the three main categories. Comparisons using mtDNA sequences from BOLD suggest that some mitogenomes may be misidentified. Lastly, we appreciate the research potential of mitogenomes announced through this journal, and we conclude by suggesting 13 metadata fields, available on GenBank, that, if provided in a mitogenome's GenBank record, would increase their research value.

  18. Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials

    NASA Astrophysics Data System (ADS)

    Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.

    2007-03-01

    In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher resolution imaging capabilities. A radiology core doing clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata between one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that has fault tolerance and dynamic integration for imaging-based clinical trials.

  19. Metadata and Service at the GFZ ISDC Portal

    NASA Astrophysics Data System (ADS)

    Ritschel, B.

    2008-05-01

    The online service portal of the GFZ Potsdam Information System and Data Center (ISDC) is an access point for all manner of geoscientific geodata, its corresponding metadata, scientific documentation and software tools. At present almost 2000 national and international users and user groups have the opportunity to request Earth science data from a portfolio of 275 different product types and more than 20 million single data files with an added volume of approximately 12 TByte. The majority of the data and information the portal currently offers to the public are global geomonitoring products such as satellite orbit and Earth gravity field data as well as geomagnetic and atmospheric data for exploration. These products for Earth's changing system are provided via state-of-the-art retrieval techniques. The data product catalog system behind these techniques is based on the extensive usage of standardized metadata, which describe the different geoscientific product types and data products in a uniform way. Whereas all ISDC product types are specified by NASA's Directory Interchange Format (DIF), Version 9.0 parent XML DIF metadata files, the individual data files are described by extended DIF metadata documents. Depending on the beginning of the scientific project, one part of the data files is described by extended DIF, Version 6 metadata documents and the other part is specified by child XML DIF metadata documents. Both the product-type-dependent parent DIF metadata documents and the data-file-dependent child DIF metadata documents are derived from a base-DIF.xsd XML schema file. The ISDC metadata philosophy defines a geoscientific product as a package consisting of mostly one, or sometimes more than one, data file plus one extended DIF metadata file. Because NASA's DIF metadata standard has been developed in order to specify a collection of data only, the extension of the DIF standard consists of new and specific attributes, which are necessary for an explicit identification of single data files and the set-up of a comprehensive Earth science data catalog. The huge ISDC data catalog is realized by product-type-dependent tables filled with data-file-related metadata, which have relations to corresponding metadata tables. The product-type-describing parent DIF XML metadata documents are stored and managed in ORACLE's XML storage structures. In order to improve the interoperability of the ISDC service portal, the existing proprietary catalog system will be extended by an ISO 19115-based web catalog service. In addition to this development, there is an ISDC-related semantic network of different kinds of metadata resources, such as standardized and non-standardized metadata documents and literature, as well as Web 2.0 user-generated information derived from tagging activities and social navigation data.

  20. Predicting biomedical metadata in CEDAR: A study of Gene Expression Omnibus (GEO).

    PubMed

    Panahiazar, Maryam; Dumontier, Michel; Gevaert, Olivier

    2017-08-01

    A crucial and limiting factor in data reuse is the lack of accurate, structured, and complete descriptions of data, known as metadata. Towards improving the quantity and quality of metadata, we propose a novel metadata prediction framework to learn associations from existing metadata that can be used to predict metadata values. We evaluate our framework in the context of experimental metadata from the Gene Expression Omnibus (GEO). We applied four rule mining algorithms to the most common structured metadata elements (sample type, molecular type, platform, label type and organism) from over 1.3 million GEO records. We examined the quality of well supported rules from each algorithm and visualized the dependencies among metadata elements. Finally, we evaluated the performance of the algorithms in terms of accuracy, precision, recall, and F-measure. We found that PART is the best algorithm, outperforming Apriori, Predictive Apriori, and Decision Table. All algorithms perform significantly better in predicting class values than the majority vote classifier. We found that the performance of the algorithms is related to the dimensionality of the GEO elements. The average performance of all algorithms increases as the dimensionality of the unique values of these elements decreases (2697 platforms, 537 organisms, 454 labels, 9 molecules, and 5 types). Our work suggests that experimental metadata such as present in GEO can be accurately predicted using rule mining algorithms. Our work has implications for both prospective and retrospective augmentation of metadata quality, which are geared towards making data easier to find and reuse. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
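
    A toy sketch in the spirit of this rule-mining approach, on invented records: count co-occurrences of (element, value) pairs and report "IF A=x THEN B=y" rules above a confidence threshold. The paper's actual algorithms (PART, Apriori, Predictive Apriori, Decision Table) are not reproduced here.

```python
# Bare-bones association-rule mining over structured metadata records:
# confidence of "if element A has value x then element B has value y".
from collections import Counter
from itertools import permutations

records = [
    {"platform": "GPL570", "organism": "Homo sapiens", "molecule": "total RNA"},
    {"platform": "GPL570", "organism": "Homo sapiens", "molecule": "total RNA"},
    {"platform": "GPL1261", "organism": "Mus musculus", "molecule": "total RNA"},
    {"platform": "GPL570", "organism": "Homo sapiens", "molecule": "polyA RNA"},
]

pair_counts, item_counts = Counter(), Counter()
for rec in records:
    items = [(k, v) for k, v in rec.items()]
    item_counts.update(items)
    pair_counts.update(permutations(items, 2))      # ordered (antecedent, consequent)

min_confidence = 0.8
for (a, b), n_ab in pair_counts.items():
    confidence = n_ab / item_counts[a]
    if confidence >= min_confidence:
        print(f"IF {a[0]}={a[1]} THEN {b[0]}={b[1]}  (confidence {confidence:.2f})")
```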

  1. Mercury Toolset for Spatiotemporal Metadata

    NASA Technical Reports Server (NTRS)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.
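
    A minimal sketch of the harvest-and-index pattern described above, assuming a centralized Solr index reachable over HTTP; the endpoint URL, field names, and harvest function are illustrative assumptions rather than Mercury's actual configuration.

        # Sketch of a periodic harvest that pushes metadata records into a
        # centralized Solr index, as in the Mercury architecture. The Solr URL,
        # core name, and record fields are illustrative assumptions.
        import requests

        SOLR_UPDATE = "http://localhost:8983/solr/metadata/update?commit=true"

        def harvest(source_url):
            """Stand-in for harvesting one provider; returns metadata records."""
            return [{
                "id": f"{source_url}#0001",
                "title": "Daily soil moisture, site A",
                "west": -105.0, "east": -104.0, "south": 39.5, "north": 40.5,
                "start_date": "2009-01-01", "end_date": "2009-12-31",
            }]

        def reindex(sources):
            docs = [doc for s in sources for doc in harvest(s)]
            # Solr accepts a JSON array of documents on its update handler.
            requests.post(SOLR_UPDATE, json=docs, timeout=30).raise_for_status()

        reindex(["https://daac.example.org/oai"])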

  2. Mercury Toolset for Spatiotemporal Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  3. Metadata Realities for Cyberinfrastructure: Data Authors as Metadata Creators

    ERIC Educational Resources Information Center

    Mayernik, Matthew Stephen

    2011-01-01

    As digital data creation technologies become more prevalent, data and metadata management are necessary to make data available, usable, sharable, and storable. Researchers in many scientific settings, however, have little experience or expertise in data and metadata management. In this dissertation, I explore the everyday data and metadata…

  4. Climate Data Provenance Tracking for Just-In-Time Computation

    NASA Astrophysics Data System (ADS)

    Fries, S.; Nadeau, D.; Doutriaux, C.; Williams, D. N.

    2016-12-01

    The "Climate Data Management System" (CDMS) was created in 1996 as part of the Climate Data Analysis Tools suite of software. It provides a simple interface into a wide variety of climate data formats, and creates NetCDF CF-Compliant files. It leverages the NumPy framework for high performance computation, and is an all-in-one IO and computation package. CDMS has been extended to track manipulations of data, and trace that data all the way to the original raw data. This extension tracks provenance about data, and enables just-in-time (JIT) computation. The provenance for each variable is packaged as part of the variable's metadata, and can be used to validate data processing and computations (by repeating the analysis on the original data). It also allows for an alternate solution for sharing analyzed data; if the bandwidth for a transfer is prohibitively expensive, the provenance serialization can be passed in a much more compact format and the analysis rerun on the input data. Data provenance tracking in CDMS enables far-reaching and impactful functionalities, permitting implementation of many analytical paradigms.

  5. Content Metadata Standards for Marine Science: A Case Study

    USGS Publications Warehouse

    Riall, Rebecca L.; Marincioni, Fausto; Lightsom, Frances L.

    2004-01-01

    The U.S. Geological Survey developed a content metadata standard to meet the demands of organizing electronic resources in the marine sciences for a broad, heterogeneous audience. These metadata standards are used by the Marine Realms Information Bank project, a Web-based public distributed library of marine science from academic institutions and government agencies. The development and deployment of this metadata standard serve as a model, complete with lessons about mistakes, for the creation of similarly specialized metadata standards for digital libraries.

  6. Calibration-free wavelength-modulation spectroscopy based on a swiftly determined wavelength-modulation frequency response function of a DFB laser.

    PubMed

    Zhao, Gang; Tan, Wei; Hou, Jiajia; Qiu, Xiaodong; Ma, Weiguang; Li, Zhixin; Dong, Lei; Zhang, Lei; Yin, Wangbao; Xiao, Liantuan; Axner, Ove; Jia, Suotang

    2016-01-25

    A methodology for calibration-free wavelength modulation spectroscopy (CF-WMS) that is based upon an extensive empirical description of the wavelength-modulation frequency response (WMFR) of a DFB laser is presented. An assessment of the WMFR of a DFB laser by the use of an etalon confirms that it consists of two parts: a 1st harmonic component with an amplitude that is linear with the sweep and a nonlinear 2nd harmonic component with a constant amplitude. Simulations show that, among the various factors that affect the line shape of a background-subtracted peak-normalized 2f signal, such as concentration, phase shifts between intensity modulation and frequency modulation, and WMFR, only the last factor has a decisive impact. Based on this and to avoid the impractical use of an etalon, a novel method to pre-determine the parameters of the WMFR by fitting to a background-subtracted peak-normalized 2f signal has been developed. The accuracy of the new scheme to determine the WMFR is demonstrated and compared with that of conventional methods in CF-WMS by detection of trace acetylene. The results show that the new method provides a four times smaller fitting error than the conventional methods and retrieves the concentration more accurately.
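
    A hedged sketch, in LaTeX, of the WMFR form implied by the abstract (a first-harmonic term whose amplitude varies linearly with the sweep plus a constant-amplitude second-harmonic term); the symbols below are illustrative and not taken from the paper.

        % Illustrative form of the wavelength-modulation frequency response:
        % a 1st-harmonic amplitude linear in the sweep position, plus a
        % nonlinear 2nd-harmonic component with constant amplitude.
        \[
          \nu_{\mathrm{mod}}(t) =
              \underbrace{\bigl(a_0 + a_1\,\nu_{\mathrm{sweep}}(t)\bigr)\cos(\omega t + \varphi_1)}_{\text{1st harmonic, sweep-dependent amplitude}}
            + \underbrace{a_2 \cos(2\omega t + \varphi_2)}_{\text{2nd harmonic, constant amplitude}}
        \]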

  7. Concurrent array-based queue

    DOEpatents

    Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2015-01-06

    According to one embodiment, a method for implementing an array-based queue in memory of a memory system that includes a controller includes configuring, in the memory, metadata of the array-based queue. The configuring comprises defining, in metadata, an array start location in the memory for the array-based queue, defining, in the metadata, an array size for the array-based queue, defining, in the metadata, a queue top for the array-based queue and defining, in the metadata, a queue bottom for the array-based queue. The method also includes the controller serving a request for an operation on the queue, the request providing the location in the memory of the metadata of the queue.
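
    A minimal Python sketch of the metadata-driven array queue described in the claim: the array start location, array size, queue top, and queue bottom live in a metadata record, and operations are served against that record. This is an illustrative circular-buffer interpretation, not the patented implementation.

        # Sketch: an array-based queue whose state lives entirely in a metadata
        # record (array start, array size, queue top, queue bottom), interpreted
        # here as a circular buffer within a flat memory. Illustrative only.
        class MemorySystem:
            def __init__(self, words=16):
                self.memory = [None] * words

        def make_queue(start, size):
            # metadata defines where the queue's array lives and its current state
            return {"start": start, "size": size, "top": 0, "bottom": 0, "count": 0}

        def enqueue(mem, meta, item):
            if meta["count"] == meta["size"]:
                raise OverflowError("queue full")
            mem.memory[meta["start"] + meta["bottom"]] = item
            meta["bottom"] = (meta["bottom"] + 1) % meta["size"]
            meta["count"] += 1

        def dequeue(mem, meta):
            if meta["count"] == 0:
                raise IndexError("queue empty")
            item = mem.memory[meta["start"] + meta["top"]]
            meta["top"] = (meta["top"] + 1) % meta["size"]
            meta["count"] -= 1
            return item

        mem = MemorySystem()
        q = make_queue(start=4, size=4)   # queue occupies memory words 4..7
        enqueue(mem, q, "a"); enqueue(mem, q, "b")
        assert dequeue(mem, q) == "a"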

  8. WGISS-45 International Directory Network (IDN) Report

    NASA Technical Reports Server (NTRS)

    Morahan, Michael

    2018-01-01

    The objective of this presentation is to provide IDN (International Directory Network) updates on features and activities to the Committee on Earth Observation Satellites (CEOS) Working Group on Information Systems and Services (WGISS) and the provider community. The following topics will be discussed during the presentation: Transition of Providers DIF-9 (Directory Interchange Format-9) to DIF-10 Metadata Records in the Common Metadata Repository (CMR); GCMD (Global Change Master Directory) Keyword Update; DIF-10 and UMM-C (Unified Metadata Model-Collections) Schema Changes; Metadata Validation of Provider Metadata; docBUILDER for Submitting IDN Metadata to the CMR (i.e. Registration); and Mapping WGClimate Essential Climate Variable (ECV) Inventory to IDN Records.

  9. Explorative Analyses of Nursing Research Data.

    PubMed

    Kim, Hyeoneui; Jang, Imho; Quach, Jimmy; Richardson, Alex; Kim, Jaemin; Choi, Jeeyae

    2016-10-26

    As a first step of pursuing the vision of "big data science in nursing," we described the characteristics of nursing research data reported in 194 published nursing studies. We also explored how completely the Version 1 metadata specification of biomedical and healthCAre Data Discovery Index Ecosystem (bioCADDIE) represents these metadata. The metadata items of the nursing studies were all related to one or more of the bioCADDIE metadata entities. However, values of many metadata items of the nursing studies were not sufficiently represented through the bioCADDIE metadata. This was partly due to the differences in the scope of the content that the bioCADDIE metadata are designed to represent. The 194 nursing studies reported a total of 1,181 unique data items, the majority of which take non-numeric values. This indicates the importance of data standardization to enable the integrative analyses of these data to support big data science in nursing. © The Author(s) 2016.

  10. A Model for Enhancing Internet Medical Document Retrieval with “Medical Core Metadata”

    PubMed Central

    Malet, Gary; Munoz, Felix; Appleyard, Richard; Hersh, William

    1999-01-01

    Objective: Finding documents on the World Wide Web relevant to a specific medical information need can be difficult. The goal of this work is to define a set of document content description tags, or metadata encodings, that can be used to promote disciplined search access to Internet medical documents. Design: The authors based their approach on a proposed metadata standard, the Dublin Core Metadata Element Set, which has recently been submitted to the Internet Engineering Task Force. Their model also incorporates the National Library of Medicine's Medical Subject Headings (MeSH) vocabulary and Medline-type content descriptions. Results: The model defines a medical core metadata set that can be used to describe the metadata for a wide variety of Internet documents. Conclusions: The authors propose that their medical core metadata set be used to assign metadata to medical documents to facilitate document retrieval by Internet search engines. PMID:10094069
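
    A minimal sketch of what a "medical core metadata" record along the lines described above might look like, combining Dublin Core style elements with MeSH subject headings; the field names and values are illustrative assumptions, not the authors' published element set.

        # Illustrative record mixing Dublin Core style elements with MeSH terms.
        # Field names and values are assumptions for demonstration only.
        medical_core_record = {
            "dc:title": "Management of community-acquired pneumonia in adults",
            "dc:creator": "Example Clinic Guidelines Committee",
            "dc:date": "1999-01-01",
            "dc:type": "practice guideline",
            "dc:identifier": "https://example.org/guidelines/cap-adults",
            "mesh:subject": ["Pneumonia", "Anti-Bacterial Agents",
                             "Practice Guidelines as Topic"],
        }

        # A search engine could index the subject headings for disciplined retrieval.
        def matches(record, mesh_term):
            return mesh_term in record["mesh:subject"]

        print(matches(medical_core_record, "Pneumonia"))  # True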

  11. MPEG-7: standard metadata for multimedia content

    NASA Astrophysics Data System (ADS)

    Chang, Wo

    2005-08-01

    The eXtensible Markup Language (XML) metadata technology for describing media contents has emerged as a dominant mode of making media searchable for both human and machine consumption. To realize this premise, many online Web applications are pushing this concept to its fullest potential. However, a good metadata model does require a robust standardization effort so that the metadata content and its structure can reach maximum usage across various applications. An effective media content description technology should also use standard metadata structures, especially when dealing with various multimedia contents. A new metadata technology called MPEG-7 content description has emerged from the ISO MPEG standards body with the charter of defining standard metadata to describe audiovisual content. This paper will give an overview of MPEG-7 technology and the impact it can bring to the next generation of multimedia indexing and retrieval applications.

  12. Quality Assurance for Digital Learning Object Repositories: Issues for the Metadata Creation Process

    ERIC Educational Resources Information Center

    Currier, Sarah; Barton, Jane; O'Beirne, Ronan; Ryan, Ben

    2004-01-01

    Metadata enables users to find the resources they require, therefore it is an important component of any digital learning object repository. Much work has already been done within the learning technology community to assure metadata quality, focused on the development of metadata standards, specifications and vocabularies and their implementation…

  13. A Model for the Creation of Human-Generated Metadata within Communities

    ERIC Educational Resources Information Center

    Brasher, Andrew; McAndrew, Patrick

    2005-01-01

    This paper considers situations for which detailed metadata descriptions of learning resources are necessary, and focuses on human generation of such metadata. It describes a model which facilitates human production of good quality metadata by the development and use of structured vocabularies. Using examples, this model is applied to single and…

  14. Enhancing SCORM Metadata for Assessment Authoring in E-Learning

    ERIC Educational Resources Information Center

    Chang, Wen-Chih; Hsu, Hui-Huang; Smith, Timothy K.; Wang, Chun-Chia

    2004-01-01

    With the rapid development of distance learning and the XML technology, metadata play an important role in e-Learning. Nowadays, many distance learning standards, such as SCORM, AICC CMI, IEEE LTSC LOM and IMS, use metadata to tag learning materials. However, most metadata models are used to define learning materials and test problems. Few…

  15. Development of Health Information Search Engine Based on Metadata and Ontology

    PubMed Central

    Song, Tae-Min; Jin, Dal-Lae

    2014-01-01

    Objectives The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Methods Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. Results A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Conclusions Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers. PMID:24872907

  16. Development of health information search engine based on metadata and ontology.

    PubMed

    Song, Tae-Min; Park, Hyeoun-Ae; Jin, Dal-Lae

    2014-04-01

    The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers.

  17. Disease incidence and severity of rice plants in conventional chemical fertilizer input compared with organic farming systems

    NASA Astrophysics Data System (ADS)

    Hu, Xue-Feng; Luo, Fan

    2015-04-01

    To study the impacts of different fertilizer applications on rice growth and disease infection, a 3-year field experiment of rice cultivation was carried out in the suburbs of Shanghai from 2012 to 2014. No pesticides or herbicides were applied during the entire experiment, to avoid their interference with rice diseases. Compared with green manure (GM) and cake manure (CM), the application of chemical fertilizer (CF) stimulated the photosynthesis and vegetative growth of rice plants more effectively. Chlorophyll content, height and tiller number of the rice plants treated with the CF were generally higher than those treated with the GM, the CM and the control; the contents of nitrate (NO3--N), ammonium (NH4+-N), Kjeldahl nitrogen (KN) and soluble protein under the CF treatment were also higher than those of the others during the 3-year experiment. The experiment also indicated that the incidences of stem borers, sheath blight, leaf rollers and planthoppers in the rice treated with the CF were significantly higher than in the rice treated with the GM, the CM and the control. Especially in 2012 and 2014, the incidences of rice pests and diseases under the CF treatment were far more severe than under the others. As a result, the grain yield under the CF treatment was not only lower than that under the GM and CM treatments, but also lower than that of the no-fertilizer control. This might be attributed to two reasons: pests favor rice seedlings with abundant N-related nutrients caused by CF application, and the excessive accumulation of nutrients in the seedlings might have toxic effects and weaken their immune systems, making them more vulnerable to pests and diseases. In comparison, the plants treated with a suitable amount of organic manure showed better disease resistance and grew more healthily. In addition, the incidences of rice pests and diseases might also be related to climatic conditions. Shanghai was hit by strong subtropical storms in the summers of both 2012 and 2014, which might explain the high incidence of rice planthoppers in those two years, while the continuously hot, storm-free summer of 2013 might explain the low incidences of planthoppers and other pests and diseases in that year.

  18. Effect of thermally reduced graphene oxide on dynamic mechanical properties of carbon fiber/epoxy composite

    NASA Astrophysics Data System (ADS)

    Adak, Nitai Chandra; Chhetri, Suman; Murmu, Naresh Chandra; Samanta, Pranab; Kuila, Tapas

    2018-03-01

    Carbon fiber (CF)/epoxy composites are used in the automotive and aerospace industries owing to their high specific mechanical strength-to-weight ratio compared to conventional metals and alloys. However, the low interfacial adhesion between fiber and polymer matrix results in inter-laminar fracture of the composites. The effects of different carbonaceous nanomaterials, i.e., carbon nanotubes (CNT), graphene nanosheets (GNPs), graphene oxide (GO), etc., on the static mechanical properties of such composites have been investigated in detail, but only a few works have focused on improving the dynamic mechanical properties of CF/epoxy composites. Herein, the effect of thermally reduced graphene oxide (TRGO) on the dynamic mechanical properties of CF/epoxy composites was investigated. At first, GO was synthesized using the modified Hummers method and then reduced inside a vacuum oven at 800 °C for 5 min. The prepared TRGO was dispersed in the epoxy resin to modify the epoxy matrix. Then, a number of TRGO/CF/epoxy laminates incorporating different wt% of TRGO were manufactured by the vacuum assisted resin transfer molding (VARTM) technique. The developed laminates were cured at room temperature for 24 h and then post-cured at 120 °C for 2 h. A dynamic mechanical analyzer (DMA 8000, Perkin Elmer) was used to examine the dynamic mechanical properties of the TRGO/CF/epoxy composites according to ASTM D7028; the specimen dimensions for the DMA test were 44×10×2.4 mm3. The test was carried out under flexural loading (dual cantilever) at a frequency of 1 Hz and an amplitude of 50 μm, with the temperature ramped from 30 to 200 °C at a heating rate of 5 °C min-1. The dynamic mechanical analysis of the 0.2 wt% TRGO-incorporated CF/epoxy composites showed ~96% enhancement in storage modulus and a ~12 °C increase in glass transition temperature (Tg) compared to the base CF/epoxy composites. The fiber-matrix interaction was studied by Cole-Cole plot analysis, which confirmed the homogeneous dispersion of TRGO in the epoxy resin. This homogeneous dispersion of TRGO in the epoxy matrix accounts for the overall enhancement of the dynamic mechanical properties of the hybrid composites.

  19. A metadata template for ocean acidification data

    NASA Astrophysics Data System (ADS)

    Jiang, L.

    2014-12-01

    Metadata is structured information that describes, explains, and locates an information resource (e.g., data). It is often coarsely described as data about data, and documents information such as what was measured, by whom, when, where, how it was sampled and analyzed, and with what instruments. Metadata is essential to ensure the survivability and accessibility of the data into the future. With the rapid expansion of biological-response ocean acidification (OA) studies, the lack of a common metadata template to document this type of data has become a significant gap for ocean acidification data management efforts. In this paper, we present a metadata template that can be applied to a broad spectrum of OA studies, including those studying the biological responses of organisms to ocean acidification. The "variable metadata section", which includes the variable name, observation type, whether the variable is a manipulation condition or response variable, and the biological subject on which the variable is studied, forms the core of this metadata template. Additional metadata elements, such as principal investigators, temporal and spatial coverage, platforms for the sampling, and data citation, are essential components to complete the template. We explain the structure of the template, and define many metadata elements that may be unfamiliar to researchers. For that reason, this paper can serve as a user's manual for the template.
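
    A minimal sketch of the "variable metadata section" described above, expressed as a Python dictionary; the exact element names and the example values are illustrative and may differ from the published template.

        # Illustrative entry for one variable in an OA biological-response study,
        # following the core elements named in the abstract (names and values are
        # assumptions, not the published template).
        variable_metadata = {
            "variable_name": "calcification rate",
            "observation_type": "measured",        # e.g. measured vs. calculated
            "role": "response variable",           # vs. "manipulation condition"
            "biological_subject": "Crassostrea gigas (Pacific oyster) larvae",
            "units": "mg CaCO3 per individual per day",
        }

        study_metadata = {
            "principal_investigators": ["A. Researcher"],
            "temporal_coverage": {"start": "2013-06-01", "end": "2013-08-15"},
            "spatial_coverage": {"lat": 43.4, "lon": -124.3},
            "platform": "laboratory culture system",
            "data_citation": "doi:10.0000/example",
            "variables": [variable_metadata],
        }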

  20. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    NASA Astrophysics Data System (ADS)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and ACADIS Arctic Data Explorer (ADE) use a common infrastructure which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers costs for research. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and identify the creation and use of Open Source libraries.
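
    A minimal sketch of the kind of fielded, spatial, and temporal query such a federated interface might issue against a Solr index; the host, core, and field names are illustrative assumptions, not NSIDC's or ACADIS's actual deployment.

        # Sketch of a keyword + temporal + spatial query against a Solr index,
        # as a federated search client might issue it. Host, core, and field
        # names are illustrative assumptions.
        import requests

        SOLR_SELECT = "http://localhost:8983/solr/metadata/select"

        params = {
            "q": "sea ice concentration",
            "fq": [
                "start_date:[2007-01-01T00:00:00Z TO *]",   # temporal filter
                "north:[60 TO *] AND south:[50 TO *]",      # crude spatial filter
            ],
            "rows": 10,
            "wt": "json",
        }

        response = requests.get(SOLR_SELECT, params=params, timeout=30)
        for doc in response.json()["response"]["docs"]:
            print(doc.get("title"))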

  1. A standard for measuring metadata quality in spectral libraries

    NASA Astrophysics Data System (ADS)

    Rasaiah, B.; Jones, S. D.; Bellman, C.

    2013-12-01

    There is an urgent need within the international remote sensing community to establish a metadata standard for field spectroscopy that ensures high quality, interoperable metadata sets that can be archived and shared efficiently within Earth observation data sharing systems. Metadata are an important component in the cataloguing and analysis of in situ spectroscopy datasets because of their central role in identifying and quantifying the quality and reliability of spectral data and the products derived from them. This paper presents approaches to measuring metadata completeness and quality in spectral libraries to determine reliability, interoperability, and re-usability of a dataset. Explored are quality parameters that meet the unique requirements of in situ spectroscopy datasets across many campaigns. Examined are the challenges of ensuring that data creators, owners, and data users maintain a high level of data integrity throughout the lifecycle of a dataset. Issues such as field measurement methods, instrument calibration, and data representativeness are investigated. The proposed metadata standard incorporates expert recommendations that include metadata protocols critical to all campaigns, and those that are restricted to campaigns for specific target measurements. The implications of semantics and syntax for a robust and flexible metadata standard are also considered. Approaches towards an operational and logistically viable implementation of a quality standard are discussed. This paper also proposes a way forward for adapting and enhancing current geospatial metadata standards to the unique requirements of field spectroscopy metadata quality. [0430] BIOGEOSCIENCES / Computational methods and data processing [0480] BIOGEOSCIENCES / Remote sensing [1904] INFORMATICS / Community standards [1912] INFORMATICS / Data management, preservation, rescue [1926] INFORMATICS / Geospatial [1930] INFORMATICS / Data and information governance [1946] INFORMATICS / Metadata [1952] INFORMATICS / Modeling [1976] INFORMATICS / Software tools and services [9810] GENERAL OR MISCELLANEOUS / New fields

  2. Metadata Design in the New PDS4 Standards - Something for Everybody

    NASA Astrophysics Data System (ADS)

    Raugh, Anne C.; Hughes, John S.

    2015-11-01

    The Planetary Data System (PDS) archives, supports, and distributes data of diverse targets, from diverse sources, to diverse users. One of the core problems addressed by the PDS4 data standard redesign was that of metadata - how to accommodate the increasingly sophisticated demands of search interfaces, analytical software, and observational documentation into label standards without imposing limits and constraints that would impinge on the quality or quantity of metadata that any particular observer or team could supply. And yet, as an archive, PDS must have detailed documentation for the metadata in the labels it supports, or the institutional knowledge encoded into those attributes will be lost - putting the data at risk. The PDS4 metadata solution is based on a three-step approach. First, it is built on two key ISO standards: ISO 11179 "Information Technology - Metadata Registries", which provides a common framework and vocabulary for defining metadata attributes; and ISO 14721 "Space Data and Information Transfer Systems - Open Archival Information System (OAIS) Reference Model", which provides the framework for the information architecture that enforces the object-oriented paradigm for metadata modeling. Second, PDS has defined a hierarchical system that allows it to divide its metadata universe into namespaces ("data dictionaries", conceptually), and more importantly to delegate stewardship for a single namespace to a local authority. This means that a mission can develop its own data model with a high degree of autonomy and effectively extend the PDS model to accommodate its own metadata needs within the common ISO 11179 framework. Finally, within a single namespace - even the core PDS namespace - existing metadata structures can be extended and new structures added to the model as new needs are identified. This poster illustrates the PDS4 approach to metadata management and highlights the expected return on the development investment for PDS, users and data preparers.

  3. A SensorML-based Metadata Model and Registry for Ocean Observatories: a Contribution from European Projects NeXOS and FixO3

    NASA Astrophysics Data System (ADS)

    Delory, E.; Jirka, S.

    2016-02-01

    Discovering sensors and observation data is important when enabling the exchange of oceanographic data between observatories and scientists who need the data sets for their work. To better support this discovery process, one task of the European project FixO3 (Fixed-point Open Ocean Observatories) addresses the question of which elements are needed to develop a better registry for sensors. This has resulted in four items which are addressed by the FixO3 project in cooperation with further European projects such as NeXOS (http://www.nexosproject.eu/). 1.) Metadata description format: To store and retrieve information about sensors and platforms, a common approach to providing and encoding the metadata is necessary. For this purpose, the OGC Sensor Model Language (SensorML) 2.0 standard was selected. In particular, the ability to distinguish between sensor types and instances offers new opportunities for a more efficient provision and maintenance of sensor metadata. 2.) Conversion of existing metadata into a SensorML 2.0 representation: To ensure sustainable re-use of already provided metadata content (e.g. from the ESONET-FixO3 yellow pages), a mechanism is needed that can transform these existing metadata sets into the new SensorML 2.0 structure. 3.) Metadata editor: To create descriptions of sensors and platforms, users cannot be expected to manually edit XML-based description files; a visual interface is therefore necessary to support metadata creation. We will outline a prototype of this editor, building upon the development of the ESONET sensor registry interface. 4.) Sensor Metadata Store: A server is needed for storing and querying the created sensor descriptions; different options for this purpose exist and will be discussed. In summary, we will present a set of different elements enabling sensor discovery, ranging from metadata formats and metadata conversion and editing to metadata storage. Furthermore, the current development status will be demonstrated.
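
    A minimal sketch of the type-versus-instance distinction mentioned in item 1, using plain Python dictionaries rather than actual SensorML 2.0 encodings; the element names and values are illustrative assumptions.

        # Sketch: a sensor *type* description shared by many deployed *instances*,
        # mirroring the SensorML 2.0 type/instance idea. Plain dicts with
        # illustrative field names; not a real SensorML encoding.
        sensor_type = {
            "type_id": "CTD-Model-X",
            "observed_properties": ["sea_water_temperature", "sea_water_salinity"],
            "manufacturer": "Example Instruments Ltd.",
        }

        sensor_instance = {
            "instance_id": "CTD-Model-X-SN-0042",
            "type_ref": "CTD-Model-X",       # inherit shared metadata from the type
            "serial_number": "0042",
            "deployment": {"platform": "fixed-point mooring E1", "depth_m": 20},
            "calibration_date": "2015-11-03",
        }

        def resolve(instance, types):
            """Merge instance metadata with its type description."""
            merged = dict(types[instance["type_ref"]])
            merged.update(instance)
            return merged

        types = {sensor_type["type_id"]: sensor_type}
        print(resolve(sensor_instance, types)["observed_properties"])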

  4. HDF-EOS Dump Tools

    NASA Astrophysics Data System (ADS)

    Prasad, U.; Rahabi, A.

    2001-05-01

    The following utilities, developed for dumping HDF-EOS format data, are of special use for Earth science data from NASA's Earth Observing System (EOS). This poster demonstrates their use and application. The first four tools take HDF-EOS data files as input. HDF-EOS Metadata Dumper - metadmp: The metadata dumper extracts metadata from EOS data granules. It operates by simply copying blocks of metadata from the file to the standard output; it does not process the metadata in any way. Since all metadata in EOS granules are encoded in the Object Description Language (ODL), the output of metadmp will be in the form of complete ODL statements. EOS data granules may contain up to three different sets of metadata (Core, Archive, and Structural Metadata). HDF-EOS Contents Dumper - heosls: The heosls dumper displays the contents of HDF-EOS files. This utility provides detailed information on the POINT, SWATH, and GRID data sets in the files; for example, it will list the geolocation fields, data fields and objects. HDF-EOS ASCII Dumper - asciidmp: The ASCII dump utility extracts fields from EOS data granules into plain ASCII text. The output from asciidmp should be easily human readable, and with minor editing asciidmp's output can be made ingestible by any application with ASCII import capabilities. HDF-EOS Binary Dumper - bindmp: The binary dumper utility dumps HDF-EOS objects in binary format. This is useful for feeding the output into existing programs that do not understand HDF, for example custom software and COTS products. HDF-EOS User Friendly Metadata - UFM: The UFM utility is useful for viewing ECS metadata. UFM takes an EOSDIS ODL metadata file and produces an HTML report of the metadata for display using a web browser. HDF-EOS METCHECK - METCHECK: METCHECK can be invoked from either a Unix or DOS environment with a set of command-line options that direct the tool's inputs and output. METCHECK validates the inventory metadata in the (.met) file using the descriptor file (.desc) as the reference. The tool takes a (.desc) file and a (.met) ODL file as inputs, and generates a simple output file containing the results of the checking process.

  5. Metadata Management on the SCEC PetaSHA Project: Helping Users Describe, Discover, Understand, and Use Simulation Data in a Large-Scale Scientific Collaboration

    NASA Astrophysics Data System (ADS)

    Okaya, D.; Deelman, E.; Maechling, P.; Wong-Barnum, M.; Jordan, T. H.; Meyers, D.

    2007-12-01

    Large scientific collaborations, such as the SCEC Petascale Cyberfacility for Physics-based Seismic Hazard Analysis (PetaSHA) Project, involve interactions between many scientists who exchange ideas and research results. These groups must organize, manage, and make accessible their community materials of observational data, derivative (research) results, computational products, and community software. The integration of scientific workflows as a paradigm to solve complex computations provides advantages of efficiency, reliability, repeatability, choices, and ease of use. The underlying resource needed for a scientific workflow to function and create discoverable and exchangeable products is the construction, tracking, and preservation of metadata. In the scientific workflow environment there is a two-tier structure of metadata. Workflow-level metadata and provenance describe operational steps, identity of resources, execution status, and product locations and names. Domain-level metadata essentially define the scientific meaning of data, codes and products. To a large degree the metadata at these two levels are separate. However, between these two levels is a subset of metadata produced at one level but is needed by the other. This crossover metadata suggests that some commonality in metadata handling is needed. SCEC researchers are collaborating with computer scientists at SDSC, the USC Information Sciences Institute, and Carnegie Mellon Univ. in order to perform earthquake science using high-performance computational resources. A primary objective of the "PetaSHA" collaboration is to perform physics-based estimations of strong ground motion associated with real and hypothetical earthquakes located within Southern California. Construction of 3D earth models, earthquake representations, and numerical simulation of seismic waves are key components of these estimations. Scientific workflows are used to orchestrate the sequences of scientific tasks and to access distributed computational facilities such as the NSF TeraGrid. Different types of metadata are produced and captured within the scientific workflows. One workflow within PetaSHA ("Earthworks") performs a linear sequence of tasks with workflow and seismological metadata preserved. Downstream scientific codes ingest these metadata produced by upstream codes. The seismological metadata uses attribute-value pairing in plain text; an identified need is to use more advanced handling methods. Another workflow system within PetaSHA ("Cybershake") involves several complex workflows in order to perform statistical analysis of ground shaking due to thousands of hypothetical but plausible earthquakes. Metadata management has been challenging due to its construction around a number of legacy scientific codes. We describe difficulties arising in the scientific workflow due to the lack of this metadata and suggest corrective steps, which in some cases include the cultural shift of domain science programmers coding for metadata.
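
    A minimal sketch of the plain-text attribute-value pairing mentioned above, with a parser that a downstream code could use to ingest metadata produced upstream; the attribute names and values are illustrative, not those of the Earthworks workflow.

        # Sketch: parse simple "attribute = value" metadata as passed between
        # workflow stages. Attribute names and values are illustrative.
        UPSTREAM_METADATA = """\
        velocity_model = example 3D community velocity model
        source_description = hypothetical M7.8 scenario rupture
        grid_spacing_m = 200
        simulation_length_s = 250
        """

        def parse_attribute_value(text):
            metadata = {}
            for line in text.splitlines():
                if "=" in line:
                    key, value = line.split("=", 1)
                    metadata[key.strip()] = value.strip()
            return metadata

        meta = parse_attribute_value(UPSTREAM_METADATA)
        print(meta["grid_spacing_m"])  # downstream code ingests what upstream produced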

  6. Detection of early subclinical lung disease in children with cystic fibrosis by lung ventilation imaging with hyperpolarised gas MRI.

    PubMed

    Marshall, Helen; Horsley, Alex; Taylor, Chris J; Smith, Laurie; Hughes, David; Horn, Felix C; Swift, Andrew J; Parra-Robles, Juan; Hughes, Paul J; Norquay, Graham; Stewart, Neil J; Collier, Guilhem J; Teare, Dawn; Cunningham, Steve; Aldag, Ina; Wild, Jim M

    2017-08-01

    Hyperpolarised 3He ventilation-MRI, anatomical lung MRI, lung clearance index (LCI), low-dose CT and spirometry were performed on 19 children (6-16 years) with clinically stable mild cystic fibrosis (CF) (FEV1 > -1.96), and 10 controls. All controls had normal spirometry, MRI and LCI. Ventilation-MRI was the most sensitive method of detecting abnormalities, present in 89% of patients with CF, compared with CT abnormalities in 68%, LCI 47% and conventional MRI 22%. Ventilation defects were present in the absence of CT abnormalities and in patients with normal physiology, including LCI. Ventilation-MRI is thus feasible in young children, highly sensitive and provides additional information about lung structure-function relationships. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  7. Analysis of linear energy transfers and quality factors of charged particles produced by spontaneous fission neutrons from 252Cf and 244Pu in the human body.

    PubMed

    Endo, Akira; Sato, Tatsuhiko

    2013-04-01

    Absorbed doses, linear energy transfers (LETs) and quality factors of secondary charged particles in organs and tissues, generated via the interactions of the spontaneous fission neutrons from (252)Cf and (244)Pu within the human body, were studied using the Particle and Heavy Ion Transport Code System (PHITS) coupled with the ICRP Reference Phantom. Both the absorbed doses and the quality factors in target organs generally decrease with increasing distance from the source organ. The analysis of LET distributions of secondary charged particles led to the identification of the relationship between LET spectra and target-source organ locations. A comparison between human body-averaged mean quality factors and fluence-averaged radiation weighting factors showed that the current numerical conventions for the radiation weighting factors of neutrons, updated in ICRP103, and the quality factors for internal exposure are valid.

  8. Performance of Diffusion Aluminide Coatings Applied on Alloy CF8C-Plus at 800°C

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Deepak; Dryepondt, Sebastien N; Zhang, Ying

    2012-01-01

    High performance cast stainless steel, CF8C-Plus, is a low cost alloy with prospective applications ranging from covers and casings of small and medium size gas turbines to turbocharger housings and manifolds in internal combustion engines. Diffusion aluminide coatings were applied on this alloy as a potential strategy for improved oxidation resistance, particularly in wet air and steam. In this paper the performance of the aluminide coatings, evaluated by cyclic oxidation experiments in air containing 10 vol.% H2O at 800 °C and conventional tension-compression low-cycle-fatigue tests in air at 800 °C with a strain range of 0.5%, is presented. The results show that specimens coated by a chemical vapor deposition process provide better oxidation resistance than those coated by an Al-slurry coating process. The application of a coating by pack cementation reduced the fatigue life by 15%.

  9. Chromosome characterization and variability in some Iridaceae from Northeastern Brazil

    PubMed Central

    Alves, Lânia Isis F.; Lima, Saulo Antônio A.; Felix, Leonardo P.

    2011-01-01

    The chromosomes of 15 species of Iridaceae of the genera Alophia, Cipura, Eleutherine, Neomarica and Trimezia (subfamily Iridoideae) were examined after conventional Giemsa staining. The karyotypes of Alophia drummondii (2n = 14+1B, 28, 42 and 56), Cipura paludosa (2n = 14), C. xanthomelas (2n = 28) and Eleutherine bulbosa (2n = 12) were asymmetric; Neomarica candida, N. caerulea, N. humilis, N. glauca, N. gracilis, N. northiana and Neomarica sp. (2n = 18); N. cf. paradoxa (2n = 28), Trimezia fosteriana (2n = 52), T. martinicensis (2n = 54) and T. connata (2n = 82) were all generally symmetric. New diploid numbers of 2n = 56 for Alophia drummondii, 2n = 18 for N. candida, N. humilis, N. glauca, and N. gracilis, 2n = 28 for N. cf. paradoxa, and 2n = 82 for T. connata are reported. The karyotypic evolution of the studied species is discussed. PMID:21734827

  10. Evaluating the privacy properties of telephone metadata.

    PubMed

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C

    2016-05-17

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences.

  11. Studies of Big Data metadata segmentation between relational and non-relational databases

    NASA Astrophysics Data System (ADS)

    Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.

    2015-12-01

    In recent years the concepts of Big Data have become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.
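
    A minimal sketch of the hybrid segmentation idea: frequently queried, fixed-schema fields go into a relational table while the variable-structure remainder is kept as documents. SQLite stands in for the RDBMS and an in-memory dict for the NoSQL store; this is an illustration under those assumptions, not the architecture studied in the paper.

        # Sketch: segment metadata between a relational store (fixed, indexed
        # fields) and a document store (free-form remainder). SQLite stands in
        # for the RDBMS; a dict stands in for the NoSQL side. Illustrative only.
        import json
        import sqlite3

        rdbms = sqlite3.connect(":memory:")
        rdbms.execute("CREATE TABLE tasks (task_id TEXT PRIMARY KEY, status TEXT, created TEXT)")
        document_store = {}  # task_id -> schemaless JSON document

        def store(task_id, status, created, extra):
            rdbms.execute("INSERT INTO tasks VALUES (?, ?, ?)", (task_id, status, created))
            document_store[task_id] = json.dumps(extra)

        store("task-001", "done", "2015-06-01",
              {"software_release": "19.2.4", "input_datasets": ["dataset-0001"], "attempts": 2})

        # Fast relational query on the fixed fields, then hydrate the flexible part.
        row = rdbms.execute("SELECT task_id FROM tasks WHERE status = ?", ("done",)).fetchone()
        print(json.loads(document_store[row[0]])["software_release"])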

  12. Evaluating the privacy properties of telephone metadata

    PubMed Central

    Mayer, Jonathan; Mutchler, Patrick; Mitchell, John C.

    2016-01-01

    Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences. PMID:27185922

  13. Karyotypic diversity in four species of the genus Gymnotus Linnaeus, 1758 (Teleostei, Gymnotiformes, Gymnotidae): physical mapping of ribosomal genes and telomeric sequences

    PubMed Central

    Scacchetti, Priscilla Cardim; Pansonato-Alves, José Carlos; Utsunomia, Ricardo; Oliveira, Claudio; Foresti, Fausto

    2011-01-01

    Conventional (Giemsa, C-Banding, Ag-NORs, CMA3) and molecular (5S rDNA, 18S rDNA, telomeric sequences) cytogenetic studies were carried out in specimens of ten distinct fish populations of the genus Gymnotus (Gymnotus sylvius Albert and Fernandes-Matioli, 1999, Gymnotus inaequilabiatus Valenciennes, 1839, Gymnotus pantherinus Steindachner, 1908, and G. cf. carapo Linnaeus, 1758) from different Brazilian hydrographic basins. Gymnotus sylvius presented a diploid number of 40 chromosomes (22m+12sm+6st), Gymnotus pantherinus presented 52 chromosomes (32m+18sm+2st), while Gymnotus inaequilabiatus (42m+10sm+2a) and Gymnotus cf. carapo (38m+12sm+4st) presented 54 chromosomes. The C-banding technique revealed centromeric marks in all chromosomes of all species. Besides that, conspicuous blocks of heterochromatin were found interstitially on the chromosomes of Gymnotus inaequilabiatus, Gymnotus cf. carapo, and Gymnotus pantherinus. All four species showed single nucleolus organizing regions confirmed by results obtained through Ag-NORs and FISH experiments using 18S rDNA probes, which showed the NORs localized on the first chromosome pair in Gymnotus inaequilabiatus, Gymnotus cf. carapo, and Gymnotus pantherinus, and on pair 2 in Gymnotus sylvius. CMA3 staining revealed additional unrelated NORs marks in Gymnotus sylvius and Gymnotus pantherinus. The 5S rDNA probes revealed signals on one pair in Gymnotus sylvius and two pairs in Gymnotus pantherinus; Gymnotus inaequilabiatus had about seventeen pairs marked, and Gymnotus cf. carapo had about fifteen pairs marked. It is considered that the high amount of heterochromatin identified in the chromosomes of Gymnotus inaequilabiatus and Gymnotus cf. carapo could have facilitated the dispersion of 5S rDNA in these species. Interstitial signals were detected on the first metacentric pair of Gymnotus sylvius by telomeric probes (TTAGGG)n indicating the possible occurrence of chromosomal fusions in this species. The present study reveals valuable cytotaxonomic markers for this group and allows a more precise evaluation of the processes involved in the karyotype differentiation and the interrelationships among different species of the genus Gymnotus. PMID:24260631

  14. “I think we’ve got too many tests!”: Prenatal providers’ reflections on ethical and clinical challenges in the practice integration of cell-free DNA screening

    PubMed Central

    Gammon, B.L.; Kraft, S.A.; Michie, M.; Allyse, M.

    2016-01-01

    Background The recent introduction of cell-free DNA-based non-invasive prenatal screening (cfDNA screening) into clinical practice was expected to revolutionize prenatal testing. cfDNA screening for fetal aneuploidy has demonstrated higher test sensitivity and specificity for some conditions than conventional serum screening and can be conducted early in the pregnancy. However, it is not clear whether and how clinical practices are assimilating this new type of testing into their informed consent and counselling processes. Since the introduction of cfDNA screening into practice in 2011, the uptake and scope have increased dramatically. Prenatal care providers are under pressure to stay up to date with rapidly changing cfDNA screening panels, manage increasing patient demands, and keep up with changing test costs, all while attempting to use the technology responsibly and ethically. While clinical literature on cfDNA screening has shown benefits for specific patient populations, it has also identified significant misunderstandings among providers and patients alike about the power of the technology. The unique features of cfDNA screening, in comparison to established prenatal testing technologies, have implications for informed decision-making and genetic counselling that must be addressed to ensure ethical practice. Objectives This study explored the experiences of prenatal care providers at the forefront of non-invasive genetic screening in the United States to understand how this testing changes the practice of prenatal medicine. We aimed to learn how the experience of providing and offering this testing differs from established prenatal testing methodologies. These differences may necessitate changes to patient education and consent procedures to maintain ethical practice. Methods We used the online American Congress of Obstetricians and Gynecologists Physician Directory to identify a systematic sample of five prenatal care providers in each U.S. state and the District of Columbia. Beginning with the lowest zip code in each state, we took every fifth name from the directory, excluding providers who were retired, did not currently practice in the state in which they were listed, or were not involved in a prenatal specialty. After repeating this step twice and sending a total of 461 invitations, 37 providers expressed interest in participating, and we completed telephone interviews with 21 providers (4.6%). We developed a semi-structured interview guide including questions about providers’ use of and attitudes toward cfDNA screening. A single interviewer conducted and audio-recorded all interviews by telephone, and the interviews lasted approximately 30 minutes each. We collaboratively developed a codebook through an iterative process of transcript review and code application, and a primary coder coded all transcripts. Results Prenatal care providers have varying perspectives on the advantages of cfDNA screening and express a range of concerns regarding the implementation of cfDNA screening in practice. While providers agreed on several advantages of cfDNA, including increased accuracy, earlier return of results, and decreased risk of complications, many expressed concern that there is not enough time to adequately counsel and educate patients on their prenatal screening and testing options. Providers also agreed that demand for cfDNA screening has increased and expressed a desire for more information from professional societies, labs, and publications. 
Providers disagreed about the healthcare implications and future of cfDNA screening. Some providers anticipated that cfDNA screening would decrease healthcare costs when implemented widely and expressed optimism for expanded cfDNA screening panels. Others were concerned that cfDNA screening would increase costs over time and questioned whether the expansion to include microdeletions could be done ethically. Conclusions The perspectives and experiences of the providers in this study allow insight into the clinical benefit, burden on prenatal practice, and potential future of cfDNA screening in clinical practice. Given the likelihood that the scope and uptake of cfDNA screening will continue to increase, it is essential to consider how these changes will affect frontline prenatal care providers and, in turn, patients. Providers’ requests for additional guidance and data as well as their concerns with the lack of time available to explain screening and testing options indicate significant potential issues with patient care. It is important to ensure that the clinical integration of cfDNA screening is managed responsibly and ethically before it expands further, exacerbating pre-existing issues. As prenatal screening evolves, so should informed consent and the resources available to women making decisions. The field must take steps to maximize the advantages of cfDNA screening and responsibly manage its ethical issues. PMID:28180146

  15. "I think we've got too many tests!": Prenatal providers' reflections on ethical and clinical challenges in the practice integration of cell-free DNA screening.

    PubMed

    Gammon, B L; Kraft, S A; Michie, M; Allyse, M

    2016-01-01

    The recent introduction of cell-free DNA-based non-invasive prenatal screening (cfDNA screening) into clinical practice was expected to revolutionize prenatal testing. cfDNA screening for fetal aneuploidy has demonstrated higher test sensitivity and specificity for some conditions than conventional serum screening and can be conducted early in the pregnancy. However, it is not clear whether and how clinical practices are assimilating this new type of testing into their informed consent and counselling processes. Since the introduction of cfDNA screening into practice in 2011, the uptake and scope have increased dramatically. Prenatal care providers are under pressure to stay up to date with rapidly changing cfDNA screening panels, manage increasing patient demands, and keep up with changing test costs, all while attempting to use the technology responsibly and ethically. While clinical literature on cfDNA screening has shown benefits for specific patient populations, it has also identified significant misunderstandings among providers and patients alike about the power of the technology. The unique features of cfDNA screening, in comparison to established prenatal testing technologies, have implications for informed decision-making and genetic counselling that must be addressed to ensure ethical practice. This study explored the experiences of prenatal care providers at the forefront of non-invasive genetic screening in the United States to understand how this testing changes the practice of prenatal medicine. We aimed to learn how the experience of providing and offering this testing differs from established prenatal testing methodologies. These differences may necessitate changes to patient education and consent procedures to maintain ethical practice. We used the online American Congress of Obstetricians and Gynecologists Physician Directory to identify a systematic sample of five prenatal care providers in each U.S. state and the District of Columbia. Beginning with the lowest zip code in each state, we took every fifth name from the directory, excluding providers who were retired, did not currently practice in the state in which they were listed, or were not involved in a prenatal specialty. After repeating this step twice and sending a total of 461 invitations, 37 providers expressed interest in participating, and we completed telephone interviews with 21 providers (4.6%). We developed a semi-structured interview guide including questions about providers' use of and attitudes toward cfDNA screening. A single interviewer conducted and audio-recorded all interviews by telephone, and the interviews lasted approximately 30 minutes each. We collaboratively developed a codebook through an iterative process of transcript review and code application, and a primary coder coded all transcripts. Prenatal care providers have varying perspectives on the advantages of cfDNA screening and express a range of concerns regarding the implementation of cfDNA screening in practice. While providers agreed on several advantages of cfDNA, including increased accuracy, earlier return of results, and decreased risk of complications, many expressed concern that there is not enough time to adequately counsel and educate patients on their prenatal screening and testing options. Providers also agreed that demand for cfDNA screening has increased and expressed a desire for more information from professional societies, labs, and publications. 
Providers disagreed about the healthcare implications and future of cfDNA screening. Some providers anticipated that cfDNA screening would decrease healthcare costs when implemented widely and expressed optimism for expanded cfDNA screening panels. Others were concerned that cfDNA screening would increase costs over time and questioned whether the expansion to include microdeletions could be done ethically. The perspectives and experiences of the providers in this study allow insight into the clinical benefit, burden on prenatal practice, and potential future of cfDNA screening in clinical practice. Given the likelihood that the scope and uptake of cfDNA screening will continue to increase, it is essential to consider how these changes will affect frontline prenatal care providers and, in turn, patients. Providers' requests for additional guidance and data as well as their concerns with the lack of time available to explain screening and testing options indicate significant potential issues with patient care. It is important to ensure that the clinical integration of cfDNA screening is managed responsibly and ethically before it expands further, exacerbating pre-existing issues. As prenatal screening evolves, so should informed consent and the resources available to women making decisions. The field must take steps to maximize the advantages of cfDNA screening and responsibly manage its ethical issues.

  16. Viewing and Editing Earth Science Metadata MOBE: Metadata Object Browser and Editor in Java

    NASA Astrophysics Data System (ADS)

    Chase, A.; Helly, J.

    2002-12-01

    Metadata is an important, yet often neglected, aspect of successful archival efforts. However, generating robust, useful metadata is often a time-consuming and tedious task. We have been approaching this problem from two directions: first, by automating metadata creation, pulling from known sources of data; and second, as detailed here, by developing friendly software for human interaction with the metadata. MOBE and COBE (the Metadata Object Browser and Editor, and the Canonical Object Browser and Editor, respectively) are Java applications for editing and viewing metadata and digital objects. MOBE has already been designed and deployed, and is currently being integrated into other areas of the SIOExplorer project. COBE is in the design and development stage and is being created with the same considerations in mind as those for MOBE. Metadata creation, metadata viewing, data object creation, and data object viewing, when taken on a small scale, are all relatively simple tasks. Computer science, however, has an infamous reputation for transforming the simple into the complex. As a system scales upward to become more robust, new features arise and additional functionality is added to the software being written to manage the system. The software that emerges from such an evolution, though powerful, is often complex and difficult to use. With MOBE the focus is on a tool that does a small number of tasks very well. The result has been an application that enables users to manipulate metadata in an intuitive and effective way. This allows for a tool that serves its purpose without introducing additional cognitive load onto the user, an end goal we continue to pursue.

  17. Managing biomedical image metadata for search and retrieval of similar images.

    PubMed

    Korenblum, Daniel; Rubin, Daniel; Napel, Sandy; Rodriguez, Cesar; Beaulieu, Chris

    2011-08-01

    Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM), to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files via a Web service and parses and stores the metadata in a relational database, allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics such as disease prevalence. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.
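
    As a generic illustration of retrieval by semantic feature overlap (not BIMM's actual algorithm or schema), the sketch below ranks stored images by how many imaging observation characteristics they share with a query; the IOC terms and image identifiers are invented.

    ```python
    def similarity(query_iocs, candidate_iocs):
        """Jaccard overlap between two sets of imaging observation characteristics."""
        q, c = set(query_iocs), set(candidate_iocs)
        return len(q & c) / len(q | c) if q | c else 0.0

    # Invented IOC terms and image identifiers, purely for illustration.
    database = {
        "img_001": {"hypodense", "smooth margin", "homogeneous"},
        "img_002": {"hyperdense", "irregular margin", "heterogeneous"},
        "img_003": {"hypodense", "irregular margin", "homogeneous"},
    }
    query = {"hypodense", "homogeneous", "smooth margin"}

    ranked = sorted(database, key=lambda k: similarity(query, database[k]), reverse=True)
    print(ranked)  # most similar images first
    ```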

  18. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models.

    PubMed

    Misra, Dharitri; Chen, Siyuan; Thoma, George R

    2009-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U. S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system.
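
    As a rough illustration of the rule-based string-pattern idea (not NLM's actual AME rules), the Python sketch below pulls candidate metadata fields out of OCR text with regular expressions keyed to cue strings; the field names, patterns, and sample text are invented.

    ```python
    import re

    # Invented field names and cue patterns; rule-based extraction of this kind
    # normally follows a layout-recognition step that selects the text region.
    RULES = {
        "date": re.compile(
            r"\b(?:January|February|March|April|May|June|July|August|"
            r"September|October|November|December)\s+\d{1,2},\s+\d{4}\b"),
        "docket": re.compile(r"Docket\s+No\.?\s*([A-Z0-9-]+)", re.IGNORECASE),
    }

    def extract_fields(ocr_text):
        """Return the first match for each metadata field found in the text."""
        found = {}
        for field, pattern in RULES.items():
            m = pattern.search(ocr_text)
            if m:
                found[field] = m.group(1) if m.groups() else m.group(0)
        return found

    print(extract_fields("Docket No. FDA-76-N-0123, notice issued on March 5, 1976."))
    ```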

  19. The Metadata Cloud: The Last Piece of a Distributed Data System Model

    NASA Astrophysics Data System (ADS)

    King, T. A.; Cecconi, B.; Hughes, J. S.; Walker, R. J.; Roberts, D.; Thieman, J. R.; Joy, S. P.; Mafi, J. N.; Gangloff, M.

    2012-12-01

    Distributed data systems have existed ever since systems were networked together. Over the years the model for distributed data systems has evolved from basic file transfer to client-server to multi-tiered to grid and finally to cloud-based systems. Initially, metadata was tightly coupled to the data, either by embedding the metadata in the same file containing the data or by co-locating the metadata in commonly named files. As the sources of data have multiplied, data volumes have increased and services have specialized to improve efficiency, and a cloud system model has emerged. In a cloud system, computing and storage are provided as services, with accessibility emphasized over physical location. Computation and data clouds are common implementations. Effectively using the data and computation capabilities requires metadata. When metadata is stored separately from the data, a metadata cloud is formed. With a metadata cloud, information and knowledge about data resources can migrate efficiently from system to system, enabling services and allowing the data to remain efficiently stored until used. This is especially important with "Big Data", where movement of the data is limited by bandwidth. We examine how the metadata cloud completes a general distributed data system model, how standards play a role, and how this relates to the existing types of cloud computing. We also look at the major science data systems in existence and compare each to the generalized cloud system model.

  20. Deploying the ODISEES Ontology-guided Search in the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Gleason, J. L.; Cotnoir, M.; Spaulding, R.; Deardorff, G.

    2016-12-01

    Robust, semantically rich metadata can support data discovery and access, and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Despite this, for users not already familiar with the data in a given archive, most metadata is insufficient to help them find appropriate data for their projects. With this in mind, the Ontology-driven Interactive Search Environment (ODISEES) Data Discovery Portal was developed to enable users to find and download data variables that satisfy precise, parameter-level criteria, even when they know little or nothing about the naming conventions employed by data providers, or where suitable data might be archived. ODISEES relies on an Earth science ontology and metadata repository that provide an ontological framework for describing NASA data holdings with enough detail and fidelity to enable researchers to find, compare and evaluate individual data variables. Users can search for data by indicating the specific parameters desired, and comparing the results in a table that lets them quickly determine which data is most suitable. ODISEES and OLYMPUS, a tool for generating the semantically enhanced metadata used by ODISEES, are being developed in collaboration with the NASA Earth Exchange (NEX) project at the NASA Ames Research Center to prototype a robust data discovery and access service that could be made available to NEX users. NEX is a collaborative platform that provides researchers with access to TB to PB-scale datasets and analysis tools to operate on those data. By integrating ODISEES into the NEX Web Portal we hope to enable NEX users to locate datasets relevant to their research and download them directly into the NAS environment, where they can run applications using those datasets on the NAS supercomputers. This poster will describe the prototype integration of ODISEES into the NEX portal development environment, the mechanism implemented to use NASA APIs to retrieve data, and the approach to transfer data into the NAS supercomputing environment. Finally, we will describe the end-to-end demonstration of the capabilities implemented. This work was funded by the Advanced Information Systems Technology Program of NASA's Research Opportunities in Space and Earth Science.

  1. EMODnet Physics: open and free marine physical data for science and for society

    NASA Astrophysics Data System (ADS)

    Nolan, G.; Novellino, A.; Gorringe, P.; Manzella, G. M. R., Sr.; Schaap, D.; Pouliquen, S.; Richards, L.

    2016-02-01

    Europe is sustaining a long-term strategy on Blue Growth, looking at seas and oceans as drivers for innovation and growth. A number of weaknesses have been identified, among which are gaps in knowledge and data about the state of our oceans, seabed resources, marine life, and risks to habitats and ecosystems. The European Marine Observation and Data Network (EMODnet) has been created to improve the usefulness, for scientific, regulatory and commercial purposes, of observations and the resulting marine data collected and held by European public and private bodies. EMODnet Physics provides access to an archived and real-time data catalogue on physical conditions in Europe's seas and oceans. The overall objectives are to provide access to archived and near-real-time data on physical conditions in Europe's seas and oceans by means of a dedicated portal and to determine how well the data meet the needs of users from industry, public authorities and science. EMODnet Physics is contributing to the broader initiative 'Marine Knowledge 2020', and in particular to the implementation of the European Copernicus programme, an EU-wide programme that aims to support policymakers, business, and citizens with improved environmental information. In the global context, Copernicus is an integral part of the Global Earth Observation System of Systems. Near-real-time data and metadata are provided by data owners, organized at the EuroGOOS level according to its regional operational systems (ROOSs) infrastructure and conventions, and made available through the EMODnet Physics user interface. The latest 60 days are freely viewable and downloadable, while access to older data (monthly archives) requires credentials. Archived data series and metadata are organized according to, and in collaboration with, the NODC network (SeaDataNet). The data and metadata cover measurements of winds at the sea surface, waves, temperature and salinity, water velocities, light attenuation, sea level and ice coverage. EMODnet Physics has the specific objective of processing physical data into interoperable formats, which includes agreed standards, common baselines or reference conditions, and assessments of their accuracy and precision. The data and metadata are accessible through an ISO-, OGC-, and INSPIRE-compliant portal that is operational 24/7.

  2. Turning Data into Information: Assessing and Reporting GIS Metadata Integrity Using Integrated Computing Technologies

    ERIC Educational Resources Information Center

    Mulrooney, Timothy J.

    2009-01-01

    A Geographic Information System (GIS) serves as the tangible and intangible means by which spatially related phenomena can be created, analyzed and rendered. GIS metadata serves as the formal framework to catalog information about a GIS data set. Metadata is independent of the encoded spatial and attribute information. GIS metadata is a subset of…

  3. Integrating XQuery-Enabled SCORM XML Metadata Repositories into an RDF-Based E-Learning P2P Network

    ERIC Educational Resources Information Center

    Qu, Changtao; Nejdl, Wolfgang

    2004-01-01

    Edutella is an RDF-based E-Learning P2P network that aims to accommodate heterogeneous learning resource metadata repositories in a P2P manner and to further facilitate the exchange of metadata between these repositories based on RDF. Whereas Edutella provides RDF metadata repositories with a quite natural integration approach, XML metadata…

  4. Raising orphans from a metadata morass: A researcher's guide to re-use of public 'omics data.

    PubMed

    Bhandary, Priyanka; Seetharam, Arun S; Arendsee, Zebulun W; Hur, Manhoi; Wurtele, Eve Syrkin

    2018-02-01

    More than 15 petabases of raw RNAseq data is now accessible through public repositories. Acquisition of other 'omics data types is expanding, though most lack a centralized archival repository. Data-reuse provides tremendous opportunity to extract new knowledge from existing experiments, and offers a unique opportunity for robust, multi-'omics analyses by merging metadata (information about experimental design, biological samples, protocols) and data from multiple experiments. We illustrate how predictive research can be accelerated by meta-analysis with a study of orphan (species-specific) genes. Computational predictions are critical to infer orphan function because their coding sequences provide very few clues. The metadata in public databases is often confusing; a test case with Zea mays mRNA seq data reveals a high proportion of missing, misleading or incomplete metadata. This metadata morass significantly diminishes the insight that can be extracted from these data. We provide tips for data submitters and users, including specific recommendations to improve metadata quality by more use of controlled vocabulary and by metadata reviews. Finally, we advocate for a unified, straightforward metadata submission and retrieval system. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Science friction: data, metadata, and collaboration.

    PubMed

    Edwards, Paul N; Mayernik, Matthew S; Batcheller, Archer L; Bowker, Geoffrey C; Borgman, Christine L

    2011-10-01

    When scientists from two or more disciplines work together on related problems, they often face what we call 'science friction'. As science becomes more data-driven, collaborative, and interdisciplinary, demand increases for interoperability among data, tools, and services. Metadata--usually viewed simply as 'data about data', describing objects such as books, journal articles, or datasets--serve key roles in interoperability. Yet we find that metadata may be a source of friction between scientific collaborators, impeding data sharing. We propose an alternative view of metadata, focusing on its role in an ephemeral process of scientific communication, rather than as an enduring outcome or product. We report examples of highly useful, yet ad hoc, incomplete, loosely structured, and mutable descriptions of data found in our ethnographic studies of several large projects in the environmental sciences. Based on this evidence, we argue that while metadata products can be powerful resources, usually they must be supplemented with metadata processes. Metadata-as-process suggests the very large role of the ad hoc, the incomplete, and the unfinished in everyday scientific work.

  6. Recipes for Semantic Web Dog Food — The ESWC and ISWC Metadata Projects

    NASA Astrophysics Data System (ADS)

    Möller, Knud; Heath, Tom; Handschuh, Siegfried; Domingue, John

    Semantic Web conferences such as ESWC and ISWC offer prime opportunities to test and showcase semantic technologies. Conference metadata about people, papers and talks is diverse in nature and neither so small as to be uninteresting nor so big as to be unmanageable. Many metadata-related challenges that may arise in the Semantic Web at large are also present here. Metadata must be generated from sources which are often unstructured and hard to process, and may originate from many different players; therefore, suitable workflows must be established. Moreover, the generated metadata must use appropriate formats and vocabularies, and be served in a way that is consistent with the principles of linked data. This paper reports on the metadata efforts from ESWC and ISWC, identifies specific issues and barriers encountered during the projects, and discusses how these were approached. Recommendations are made as to how these may be addressed in the future, and we discuss how these solutions may generalize to metadata production for the Semantic Web at large.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassianov, Evgueni I.; Riley, Erin A.; Kleiss, Jessica

    Cloud amount is an essential and extensively used macrophysical parameter of cumulus clouds. It is commonly defined as a cloud fraction (CF) from zenith-pointing ground-based active and passive remote sensing. However, conventional retrievals of CF from the remote sensing data with very narrow field-of-view (FOV) may not be representative of the surrounding area. Here we assess its representativeness using an integrated dataset collected at the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program's Southern Great Plains (SGP) site in Oklahoma, USA. For our assessment with focus on selected days with single-layer cumulus clouds (2005-2016), we include the narrow-FOV ARM Active Remotely Sensed Clouds Locations (ARSCL) and large-FOV Total Sky Imager (TSI) cloud products, the 915-MHz Radar Wind Profiler (RWP) measurements of wind speed and direction, and also high-resolution satellite images from Landsat and the Moderate Resolution Imaging Spectroradiometer (MODIS). We demonstrate that a root-mean-square difference (RMSD) between the 15-min averaged ARSCL cloud fraction (CF) and the 15-min averaged TSI fractional sky cover (FSC) is large (up to 0.3). We also discuss how the horizontal distribution of clouds can modify the obtained large RMSD using a new uniformity metric. The latter utilizes the spatial distribution of the FSC over the 100° FOV TSI images obtained with high temporal resolution (30 sec sampling). We demonstrate that cases with more uniform spatial distribution of FSC show better agreement between the narrow-FOV CF and large-FOV FSC, reducing the RMSD by up to a factor of 2.
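
    As a rough illustration of the comparison metric (not the study's actual processing code), the Python sketch below computes the root-mean-square difference between two 15-min averaged cloud-amount series, treating NaN as a missing retrieval; the array names and values are invented.

    ```python
    import numpy as np

    def rmsd(cf_narrow, fsc_wide):
        """Root-mean-square difference between two cloud-amount time series.

        cf_narrow: 15-min averaged narrow-FOV cloud fraction (e.g. ARSCL CF).
        fsc_wide:  15-min averaged large-FOV fractional sky cover (e.g. TSI FSC).
        Both are 1-D arrays on the same 15-min intervals; NaN marks intervals
        without a valid retrieval.
        """
        a = np.asarray(cf_narrow, dtype=float)
        b = np.asarray(fsc_wide, dtype=float)
        ok = ~np.isnan(a) & ~np.isnan(b)
        return float(np.sqrt(np.mean((a[ok] - b[ok]) ** 2)))

    # Toy values: a fairly uniform scene versus a patchy one.
    print(rmsd([0.40, 0.50, 0.60], [0.42, 0.51, 0.58]))  # small difference
    print(rmsd([0.10, 0.90, 0.20], [0.40, 0.50, 0.60]))  # large difference
    ```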

  8. Impact of Donor Arterial Partial Pressure of Oxygen on Outcomes After Lung Transplantation in Adult Cystic Fibrosis Recipients.

    PubMed

    Hayes, Don; Kopp, Benjamin T; Kirkby, Stephen E; Reynolds, Susan D; Mansour, Heidi M; Tobias, Joseph D; Tumin, Dmitry

    2016-08-01

    Donor PaO2 levels are used for assessing organs for lung transplantation (LTx), but survival implications of PaO2 levels in adult cystic fibrosis (CF) patients receiving LTx are unclear. UNOS registry data spanning 2005-2013 were used to test for associations of donor PaO2 with patient survival and bronchiolitis obliterans syndrome (BOS) in adult (age ≥ 18 years) first-time LTx recipients diagnosed with CF. The analysis included 1587 patients, of whom 1420 had complete data for multivariable Cox models. No statistically significant differences among donor PaO2 categories of ≤200, 201-300, 301-400, or >400 mmHg were found in univariate survival analysis (log-rank test p = 0.290). BOS onset did not significantly differ across donor PaO2 categories (Chi-square p = 0.480). Multivariable Cox models of patient survival supported the lack of difference across donor PaO2 categories. Interaction analysis found a modest difference in survival between the two top categories of donor PaO2 when examining patients with body mass index (BMI) in the lowest decile (≤16.5 kg/m(2)). Donor PaO2 was not associated with survival or BOS onset in adult CF patients undergoing LTx. Notwithstanding statistically significant interactions between donor PaO2 and BMI, there was no evidence of post-LTx survival risk associated with donor PaO2 below conventional thresholds in any subgroup of adults with CF.

  9. Personalized or Precision Medicine? The Example of Cystic Fibrosis

    PubMed Central

    Marson, Fernando A. L.; Bertuzzo, Carmen S.; Ribeiro, José D.

    2017-01-01

    Advances in human genetics, through the identification of disease-associated variants, have culminated in an understanding of human variability. With this genetic knowledge, the specificity of each individual's clinical phenotype and drug response can be understood. Using cystic fibrosis (CF) as an example, the new terms that have emerged, such as personalized medicine and precision medicine, can be characterized. The genetic knowledge in CF is broad, and the presence of a monogenic disease caused by mutations in the CFTR gene enables phenotype–genotype association studies (including the response to drugs), considering the wide clinical and laboratory spectrum dependent on the mutual action of genotype, environment, and lifestyle. In CF, personalized medicine is treatment directed at the symptoms, and this treatment is adjusted depending on the patient's phenotype. More recently, however, the term precision medicine has begun to be widely used, although its correct application and understanding are still vague and poorly characterized. In precision medicine, we understand the individual as a response to the interrelation between environment, lifestyle, and genetic factors, which has enabled the advent of new therapeutic models, such as adjustment of conventional drugs by individual patient dosage, drug type, and response; development of new drugs (read-through, broker, enhancer, stabilizer, and amplifier compounds); genome editing by homologous recombination, zinc finger nucleases, TALEN (transcription activator-like effector nuclease), and CRISPR-Cas9 (clustered regularly interspaced short palindromic repeats-CRISPR-associated endonuclease 9); and gene therapy. Thus, we introduce the terms personalized medicine and precision medicine based on CF. PMID:28676762

  10. BASINS Metadata

    EPA Pesticide Factsheets

    Metadata, or data about data, describe the content, quality, condition, and other characteristics of data. Geospatial metadata are critical to data discovery and serve as the fuel for the Geospatial One-Stop data portal.

  11. HDF4 Maps: For Now and For the Future

    NASA Astrophysics Data System (ADS)

    Plutchak, J.; Aydt, R.; Folk, M. J.

    2013-12-01

    Data formats and access tools necessarily change as technology improves to address emerging requirements with new capabilities. This on-going process inevitably leaves behind significant data collections in legacy formats that are difficult to support and sustain. NASA ESDIS and The HDF Group currently face this problem with large and growing archives of data in HDF4, an older version of the HDF format. Indefinitely guaranteeing the ability to read these data with multi-platform libraries in many languages is very difficult. As an alternative, HDF and NASA worked together to create maps of the files that contain metadata and information about data types, locations, and sizes of data objects in the files. These maps are written in XML and have successfully been used to access and understand data in HDF4 files without the HDF libraries. While originally developed to support sustainable access to these data, these maps can also be used to provide access to HDF4 metadata, facilitate user understanding of files prior to download, and validate the files for compliance with particular conventions. These capabilities are now available as a service for HDF4 archives and users.
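
    As a rough sketch of the access pattern such file maps make possible, the Python fragment below reads byte offset, length, and type information for one dataset from a simplified, hypothetical map XML and pulls the raw values out of the HDF4 file without the HDF4 library. The element and attribute names are invented and do not reproduce the actual map schema, and a single contiguous read ignores chunking and compression.

    ```python
    import xml.etree.ElementTree as ET
    import numpy as np

    def read_mapped_dataset(map_xml_path, hdf4_path, dataset_name):
        """Read one dataset's raw values using offset/length/type info from a
        (hypothetical, simplified) HDF4 file map, without the HDF4 library."""
        root = ET.parse(map_xml_path).getroot()
        # Invented element/attribute names; the real map schema is richer.
        node = root.find(f".//dataset[@name='{dataset_name}']")
        offset = int(node.get("offset"))
        nbytes = int(node.get("nBytes"))
        dtype = np.dtype(node.get("dtype"))  # e.g. '>f4' for big-endian float32
        with open(hdf4_path, "rb") as f:
            f.seek(offset)
            raw = f.read(nbytes)
        return np.frombuffer(raw, dtype=dtype)

    # values = read_mapped_dataset("granule.map.xml", "granule.hdf", "Temperature")
    ```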

  12. Open Access to Geophysical Data

    NASA Astrophysics Data System (ADS)

    Sergeyeva, Nataliya A.; Zabarinskaya, Ludmila P.

    2017-04-01

    The Russian World Data Centers for Solar-Terrestrial Physics & Solid Earth Physics, hosted by the Geophysical Center of the Russian Academy of Sciences, are Regular Members of the ICSU World Data System. Guided by the principles of the WDS Constitution and the WDS Data Sharing Principles, the WDCs provide full and open access to data, long-term data stewardship, compliance with agreed-upon data standards and conventions, and mechanisms to facilitate and improve access to data. Historical and current geophysical data on different media, in the form of digital data sets, analog records, collections of maps, and descriptions are stored and collected in the Centers. The WDCs regularly add new data to their repositories and databases and keep them up to date. The WDCs now focus on four new projects, aimed at increasing the amount of data available online through retrospective data collection and digital preservation of data; creating a modern system for the registration and publication of data with digital object identifier (DOI) assignment, and promoting a data citation culture; creating databases instead of a file system for more convenient access to data; and participating in the WDS Metadata Catalogue and Data Portal by creating metadata for the WDCs' information resources.

  13. Hydratools, a MATLAB® based data processing package for Sontek Hydra data

    USGS Publications Warehouse

    Martini, M.; Lightsom, F.L.; Sherwood, C.R.; Xu, Jie; Lacy, J.R.; Ramsey, A.; Horwitz, R.

    2005-01-01

    The U.S. Geological Survey (USGS) has developed a set of MATLAB tools to process and convert data collected by Sontek Hydra instruments to netCDF, which is a format used by the USGS to process and archive oceanographic time-series data. The USGS makes high-resolution current measurements within 1.5 meters of the bottom. These data are used in combination with other instrument data from sediment transport studies to develop sediment transport models. Instrument manufacturers provide software which outputs unique binary data formats. Multiple data formats are cumbersome. The USGS solution is to translate data streams into a common data format: netCDF. The Hydratools toolbox is written to create netCDF format files following EPIC conventions, complete with embedded metadata. Data are accepted from both the ADV and the PCADP. The toolbox will detect and remove bad data, substitute other sources of heading and tilt measurements if necessary, apply ambiguity corrections, calculate statistics, return information about data quality, and organize metadata. Standardized processing and archiving makes these data more easily and routinely accessible locally and over the Internet. In addition, documentation of the techniques used in the toolbox provides a baseline reference for others utilizing the data.
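
    The general pattern, writing a processed time series to netCDF with metadata embedded as global and variable attributes, can be sketched in a few lines of Python (the toolbox itself is MATLAB); the attribute names and the u_1205 variable code below follow the spirit of EPIC-style files but are illustrative rather than a faithful reproduction of Hydratools output.

    ```python
    from netCDF4 import Dataset
    import numpy as np

    time = np.arange(0, 3600, 600)                       # seconds since series start
    u = np.array([0.05, 0.07, 0.04, 0.06, 0.05, 0.06])   # eastward velocity, m/s

    nc = Dataset("adv_example.nc", "w")
    nc.setncatts({                                       # illustrative global metadata
        "DATA_TYPE": "TIME",
        "INST_TYPE": "Sontek ADV",
        "history": "Processed with a hydratools-style workflow (sketch only).",
    })
    nc.createDimension("time", time.size)
    t = nc.createVariable("time", "f8", ("time",))
    t.units = "seconds since 2004-01-01 00:00:00 UTC"
    v = nc.createVariable("u_1205", "f4", ("time",))     # EPIC-style code for east velocity
    v.units = "m s-1"
    v.long_name = "Eastward velocity"
    t[:] = time
    v[:] = u
    nc.close()
    ```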

  14. Best Practices for Data Publication to Facilitate Integration into NED: A Reference Guide for Authors

    NASA Astrophysics Data System (ADS)

    Schmitz, Marion; Mazzarella, J. M.; Madore, B. F.; Ogle, P. M.; Ebert, R.; Baker, K.; Chan, H.; Chen, X.; Fadda, D.; Frayer, C.; Jacobson, J. D.; LaGue, C.; Lo, T. M.; Pevunova, O.; Terek, S.; Steer, I.

    2014-01-01

    At the urging of the NASA/IPAC Extragalactic Database (NED) Users Committee, the NED Team has prepared and published on its website a new document titled "Best Practices for Data Publication to Facilitate Integration into NED: A Reference Guide for Authors" (http://ned.ipac.caltech.edu/docs/BPDP/NED_BPDP.pdf). We hope that journal publishers will incorporate links to this living document in their Instructions to Authors to provide a practical reference for authors, referees, and science editors so as to help avoid various pitfalls that often impede the interpretation of data and metadata, and also delay their integration into NED, SIMBAD, ADS and other systems. In particular, we discuss the importance of using proper naming conventions, providing the epoch and system of coordinates, including units and uncertainties, and giving sufficient metadata for the unambiguous interpretation of tabular, imaging, and spectral data. The biggest impediments to the assimilation of new data from the literature into NED are ambiguous object names and non-unique, coordinate-based identifiers. A Checklist of Recommendations will be presented which includes links to sections of the Best Practices document that provide further examples, explanation, and rationale.

  15. Visualization of JPEG Metadata

    NASA Astrophysics Data System (ADS)

    Malik Mohamad, Kamaruddin; Deris, Mustafa Mat

    There is much more information embedded in a JPEG image than just the graphics. Visualization of its metadata would benefit digital forensic investigators, allowing them to view embedded data, including in corrupted images where no graphics can be displayed, in order to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors and extraction tools are already available, but they mostly focus on visualizing the attribute information of JPEG Exif. However, none visualize metadata by consolidating the marker summary, header structure, Huffman tables and quantization tables in a single program. In this paper, metadata visualization is done by developing a program that is able to summarize all existing markers, the header structure, the Huffman tables and the quantization tables in a JPEG file. The result shows that visualization of metadata helps in viewing the hidden information within a JPEG more easily.
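
    As a minimal sketch of the marker summary such a tool consolidates, the Python fragment below walks a JPEG byte stream and lists each marker with its segment length; it stops at the start-of-scan marker and ignores entropy-coded data and various edge cases, so it is an illustration rather than a forensic-grade parser.

    ```python
    def summarize_markers(path):
        """List (marker, segment_length) pairs up to the start-of-scan marker."""
        with open(path, "rb") as f:
            data = f.read()
        markers, i = [], 0
        while i < len(data) - 1:
            if data[i] != 0xFF:
                i += 1
                continue
            marker = data[i + 1]
            if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
                markers.append((hex(marker), 0))      # standalone: SOI, EOI, RSTn
                i += 2
            elif marker in (0x00, 0xFF):
                i += 1                                # stuffed or fill byte, not a marker
            else:
                length = int.from_bytes(data[i + 2:i + 4], "big")
                markers.append((hex(marker), length))
                if marker == 0xDA:                    # SOS: entropy-coded data follows
                    break
                i += 2 + length
        return markers

    for marker, length in summarize_markers("example.jpg"):
        print(marker, length)
    ```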

  16. Use of a metadata documentation and search tool for large data volumes: The NGEE arctic example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet; Hook, Leslie A; Killeffer, Terri S

    The Online Metadata Editor (OME) is a web-based tool to help document scientific data in a well-structured, popular scientific metadata format. In this paper, we will discuss the newest tool that Oak Ridge National Laboratory (ORNL) has developed to generate, edit, and manage metadata, and how it is helping data-intensive science centers and projects, such as the U.S. Department of Energy's Next Generation Ecosystem Experiments (NGEE) in the Arctic, to prepare metadata and make their big data produce big science and lead to new discoveries.

  17. Creating preservation metadata from XML-metadata profiles

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Bertelmann, Roland; Gebauer, Petra; Hasler, Tim; Klump, Jens; Kirchner, Ingo; Peters-Kottig, Wolfgang; Mettig, Nora; Rusch, Beate

    2014-05-01

    Registration of dataset DOIs at DataCite makes research data citable and comes with the obligation to keep data accessible in the future. In addition, many universities and research institutions measure data that are unique and not repeatable, such as the data produced by an observational network, and they want to keep these data for future generations. In consequence, such data should be ingested into preservation systems that automatically take care of file format changes. Open source preservation software developed along the definitions of the ISO OAIS reference model is available, but during ingest of data and metadata there are still problems to be solved. File format validation is difficult because format validators are not only remarkably slow; due to the variety of file formats, different validators also return conflicting identification profiles for identical data. These conflicts are hard to resolve. Preservation systems have a deficit in their support of custom metadata. Furthermore, data producers are sometimes not aware that quality metadata is a key issue for the re-use of data. In the project EWIG, a university institute and a research institute work together with Zuse-Institute Berlin, which acts as an infrastructure facility, to generate exemplary workflows for research data into OAIS-compliant archives, with emphasis on the geosciences. The Institute for Meteorology provides time-series data from an urban monitoring network, whereas GFZ Potsdam delivers file-based data from research projects. To identify problems in existing preservation workflows, the technical work is complemented by interviews with data practitioners. Policies for handling data and metadata are developed. Furthermore, university teaching material is created to raise future scientists' awareness of research data management. As a testbed for ingest workflows, the digital preservation system Archivematica [1] is used. During the ingest process, metadata is generated that is compliant with the Metadata Encoding and Transmission Standard (METS). To find datasets in future portals and to make use of these data in one's own scientific work, proper selection of discovery metadata and application metadata is very important. Some XML metadata profiles are not suitable for preservation, because version changes are very fast and make it nearly impossible to automate the migration. For other XML metadata profiles, schema definitions are changed after publication of the profile or the schema definitions become inaccessible, which might cause problems during validation of the metadata inside the preservation system [2]. Some metadata profiles are not used widely enough and might not even exist in the future. Eventually, discovery and application metadata have to be embedded into the mdWrap subtree of the METS XML. [1] http://www.archivematica.org [2] http://dx.doi.org/10.2218/ijdc.v7i1.215
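
    A minimal sketch of that final embedding step, assuming lxml and showing only the METS skeleton (a real submission package also needs fileSec, structMap, and the full discovery-metadata payload), wraps a Dublin Core element inside the mdWrap/xmlData subtree of a METS dmdSec:

    ```python
    from lxml import etree

    METS = "http://www.loc.gov/METS/"
    DC = "http://purl.org/dc/elements/1.1/"

    mets = etree.Element(f"{{{METS}}}mets", nsmap={"mets": METS, "dc": DC})
    dmd = etree.SubElement(mets, f"{{{METS}}}dmdSec", ID="DMD1")
    wrap = etree.SubElement(dmd, f"{{{METS}}}mdWrap", MDTYPE="DC")
    xml_data = etree.SubElement(wrap, f"{{{METS}}}xmlData")

    # The embedded discovery metadata; a real package would carry the whole
    # chosen profile here instead of a single Dublin Core element.
    title = etree.SubElement(xml_data, f"{{{DC}}}title")
    title.text = "Urban monitoring network time series, station A (example)"

    print(etree.tostring(mets, pretty_print=True).decode())
    ```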

  18. Reducing supply chain energy use in next-generation vehicle lightweighting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanes, Rebecca J.; Das, Sujit; Carpenter, Alberta

    Vehicle lightweighting reduces the amount of fuel consumed in a vehicle's use phase, but depending on what lightweight materials replace the conventional materials, and in what amounts, the manufacturing energy may increase or decrease. For carbon fiber reinforced polymer (CFRP), a next-generation lightweighting material, the increase in vehicle manufacturing energy is greater than the fuel savings, resulting in a net increase in energy consumption over a vehicle's manufacturing and use relative to a standard non-lightweighted car. [1] This work explores ways to reduce the supply chain energy of CFRP lightweighted vehicles through alternative production technologies and energy efficiency improvements. The objective is to determine if CFRP can offer energy savings comparable to or greater than aluminum, a conventional lightweighting material. Results of this analysis can be used to inform additional research and development efforts in CFRP production and future directions in lightweight vehicle production. The CFRP supply chain is modeled using the Material Flows through Industry (MFI) scenario modeling tool, which calculates 'mine to materials' energy consumption, material inventories and greenhouse gas emissions for industrial supply chains. In this analysis, the MFI tool is used to model the supply chains of two lightweighted vehicles, an aluminum intensive vehicle (AIV) and a carbon fiber intensive vehicle (CFV), under several manufacturing scenarios. Vehicle specifications are given in [1]. Scenarios investigated cover alternative carbon fiber (CF) feedstocks and energy efficiency improvements at various points in the vehicle supply chains. The alternative CF feedstocks are polyacrylonitrile, lignin and petroleum-derived mesophase pitch. Scenarios in which the energy efficiency of CF and CFRP production increases are explored using sector efficiency potential values, which quantify the reduction in energy consumption achievable when process equipment is upgraded to the most efficient available. Preliminary analyses indicate that producing CF from lignin instead of polyacrylonitrile, the most commonly used feedstock, reduces energy consumption in the CFRP supply chain by 7.5%, and that implementing energy efficient process equipment produces an additional 8% reduction. Final results will show if these potential reductions are sufficient to make the CFV energy savings comparable with AIV energy savings. [1] Das, S., Graziano, D., Upadhyayula, V. K., Masanet, E., Riddle, M., & Cresko, J. (2016). Vehicle lightweighting energy use impacts in US light-duty vehicle fleet. Sustainable Materials and Technologies, 8, 5-13.

  19. Assisted editing of SensorML with EDI. A bottom-up scenario towards the definition of sensor profiles.

    NASA Astrophysics Data System (ADS)

    Oggioni, Alessandro; Tagliolato, Paolo; Fugazza, Cristiano; Bastianini, Mauro; Pavesi, Fabio; Pepe, Monica; Menegon, Stefano; Basoni, Anna; Carrara, Paola

    2015-04-01

    Sensor observation systems for environmental data have become increasingly important in recent years. The EGU's Informatics in Oceanography and Ocean Science track stressed the importance of management tools and solutions for marine infrastructures. We think that full interoperability among sensor systems is still an open issue and that the solution involves providing appropriate metadata. Several open source applications implement the SWE specification and, particularly, the Sensor Observation Service (SOS) standard. These applications allow for the exchange of data and metadata in XML format between computer systems. However, there is a lack of metadata editing tools supporting end users in this activity. Generally speaking, it is hard for users to provide sensor metadata in the SensorML format without dedicated tools. In particular, such a tool should ease metadata editing by providing, for standard sensors, all the invariant information to be included in sensor metadata, thus allowing the user to concentrate on the metadata items that are related to the specific deployment. RITMARE, the Italian flagship project on marine research, envisages a subproject, SP7, for the set-up of the project's spatial data infrastructure. SP7 developed EDI, a general purpose, template-driven metadata editor that is composed of a backend web service and an HTML5/JavaScript client. EDI can be customized for managing the creation of generic metadata encoded as XML. Once tailored to a specific metadata format, EDI presents users with a web form with advanced auto-completion and validation capabilities. In the case of sensor metadata (SensorML versions 1.0.1 and 2.0), the EDI client is instructed to send an "insert sensor" request to an SOS endpoint in order to save the metadata in an SOS server. In the first phase of project RITMARE, EDI has been used to simplify the creation of SensorML metadata from scratch by the involved researchers and data managers. An interesting by-product of this ongoing work is an emerging archive of predefined sensor descriptions. This information is being collected in order to further ease metadata creation in the next phase of the project. Users will be able to choose among a number of sensor and sensor platform prototypes: these will be specific instances on which it will be possible to define, in a bottom-up approach, "sensor profiles". We report on the outcome of this activity.
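
    A greatly simplified sketch of that "insert sensor" step is shown below, assuming the Python requests library and a hypothetical transactional SOS endpoint; a real SOS 2.0 InsertSensor request also carries observable properties and insertion metadata, so the request skeleton here is illustrative only.

    ```python
    import re
    import requests

    def insert_sensor(sos_url, sensorml_path):
        """POST a SensorML description to a transactional SOS (simplified skeleton)."""
        sensorml = open(sensorml_path, encoding="utf-8").read()
        sensorml = re.sub(r"^\s*<\?xml[^>]*\?>", "", sensorml)  # drop XML declaration
        body = f"""<swes:InsertSensor service="SOS" version="2.0.0"
            xmlns:swes="http://www.opengis.net/swes/2.0">
          <swes:procedureDescriptionFormat>http://www.opengis.net/sensorml/2.0</swes:procedureDescriptionFormat>
          <swes:procedureDescription>{sensorml}</swes:procedureDescription>
        </swes:InsertSensor>"""
        resp = requests.post(sos_url, data=body.encode("utf-8"),
                             headers={"Content-Type": "application/xml"},
                             timeout=60)
        resp.raise_for_status()
        return resp.text

    # insert_sensor("https://example.org/sos/service", "ctd_profiler_sensorml.xml")
    ```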

  20. Publishing NASA Metadata as Linked Open Data for Semantic Mashups

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook

    2014-05-01

    Data providers are now publishing more metadata in more interoperable forms, e.g. Atom or RSS 'casts', as Linked Open Data (LOD), or as ISO metadata records. A major effort on the part of NASA's Earth Science Data and Information System (ESDIS) project is the aggregation of metadata that enables greater data interoperability among scientific data sets regardless of source or application. Both the Earth Observing System (EOS) ClearingHOuse (ECHO) and the Global Change Master Directory (GCMD) repositories contain metadata records for NASA (and other) datasets and provided services. These records contain typical fields for each dataset (or software service) such as the source, creation date, cognizant institution, related access URLs, and domain and variable keywords to enable discovery. Under a NASA ACCESS grant, we demonstrated how to publish the ECHO and GCMD dataset and services metadata as LOD in the RDF format. Both sets of metadata are now queryable at SPARQL endpoints and available for integration into "semantic mashups" in the browser. It is straightforward to reformat sets of XML metadata, including ISO, into simple RDF and then later refine and improve the RDF predicates by reusing known namespaces such as Dublin Core, GeoRSS, etc. All scientific metadata should be part of the LOD world. In addition, we developed an "instant" drill-down and browse interface that provides faceted navigation so that the user can discover and explore the 25,000 datasets and 3000 services. The available facets and the free-text search box appear in the left panel, and the instantly updated results for the dataset search appear in the right panel. The user can constrain the value of a metadata facet simply by clicking on a word (or phrase) in the "word cloud" of values for each facet. The display section for each dataset includes the important metadata fields, a full description of the dataset, potentially some related URLs, and a "search" button that points to an OpenSearch GUI that is pre-configured to search for granules within the dataset. We will present our experiences with converting NASA metadata into LOD, discuss the challenges, illustrate some of the enabled mashups, and demonstrate the latest version of the "instant browse" interface for navigating multiple metadata collections.
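
    As a minimal sketch of that reformatting step (one already-parsed record turned into simple RDF with Dublin Core and DCAT terms via rdflib; the identifier and field values are examples, not actual ECHO or GCMD content):

    ```python
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS, RDF, DCAT

    # One already-parsed metadata record; identifier and values are examples.
    record = {
        "id": "https://example.org/dataset/sea_surface_temperature_l3",
        "title": "Sea Surface Temperature, Level 3 (example)",
        "created": "2013-06-01",
        "keywords": ["sea surface temperature", "ocean"],
    }

    g = Graph()
    ds = URIRef(record["id"])
    g.add((ds, RDF.type, DCAT.Dataset))
    g.add((ds, DCTERMS.title, Literal(record["title"])))
    g.add((ds, DCTERMS.created, Literal(record["created"])))
    for kw in record["keywords"]:
        g.add((ds, DCAT.keyword, Literal(kw)))

    print(g.serialize(format="turtle"))
    ```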

  1. Evolution of cystic fibrosis lung function in the early years.

    PubMed

    Bush, Andrew; Sly, Peter D

    2015-11-01

    Most treatment of newborn screening-diagnosed cystic fibrosis is not evidence-based; there are very few randomized controlled trials (RCTs). Furthermore, the advent of novel molecular therapies, which could be started at diagnosis, mandates performing RCTs in very young children. However, unless the natural history of early cystic fibrosis lung disease is known, RCTs are impossible. Here, we review the results of two large prospective cohorts of these infants - the London Cystic Fibrosis Collaboration (LCFC) (London, UK) and the Australian Respiratory Early Surveillance Team for Cystic Fibrosis (AREST-CF) (Australia). Nutritional status remained excellent in both cohorts. Both cohorts reported abnormal lung function at age 3 months. AREST-CF, which previously reported rapidly declining preschool lung function, now reports good conventional school-age spirometry. LCFC reported improvement between 3 months and 1 year, and stability in the second year. AREST-CF also reported a high prevalence of high-resolution computed tomographic abnormalities related to free neutrophil elastase in bronchoalveolar lavage; LCFC reported high-resolution computed tomographic changes at 1 year, which were too mild to be scored reproducibly. At least in the first 2 years of life, lung function is not a good end-point for RCTs; routine bronchoalveolar lavage and HRCT cannot be justified. Newborn screening has greatly improved outcomes, but we need better point-of-care biomarkers.

  2. The PDS4 Metadata Management System

    NASA Astrophysics Data System (ADS)

    Raugh, A. C.; Hughes, J. S.

    2018-04-01

    We present the key features of the Planetary Data System (PDS) PDS4 Information Model as an extendable metadata management system for planetary metadata related to data structure, analysis/interpretation, and provenance.

  3. Acute and Short-Term Toxicities of Conventionally Fractionated Versus Hypofractionated Whole Breast Irradiation in a Prospective, Randomized Trial

    PubMed Central

    Shaitelman, Simona F.; Schlembach, Pamela J.; Arzu, Isidora; Ballo, Matthew; Bloom, Elizabeth S.; Buchholz, Daniel; Chronowski, Gregory M.; Dvorak, Tomas; Grade, Emily; Hoffman, Karen E.; Kelly, Patrick; Ludwig, Michelle; Perkins, George H.; Reed, Valerie; Shah, Shalin; Stauder, Michael C.; Strom, Eric A.; Tereffe, Welela; Woodward, Wendy A.; Ensor, Joe; Baumann, Donald; Thompson, Alastair M.; Amaya, Diana; Davis, Tanisha; Guerra, William; Hamblin, Lois; Hortobagyi, Gabriel; Hunt, Kelly K.; Buchholz, Thomas A.; Smith, Benjamin D.

    2015-01-01

    IMPORTANCE The most appropriate dose-fractionation for whole breast irradiation (WBI) remains uncertain. OBJECTIVE To assess acute and six-month toxicity and quality of life (QoL) with conventionally fractionated WBI (CF-WBI) versus hypofractionated WBI (HF-WBI). DESIGN Unblinded randomized trial of CF-WBI (n=149; 50 Gy/25 fractions + boost [10–14 Gy/5–7 fractions]) versus HF-WBI (n=138; 42.56 Gy/16 fractions + boost [10–12.5 Gy/4–5 fractions]). SETTING Community-based and academic cancer centers. PARTICIPANTS 287 women age ≥ 40 years with stage 0–II breast cancer treated with breast-conserving surgery for whom whole breast irradiation without addition of a third field was recommended. 76% (n=217) were overweight or obese. Patients were enrolled from February 2011 through February 2014. INTERVENTION(S) FOR CLINICAL TRIALS CF-WBI versus HF-WBI. MAIN OUTCOME MEASURES Physician-reported acute and six-month toxicities using NCICTCv4.0 and patient-reported QoL using the FACT-B version 4. All analyses were intention-to-treat, with outcomes compared using chi-square, Cochran-Armitage test, and ordinal logistic regression. Patients were followed for a minimum of 6 months. RESULTS Treatment arms were well-matched for baseline characteristics including FACT-B total score (P=0.46) and individual QoL items such as lack of energy (P=0.86) and trouble meeting family needs (P=0.54). Maximal physician-reported acute dermatitis (P<0.001), pruritus (P<0.001), breast pain (P=0.001), hyperpigmentation (P=0.002), and fatigue (P=0.02) during radiation were lower in patients randomized to HF-WBI. Overall grade ≥2 acute toxicity was less with HF-WBI vs. CF-WBI (47% vs. 78%; P<0.001). Six months after radiation, physicians reported less fatigue in patients randomized to HF-WBI (P=0.01), and patients randomized to HF-WBI reported less lack of energy (P<0.001) and less trouble meeting family needs (P=0.01). Multivariable regression confirmed the superiority of HF-WBI in terms of patient-reported lack of energy (OR 0.39, 95% CI 0.24–0.63) and trouble meeting family needs (OR 0.34, 95% CI 0.16–0.75). CONCLUSIONS AND RELEVANCE HF-WBI appears to yield less acute toxicity than CF-WBI, as well as less fatigue and trouble meeting family needs six months after completing radiation. These findings should be communicated to patients as part of shared decision-making. TRIAL REGISTRATION NCT01266642 (https://clinicaltrials.gov/ct2/show/NCT01266642) PMID:26247543

  4. Enabling Interoperability and Servicing Multiple User Segments Through Web Services, Standards, and Data Tools

    NASA Astrophysics Data System (ADS)

    Palanisamy, Giriprakash; Wilson, Bruce E.; Cook, Robert B.; Lenhardt, Chris W.; Santhana Vannan, Suresh; Pan, Jerry; McMurry, Ben F.; Devarakonda, Ranjeet

    2010-12-01

    The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) is one of the science-oriented data centers in EOSDIS, aligned primarily with terrestrial ecology. The ORNL DAAC archives and serves data from NASA-funded field campaigns (such as BOREAS, FIFE, and LBA), regional and global data sets relevant to biogeochemical cycles, land validation studies for remote sensing, and source code for some terrestrial ecology models. Users of the ORNL DAAC include field ecologists, remote sensing scientists, modelers at various scales, synthesis scientific groups, a range of educational users (particularly baccalaureate and graduate instruction), and decision support analysts. It is clear that the wide range of users served by the ORNL DAAC have differing needs and differing capabilities for accessing and using data. It is also not possible for the ORNL DAAC, or the other data centers in EOSDIS, to develop all of the tools and interfaces to support even most of the potential uses of data directly. As is typical of Information Technology to support a research enterprise, the user needs will continue to evolve rapidly over time and users themselves cannot predict future needs, as those needs depend on the results of current investigation. The ORNL DAAC is addressing these needs by targeted implementation of web services and tools which can be consumed by other applications, so that a modeler can retrieve data in netCDF format with the Climate and Forecast (CF) convention and a field ecologist can retrieve subsets of that same data in a comma separated value format, suitable for use in Excel or R. Tools such as our MODIS Subsetting capability, the Spatial Data Access Tool (SDAT; based on OGC web services), and OPeNDAP-compliant servers such as THREDDS particularly enable such diverse means of access. We also seek interoperability of metadata, recognizing that terrestrial ecology is a field where there are a very large number of relevant data repositories. ORNL DAAC metadata is published to several metadata repositories using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH), to increase the chances that users can find data holdings relevant to their particular scientific problem. ORNL also seeks to leverage technology across these various data projects and encourage standardization of processes and technical architecture. This standardization is behind current efforts involving the use of Drupal and Fedora Commons. This poster describes the current and planned approaches that the ORNL DAAC is taking to enable cost-effective interoperability among data centers, both across the NASA EOSDIS data centers and across the international spectrum of terrestrial ecology-related data centers. The poster will highlight the standards that we are currently using across data formats, metadata formats, and data protocols. References: [1] Devarakonda R., et al. Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics (2010), 3(1): 87-94. [2] Devarakonda R., et al. Data sharing and retrieval using OAI-PMH. Earth Science Informatics (2011), 4(1): 1-5.
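
    As a minimal sketch of the kind of OAI-PMH harvest this publishing supports (the endpoint URL below is a placeholder, not the actual ORNL DAAC service), a client issues a ListRecords request and reads the Dublin Core records out of the response:

    ```python
    import requests
    import xml.etree.ElementTree as ET

    NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
          "dc": "http://purl.org/dc/elements/1.1/"}

    resp = requests.get("https://example.org/oai",            # placeholder endpoint
                        params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
                        timeout=30)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    for rec in root.findall(".//oai:record", NS):
        ident = rec.findtext(".//oai:identifier", namespaces=NS)
        title = rec.findtext(".//dc:title", namespaces=NS)
        print(ident, "->", title)
    ```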

  5. Design of Community Resource Inventories as a Component of Scalable Earth Science Infrastructure: Experience of the Earthcube CINERGI Project

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Richard, S. M.; Valentine, D. W., Jr.; Grethe, J. S.; Hsu, L.; Malik, T.; Bermudez, L. E.; Gupta, A.; Lehnert, K. A.; Whitenack, T.; Ozyurt, I. B.; Condit, C.; Calderon, R.; Musil, L.

    2014-12-01

    EarthCube is envisioned as a cyberinfrastructure that fosters new, transformational geoscience by enabling sharing, understanding and scientifically sound and efficient re-use of formerly unconnected data resources, software, models, repositories, and computational power. Its purpose is to enable science enterprise and workforce development via an extensible and adaptable collaboration and resource integration framework. A key component of this vision is the development of comprehensive inventories supporting resource discovery and re-use across geoscience domains. The goal of the EarthCube CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) project is to create a methodology and assemble a large inventory of high-quality information resources with standard metadata descriptions and traceable provenance. The inventory is compiled from metadata catalogs maintained by geoscience data facilities, as well as from user contributions. The latter mechanism relies on community resource viewers: online applications that support update and curation of metadata records. Once harvested into CINERGI, metadata records from domain catalogs and community resource viewers are loaded into a staging database implemented in MongoDB and validated for compliance with the ISO 19139 metadata schema. Several types of metadata defects detected by the validation engine are automatically corrected with the help of several information extractors or flagged for manual curation. The metadata harvesting, validation and processing components generate provenance statements using W3C PROV notation, which are stored in a Neo4j database. The curated metadata, along with the provenance information, is re-published and accessed programmatically and via a CINERGI online application. This presentation focuses on the role of resource inventories in a scalable and adaptable information infrastructure, and on the CINERGI metadata pipeline and its implementation challenges. Key project components are described at the project's website (http://workspace.earthcube.org/cinergi), which also provides access to the initial resource inventory, the inventory metadata model, metadata entry forms and a collection of the community resource viewers.
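
    A simplified sketch of the kind of provenance statement such a pipeline might emit is shown below in W3C PROV-N notation; the identifiers, namespace, and activity names are illustrative and do not reproduce CINERGI's actual provenance model.

    ```python
    def prov_statements(harvested_id, curated_id, activity_id, agent_id):
        """Compose a small PROV-N document linking a curated record to its source."""
        return "\n".join([
            "document",
            "prefix ex <http://example.org/cinergi#>",
            f"entity(ex:{harvested_id})",
            f"entity(ex:{curated_id})",
            f"activity(ex:{activity_id}, -, -)",
            f"agent(ex:{agent_id})",
            f"used(ex:{activity_id}, ex:{harvested_id}, -)",
            f"wasGeneratedBy(ex:{curated_id}, ex:{activity_id}, -)",
            f"wasAssociatedWith(ex:{activity_id}, ex:{agent_id})",
            f"wasDerivedFrom(ex:{curated_id}, ex:{harvested_id})",
            "endDocument",
        ])

    print(prov_statements("rec42_harvested", "rec42_curated",
                          "keyword_enhancer_run1", "cinergi_pipeline"))
    ```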

  6. Organic rankine cycle fluid

    DOEpatents

    Brasz, Joost J.; Jonsson, Ulf J.

    2006-09-05

    A method of operating an organic Rankine cycle system wherein a liquid refrigerant is circulated to an evaporator where heat is introduced to the refrigerant to convert it to vapor. The vapor is then passed through a turbine, with the resulting cooled vapor then passing through a condenser for condensing the vapor to a liquid. The refrigerant is one of CF3CF2C(O)CF(CF3)2, (CF3)2CFC(O)CF(CF3)2, CF3(CF2)2C(O)CF(CF3)2, CF3(CF2)3C(O)CF(CF3)2, CF3(CF2)5C(O)CF3, CF3CF2C(O)CF2CF2CF3, CF3C(O)CF(CF3)2.

  7. Metadata improvements driving new tools and services at a NASA data center

    NASA Astrophysics Data System (ADS)

    Moroni, D. F.; Hausman, J.; Foti, G.; Armstrong, E. M.

    2011-12-01

    The NASA Physical Oceanography DAAC (PO.DAAC) is responsible for distributing and maintaining satellite-derived oceanographic data from a number of NASA and non-NASA missions for the physical disciplines of ocean winds, sea surface temperature, ocean topography and gravity. Currently its holdings consist of over 600 datasets with a data archive in excess of 200 terabytes. The PO.DAAC has recently embarked on a metadata quality and completeness project to migrate, update and improve metadata records for over 300 public datasets. An interactive database management tool has been developed to allow data scientists to enter, update and maintain metadata records. This tool communicates directly with PO.DAAC's Data Management and Archiving System (DMAS), which serves as the new archival and distribution backbone as well as a permanent repository of dataset and granule-level metadata. Although we will briefly discuss the tool, more important ramifications are the ability to now expose, propagate and leverage the metadata in a number of ways. First, the metadata are exposed through a faceted and free-text search interface directly from Drupal-based PO.DAAC web pages, allowing for quick browsing and data discovery, especially by "drilling" through the various facet levels that organize datasets by time/space resolution, processing level, sensor, measurement type, etc. Furthermore, the metadata can now be exposed through web services to produce metadata records in a number of different formats such as FGDC and ISO 19115, or potentially propagated to visualization and subsetting tools, and other discovery interfaces. The fundamental concept is that the metadata form the essential bridge between the user and the tool or discovery mechanism for a broad range of ocean earth science data records.
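
    As a generic illustration of faceted discovery over such metadata records (not PO.DAAC's implementation; the records and facet names below are invented), each facet constraint simply narrows the candidate set:

    ```python
    def facet_search(records, **constraints):
        """Return the records whose metadata matches every facet constraint."""
        return [r for r in records
                if all(r.get(field) == value for field, value in constraints.items())]

    # Invented records and facet names, for illustration only.
    datasets = [
        {"id": "sst_level4_example", "processing_level": "L4", "parameter": "SST"},
        {"id": "winds_level2_example", "processing_level": "L2", "parameter": "wind"},
        {"id": "sst_level3_example", "processing_level": "L3", "parameter": "SST"},
    ]

    print([r["id"] for r in facet_search(datasets, parameter="SST")])
    print([r["id"] for r in facet_search(datasets, parameter="SST", processing_level="L4")])
    ```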

  8. VAPOR PRESSURES, LIQUID MOLAR VOLUMES, VAPOR NON- IDEALITY, AND CRITICAL PROPERTIES OF CF3OCF2CF2CF3, c-CF2CF2CF2CF2O, CF3OCF2OCF3, AND CF3OCF2CF2H

    EPA Science Inventory

    New measurements of the thermophysical properties of CF3OCF2CF2CF3 and c -CF2CF2CF2CF2O are reported from T ≈ 235 K to the critical region. Liquid-phase volumetric results for CF3OCF2OCF3 and CF3OCF2CF2H (235 < T/K < 303) are reported to supplement the information already availab...

  9. EPA Metadata Style Guide Keywords and EPA Organization Names

    EPA Pesticide Factsheets

    The keywords and EPA organization names listed below, along with EPA’s Metadata Style Guide, are intended to provide suggestions and guidance to assist with the standardization of metadata records.

  10. Interpreting the ASTM 'content standard for digital geospatial metadata'

    USGS Publications Warehouse

    Nebert, Douglas D.

    1996-01-01

    ASTM and the Federal Geographic Data Committee have developed a content standard for spatial metadata to facilitate documentation, discovery, and retrieval of digital spatial data using vendor-independent terminology. Spatial metadata elements are identifiable quality and content characteristics of a data set that can be tied to a geographic location or area. Several Office of Management and Budget Circulars and initiatives have been issued that specify improved cataloguing of and accessibility to federal data holdings. An Executive Order further requires the use of the metadata content standard to document digital spatial data sets. Collection and reporting of spatial metadata for field investigations performed for the federal government is an anticipated requirement. This paper provides an overview of the draft spatial metadata content standard and a description of how the standard could be applied to investigations collecting spatially-referenced field data.

  11. A System for Automated Extraction of Metadata from Scanned Documents using Layout Recognition and String Pattern Search Models

    PubMed Central

    Misra, Dharitri; Chen, Siyuan; Thoma, George R.

    2010-01-01

    One of the most expensive aspects of archiving digital documents is the manual acquisition of context-sensitive metadata useful for the subsequent discovery of, and access to, the archived items. For certain types of textual documents, such as journal articles, pamphlets, official government records, etc., where the metadata is contained within the body of the documents, a cost-effective method is to identify and extract the metadata in an automated way, applying machine learning and string pattern search techniques. At the U.S. National Library of Medicine (NLM) we have developed an automated metadata extraction (AME) system that employs layout classification and recognition models with a metadata pattern search model for a text corpus with structured or semi-structured information. A combination of Support Vector Machine and Hidden Markov Model classifiers is used to create the layout recognition models from a training set of the corpus, following which a rule-based metadata search model is used to extract the embedded metadata by analyzing the string patterns within and surrounding each field in the recognized layouts. In this paper, we describe the design of our AME system, with a focus on the metadata search model. We present the extraction results for a historic collection from the Food and Drug Administration, and outline how the system may be adapted for similar collections. Finally, we discuss some ongoing enhancements to our AME system. PMID:21179386
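
    A minimal sketch of the two-stage idea, not the NLM AME system itself: a supervised classifier assigns a page to a layout class, then a rule-based pattern search extracts a field from the text. The training snippets, layout labels, and date pattern are hypothetical, and the Hidden Markov Model stage of the paper is omitted here.

        import re
        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC

        # Toy training set: page text -> layout class.
        pages = ["NOTICE OF ADVERSE FINDINGS Center for Drug Evaluation",
                 "MEMORANDUM To: District Office From: Compliance Branch"]
        layouts = ["notice", "memo"]
        layout_clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(pages, layouts)

        # Rule-based search for one metadata field (a date) within the page text.
        DATE_PATTERN = re.compile(
            r"\b(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\s+\d{1,2},\s+\d{4}")

        def extract_metadata(page_text: str) -> dict:
            layout = layout_clf.predict([page_text])[0]
            match = DATE_PATTERN.search(page_text)
            return {"layout": layout, "date": match.group(0) if match else None}

        print(extract_metadata("MEMORANDUM To: District Office  Date: March 3, 1998"))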

  12. The Arctic Observing Network (AON)Cooperative Arctic Data and Information Service (CADIS)

    NASA Astrophysics Data System (ADS)

    Moore, J.; Fetterer, F.; Middleton, D.; Ramamurthy, M.; Barry, R.

    2007-12-01

    The Arctic Observing Network (AON) is intended to be a federation of 34 land, atmosphere and ocean observation sites, some already operating and some newly funded by the U.S. National Science Foundation. This International Polar Year (IPY) initiative will acquire a major portion of the data coming from the interagency Study of Environmental Arctic Change (SEARCH). AON will succeed in supporting the science envisioned by its planners only if it functions as a system and not as a collection of independent observation programs. Development and implementation of a comprehensive data management strategy will be key to the success of this effort. AON planners envision an ideal data management system that includes a portal through which scientists can submit metadata and datasets at a single location and search the complete archive to find all data relevant to a location or process; all data have browse imagery and complete documentation; time series or fields can be plotted online; and all data are in a relational database so that multiple data sets and sources can be queried and retrieved. The Cooperative Arctic Data and Information Service (CADIS) will provide near-real-time data delivery, a long-term repository for data, a portal for data discovery, and tools to manipulate data by building on existing tools like the Unidata Integrated Data Viewer (IDV). Our approach to the data integration challenge is to start by asking investigators to provide metadata via a general-purpose user interface. An entry tool assists PIs in writing metadata and submitting data. Data can be submitted to the archive in NetCDF with Climate and Forecast conventions or in one of several other standard formats where possible. CADIS is a joint effort of the University Corporation for Atmospheric Research (UCAR), the National Snow and Ice Data Center (NSIDC), and the National Center for Atmospheric Research (NCAR). In the first year, we are concentrating on establishing metadata protocols that are compatible with international standards, and on demonstrating data submission, search and visualization tools with a subset of AON data. These capabilities will be expanded in years 2 and 3. By working with AON investigators and by using evolving conventions for in situ data formats as they mature, we hope to bring CADIS to the full level of data integration imagined by AON planners. The CADIS development will be described in terms of challenges, implementation strategies and progress to date. The developers are making a conscious effort to integrate this system and its data holdings with the complementary efforts in the SEARCH and IPY programs. The interdisciplinary content of the data, the variations in format and documentation, as well as its geographic coverage across the Arctic Basin all impact the form and effectiveness of the CADIS system architecture. Solutions to the complexity of implementing a comprehensive data management strategy across this diversity will be a focus of the presentation.
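
    To illustrate what "NetCDF with Climate and Forecast conventions" means in practice, the sketch below writes a small time series with CF-style attributes using the netCDF4 library. The file name, variable, attribute values, and convention version string are illustrative rather than CADIS requirements.

        from netCDF4 import Dataset
        import numpy as np

        with Dataset("aon_station_example.nc", "w") as nc:
            # Global attributes identifying the convention in use.
            nc.Conventions = "CF-1.6"
            nc.title = "Example AON in situ air temperature series"

            nc.createDimension("time", None)
            time = nc.createVariable("time", "f8", ("time",))
            time.units = "hours since 2007-01-01 00:00:00"
            time.standard_name = "time"
            time.calendar = "gregorian"

            temp = nc.createVariable("air_temperature", "f4", ("time",))
            temp.units = "K"
            temp.standard_name = "air_temperature"

            time[:] = np.arange(24.0)
            temp[:] = 250.0 + 5.0 * np.random.rand(24)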

  13. Comparison of thin-film resistance heat-transfer gages with thin-skin transient calorimeter gages in conventional hypersonic wind tunnels

    NASA Technical Reports Server (NTRS)

    Miller, C. G., III

    1981-01-01

    Thin-film gages deposited at the stagnation region of small (8.1-mm-diameter) hemispheres and gages mounted flush with the surface of a sharp-leading-edge flat plate were tested in the Langley continuous-flow hypersonic tunnel and in the Langley hypersonic CF4 tunnel. Two substrate materials were tested: quartz and a machinable glass-ceramic. Small hemispheres were also tested utilizing the thin-skin transient calorimeter technique usually employed in conventional tunnels. One transient calorimeter model was a thin shell of stainless steel, and the other was a thin-skin insert of stainless steel mounted into a hemisphere fabricated from a machinable glass-ceramic. Measured heat-transfer rates from the various hemispheres were compared with one another and with predicted rates. The results demonstrate the feasibility and advantages of using thin-film resistance heat-transfer gages in conventional hypersonic wind tunnels over a wide range of conditions.

  14. The Importance of Metadata in System Development and IKM

    DTIC Science & Technology

    2003-02-01

    Defence R&D Canada – Atlantic, Technical Memorandum DRDC Atlantic TM 2003-011, February 2003. ...it is important for searches and providing relevant information to the client. A comparison of metadata standards was conducted with emphasis on

  15. The Global Streamflow Indices and Metadata Archive (GSIM) - Part 1: The production of a daily streamflow archive and metadata

    NASA Astrophysics Data System (ADS)

    Do, Hong Xuan; Gudmundsson, Lukas; Leonard, Michael; Westra, Seth

    2018-04-01

    This is the first part of a two-paper series presenting the Global Streamflow Indices and Metadata archive (GSIM), a worldwide collection of metadata and indices derived from more than 35 000 daily streamflow time series. This paper focuses on the compilation of the daily streamflow time series based on 12 free-to-access streamflow databases (seven national databases and five international collections). It also describes the development of three metadata products (freely available at https://doi.pangaea.de/10.1594/PANGAEA.887477): (1) a GSIM catalogue collating basic metadata associated with each time series, (2) catchment boundaries for the contributing area of each gauge, and (3) catchment metadata extracted from 12 gridded global data products representing essential properties such as land cover type, soil type, and climate and topographic characteristics. The quality of the delineated catchment boundary is also made available and should be consulted in GSIM application. The second paper in the series then explores production and analysis of streamflow indices. Having collated an unprecedented number of stations and associated metadata, GSIM can be used to advance large-scale hydrological research and improve understanding of the global water cycle.

  16. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
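
    The sketch below illustrates the graph data model the abstract describes, with files and user-defined relationships as first-class objects. It uses networkx rather than QFS's Quasar query language, so the node names, attributes, and query are assumptions made for illustration.

        import networkx as nx

        fs = nx.DiGraph()
        fs.add_node("run_042.h5", kind="file", experiment="shock-tube", quality="raw")
        fs.add_node("run_042_summary.csv", kind="file", experiment="shock-tube", quality="derived")
        fs.add_edge("run_042_summary.csv", "run_042.h5", relationship="derived_from")

        # A query analogous to "find files derived from raw shock-tube data".
        hits = [src for src, dst, attrs in fs.edges(data=True)
                if attrs["relationship"] == "derived_from"
                and fs.nodes[dst]["experiment"] == "shock-tube"]
        print(hits)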

  17. Achieving Sub-Second Search in the CMR

    NASA Astrophysics Data System (ADS)

    Gilman, J.; Baynes, K.; Pilone, D.; Mitchell, A. E.; Murphy, K. J.

    2014-12-01

    The Common Metadata Repository (CMR) is the next generation Earth Science Metadata catalog for NASA's Earth Observing data. It joins together the holdings from the EOS Clearing House (ECHO) and the Global Change Master Directory (GCMD), creating a unified, authoritative source for EOSDIS metadata. The CMR allows ingest in many different formats while providing consistent search behavior and retrieval in any supported format. Performance is a critical component of the CMR, ensuring improved data discovery and client interactivity. The CMR delivers sub-second search performance for any of the common query conditions (including spatial) across hundreds of millions of metadata granules. It also allows the addition of new metadata concepts such as visualizations, parameter metadata, and documentation. The CMR's goals presented many challenges. This talk will describe the CMR architecture, design, and innovations that were made to achieve its goals. This includes: * Architectural features like immutability and backpressure. * Data management techniques such as caching and parallel loading that give big performance gains. * Open Source and COTS tools like Elasticsearch search engine. * Adoption of Clojure, a functional programming language for the Java Virtual Machine. * Development of a custom spatial search plugin for Elasticsearch and why it was necessary. * Introduction of a unified model for metadata that maps every supported metadata format to a consistent domain model.
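
    For orientation, the sketch below shows the shape of a keyword-plus-bounding-box search expressed in the standard Elasticsearch query DSL through the Python client. It is not the CMR implementation (which is written in Clojure and uses a custom spatial plugin); the index name, field names, and endpoint are assumptions.

        from elasticsearch import Elasticsearch

        es = Elasticsearch("http://localhost:9200")
        query = {
            "bool": {
                "must": [{"match": {"platform": "Terra"}}],
                "filter": [{
                    "geo_bounding_box": {
                        "granule_location": {
                            "top_left": {"lat": 50.0, "lon": -130.0},
                            "bottom_right": {"lat": 20.0, "lon": -60.0},
                        }
                    }
                }],
            }
        }
        # Recent Python clients accept the query directly; older ones take body={"query": ...}.
        response = es.search(index="granules", query=query, size=10)
        for hit in response["hits"]["hits"]:
            print(hit["_id"], hit["_source"].get("granule_ur"))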

  18. Syntactic and Semantic Validation without a Metadata Management System

    NASA Technical Reports Server (NTRS)

    Pollack, Janine; Gokey, Christopher D.; Kendig, David; Olsen, Lola; Wharton, Stephen W. (Technical Monitor)

    2001-01-01

    The ability to maintain quality information is essential to securing confidence in any system for which the information serves as a data source. NASA's Global Change Master Directory (GCMD), an online Earth science data locator, holds over 9000 data set descriptions and is in a constant state of flux as metadata are created and updated on a daily basis. In such a system, maintaining the consistency and integrity of these metadata is crucial. The GCMD has developed a metadata management system utilizing XML, controlled vocabulary, and Java technologies to ensure the metadata not only adhere to valid syntax, but also exhibit proper semantics.
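
    The sketch below separates the two kinds of checks the abstract distinguishes: syntactic validity against an XML schema and semantic validity against a controlled vocabulary. It is a minimal illustration, not the GCMD system (which is Java-based); the schema path, element name, and vocabulary are assumptions.

        from lxml import etree

        CONTROLLED_KEYWORDS = {"OCEANS", "ATMOSPHERE", "LAND SURFACE"}  # hypothetical vocabulary

        def validate(xml_path: str, xsd_path: str) -> list[str]:
            errors = []
            doc = etree.parse(xml_path)

            # Syntactic check: does the record conform to the schema?
            schema = etree.XMLSchema(etree.parse(xsd_path))
            if not schema.validate(doc):
                errors.extend(str(e) for e in schema.error_log)

            # Semantic check: are keyword values drawn from the controlled vocabulary?
            for kw in doc.findall(".//Keyword"):
                if (kw.text or "").strip().upper() not in CONTROLLED_KEYWORDS:
                    errors.append(f"Unrecognized keyword: {kw.text!r}")
            return errors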

  19. Data Publication Process for CMIP5 Data and the Role of PIDs within Federated Earth System Science Projects

    NASA Astrophysics Data System (ADS)

    Stockhause, M.; Höck, H.; Toussaint, F.; Weigel, T.; Lautenschlager, M.

    2012-12-01

    We present the publication process for the CMIP5 (Coupled Model Intercomparison Project Phase 5) data with special emphasis on the current role of identifiers and the potential future role of PIDs in such distributed technical infrastructures. The DataCite data publication with DOI assignment finalizes the three-level quality control procedure for CMIP5 data (Stockhause et al., 2012). WDCC utilizes the Assistant System Atarrabi to support the publication process. Atarrabi is a web-based workflow system for metadata reviews by data creators and Publication Agents (PAs). Within the quality checks for level 3, all available information in the different infrastructure components is cross-checked for consistency by the DataCite PA. This information includes: metadata on data, metadata in the long-term archive of the Publication Agency, quality information, and external metadata on model and simulation (CIM). For these consistency checks, metadata related to the data publication have to be identified. The Data Reference Syntax (DRS) convention functions as a global identifier for data. Since the DRS structures the data hierarchically, it can be used to identify data collections like DataCite publication units, i.e. all data belonging to a CMIP5 simulation. Every technical component of the infrastructure uses DRS or maps to it, but there is no central repository storing DRS_ids. Thus they occasionally have to be mapped. Additional local identifiers are used within the different technical infrastructure components. Identification of related pieces of information in their repositories is cumbersome and tricky for the PA. How could PIDs improve the situation? To establish a reliable distributed data and metadata infrastructure, PIDs for all objects are needed as well as relations between them. An ideal data publication scenario for federated community projects within Earth System Sciences, e.g. CMIP, would be: 1. Data creators at the modeling centers define their simulation, related metadata, and software, which are assigned PIDs. 2. During ESGF data publication the data entities are assigned PIDs with references to the PIDs of step 1. Since we deal with different hierarchical levels, the definition of collections on these levels is advantageous. A possible implementation concept using Handles is described by Weigel et al. (2012). 3. Quality results are assigned PID(s) and a reference to the data. A quality PID is added as a reference to the data collection PID. 4. The PA accesses the PID on the data collection to get the data and all related information for cross-checking. The presented example of the technical infrastructure for the CMIP5 data distribution shows the importance of PIDs, especially as the data are distributed over multiple repositories worldwide and additional separate pieces of data-related information are independently collected from the data. References: Stockhause, M., Höck, H., Toussaint, F., Lautenschlager, M. (2012): 'Quality assessment concept of the World Data Center for Climate and its application to CMIP5 data', Geosci. Model Dev. Discuss., 5, 781-802, doi:10.5194/gmdd-5-781-2012. Weigel, T., et al. (2012): 'Structural Elements in a Persistent Identifier Infrastructure and Resulting Benefits for the Earth Science Community', submitted to AGU 2012 Session IN009.
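
    To make the role of the DRS as a global identifier concrete, the sketch below splits a DRS-style path into named components so records held in different repositories can be matched. The component ordering shown is illustrative; the CMIP5 DRS document is the normative reference.

        DRS_COMPONENTS = [
            "activity", "product", "institute", "model", "experiment",
            "frequency", "realm", "mip_table", "ensemble", "version", "variable",
        ]

        def parse_drs(drs_path: str) -> dict:
            parts = drs_path.strip("/").split("/")
            if len(parts) != len(DRS_COMPONENTS):
                raise ValueError(f"Expected {len(DRS_COMPONENTS)} components, got {len(parts)}")
            return dict(zip(DRS_COMPONENTS, parts))

        example = "cmip5/output1/MPI-M/MPI-ESM-LR/historical/mon/atmos/Amon/r1i1p1/v20120315/tas"
        print(parse_drs(example))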

  20. Definition of a CDI metadata profile and its ISO 19139 based encoding

    NASA Astrophysics Data System (ADS)

    Boldrini, Enrico; de Korte, Arjen; Santoro, Mattia; Schaap, Dick M. A.; Nativi, Stefano; Manzella, Giuseppe

    2010-05-01

    The Common Data Index (CDI) is the middleware service adopted by SeaDataNet for discovery and query. The primary goal of the EU-funded project SeaDataNet is to develop a system which provides transparent access to marine data sets and data products from 36 countries in and around Europe. The European context of SeaDataNet requires that the developed system complies with the European Directive INSPIRE. In order to assure the required conformity, a GI-cat-based solution is proposed. GI-cat is a broker service able to mediate from different metadata sources and publish them through a consistent and unified interface. In this case GI-cat is used as a front end to the SeaDataNet portal publishing the original data, based on the CDI v.1 XML schema, through an ISO 19139 application profile catalog interface (OGC CSW AP ISO). The choice of ISO 19139 is supported and driven by the INSPIRE Implementing Rules, which have been used as a reference through the whole development process. A mapping from the CDI data model to ISO 19139 hence had to be implemented in GI-cat, and a first draft was quickly developed, as both CDI v.1 and ISO 19139 happen to be XML implementations based on the same abstract data model (standard ISO 19115 - metadata about geographic information). This first draft mapping pointed out the CDI metadata model differences with respect to ISO 19115, as it was not possible to accommodate all the information contained in CDI v.1 into ISO 19139. Moreover some modifications were needed in order to reach INSPIRE compliance. The subsequent work consisted of the definition of the CDI metadata model as a profile of ISO 19115. This included checking of all the metadata elements present in CDI and their cardinality. A comparison was made with respect to ISO 19115 and possible extensions were identified. ISO 19139 was then chosen as a natural XML implementation of this new CDI metadata profile. The mapping and the profile definition processes were iteratively refined leading up to a complete mapping from the CDI data model to ISO 19139. Several issues were faced during the definition process. Among these: dynamic lists and vocabularies used by SeaDataNet could not be easily accommodated in ISO 19139, time resolution information from CDI v.1 was also difficult to accommodate, ambiguities both in the ISO 19139 specification and in the INSPIRE regulations (e.g. regarding the bounding polygon, the language and the role of the responsible party). Another outcome of this process is the establishment of conventions regarding the protocol formats to be used for useful machine-to-machine data access. Changes to the original ISO 19139 schema were avoided to the maximum extent for practical reasons within SeaDataNet: additional constraints required by the profile have been defined and will be checked by means of Schematron or other validation mechanisms. The achieved mapping was finally ready to be integrated in GI-cat by implementing a new accessor component for CDI. These types of components play the role of data model mediators within the GI-cat framework. The newly defined profile and its implementation will also be used within SeaDataNet as a replacement for the current data model implementation (CDI v.1).
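
    The sketch below shows the flavour of such a mapping: copying fields from a CDI-style record into a small subset of ISO 19139 elements. The CDI input structure is simplified and only two target elements are shown; it is not the GI-cat accessor itself.

        from lxml import etree

        GMD = "http://www.isotc211.org/2005/gmd"
        GCO = "http://www.isotc211.org/2005/gco"

        def cdi_to_iso19139(cdi_record: dict) -> bytes:
            # Build MD_Metadata/identificationInfo/MD_DataIdentification with title and abstract.
            md = etree.Element(f"{{{GMD}}}MD_Metadata", nsmap={"gmd": GMD, "gco": GCO})
            ident = etree.SubElement(
                etree.SubElement(md, f"{{{GMD}}}identificationInfo"),
                f"{{{GMD}}}MD_DataIdentification")
            citation = etree.SubElement(
                etree.SubElement(ident, f"{{{GMD}}}citation"), f"{{{GMD}}}CI_Citation")
            title = etree.SubElement(
                etree.SubElement(citation, f"{{{GMD}}}title"), f"{{{GCO}}}CharacterString")
            title.text = cdi_record["dataset_name"]
            abstract = etree.SubElement(
                etree.SubElement(ident, f"{{{GMD}}}abstract"), f"{{{GCO}}}CharacterString")
            abstract.text = cdi_record.get("description", "")
            return etree.tostring(md, pretty_print=True)

        print(cdi_to_iso19139({"dataset_name": "CTD casts, North Sea", "description": "Example"}).decode())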

  1. Evaluating non-relational storage technology for HEP metadata and meta-data catalog

    NASA Astrophysics Data System (ADS)

    Grigorieva, M. A.; Golosova, M. V.; Gubin, M. Y.; Klimentov, A. A.; Osipova, V. V.; Ryabinkin, E. A.

    2016-10-01

    Large-scale scientific experiments produce vast volumes of data. These data are stored, processed and analyzed in a distributed computing environment. The life cycle of an experiment is managed by specialized software like Distributed Data Management and Workload Management Systems. In order to be interpreted and mined, experimental data must be accompanied by auxiliary metadata, which are recorded at each data processing step. Metadata describe scientific data and represent scientific objects or results of scientific experiments, allowing them to be shared by various applications, to be recorded in databases or published via the Web. Processing and analysis of the constantly growing volume of auxiliary metadata is a challenging task, not simpler than the management and processing of the experimental data itself. Furthermore, metadata sources are often loosely coupled and may lead to end-user inconsistency in combined information queries. To aggregate and synthesize a range of primary metadata sources, and enhance them with flexible schema-less addition of aggregated data, we are developing the Data Knowledge Base architecture serving as the intelligence behind GUIs and APIs.

  2. Survey data and metadata modelling using document-oriented NoSQL

    NASA Astrophysics Data System (ADS)

    Rahmatuti Maghfiroh, Lutfi; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Survey data collected from year to year undergo metadata changes. However, they need to be stored in an integrated way so that statistical data can be obtained faster and more easily. A data warehouse (DW) can be used to address this limitation, but the variables change in every period in ways that a traditional DW cannot accommodate via Slowly Changing Dimensions (SCD). Previous research handled the change of variables in a DW, and managed the metadata, by using a multiversion DW (MVDW) designed on a relational model. Other research has found that non-relational models in NoSQL databases offer faster read times than relational models. Therefore, we propose managing metadata changes using NoSQL. This study proposes a DW model to manage change and algorithms to retrieve data whose metadata have changed. Evaluation of the proposed model and algorithms shows that a database with the proposed design can correctly retrieve data with metadata changes. This paper contributes to comprehensive analysis of data with metadata changes (especially survey data) in integrated storage.

  3. Improving the accessibility and re-use of environmental models through provision of model metadata - a scoping study

    NASA Astrophysics Data System (ADS)

    Riddick, Andrew; Hughes, Andrew; Harpham, Quillon; Royse, Katherine; Singh, Anubha

    2014-05-01

    There has been an increasing interest both from academic and commercial organisations over recent years in developing hydrologic and other environmental models in response to some of the major challenges facing the environment, for example environmental change and its effects and ensuring water resource security. This has resulted in a significant investment in modelling by many organisations both in terms of financial resources and intellectual capital. To capitalise on the effort on producing models, then it is necessary for the models to be both discoverable and appropriately described. If this is not undertaken then the effort in producing the models will be wasted. However, whilst there are some recognised metadata standards relating to datasets these may not completely address the needs of modellers regarding input data for example. Also there appears to be a lack of metadata schemes configured to encourage the discovery and re-use of the models themselves. The lack of an established standard for model metadata is considered to be a factor inhibiting the more widespread use of environmental models particularly the use of linked model compositions which fuse together hydrologic models with models from other environmental disciplines. This poster presents the results of a Natural Environment Research Council (NERC) funded scoping study to understand the requirements of modellers and other end users for metadata about data and models. A user consultation exercise using an on-line questionnaire has been undertaken to capture the views of a wide spectrum of stakeholders on how they are currently managing metadata for modelling. This has provided a strong confirmation of our original supposition that there is a lack of systems and facilities to capture metadata about models. A number of specific gaps in current provision for data and model metadata were also identified, including a need for a standard means to record detailed information about the modelling environment and the model code used, to assist the selection of models for linked compositions. Existing best practice, including the use of current metadata standards (e.g. ISO 19110, ISO 19115 and ISO 19119) and the metadata components of WaterML were also evaluated. In addition to commonly used metadata attributes (e.g. spatial reference information) there was significant interest in recording a variety of additional metadata attributes. These included more detailed information about temporal data, and also providing estimates of data accuracy and uncertainty within metadata. This poster describes the key results of this study, including a number of gaps in the provision of metadata for modelling, and outlines how these might be addressed. Overall the scoping study has highlighted significant interest in addressing this issue within the environmental modelling community. There is therefore an impetus for on-going research, and we are seeking to take this forward through collaboration with other interested organisations. Progress towards an internationally recognised model metadata standard is suggested.

  4. Analysis of Low Bidding and Change Order Rates for Navy Facilities Construction Contracts.

    DTIC Science & Technology

    1984-06-01

    examine his motives and strategies prior to bidding. Several measures of "level of competitiveness" are introduced from the bidding theory literature that...bidders of fixed-price Government construction contracts have on contract prices when the level...conventional measures of the level of competition intensity are applied in regression and variance analyses.

  5. Special Operations Forces and Conventional Forces: Integration, Interoperability, and Interdependence

    DTIC Science & Technology

    2016-12-07

    ...views on how to design, plan, and execute operations and campaigns. The..."population-centric" operational environment. SOF views campaign design differently from the six-phase model in joint doctrine depicted above....U.S. interests. This difference between SOF and CF views of campaigning can hamper integration from the start of an operation if components of the

  6. Simple and robust diagnosis of early, small and AFP-negative primary hepatic carcinomas: an integrative approach of serum fluorescence and conventional blood tests.

    PubMed

    Wang, Ting; Zhang, Kun-He; Hu, Piao-Ping; Huang, Zeng-Yong; Zhang, Pan; Wan, Qin-Si; Huang, De-Qiang; Lv, Nong-Hua

    2016-09-27

    The diagnosis of early, small and alpha-fetoprotein (AFP)-negative primary hepatic carcinomas (PHCs) remains a significant challenge. We developed a simple and robust approach to noninvasively detect these PHCs. A rapid, high-throughput and single-tube method was first developed to measure serum autofluorescence and cell-free DNA (cfDNA)-related fluorescence using a real-time PCR system, and both types of serum fluorescence were measured and routine laboratory data were collected in 1229 subjects, including 353 PHC patients, 331 liver cirrhosis (LC) patients, 213 chronic hepatitis (CH) patients and 332 normal controls (NC). The results showed that fluorescence indicators of PHC differed from those of NC, CH and LC to various extents, and none of them was associated with age, gender, or AFP level. The logistic regression models established with the fluorescence indicators alone and combined with AFP, hepatic function tests and blood cell analyses were valuable for distinguishing early, small, AFP-negative and all PHC from LC, CH, NC and all non-PHC, with areas under the receiver operating characteristic curve of 0.857-0.993 and diagnostic accuracies of 80.2-97.7%. In conclusion, serum autofluorescence and cfDNA-related fluorescence can be rapidly and simultaneously measured by our simple method and are valuable for diagnosing early, small and AFP-negative PHCs, especially when integrated with AFP and conventional blood tests.

  7. Storage change in a flat-lying fracture during well tests

    NASA Astrophysics Data System (ADS)

    Murdoch, Lawrence C.; Germanovich, Leonid N.

    2012-12-01

    The volume of water released from storage per unit head drop per volume of an REV is a basic quantity in groundwater hydrology, but the details of the process of storage change in the vicinity of a well are commonly overlooked. We characterize storage change in a flat-lying fracture or thin sedimentary bed through the apparent hydraulic compliance, Cf, the change in aperture of the fracture or thickness of the layer per unit change in pressure. The results of theoretical analyses and field measurements show that Cf increases with time near the well during pumping, but it drops suddenly and may become negative at the beginning of recovery during a well test. Profiles of Cf increase with radial distance from a well, but they are marked by a sharp increase and a sharp decrease at the edge of the region affected by the wellbore pressure transient. The conventional view in groundwater hydrology is that storage change at a point is proportional to the local change in pressure, which requires that the hydraulic compliance is uniform and constant. It appears that this conventional view is a simplification of a process that varies in both space and time and can even take on negative values. This simplification may be a source of uncertainty when interpreting well tests and extensometer records or predicting long-term well performance.
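
    In symbols (a rendering of the verbal definition above; the choice of b for fracture aperture or layer thickness and p for fluid pressure is ours, not necessarily the authors'):

        C_f = \frac{\partial b}{\partial p}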

  8. Descriptive Metadata: Emerging Standards.

    ERIC Educational Resources Information Center

    Ahronheim, Judith R.

    1998-01-01

    Discusses metadata, digital resources, cross-disciplinary activity, and standards. Highlights include Standard Generalized Markup Language (SGML); Extensible Markup Language (XML); Dublin Core; Resource Description Framework (RDF); Text Encoding Initiative (TEI); Encoded Archival Description (EAD); art and cultural-heritage metadata initiatives;…

  9. Automated Metadata Extraction

    DTIC Science & Technology

    2008-06-01

    provides a means for file owners to add metadata which can then be used by iTunes for cataloging and searching [4]. Metadata can be stored in different...based and contain AAC data formats [3]. Specifically, Apple uses Protected AAC to encode copy-protected music titles purchased from the iTunes Music...Store [4]. The files purchased from the iTunes Music Store include the following metadata. • Name • Email address of purchaser • Year • Album

  10. A Solution to Metadata: Using XML Transformations to Automate Metadata

    DTIC Science & Technology

    2010-06-01

    developed their own metadata standards—Directory Interchange Format (DIF), Ecological Metadata Language (EML), and International Organization for...mented all their data using the EML standard. However, when later attempting to publish to a data clearinghouse—such as the Geospatial One-Stop (GOS...construct calls to its transform(s) method by providing the type of the incoming content (e.g., eml), the type of the resulting content (e.g., fgdc) and

  11. The Department of Defense Net-Centric Data Strategy: Implementation Requires a Joint Community of Interest (COI) Working Group and Joint COI Oversight Council

    DTIC Science & Technology

    2007-05-17

    metadata formats, metadata repositories, enterprise portals and federated search engines that make data visible, available, and usable to users...develop an enterprise-wide data sharing plan, establishment of mission area governance processes for CIOs, DISA development of federated search specifications

  12. FRAMES Metadata Reporting Templates for Ecohydrological Observations, version 1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christianson, Danielle; Varadharajan, Charuleka; Christoffersen, Brad

    FRAMES is a set of Excel metadata files and package-level descriptive metadata that are designed to facilitate and improve capture of desired metadata for ecohydrological observations. The metadata are bundled with data files into a data package and submitted to a data repository (e.g. the NGEE Tropics Data Repository) via a web form. FRAMES standardizes reporting of diverse ecohydrological and biogeochemical data for synthesis across a range of spatiotemporal scales and incorporates many best data science practices. This version of FRAMES supports observations for primarily automated measurements collected by permanently located sensors, including sap flow (tree water use), leaf surface temperature, soil water content, dendrometry (stem diameter growth increment), and solar radiation. Version 1.1 extends the controlled vocabulary and incorporates functionality to facilitate programmatic use of data and FRAMES metadata (R code available at NGEE Tropics Data Repository).

  13. Assessing Public Metabolomics Metadata, Towards Improving Quality.

    PubMed

    Ferreira, João D; Inácio, Bruno; Salek, Reza M; Couto, Francisco M

    2017-12-13

    Public resources need to be appropriately annotated with metadata in order to make them discoverable, reproducible and traceable, further enabling them to be interoperable or integrated with other datasets. While data-sharing policies exist to promote the annotation process by data owners, these guidelines are still largely ignored. In this manuscript, we analyse automatic measures of metadata quality, and suggest their application as a means to encourage data owners to increase the metadata quality of their resources and submissions, thereby contributing to higher quality data, improved data sharing, and the overall accountability of scientific publications. We analyse these metadata quality measures in the context of a real-world repository of metabolomics data (i.e. MetaboLights), including a manual validation of the measures, and an analysis of their evolution over time. Our findings suggest that the proposed measures can be used to mimic a manual assessment of metadata quality.

  14. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal.

    PubMed

    Baker, Ed

    2013-01-01

    Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated field in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.
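
    The sketch below illustrates the general idea of embedded-metadata extraction and mapping using Pillow, rather than the Drupal/Scratchpads module itself; the EXIF-tag-to-form-field mapping is a hypothetical example of the "customised mappings" mentioned above.

        from PIL import Image, ExifTags

        # Hypothetical mapping from EXIF tag names to Scratchpads-style form fields.
        FIELD_MAPPING = {"Artist": "creator", "Copyright": "license", "DateTime": "date_modified"}

        def extract_mapped_metadata(image_path: str) -> dict:
            exif = Image.open(image_path).getexif()
            named = {ExifTags.TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}
            return {target: named[source] for source, target in FIELD_MAPPING.items() if source in named}

        # Example (assumes a local JPEG exists): print(extract_mapped_metadata("specimen_0001.jpg"))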

  15. Collection Metadata Solutions for Digital Library Applications

    NASA Technical Reports Server (NTRS)

    Hill, Linda L.; Janee, Greg; Dolin, Ron; Frew, James; Larsgaard, Mary

    1999-01-01

    Within a digital library, collections may range from an ad hoc set of objects that serve a temporary purpose to established library collections intended to persist through time. The objects in these collections vary widely, from library and data center holdings to pointers to real-world objects, such as geographic places, and the various metadata schemas that describe them. The key to integrated use of such a variety of collections in a digital library is collection metadata that represents the inherent and contextual characteristics of a collection. The Alexandria Digital Library (ADL) Project has designed and implemented collection metadata for several purposes: in XML form, the collection metadata "registers" the collection with the user interface client; in HTML form, it is used for user documentation; eventually, it will be used to describe the collection to network search agents; and it is used for internal collection management, including mapping the object metadata attributes to the common search parameters of the system.

  16. METADATA REGISTRY, ISO/IEC 11179

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pon, R K; Buttler, D J

    2008-01-03

    ISO/IEC-11179 is an international standard that documents the standardization and registration of metadata to make data understandable and shareable. This standardization and registration allows for easier locating, retrieving, and transmitting of data from disparate databases. The standard defines how metadata are conceptually modeled and how they are shared among parties, but does not define how data is physically represented as bits and bytes. The standard consists of six parts. Part 1 provides a high-level overview of the standard and defines the basic element of a metadata registry - a data element. Part 2 defines the procedures for registering classification schemes and classifying administered items in a metadata registry (MDR). Part 3 specifies the structure of an MDR. Part 4 specifies requirements and recommendations for constructing definitions for data and metadata. Part 5 defines how administered items are named and identified. Part 6 defines how administered items are registered and assigned an identifier.

  17. HDF-EOS Web Server

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: extract metadata in Object Definition Language (ODL) from an HDF-EOS file, convert the metadata from ODL to Extensible Markup Language (XML), reformat the XML metadata into human-readable Hypertext Markup Language (HTML), publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer, and reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.

  18. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    PubMed Central

    2013-01-01

    Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated field in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping. PMID:24723768

  19. The Energy Industry Profile of ISO/DIS 19115-1: Facilitating Discovery and Evaluation of, and Access to Distributed Information Resources

    NASA Astrophysics Data System (ADS)

    Hills, S. J.; Richard, S. M.; Doniger, A.; Danko, D. M.; Derenthal, L.; Energistics Metadata Work Group

    2011-12-01

    A diverse group of organizations representative of the international community involved in disciplines relevant to the upstream petroleum industry (energy companies, suppliers and publishers of information to the energy industry, vendors of software applications used by the industry, and partner government and academic organizations) has engaged in the Energy Industry Metadata Standards Initiative. This Initiative envisions the use of standard metadata within the community to enable significant improvements in the efficiency with which users discover, evaluate, and access distributed information resources. The metadata standard needed to realize this vision is the initiative's primary deliverable. In addition to developing the metadata standard, the initiative is promoting its adoption to accelerate realization of the vision, and publishing metadata exemplars conformant with the standard. Implementation of the standard by community members, in the form of published metadata which document the information resources each organization manages, will allow use of tools requiring consistent metadata for efficient discovery and evaluation of, and access to, information resources. While metadata are expected to be widely accessible, access to associated information resources may be more constrained. The initiative is being conducted by Energistics' Metadata Work Group, in collaboration with the USGIN Project. Energistics is a global standards group in the oil and natural gas industry. The Work Group determined early in the initiative, based on input solicited from 40+ organizations and on an assessment of existing metadata standards, to develop the target metadata standard as a profile of a revised version of ISO 19115, formally the "Energy Industry Profile of ISO/DIS 19115-1 v1.0" (EIP). The Work Group is participating in the ISO/TC 211 project team responsible for the revision of ISO 19115, now ready for "Draft International Standard" (DIS) status. With ISO 19115 an established, capability-rich, open standard for geographic metadata, EIP v1 is expected to be widely acceptable within the community and readily sustainable over the long term. The EIP design, also per community requirements, will enable discovery, evaluation, and access to types of information resources considered important to the community, including structured and unstructured digital resources, and physical assets such as hardcopy documents and material samples. This presentation will briefly review the development of this initiative as well as the current and planned Work Group activities. More time will be spent providing an overview of the EIP v1, including the requirements it prescribes, design efforts made to enable automated metadata capture and processing, and the structure and content of its documentation, which was written to minimize ambiguity and facilitate implementation. The Work Group considers EIP v1 a solid initial design for interoperable metadata, and a first step toward the vision of the Initiative.

  20. Rescue, Archival and Discovery of Tsunami Events on Marigrams

    NASA Astrophysics Data System (ADS)

    Eble, M. C.; Wright, L. M.; Stroker, K. J.; Sweeney, A.; Lancaster, M.

    2017-12-01

    The Big Earth Data Initiative made possible the reformatting of paper marigram records on which were recorded measurements of the 1946, 1952, 1960, and 1964 tsunamis generated in the Pacific Ocean. Data contained within each record were determined to be invaluable for tsunami researchers and operational agencies with a responsibility for issuing warnings during a tsunami event. All marigrams were carefully digitized and metadata were generated to form numerical datasets in order to provide the tsunami and other research and application-driven communities with quality data. Data were then packaged as CF-compliant netCDF datafiles and submitted to the NOAA Centers for Environmental Information for long-term stewardship, archival, and public discovery of both original scanned images and data in digital netCDF and CSV formats. PNG plots of each time series were generated and included with data packages to provide a visual representation of the numerical data sets. ISO-compliant metadata were compiled for the collection at the event level and individual DOIs were minted for each of the four events included in this project. The procedure followed to reformat each record in this four-event subset of the larger NCEI scanned marigram inventory is presented and discussed. The practical use of these data is presented to highlight that even infrequent measurements of tsunamis hold information that may potentially help constrain earthquake rupture area, provide estimates of earthquake co-seismic slip distribution, identify subsidence or uplift, and significantly increase the holdings of in situ data available for tsunami model validation. These same data may also prove valuable to the broader global tide community for validation and further development of tide models and for investigation into the stability of tidal harmonic constants. Data reformatted as part of this project are PARR compliant and meet the requirements for Data Management, Discoverability, Accessibility, Documentation, Readability, and Data Preservation and Stewardship as per the Big Earth Data Initiative.
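
    As a sketch of how a consumer benefits from the CF-compliant packaging, the snippet below opens one of the re-published files with xarray, which decodes CF time and unit metadata on read. The file and variable names are illustrative, not the actual NCEI product names.

        import xarray as xr

        ds = xr.open_dataset("tsunami_1964_station_example.nc", decode_cf=True)
        print(ds.attrs.get("Conventions"))              # e.g. "CF-1.6"
        sea_level = ds["sea_surface_height"]            # hypothetical variable name
        print(sea_level.attrs.get("units"), sea_level.attrs.get("standard_name"))
        print(sea_level.sel(time=slice("1964-03-28", "1964-03-29")).mean().item())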

  1. VAPOR PRESSURES, LIQUID MOLAR VOLUMES, VAPOR NON- IDEALITIES, AND CRITICAL PROPERTIES OF SOME FLUORINATED ETHERS: CF3OCF2OCF3, CF3OCF2 CF2H, c-CF2CF2CF2O, CF3OCF2H, AND CF3OCH3; AND OF CCl3F AND CF2ClH

    EPA Science Inventory

    Vapor pressures, compressibilities, expansivities, and molar volumes of the liquid phase have been measured between room temperature and the critical temperature for a series of fluorinated ethers: CF3OCF2OCF3, CF3OCF2CF2H, c-CF2CF2CF2O, CF3OCF2H, and CF3OCH3. Vapor-phase non-ide...

  2. Using a linked data approach to aid development of a metadata portal to support Marine Strategy Framework Directive (MSFD) implementation

    NASA Astrophysics Data System (ADS)

    Wood, Chris

    2016-04-01

    Under the Marine Strategy Framework Directive (MSFD), EU Member States are mandated to achieve or maintain 'Good Environmental Status' (GES) in their marine areas by 2020, through a series of Programme of Measures (PoMs). The Celtic Seas Partnership (CSP), an EU LIFE+ project, aims to support policy makers, special-interest groups, users of the marine environment, and other interested stakeholders on MSFD implementation in the Celtic Seas geographical area. As part of this support, a metadata portal has been built to provide a signposting service to datasets that are relevant to MSFD within the Celtic Seas. To ensure that the metadata has the widest possible reach, a linked data approach was employed to construct the database. Although the metadata are stored in a traditional RDBS, the metadata are exposed as linked data via the D2RQ platform, allowing virtual RDF graphs to be generated. SPARQL queries can be executed against the end-point allowing any user to manipulate the metadata. D2RQ's mapping language, based on turtle, was used to map a wide range of relevant ontologies to the metadata (e.g. The Provenance Ontology (prov-o), Ocean Data Ontology (odo), Dublin Core Elements and Terms (dc & dcterms), Friend of a Friend (foaf), and Geospatial ontologies (geo)) allowing users to browse the metadata, either via SPARQL queries or by using D2RQ's HTML interface. The metadata were further enhanced by mapping relevant parameters to the NERC Vocabulary Server, itself built on a SPARQL endpoint. Additionally, a custom web front-end was built to enable users to browse the metadata and express queries through an intuitive graphical user interface that requires no prior knowledge of SPARQL. As well as providing means to browse the data via MSFD-related parameters (Descriptor, Criteria, and Indicator), the metadata records include the dataset's country of origin, the list of organisations involved in the management of the data, and links to any relevant INSPIRE-compliant services relating to the dataset. The web front-end therefore enables users to effectively filter, sort, or search the metadata. As the MSFD timeline requires Member States to review their progress on achieving or maintaining GES every six years, the timely development of this metadata portal will not only aid interested stakeholders in understanding how member states are meeting their targets, but also shows how linked data can be used effectively to support policy makers and associated legislative bodies.
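
    The sketch below shows the kind of query the SPARQL endpoint supports, issued through the SPARQLWrapper client. The endpoint URL and property choices are assumptions used only for illustration, not the portal's actual vocabulary.

        from SPARQLWrapper import SPARQLWrapper, JSON

        sparql = SPARQLWrapper("http://example.org/celtic-seas/sparql")  # hypothetical endpoint
        sparql.setQuery("""
            PREFIX dcterms: <http://purl.org/dc/terms/>
            SELECT ?dataset ?title WHERE {
                ?dataset dcterms:title ?title ;
                         dcterms:spatial ?area .
                FILTER(CONTAINS(LCASE(STR(?area)), "celtic"))
            } LIMIT 20
        """)
        sparql.setReturnFormat(JSON)
        for row in sparql.query().convert()["results"]["bindings"]:
            print(row["dataset"]["value"], "-", row["title"]["value"])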

  3. THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Devarakonda, Ranjeet; Shrestha, Biva; Palanisamy, Giri

    Nobody is better suited to describe data than the scientist who created it. This description of the data is called metadata. In general terms, metadata represents the who, what, when, where, why and how of the dataset [1]. eXtensible Markup Language (XML) is the preferred output format for metadata, as it makes it portable and, more importantly, suitable for system discoverability. The newly developed ORNL Metadata Editor (OME) is a Web-based tool that allows users to create and maintain XML files containing key information, or metadata, about the research. Metadata include information about the specific projects, parameters, time periods, and locations associated with the data. Such information helps put the research findings in context. In addition, the metadata produced using OME will allow other researchers to find these data via metadata clearinghouses like Mercury [2][4]. OME is part of ORNL's Mercury software fleet [2][3]. It was jointly developed to support projects funded by the United States Geological Survey (USGS), U.S. Department of Energy (DOE), National Aeronautics and Space Administration (NASA) and National Oceanic and Atmospheric Administration (NOAA). OME's architecture provides a customizable interface to support project-specific requirements. Using this new architecture, the ORNL team developed OME instances for USGS's Core Science Analytics, Synthesis, and Libraries (CSAS&L), DOE's Next Generation Ecosystem Experiments (NGEE) and Atmospheric Radiation Measurement (ARM) Program, and the international Surface Ocean Carbon Dioxide ATlas (SOCAT). Researchers simply use the ORNL Metadata Editor to enter relevant metadata into a Web-based form. From the information on the form, the Metadata Editor can create an XML file on the server where the editor is installed or on the user's personal computer. Researchers can also use the ORNL Metadata Editor to modify existing XML metadata files. As an example, NGEE Arctic scientists use OME to register their datasets with the NGEE data archive, which allows the archive to publish these datasets via a data search portal (http://ngee.ornl.gov/data). The highly descriptive metadata created using OME allow the archive to offer advanced data search options using keyword, geospatial, temporal and ontology filters. Similarly, ARM OME allows scientists or principal investigators (PIs) to submit their data products to the ARM data archive. How would OME help Big Data Centers like the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC)? The ORNL DAAC is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers managed by the Earth Science Data and Information System (ESDIS) Project. The ORNL DAAC archives data produced by NASA's Terrestrial Ecology Program. The DAAC provides data and information relevant to biogeochemical dynamics, ecological data, and environmental processes, critical for understanding the dynamics relating to the biological, geological, and chemical components of the Earth's environment. Typically, the data produced, archived and analyzed are at a scale of multiple petabytes, which makes the discoverability of the data very challenging. Without proper metadata associated with the data, it is difficult to find the data you are looking for and equally difficult to use and understand the data.
OME will allow data centers like the NGEE and ORNL DAAC to produce meaningful, high quality, standards-based, descriptive information about their data products, in turn helping with data discoverability and interoperability. Useful Links: USGS OME: http://mercury.ornl.gov/OME/ NGEE OME: http://ngee-arctic.ornl.gov/ngeemetadata/ ARM OME: http://archive2.ornl.gov/armome/ Contact: Ranjeet Devarakonda (devarakondar@ornl.gov) References: [1] Federal Geographic Data Committee. Content standard for digital geospatial metadata. Federal Geographic Data Committee, 1998. [2] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94. [3] Wilson, B. E., Palanisamy, G., Devarakonda, R., Rhyne, B. T., Lindsley, C., & Green, J. (2010). Mercury Toolset for Spatiotemporal Metadata. [4] Pouchard, L. C., Branstetter, M. L., Cook, R. B., Devarakonda, R., Green, J., Palanisamy, G., ... & Noy, N. F. (2013). A Linked Science investigation: enhancing climate change data discovery with semantic technologies. Earth science informatics, 6(3), 175-185.
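
    A minimal sketch of the form-to-XML step described above: values entered in a web form become a small XML metadata record. The element names only loosely echo common ISO 19115 terminology; the real OME output is richer and schema-driven.

        import xml.etree.ElementTree as ET

        def form_to_xml(form: dict) -> bytes:
            record = ET.Element("metadata")
            for field in ("title", "abstract", "investigator", "start_date", "end_date"):
                ET.SubElement(record, field).text = str(form.get(field, ""))
            bbox = ET.SubElement(record, "bounding_box")
            for corner in ("west", "east", "south", "north"):
                ET.SubElement(bbox, corner).text = str(form.get(corner, ""))
            return ET.tostring(record, encoding="utf-8", xml_declaration=True)

        print(form_to_xml({"title": "Soil respiration, site A", "investigator": "J. Doe",
                           "west": -157.4, "east": -157.3, "south": 71.2, "north": 71.3}).decode())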

  4. A document centric metadata registration tool constructing earth environmental data infrastructure

    NASA Astrophysics Data System (ADS)

    Ichino, M.; Kinutani, H.; Ono, M.; Shimizu, T.; Yoshikawa, M.; Masuda, K.; Fukuda, K.; Kawamoto, H.

    2009-12-01

    DIAS (Data Integration and Analysis System) is one of the GEOSS activities in Japan. It is also a leading part of the GEOSS task with the same name defined in the GEOSS Ten Year Implementation Plan. The main mission of DIAS is to construct a data infrastructure that can effectively integrate earth environmental data such as observation data, numerical model outputs, and socio-economic data provided from the fields of climate, water cycle, ecosystem, ocean, biodiversity and agriculture. Some of DIAS's data products are available at http://www.jamstec.go.jp/e/medid/dias. Most earth environmental data commonly have spatial and temporal attributes such as the covering geographic scope or the created date. The metadata standards including these common attributes are published by the geographic information technical committee (TC211) in ISO (the International Organization for Standardization) as specifications ISO 19115:2003 and 19139:2007. Accordingly, DIAS metadata are developed based on ISO/TC 211 metadata standards. From the viewpoint of data users, metadata are useful not only for data retrieval and analysis but also for interoperability and information sharing among experts, beginners and nonprofessionals. On the other hand, from the viewpoint of data providers, two problems were pointed out after discussions. One is that data providers prefer to minimize the additional tasks and time spent creating metadata. Another is that data providers want to manage and publish documents to explain their data sets more comprehensively. To solve these problems, we have been developing a document-centric metadata registration tool. The features of our tool are that the generated documents are available instantly and there is no extra cost for data providers to generate metadata. Also, this tool is developed as a Web application, so data providers need no additional software beyond a web browser. The interface of the tool provides the section titles of the documents, and by filling out the content of each section, the documents for the data sets are automatically published in PDF and HTML formats. Furthermore, a metadata XML file compliant with ISO 19115 and ISO 19139 is created at the same time. The generated metadata are managed in the metadata database of the DIAS project, and will be used in various ISO 19139-compliant metadata management tools, such as GeoNetwork.

  5. Historic American Landscapes Survey: Arco Naval Proving Ground (Idaho National Laboratory)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Christina; Holmer, Marie; Gilbert, Hollie

    Based on historical evaluations in 1993 and 1997, historians determined that the then-remaining Arco NPG structures were significant to the nation’s history through their association with World War II. Through ensuing discussions with the SHPO, it was further determined that the infrastructure and associated landscape were also significant. According to provisions of INL’s Cultural Resource Management Plan (CRMP), as legitimized through a 2004 Programmatic Agreement between DOE-ID, the Idaho State Historic Preservation Office (SHPO), and the Advisory Council on Historic Preservation (ACHP), historians identified the World War II structures as DOE “Signature Properties”. As defined by DOE-HQ, Signature Properties “denote its [DOE’s] most historically important properties across the complex…and/or those properties that are viewed as having tourism potential.” The INL is a secure site, and the INL land and structures are not accessible to the public and therefore have no “tourism potential”. Although DOE-ID actively sought other uses for the vacant, unused buildings, none were identified, and the buildings present safety and health concerns. A condition assessment found lead-based paint, asbestos, rodent infestation/droppings, small animal carcasses, mold, and, in CF-633, areas of radiological contamination. In early 2013, DOE-ID notified the Idaho SHPO, ACHP, and, as required by the INL CRMP and PA, the DOE-Headquarters Federal Preservation Officer, of their intent to demolish the vacant buildings (CF-606, CF-607, CF-613, CF-632, and CF-633). The proposed “end-state” of the buildings will be either grass and/or gravel pads. Through the NHPA Section 106 consultation process, measures to mitigate the adverse impacts of demolition were determined and agreed to through a Memorandum of Agreement (MOA) between DOE-ID, SHPO, and ACHP. The measures include the development and installation of interpretive signs to be placed at a publicly accessible location, retention of original components of CF-633, and completion of this HALS standard format report. Buildings, infrastructure, and features that are not scheduled for removal are documented here, as are properties that are scheduled for removal and the overall Arco NPG landscape. The Arco NPG, located in the remote high desert of eastern Idaho, aided in the defense and eventual Allied victory in the Pacific Theater of World War II, in addition to revising national standards for the safe storage and transport of conventional ordnance.

  6. A prospective clinical trial to compare the performance of dried blood spots prenatal screening for Down's syndrome with conventional non-invasive testing technology.

    PubMed

    Hu, Huiying; Jiang, Yulin; Zhang, Minghui; Liu, Shanying; Hao, Na; Zhou, Jing; Liu, Juntao; Zhang, Xiaojin; Ma, Liangkun

    2017-03-01

    To evaluate, side by side, the efficiency of dried blood spot (DBS) screening against serum screening for Down's syndrome, and then to construct a two-tier strategy in which fetal cell-free DNA (cfDNA) secondary screening is applied to the high-risk women identified by the primary blood test, yielding a practical approach to detecting fetal Down's syndrome. One thousand eight hundred and thirty-seven low-risk Chinese women with singleton pregnancies were enrolled in the study. Alpha-fetoprotein and free beta human chorionic gonadotropin were measured in serum as well as in the parallel DBS samples. A subset of the high-risk pregnant women identified by primary blood testing (n = 38) also underwent secondary cfDNA screening. Diagnostic amniocentesis was used to confirm the screening results. The true positive rate for Down's syndrome detection was 100% for both blood screening methods, while the false-positive rate was 3.0% for DBS and 4.0% for serum screening. DBS correlated well with serum screening for Down's syndrome detection. Three of the 38 primary high-risk women showed chromosomal abnormalities on cfDNA analysis, which were confirmed by amniocentesis. Both the true detection rate and the false-positive rate for Down's syndrome were comparable between DBS and the serum test. In addition, primary blood screening combined with secondary cfDNA analysis, a "before and after" two-tier screening strategy, can markedly decrease the false-positive rate and thus dramatically reduce the demand for invasive diagnostic procedures. Impact statement: Children born with Down's syndrome display a wide range of mental and physical disabilities. Currently, there is no effective treatment to ease the burden and anxiety of Down's syndrome families and the surrounding society. This study evaluates the efficiency of dried blood spots against serum screening for Down's syndrome and constructs a two-tier strategy in which fetal cell-free DNA (cfDNA) secondary screening is applied to the high-risk women identified by the primary blood test, yielding a practical approach to detecting fetal Down's syndrome. The results demonstrate that fetal cfDNA screening can reduce the false-positive rate to close to none while detecting all true positives. Thus, we recommend that fetal cfDNA analysis be used as a secondary screening tool on top of primary blood protein screening to further minimize the number of unnecessary invasive diagnostic procedures.

  7. A metadata-driven approach to data repository design.

    PubMed

    Harvey, Matthew J; McLean, Andrew; Rzepa, Henry S

    2017-01-01

    The design and use of a metadata-driven data repository for research data management is described. Metadata is collected automatically during the submission process whenever possible and is registered with DataCite in accordance with their current metadata schema, in exchange for a persistent digital object identifier. Two examples of data preview are illustrated, including the demonstration of a method for integration with commercial software that confers rich domain-specific data analytics without introducing customisation into the repository itself.
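    A minimal sketch of the DOI-for-metadata exchange described above, assuming the public DataCite REST API; the repository account, prefix, credentials and landing-page URL are placeholders, and the authors' repository may integrate with DataCite differently.

```python
# Sketch: register a dataset with DataCite's REST API in exchange for a DOI.
# Endpoint and payload follow the public DataCite API as we understand it;
# the repository ID, credentials, prefix and URL below are placeholders.
import requests

payload = {
    "data": {
        "type": "dois",
        "attributes": {
            "prefix": "10.99999",                          # hypothetical prefix
            "titles": [{"title": "NMR spectra for compound 42"}],
            "creators": [{"name": "Doe, Jane"}],
            "publisher": "Example College Data Repository",
            "publicationYear": 2017,
            "types": {"resourceTypeGeneral": "Dataset"},
            "event": "publish",                            # mint a findable DOI
            "url": "https://data.example.edu/items/42",    # landing page
        },
    }
}

resp = requests.post(
    "https://api.datacite.org/dois",                       # test system: api.test.datacite.org
    json=payload,
    auth=("EXAMPLE.REPO", "secret"),                       # placeholder repository credentials
    headers={"Content-Type": "application/vnd.api+json"},
)
resp.raise_for_status()
print("Minted DOI:", resp.json()["data"]["id"])
```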

  8. Stop the Bleeding: the Development of a Tool to Streamline NASA Earth Science Metadata Curation Efforts

    NASA Astrophysics Data System (ADS)

    le Roux, J.; Baker, A.; Caltagirone, S.; Bugbee, K.

    2017-12-01

    The Common Metadata Repository (CMR) is a high-performance, high-quality repository for Earth science metadata records, and serves as the primary way to search NASA's growing 17.5 petabytes of Earth science data holdings. Released in 2015, CMR has the capability to support several different metadata standards already being utilized by NASA's combined network of Earth science data providers, or Distributed Active Archive Centers (DAACs). The Analysis and Review of CMR (ARC) Team located at Marshall Space Flight Center is working to improve the quality of records already in CMR with the goal of making records optimal for search and discovery. This effort entails a combination of automated and manual review, where each NASA record in CMR is checked for completeness, accuracy, and consistency. This effort is highly collaborative in nature, requiring communication and transparency of findings amongst NASA personnel, DAACs, the CMR team and other metadata curation teams. Through the evolution of this project it has become apparent that there is a need to document and report findings, as well as track metadata improvements in a more efficient manner. The ARC team has collaborated with Element 84 in order to develop a metadata curation tool to meet these needs. In this presentation, we will provide an overview of this metadata curation tool and its current capabilities. Challenges and future plans for the tool will also be discussed.

  9. Social tagging in the life sciences: characterizing a new metadata resource for bioinformatics.

    PubMed

    Good, Benjamin M; Tennis, Joseph T; Wilkinson, Mark D

    2009-09-25

    Academic social tagging systems, such as Connotea and CiteULike, provide researchers with a means to organize personal collections of online references with keywords (tags) and to share these collections with others. One of the side-effects of the operation of these systems is the generation of large, publicly accessible metadata repositories describing the resources in the collections. In light of the well-known expansion of information in the life sciences and the need for metadata to enhance its value, these repositories present a potentially valuable new resource for application developers. Here we characterize the current contents of two scientifically relevant metadata repositories created through social tagging. This investigation helps to establish how such socially constructed metadata might be used as it stands currently and to suggest ways that new social tagging systems might be designed that would yield better aggregate products. We assessed the metadata that users of CiteULike and Connotea associated with citations in PubMed with the following metrics: coverage of the document space, density of metadata (tags) per document, rates of inter-annotator agreement, and rates of agreement with MeSH indexing. CiteULike and Connotea were very similar on all of the measurements. In comparison to PubMed, document coverage and per-document metadata density were much lower for the social tagging systems. Inter-annotator agreement within the social tagging systems and agreement between the aggregated social tagging metadata and MeSH indexing were low, though the latter could be increased through voting. The most promising uses of metadata from current academic social tagging repositories will be those that find ways to utilize the novel relationships between users, tags, and documents exposed through these systems. For more traditional kinds of indexing-based applications (such as keyword-based search) to benefit substantially from socially generated metadata in the life sciences, more documents need to be tagged and more tags are needed for each document. These issues may be addressed both by finding ways to attract more users to current systems and by creating new user interfaces that encourage more collectively useful individual tagging behaviour.

  10. A Metadata Element Set for Project Documentation

    NASA Technical Reports Server (NTRS)

    Hodge, Gail; Templeton, Clay; Allen, Robert B.

    2003-01-01

    NASA Goddard Space Flight Center is a large engineering enterprise with many projects. We describe our efforts to develop standard metadata sets across project documentation, which we term the "Goddard Core". We also address broader issues for project management metadata.

  11. Sawdust and Bark-Based Substrates for Soilless Strawberry Production: Irrigation and Electrical Conductivity Management.

    PubMed

    Depardieu, Claire; Prémont, Valérie; Boily, Carole; Caron, Jean

    2016-01-01

    The objective of this work was to optimize a soilless growing system for producing bare-root strawberry transplants in three organic substrates. Three trials were conducted in the Quebec City area to determine the productivity potential of a peat-sawdust mixture (PS25) and an aged bark (AB) material compared to a conventional coconut fiber (CF) substrate. A first experiment was carried out to define appropriate irrigation set points for each substrate that allowed optimal plant growth and fruit yields. For all substrates, wetter conditions (irrigation started at -1.0 kPa for CF; -1.5 kPa for AB and PS25, relative to -1.5 kPa for CF; -2.5 kPa for AB and PS25) enhanced plant growth and fruit production. The second trial was carried out to test the productivity potential of the three substrates for commercial production using high tunnels. After the addition of an initial fertilizer application to PS25, we successfully established bare-root plants that gave fruit yields similar to those in CF and AB. The productivity potential of PS25 and AB was further confirmed during a third trial under greenhouse conditions. The critical factor for plant establishment in PS25 was attributed to consistent N, P and S immobilization by microorganisms, as well as the retention of other elements (Mg2+, K+) in the growth media. Taken together, our results showed that PS25 and AB are promising alternative substrates to coconut coir dust for strawberry cultivation. This paper also provides a useful guide for strawberry cultivation in Quebec, and suggests future research that might be conducted to optimize soilless systems for cold-climate strawberry production in North America.

  12. A study on the magnetic susceptibilities and optical absorption spectra on single crystals of Gd(III) pyrogermanate

    NASA Astrophysics Data System (ADS)

    Kundu, T.; Ghosh, D.; Wanklyn, B. M.

    1990-04-01

    The paper reports for the first time the experimental results of measurements of the magnetic susceptibilities (K⊥ and K∥) and their anisotropy (ΔK) between 300 and 21.8 K and the optical absorption spectra (UV region) at 12.5 K on single crystals of gadolinium pyrogermanate (GdPG). The anisotropy, which is only 211×10⁻⁶ emu/mol at room temperature and increases by two orders of magnitude at 21 K, is predominantly a crystal field (CF) effect on the 8S7/2 ground term, through higher-order perturbations. Interpretation of the observed magnetic data was carried out by considering a conventional spin Hamiltonian (Hs) to derive expressions for K⊥ and K∥ in terms of four effective crystal field parameters (ECFP). The values of the ECFP were varied to obtain a very close fit between the theoretical and experimental values of K⊥, K∥, ΔK and the mean susceptibility K̄. The splitting of the 8S7/2 term corresponding to these values of the ECFP was found to be large, which suggests a strong CF effect in GdPG, as also observed in other RPG crystals studied earlier. The thermal characteristics of the magnetic anisotropy below 30 K deviate by about 5%, which could not be explained by CF effects alone. A series expansion method was adopted to analyse the results for K⊥ and K∥ below 30 K; however, the corresponding coefficients B2α and B3α were found to be unusually high, indicating the presence of a CF effect even in this temperature region. The Schottky specific heat, Csch, of GdPG between 300 and 21 K has been calculated and shows a maximum at Tmax = 17 K.

  13. Associations of Triiodothyronine Levels with Carotid Atherosclerosis and Arterial Stiffness in Hemodialysis Patients

    PubMed Central

    Kircelli, Fatih; Asci, Gulay; Carrero, Juan Jesus; Gungor, Ozkan; Demirci, Meltem Sezis; Ozbek, Suha Sureyya; Ceylan, Naim; Ozkahya, Mehmet; Toz, Huseyin; Ok, Ercan

    2011-01-01

    Background and objectives: End-stage renal disease is linked to alterations in thyroid hormone levels and/or metabolism, resulting in a high prevalence of subclinical hypothyroidism and low triiodothyronine (T3) levels. These alterations are involved in endothelial damage, cardiac abnormalities, and inflammation, but the exact mechanisms are unclear. In this study, we investigated the relationship between serum free-T3 (fT3) and carotid artery atherosclerosis, arterial stiffness, and vascular calcification in prevalent patients on conventional hemodialysis. Design, setting, participants, & measurements: 137 patients were included. Thyroid-hormone levels were determined by chemiluminescent immunoassay, carotid artery intima-media thickness (CA-IMT) by Doppler ultrasonography, carotid-femoral pulse wave velocity (c-f PWV) and augmentation index by a SphygmoCor device, and coronary artery calcification (CAC) scores by multi-slice computerized tomography. Results: Mean fT3 level was 3.70 ± 1.23 pmol/L. Across decreasing fT3 tertiles, c-f PWV and CA-IMT values were incrementally higher, whereas CACs were not different. In adjusted ordinal logistic regression analysis, fT3 level (odds ratio, 0.81; 95% confidence interval, 0.68 to 0.97), age, and interdialytic weight gain were significantly associated with CA-IMT. fT3 level was associated with c-f PWV in nondiabetics but not in diabetics. In nondiabetics (n = 113), c-f PWV was positively associated with age and systolic BP but negatively with fT3 levels (odds ratio = 0.57, 95% confidence interval 0.39 to 0.83). Conclusions: fT3 levels are inversely associated with carotid atherosclerosis but not with CAC in hemodialysis patients. Also, fT3 levels are inversely associated with surrogates of arterial stiffness in nondiabetics. PMID:21836150

  14. Hydrogel-based ultra-moisturizing cream formulation for skin hydration and enhanced dermal drug delivery.

    PubMed

    Lee, Sang Gon; Kim, Sung Rae; Cho, Hye In; Kang, Mean Hyung; Yeom, Dong Woo; Lee, Seo Hyun; Lee, Sangkil; Choi, Young Wook

    2014-01-01

    To develop an external vehicle for skin hydration and enhanced dermal drug delivery, a hydrogel-based ultra-moisturizing cream (HUMC) was successfully formulated with carbopol 934P, urea, Tinocare GL, grape seed oil, and other excipients. The HUMC showed plastic flow behavior due to a gel structure with a cream base. Different types of drug-free vehicles such as a hydrogel, conventional cream (CC), and three HUMCs were prepared and subjected to an in vivo skin hydration test on a hairless mouse using a corneometer. Hydration effect (∆AU) was in the order of HUMC2>HUMC1 ≥ CC>HUMC3>hydrogel. Using nile red (NR) and 5-carboxyfluorescein (5-CF) as lipophilic and hydrophilic fluorescent probes, respectively, in vitro skin permeation and accumulation studies were conducted using Franz diffusion cells. The values of steady-state flux (Jss, ng/h/cm(2)) were obtained: 74.8 (CC), 145.6 (HUMC1), and 161.9 (HUMC2) for NR delivery; 6.8 (CC), 8.3 (HUMC1), and 10.9 (HUMC2) for 5-CF delivery. The amounts retained in the skin at 12 h (Qr, ng/cm(2)) were determined: 86.4 (CC) and 102.0 (HUMC2) for NR; and 70.1 (CC) and 195.6 (HUMC2) for 5-CF. Confocal microscopy was used to visualize the distribution of the fluorescent probes. NR tended to be localized into the deeper part of the skin with adipose tissue whereas 5-CF localized in the upper layer of the skin. Thus we propose that HUMC2 is an efficacious vehicle for skin hydration and enhances dermal delivery of lipophilic and hydrophilic drugs.

  15. Sawdust and Bark-Based Substrates for Soilless Strawberry Production: Irrigation and Electrical Conductivity Management

    PubMed Central

    Depardieu, Claire; Caron, Jean

    2016-01-01

    The objective of this work was to optimize a soilless growing system for producing bare-root strawberry transplants in three organic substrates. Three trials were conducted in the Quebec City area to determine the productivity potential of a peat-sawdust mixture (PS25) and an aged bark (AB) material compared to a conventional coconut fiber (CF) substrate. A first experiment was carried out to define appropriate irrigation set points for each substrate that allowed optimal plant growth and fruit yields. For all substrates, wetter conditions (irrigation started at -1.0 kPa for CF; -1.5 kPa for AB and PS25, relative to -1.5 kPa for CF; -2.5 kPa for AB and PS25) enhanced plant growth and fruit production. The second trial was carried out to test the productivity potential of the three substrates for commercial production using high tunnels. After the addition of an initial fertilizer application to PS25, we successfully established bare-root plants that gave fruit yields similar to those in CF and AB. The productivity potential of PS25 and AB was further confirmed during a third trial under greenhouse conditions. The critical factor for plant establishment in PS25 was attributed to consistent N, P and S immobilization by microorganisms, as well as the retention of other elements (Mg2+, K+) in the growth media. Taken together, our results showed that PS25 and AB are promising alternative substrates to coconut coir dust for strawberry cultivation. This paper also provides a useful guide for strawberry cultivation in Quebec, and suggests future research that might be conducted to optimize soilless systems for cold-climate strawberry production in North America. PMID:27099949

  16. Toward the Standardization of Mycological Examination of Sputum Samples in Cystic Fibrosis: Results from a French Multicenter Prospective Study.

    PubMed

    Coron, Noémie; Pihet, Marc; Fréalle, Emilie; Lemeille, Yolande; Pinel, Claudine; Pelloux, Hervé; Gargala, Gilles; Favennec, Loic; Accoceberry, Isabelle; Durand-Joly, Isabelle; Dalle, Frédéric; Huet, Frédéric; Fanton, Annlyse; Boldron, Amale; Loeuille, Guy-André; Domblides, Philippe; Coltey, Bérengère; Pin, Isabelle; Llerena, Catherine; Troussier, Françoise; Person, Christine; Marguet, Christophe; Wizla, Nathalie; Thumerelle, Caroline; Turck, Dominique; Bui, Stéphanie; Fayon, Michael; Duhamel, Alain; Prévotat, Anne; Wallaert, Benoit; Leroy, Sylvie; Bouchara, Jean-Philippe; Delhaes, Laurence

    2018-02-01

    Fungal respiratory colonization of cystic fibrosis (CF) patients emerges as a new concern; however, the heterogeneity of mycological protocols limits investigations. We first aimed at setting up an efficient standardized protocol for mycological analysis of CF sputa that was assessed during a prospective, multicenter study: "MucoFong" program (PHRC-06/1902). Sputa from 243 CF patients from seven centers in France were collected over a 15-month period and submitted to a standardized protocol based on 6 semi-selective media. After mucolytic pretreatment, sputa were plated in parallel on cycloheximide-enriched (ACT37), erythritol-enriched (ERY37), benomyl dichloran-rose bengal (BENO37) and chromogenic (CAN37) media incubated at 37 °C and on Sabouraud-chloramphenicol (SAB27) and erythritol-enriched (ERY27) media incubated at 20-27 °C. Each plate was checked twice a week during 3 weeks. Fungi were conventionally identified; time for detection of fungal growth was noted for each species. Fungal prevalences and media performances were assessed; an optimal combination of media was determined using the Chi-squared automatic interaction detector method. At least one fungal species was isolated from 81% of sputa. Candida albicans was the most prevalent species (58.8%), followed by Aspergillus fumigatus (35.4%). Cultivation on CAN37, SAB27, ACT37 and ERY27 during 16 days provided an optimal combination, detecting C. albicans, A. fumigatus, Scedosporium apiospermum complex and Exophiala spp. with sensitivities of 96.5, 98.8, 100 and 100%. Combination of these four culture media is recommended to ensure the growth of key fungal pathogens in CF respiratory specimens. The use of such consensual protocol is of major interest for merging results from future epidemiological studies.

  17. Metadata for WIS and WIGOS: GAW Profile of ISO19115 and Draft WIGOS Core Metadata Standard

    NASA Astrophysics Data System (ADS)

    Klausen, Jörg; Howe, Brian

    2014-05-01

    The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority to underpin all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). The development of the WIGOS Operational Information Resource (WIR) is central to the WIGOS Framework Implementation Plan (WIGOS-IP). The WIR shall provide information on WIGOS and its observing components, as well as requirements of WMO application areas. An important aspect is the description of the observational capabilities by way of structured metadata. The Global Atmosphere Watch is the WMO program addressing the chemical composition and selected physical properties of the atmosphere. Observational data are collected and archived by GAW World Data Centres (WDCs) and related data centres. The Task Team on GAW WDCs (ET-WDC) has developed a profile of the ISO19115 metadata standard that is compliant with the WMO Information System (WIS) specification for the WMO Core Metadata Profile v1.3. This profile is intended to harmonize certain aspects of the documentation of observations as well as the interoperability of the WDCs. The Inter-Commission-Group on WIGOS (ICG-WIGOS) has established the Task Team on WIGOS Metadata (TT-WMD) with representation of all WMO Technical Commissions and the objective to define the WIGOS Core Metadata. The result of this effort is a draft semantic standard comprising a set of metadata classes that are considered to be of critical importance for the interpretation of observations relevant to WIGOS. The purpose of the presentation is to acquaint the audience with the standard and to solicit informal feedback from experts in the various disciplines of meteorology and climatology. This feedback will help ET-WDC and TT-WMD to refine the GAW metadata profile and the draft WIGOS metadata standard, thereby increasing their utility and acceptance.

  18. SIPSMetGen: It's Not Just For Aircraft Data and ECS Anymore.

    NASA Astrophysics Data System (ADS)

    Schwab, M.

    2015-12-01

    The SIPSMetGen utility, developed for the NASA EOSDIS project under the EED contract, simplified the creation of file-level metadata for the ECS System. The utility has been enhanced for ease of use, efficiency, speed and increased flexibility. The SIPSMetGen utility was originally created as a means of generating file-level spatial metadata for Operation IceBridge. The first version created only ODL metadata, specific for ingest into ECS. The core strength of the utility was, and continues to be, its ability to take complex shapes and patterns of data collection point clouds from aircraft flights and simplify them to a relatively simple concave hull geo-polygon. It has been found to be a useful and easy-to-use tool for creating file-level metadata for many other missions, both aircraft and satellite. While the original version was useful, it had its limitations. In 2014 Raytheon was tasked to make enhancements to SIPSMetGen; this resulted in a new version of SIPSMetGen that can create ISO-compliant XML metadata, optimizes and streamlines the algorithm for creating the spatial metadata, runs more quickly with more consistent results, and can be configured to run multi-threaded on systems with multiple processors. The utility comes with a Java-based graphical user interface to aid in configuration and running of the utility. The enhanced SIPSMetGen allows more diverse data sets to be archived with file-level metadata. The advantage of archiving data with file-level metadata is that it makes it easier for data users and scientists to find relevant data. File-level metadata unlocks the power of existing archives and metadata repositories such as ECS and CMR and of search and discovery utilities like Reverb and Earthdata Search. Current missions now using SIPSMetGen include Aquarius, MEaSUREs, ARISE, and Nimbus.
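    The core step described above, collapsing a flight-track point cloud to a simple geo-polygon, can be approximated with a convex hull as in the sketch below; note that SIPSMetGen computes a concave hull, which hugs the track more closely, so this is only a simplified stand-in using synthetic points.

```python
# Sketch: reduce a dense point cloud of (lon, lat) observations to a simple
# bounding polygon for file-level spatial metadata. A convex hull is used as a
# stand-in; a concave hull would follow the flight track more tightly but needs
# a more involved algorithm.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
track = np.column_stack([                              # synthetic "flight line"
    np.linspace(-150.0, -140.0, 5000) + rng.normal(0, 0.05, 5000),   # lon
    np.linspace(70.0, 72.0, 5000) + rng.normal(0, 0.02, 5000),       # lat
])

hull = ConvexHull(track)
polygon = track[hull.vertices]                         # ordered boundary vertices
print(f"{len(track)} points reduced to a {len(polygon)}-vertex polygon")
for lon, lat in polygon:
    print(f"  {lon:9.4f} {lat:8.4f}")
```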

  19. Building a high level sample processing and quality assessment model for biogeochemical measurements: a case study from the ocean acidification community

    NASA Astrophysics Data System (ADS)

    Thomas, R.; Connell, D.; Spears, T.; Leadbetter, A.; Burger, E. F.

    2016-12-01

    The scientific literature heavily features small-scale studies with the impact of the results extrapolated to regional or global importance. There are ongoing initiatives (e.g. OA-ICC, GOA-ON, GEOTRACES, EMODNet Chemistry) aiming to assemble regional- to global-scale datasets that are available for trend or meta-analyses. Assessing the quality and comparability of these data requires information about the processing chain from "sampling to spreadsheet". This provenance information needs to be captured and readily available to assess data fitness for purpose. The NOAA Ocean Acidification metadata template was designed in consultation with domain experts for this reason; the core carbonate chemistry variables have 23-37 metadata fields each, and to scientists generating these datasets there can appear to be an ever-increasing amount of metadata expected to accompany a dataset. While this provenance metadata should be considered essential by those generating or using the data, for those discovering data there is a sliding scale between what is considered discovery metadata (title, abstract, contacts, etc.) and usage metadata (methodology, environmental setup, lineage, etc.), the split depending on the intended use of the data. As part of the OA-ICC's activities, the metadata fields from the NOAA template relevant to the sample processing chain and QA criteria have been factored to develop profiles for, and extensions to, the OM-JSON encoding supported by the PROV ontology. While this work started with a focus on carbonate chemistry variable-specific metadata, the factorization could be applied within the O&M model to other disciplines such as trace metals or contaminants. In a linked-data world with a suitable high-level model for sample processing and QA available, tools and support can be provided to link reproducible units of metadata (e.g. the standard protocol for a variable as adopted by a community) and simplify the provision of metadata and subsequent discovery.
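    A hedged sketch of what a "sampling to spreadsheet" record for one carbonate-chemistry variable might look like; the field names are purely illustrative and are not the NOAA OA template or the OM-JSON/PROV profile discussed above.

```python
# Sketch: a provenance record attached to a single carbonate-chemistry variable,
# split into the discovery metadata a portal needs and the usage metadata a data
# user needs. All field names and values are illustrative placeholders.
import json

dic_metadata = {
    "variable": "dissolved_inorganic_carbon",
    "units": "umol/kg",
    "discovery": {
        "title": "Gulf of Maine OA monitoring, 2015 cruise",
        "abstract": "Discrete bottle samples, surface to 200 m.",
        "contact": "oa-data@example.org",
    },
    "provenance": {
        "sampling_method": "Niskin bottle, 500 mL borosilicate, poisoned with HgCl2",
        "analysis_method": "coulometry",
        "instrument": "coulometer (model placeholder)",
        "calibration": "certified reference material, batch number placeholder",
        "quality_flag_scheme": "WOCE",
        "prov:wasGeneratedBy": "urn:example:activity/cruise-2015-leg1-analysis",
    },
}

print(json.dumps(dic_metadata, indent=2))
```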

  20. Metadata Creation, Management and Search System for your Scientific Data

    NASA Astrophysics Data System (ADS)

    Devarakonda, R.; Palanisamy, G.

    2012-12-01

    Mercury Search Systems is a set of tools for creating, searching, and retrieving biogeochemical metadata. The Mercury toolset provides orders-of-magnitude improvements in search speed, support for any metadata format, integration with Google Maps for spatial queries, multi-faceted search, search suggestions, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. Mercury's metadata editor provides an easy way to create metadata, and Mercury's search interface provides a single portal to search for data and information contained in disparate data management systems, each of which may use any metadata format including FGDC, ISO-19115, Dublin-Core, Darwin-Core, DIF, ECHO, and EML. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury is used by more than 14 different projects across 4 federal agencies. It was originally developed for NASA, with continuing development funded by NASA, USGS, and DOE for a consortium of projects. Mercury won NASA's Earth Science Data Systems Software Reuse Award in 2008. References: R. Devarakonda, G. Palanisamy, B.E. Wilson, and J.M. Green, "Mercury: reusable metadata management, data discovery and access system", Earth Science Informatics, vol. 3, no. 1, pp. 87-94, May 2010. R. Devarakonda, G. Palanisamy, J.M. Green, B.E. Wilson, "Data sharing and retrieval using OAI-PMH", Earth Science Informatics, DOI: 10.1007/s12145-010-0073-0, 2010.
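    A toy sketch of the harvest-then-index pattern described above, assuming an OAI-PMH provider endpoint (the URL is hypothetical); Mercury's actual harvester and index are far more capable.

```python
# Sketch: harvest Dublin Core records from a provider via OAI-PMH and build a
# toy keyword index (term -> record identifiers). Endpoint URL is hypothetical.
import requests
import xml.etree.ElementTree as ET
from collections import defaultdict

OAI = "http://data.example.org/oai"                     # hypothetical provider
NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

resp = requests.get(OAI, params={"verb": "ListRecords", "metadataPrefix": "oai_dc"})
resp.raise_for_status()
root = ET.fromstring(resp.content)

index = defaultdict(set)                                # term -> set of identifiers
for record in root.iter(f"{{{NS['oai']}}}record"):
    header = record.find("oai:header", NS)
    identifier = header.findtext("oai:identifier", default="", namespaces=NS)
    for title in record.iter(f"{{{NS['dc']}}}title"):   # dc:title elements in oai_dc payload
        for term in (title.text or "").lower().split():
            index[term].add(identifier)

print("records matching 'temperature':", sorted(index.get("temperature", [])))
```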

  1. Master Metadata Repository and Metadata-Management System

    NASA Technical Reports Server (NTRS)

    Armstrong, Edward; Reed, Nate; Zhang, Wen

    2007-01-01

    A master metadata repository (MMR) software system manages the storage and searching of metadata pertaining to data from national and international satellite sources of the Global Ocean Data Assimilation Experiment (GODAE) High Resolution Sea Surface Temperature Pilot Project [GHRSSTPP]. These sources produce a total of hundreds of data files daily, each file classified as one of more than ten data products representing global sea-surface temperatures. The MMR is a relational database wherein the metadata are divided into granule-level records [denoted file records (FRs)] for individual satellite files and collection-level records [denoted data set descriptions (DSDs)] that describe metadata common to all the files from a specific data product. FRs and DSDs adhere to the NASA Directory Interchange Format (DIF). The FRs and DSDs are contained in separate subdatabases linked by a common field. The MMR is configured in MySQL database software with custom Practical Extraction and Reporting Language (PERL) programs to validate and ingest the metadata records. The database contents are converted into the Federal Geographic Data Committee (FGDC) standard format by use of the Extensible Markup Language (XML). A Web interface enables users to search for availability of data from all sources.
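    The FR/DSD split can be illustrated with two tables linked by a common field, as in the sketch below; the column names are invented, and SQLite stands in for the MySQL/Perl stack the MMR actually uses.

```python
# Sketch: collection-level records (DSDs) and granule-level file records (FRs)
# linked by a common field. Column names are invented for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dsd (               -- one row per data product (collection level)
    dsd_id      TEXT PRIMARY KEY,
    title       TEXT,
    producer    TEXT
);
CREATE TABLE fr (                -- one row per satellite file (granule level)
    file_name   TEXT PRIMARY KEY,
    dsd_id      TEXT REFERENCES dsd(dsd_id),   -- the common linking field
    start_time  TEXT,
    stop_time   TEXT
);
""")
db.execute("INSERT INTO dsd VALUES ('SST-L2P-EXAMPLE', 'Example L2P SST', 'Example Agency')")
db.execute("INSERT INTO fr VALUES ('20070101-sst.nc', 'SST-L2P-EXAMPLE', "
           "'2007-01-01T00:00:00Z', '2007-01-01T00:05:00Z')")

# Search for the availability of granules from a given product:
for row in db.execute("""
    SELECT fr.file_name, dsd.title FROM fr JOIN dsd USING (dsd_id)
    WHERE dsd.dsd_id = 'SST-L2P-EXAMPLE'"""):
    print(row)
```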

  2. Preparation of new crosslinking agents and additives for use in polymer electrolyte membranes (PEMs) for fuel cell applications

    NASA Astrophysics Data System (ADS)

    Zhou, Yangliu

    The most commonly used proton conductive membrane in polymer electrolyte membrane fuel cell (PEMFC) and direct methanol fuel cell (DMFC) studies to date is DuPont's Nafion®, which is a perfluorinated copolymer of tetrafluoroethylene (TFE) and a perfluorovinyl ether with a pendant sulfonic acid group. A focus of this work is to find ways to improve the performance of Nafion® membranes. Crosslinking the TFE chains of fluorinated ionomeric copolymers to improve their thermal and mechanical stability is a proven route to this goal. A straightforward synthetic route to perfluorinated divinyl ethers of the formula CF2=CFO(CF2)3[OCF(CF3)CF2]mOCF=CF2 (m = 0-1) has been demonstrated. The compounds CF2=CFO(CF2)3OCF=CF2 and CF2=CFO(CF2)3OCF(CF3)CF2OCF=CF2 were prepared and characterized by GC-MS, 13C and 19F NMR, and gas-IR spectroscopy. Synthetic routes to fluorosulfato-tetrafluoropropionyl fluoride [FSO3CF2CF2C(O)F] and difluoromalonyl difluoride [F(O)CCF2C(O)F] with improved yields were found. The second focus of the dissertation was the development of fluorous triarylphosphines for use as new doping materials for the modification of Nafion® membranes and for use as ligands in catalysts for biphasic catalysis. The synthesis and characterization of a series of new polyhexafluoropropylene oxide derivatives for the preparation of fluorous triarylphosphines and phosphonium salts was studied, such as F[CF(CF3)CF2O]4CF(CF3)CH2CH2I, F[CF(CF3)CF2O]4CF(CF3)CH=CH2, F[CF(CF3)CF2O]4CF(CF3)CH2CH2C6H5, and F[CF(CF3)CF2O]4CF(CF3)CH2CH2C6H4Br. In a separate study, the photochlorination of 2,2,3,3-tetrafluoro-1-propanol (HCF2CF2CH2OH) and 2,2,3,3-tetrafluoropropyl 2,2,3,3-tetrafluoropropionate [HCF2CF2C(O)OCH2CF2CF2H] with super diazo blue light (lambda max = 420 nm) was investigated. The photochemical products are different from those obtained under mercury light (lambda = 253.7 nm). A new compound, ClCF2CF2C(O)OC(H)ClCF2CF2Cl, was prepared and characterized by GC-MS, elemental analysis, 1H, 13C and 19F NMR, and gas-IR spectroscopy.

  3. Brady's Geothermal Field Nodal Seismometers Metadata

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lesley Parker

    Metadata for the nodal seismometer array deployed at the POROTOMO's Natural Laboratory in Brady Hot Spring, Nevada during the March 2016 testing. Metadata includes location and timing for each instrument as well as file lists of data to be uploaded in a separate submission.

  4. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    PubMed

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract, from the structured elements in the DICOM metadata of these files, the information relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
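    A rough Python stand-in (using pydicom) for the kind of extraction described; the authors' tool is Matlab-based, and the concept names searched for below vary between vendors' dose reports, so treat this purely as an illustration.

```python
# Sketch: pull dose-relevant values out of a CT Radiation Dose Structured Report.
# The concept names ("Mean CTDIvol", "DLP", ...) and the file path are placeholders;
# real reports nest content items differently depending on the scanner vendor.
import pydicom

WANTED = {"Mean CTDIvol", "DLP", "CTDIw Phantom Type"}

def walk(dataset, found):
    """Recursively visit SR content items and collect wanted measurements."""
    for item in getattr(dataset, "ContentSequence", []):
        name_seq = getattr(item, "ConceptNameCodeSequence", None)
        name = name_seq[0].CodeMeaning if name_seq else ""
        if name in WANTED and hasattr(item, "MeasuredValueSequence"):
            mvs = item.MeasuredValueSequence[0]
            found.append((name, float(mvs.NumericValue)))
        walk(item, found)                      # content items can nest
    return found

sr = pydicom.dcmread("ct_dose_report.dcm")     # placeholder path to a dose SR
for name, value in walk(sr, []):
    print(f"{name}: {value}")
```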

  5. Trends in the Evolution of the Public Web, 1998-2002; The Fedora Project: An Open-source Digital Object Repository Management System; State of the Dublin Core Metadata Initiative, April 2003; Preservation Metadata; How Many People Search the ERIC Database Each Day?

    ERIC Educational Resources Information Center

    O'Neill, Edward T.; Lavoie, Brian F.; Bennett, Rick; Staples, Thornton; Wayland, Ross; Payette, Sandra; Dekkers, Makx; Weibel, Stuart; Searle, Sam; Thompson, Dave; Rudner, Lawrence M.

    2003-01-01

    Includes five articles that examine key trends in the development of the public Web: size and growth, internationalization, and metadata usage; Flexible Extensible Digital Object and Repository Architecture (Fedora) for use in digital libraries; developments in the Dublin Core Metadata Initiative (DCMI); the National Library of New Zealand Te Puna…

  6. The use of fluidized sand bed as an innovative technique for heat treating aluminum based castings

    NASA Astrophysics Data System (ADS)

    Ragab, Khaled

    The current study was carried out to arrive at a better understanding of the influences of the fluidized sand bed heat treatment on the tensile properties and quality indices of A356.2 and B319.2 casting alloys. For the purposes of validating the use of fluidized sand bed furnaces in industrial applications for heat treatment of 356 and 319 castings, the tensile properties and the quality indices of these alloys were correlated with the most common metallurgical parameters, such as strontium modification, grain refining, solutionizing time, aging parameters and quenching media. Traditional heat treatment technology, employing circulating air convection furnaces, was used to establish a relevant comparison with fluidized sand beds for the heat treatment of the alloys investigated, employing T6 continuous aging cycles or multi-temperature aging cycles. Quality charts were used to predict and/or select the best heat treatment conditions and techniques to be applied in industry in order to obtain the optimum properties required for particular engineering applications. The results revealed that the strength values achieved in T6-tempered 319 and 356 alloys are more responsive to fluidized bed (FB) heat treatment than to conventional convection furnace (CF) treatment for solution treatment times of up to 8 hours. Beyond this solution time, no noticeable difference in properties is observed with the two techniques. A significant increase in strength is observed in the FB heat-treated samples after short aging times of 0.5 and 1 hour, the trend continuing up to 5 hours. The 319 alloys show signs of overaging after 8 hours of aging using a conventional furnace, whereas with a fluidized bed, overaging occurs after 12 hours. Analysis of the tensile properties in terms of quality index charts showed that both modified and non-modified 319 and 356 alloys display the same, or better, quality after only a 2-hr treatment in an FB compared to 10 hours when using a CF. The quality values of the 356 alloys are more responsive to the FB technique than those of the 319 alloys through long aging times of up to 5 hours. The 319 alloys heat-treated in an FB, however, show better quality values after 0.5 hour of aging and for solution treatment times of up to 5 hours than those treated using a CF. With regard to the quality charts of the 319 alloys, the heat-treated samples show that increasing the aging time up to peak strength, i.e. 8 and 12 hours in a CF and an FB, respectively, results in an increase in the alloy strength with a decrease in the quality values, for each of the solution heat treatment times used. The statistical analysis of the results reveals that modification and the heating rate of the heat treatment technique have the greatest positive effects on the quality values of the 356 alloys. The use of a fluidized sand bed for the direct quenching-aging treatment of A356.2 and B319.2 casting alloys yields greater UTS and YS values compared to conventional furnace quenched alloys. The strength values of T6-tempered A356 and B319 alloys are greater when quenched in water compared to those quenched in an FB or CF. For the same aging conditions (170°C/4h), the fluidized bed quenched-aged 319 and 356 alloys show nearly the same or better strength values than those quenched in water and then aged in a CF or an FB.
Based on the quality charts developed for alloys subjected to different quenching media, higher quality index values are obtained by water-quenched T6-tempered A356 alloys and conventional furnace quenched-aged T6-tempered B319 alloys, respectively. The modification factor has the most significant effect on the quality results of the alloys investigated, for all heat treatment cycles, as compared to other metallurgical parameters. The results for alloys subjected to multi-temperature aging cycles reveal that the strength results obtained after the T6 continuous aging treatment of A356 alloys are not improved by means of multi-temperature aging cycles, indicating therefore that the optimum properties are obtained using a T6 aging treatment. The optimum strength properties of B319.2 alloys, however, are obtained by applying multi-temperature aging cycles such as, for example, 230°C/2h followed by 180°C/8h, rather than a T6 aging treatment. In the case of multi-temperature aging cycles, the modification factor has the most significant role in improving the quality index values of the 356 and 319 alloys. The FB heat-treated alloys have the highest strength values for all heat treatment cycles compared to CF heat-treated alloys; however, the FB has no significant effect on the quality values of the 319 alloys compared to the CF. Regarding the interaction plots for multi-temperature aging cycles, the most significant factors that have a positive effect on the quality values of the 356 alloys are modification and the 230°C/2h + 180°C/8h multi-temperature aging cycle. (Abstract shortened by UMI.)

  7. The RBV metadata catalog

    NASA Astrophysics Data System (ADS)

    Andre, Francois; Fleury, Laurence; Gaillardet, Jerome; Nord, Guillaume

    2015-04-01

    RBV (Réseau des Bassins Versants) is a French initiative to consolidate the national efforts made by more than 15 elementary observatories funded by various research institutions (CNRS, INRA, IRD, IRSTEA, universities) that study river and drainage basins. The RBV Metadata Catalogue aims at giving a unified vision of the work produced by every observatory to both the members of the RBV network and any external person interested in this domain of research. Another goal is to share this information with other existing metadata portals. Metadata management is heterogeneous among observatories, ranging from absence to mature harvestable catalogues. Here, we explain the strategy used to design a state-of-the-art catalogue in this situation. The main features are as follows: - Multiple input methods: metadata records in the catalogue can either be entered with the graphical user interface, harvested from an existing catalogue or imported from an information system through simplified web services. - Hierarchical levels: metadata records may describe an observatory, one of its experimental sites or a single dataset produced by one instrument. - Multilingualism: metadata can be easily entered in several configurable languages. - Compliance with standards: the back-office part of the catalogue is based on a CSW metadata server (Geosource), which ensures ISO19115 compatibility and the ability to be harvested (globally or partially). Ongoing tasks focus on the use of SKOS thesauri and SensorML descriptions of the sensors. - Ergonomics: the user interface is built with the GWT framework to offer a rich client application with fully ajaxified navigation. - Source code sharing: the work has led to the development of reusable components which can be used to quickly create new metadata forms in other GWT applications. You can visit the catalogue (http://portailrbv.sedoo.fr/) or contact us by email at rbv@sedoo.fr.

  8. OntoStudyEdit: a new approach for ontology-based representation and management of metadata in clinical and epidemiological research.

    PubMed

    Uciteli, Alexandr; Herre, Heinrich

    2015-01-01

    The specification of metadata in clinical and epidemiological study projects absorbs significant expense. The validity and quality of the collected data depend heavily on the precise and semantically correct representation of their metadata. In various research organizations that plan and coordinate studies, the required metadata are specified differently, depending on many conditions, e.g., on the study management software used. The latter does not always meet the needs of a particular research organization, e.g., with respect to the relevant metadata attributes and structuring possibilities. The objective of the research set forth in this paper is the development of a new approach for ontology-based representation and management of metadata. The basic features of this approach are demonstrated by the software tool OntoStudyEdit (OSE). The OSE is designed and developed according to the three-ontology method. This method for developing software is based on the interactions of three different kinds of ontologies: a task ontology, a domain ontology and a top-level ontology. The OSE can be easily adapted to different requirements, and it supports an ontologically founded representation and efficient management of metadata. The metadata specifications can be imported from various sources; they can be edited with the OSE, and they can be exported in several formats, which are used, e.g., by different study management software. Advantages of this approach are the adaptability of the OSE by integrating suitable domain ontologies, the ontological specification of mappings between the import/export formats and the domain ontology, the specification of the study metadata in a uniform manner and its reuse in different research projects, and intuitive data entry for non-expert users.

  9. EarthCube Data Discovery Hub: Enhancing, Curating and Finding Data across Multiple Geoscience Data Sources.

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Valentine, D.; Richard, S. M.; Gupta, A.; Meier, O.; Peucker-Ehrenbrink, B.; Hudman, G.; Stocks, K. I.; Hsu, L.; Whitenack, T.; Grethe, J. S.; Ozyurt, I. B.

    2017-12-01

    EarthCube Data Discovery Hub (DDH) is an EarthCube Building Block project using technologies developed in CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability) to enable geoscience users to explore a growing portfolio of EarthCube-created and other geoscience-related resources. Over 1 million metadata records are available for discovery through the project portal (cinergi.sdsc.edu). These records are retrieved from data facilities, including federal, state and academic sources, or contributed by geoscientists through workshops, surveys, or other channels. CINERGI metadata augmentation pipeline components 1) provide semantic enhancement based on a large ontology of geoscience terms, using text analytics to generate keywords with references to ontology classes, 2) add spatial extents based on place names found in the metadata record, and 3) add organization identifiers to the metadata. The records are indexed and can be searched via a web portal and standard search APIs. The added metadata content improves discoverability and interoperability of the registered resources. Specifically, the addition of ontology-anchored keywords enables faceted browsing and lets users navigate to datasets related by variables measured, equipment used, science domain, processes described, geospatial features studied, and other dataset characteristics that are generated by the pipeline. DDH also lets data curators access and edit the automatically generated metadata records using the CINERGI metadata editor, accept or reject the enhanced metadata content, and consider it in updating their metadata descriptions. We consider several complex data discovery workflows, in environmental seismology (quantifying sediment and water fluxes using seismic data), marine biology (determining available temperature, location, weather and bleaching characteristics of coral reefs related to measurements in a given coral reef survey), and river geochemistry (discovering observations relevant to geochemical measurements outside the tidal zone, given specific discharge conditions).
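    A toy sketch of the keyword-enhancement step described above: scan a record's free text for known terms and attach keywords that reference ontology classes. The mini-vocabulary and URIs are invented; CINERGI uses a large geoscience ontology and real text analytics.

```python
# Sketch: "semantic enhancement" in miniature -- match a record's title/abstract
# against a tiny term-to-ontology lookup and append keywords with class URIs.
# Vocabulary entries and URIs below are placeholders.
import re

VOCAB = {                                     # surface form -> (keyword label, class URI)
    "sediment flux": ("Sediment transport", "http://vocab.example.org/Process_17"),
    "seismometer":   ("Seismometer",        "http://vocab.example.org/Equipment_02"),
    "coral reef":    ("Coral reef",         "http://vocab.example.org/Feature_41"),
}

def enhance(record):
    text = (record.get("title", "") + " " + record.get("abstract", "")).lower()
    keywords = record.setdefault("keywords", [])
    for surface, (label, uri) in VOCAB.items():
        if re.search(r"\b" + re.escape(surface) + r"\b", text):
            keywords.append({"label": label, "uri": uri})
    return record

record = {"title": "Quantifying sediment flux with seismometer networks",
          "abstract": "Broadband seismometer data used to estimate bedload transport."}
print(enhance(record)["keywords"])
```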

  10. Hypofractionated Nodal Radiation Therapy for Breast Cancer Was Not Associated With Increased Patient-Reported Arm or Brachial Plexopathy Symptoms.

    PubMed

    Leong, Nelson; Truong, Pauline T; Tankel, Keith; Kwan, Winkle; Weir, Lorna; Olivotto, Ivo A

    2017-12-01

    To determine whether nodal radiation therapy (RT) for breast cancer using modest hypofractionation (HF) with 2.25 to 2.5 Gy per fraction (fx) was associated with increased patient-reported arm symptoms, compared with conventional fractionation (CF) ≤2 Gy/fx. Two cancer registries were used to identify subjects who received computed tomography-planned nodal RT for pT1-3, pN0-2, M0 breast cancer, from 2007 to 2010 at 2 cancer institutions. After ethics approval, patients were mailed an explanatory letter and the Self-reported Arm Symptom Scale, a validated instrument with 8 questions about arm symptoms and 5 related to activities of daily living. Clinicopathologic characteristics and Self-reported Arm Symptom Scale scores were compared between HF/CF cohorts using nonparametric analysis, χ2 analysis, and multivariate ordinal regression. Of 1759 patients, 800 (45.5%) returned a completed survey. A total of 708 eligible cases formed the study cohort. Of these, 406 (57%) received HFRT (40 Gy/16 fx, 45 Gy/20 fx), and 302 (43%) received CFRT (45-50 Gy/25 fx, 50.4 Gy/28 fx). Median time interval after RT was 5.7 years. Forty-three percent and 75% of patients received breast-conserving surgery and chemotherapy, respectively. Twenty-two percent received breast boost RT, independent of fractionation. Median age at diagnosis was 59 years (HF) and 53 years (CF) (P<.001). The mean numbers of excised (n=12) and involved (n=3) nodes were similar between fractionation cohorts (P=.44), as were the mean sums of responses in arm symptoms (P=.17) and activities of daily living (P=.85). Patients receiving HF reported lower rates of shoulder stiffness (P=.04), trouble moving the arm (P=.02), and difficulty reaching overhead (P<.01) compared with the CF cohort. There was no difference in self-reported arm swelling or symptoms related to brachial plexopathy. Nodal RT with hypofractionation was not associated with increased patient-reported arm symptoms or functional deficits compared with CF. Subjects treated with CF reported more disability in certain aspects of arm/shoulder function. These data support shorter fractionation utilization when regional nodes are within the therapeutic target. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Atmospheric abundance and global emissions of perfluorocarbons CF4, C2F6 and C3F8 since 1800 inferred from ice core, firn, air archive and in situ measurements

    NASA Astrophysics Data System (ADS)

    Trudinger, Cathy M.; Fraser, Paul J.; Etheridge, David M.; Sturges, William T.; Vollmer, Martin K.; Rigby, Matt; Martinerie, Patricia; Mühle, Jens; Worton, David R.; Krummel, Paul B.; Steele, L. Paul; Miller, Benjamin R.; Laube, Johannes; Mani, Francis S.; Rayner, Peter J.; Harth, Christina M.; Witrant, Emmanuel; Blunier, Thomas; Schwander, Jakob; O'Doherty, Simon; Battle, Mark

    2016-09-01

    Perfluorocarbons (PFCs) are very potent and long-lived greenhouse gases in the atmosphere, released predominantly during aluminium production and semiconductor manufacture. They have been targeted for emission controls under the United Nations Framework Convention on Climate Change. Here we present the first continuous records of the atmospheric abundance of CF4 (PFC-14), C2F6 (PFC-116) and C3F8 (PFC-218) from 1800 to 2014. The records are derived from high-precision measurements of PFCs in air extracted from polar firn or ice at six sites (DE08, DE08-2, DSSW20K, EDML, NEEM and South Pole) and air archive tanks and atmospheric air sampled from both hemispheres. We take account of the age characteristics of the firn and ice core air samples and demonstrate excellent consistency between the ice core, firn and atmospheric measurements. We present an inversion for global emissions from 1900 to 2014. We also formulate the inversion to directly infer emission factors for PFC emissions due to aluminium production prior to the 1980s. We show that 19th century atmospheric levels, before significant anthropogenic influence, were stable at 34.1 ± 0.3 ppt for CF4 and below detection limits of 0.002 and 0.01 ppt for C2F6 and C3F8, respectively. We find a significant peak in CF4 and C2F6 emissions around 1940, most likely due to the high demand for aluminium during World War II, for example for construction of aircraft, but these emissions were nevertheless much lower than in recent years. The PFC emission factors for aluminium production in the early 20th century were significantly higher than today but have decreased since then due to improvements and better control of the smelting process. Mitigation efforts have led to decreases in emissions from peaks in 1980 (CF4) or early-to-mid-2000s (C2F6 and C3F8) despite the continued increase in global aluminium production; however, these decreases in emissions appear to have recently halted. We see a temporary reduction of around 15 % in CF4 emissions in 2009, presumably associated with the impact of the global financial crisis on aluminium and semiconductor production.

  12. 75 FR 4689 - Electronic Tariff Filings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-29

    ... collaborative process relies upon the use of metadata (or information) about the tariff filing, including such... code.\\5\\ Because the Commission is using the electronic metadata to establish statutory action dates... code, as well as accurately providing any other metadata. 6. Similarly, the Commission will be using...

  13. The center for expanded data annotation and retrieval

    PubMed Central

    Bean, Carol A; Cheung, Kei-Hoi; Dumontier, Michel; Durante, Kim A; Gevaert, Olivier; Gonzalez-Beltran, Alejandra; Khatri, Purvesh; Kleinstein, Steven H; O’Connor, Martin J; Pouliot, Yannick; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Wiser, Jeffrey A

    2015-01-01

    The Center for Expanded Data Annotation and Retrieval is studying the creation of comprehensive and expressive metadata for biomedical datasets to facilitate data discovery, data interpretation, and data reuse. We take advantage of emerging community-based standard templates for describing different kinds of biomedical datasets, and we investigate the use of computational techniques to help investigators to assemble templates and to fill in their values. We are creating a repository of metadata from which we plan to identify metadata patterns that will drive predictive data entry when filling in metadata templates. The metadata repository not only will capture annotations specified when experimental datasets are initially created, but also will incorporate links to the published literature, including secondary analyses and possible refinements or retractions of experimental interpretations. By working initially with the Human Immunology Project Consortium and the developers of the ImmPort data repository, we are developing and evaluating an end-to-end solution to the problems of metadata authoring and management that will generalize to other data-management environments. PMID:26112029

  14. Establishing semantic interoperability of biomedical metadata registries using extended semantic relationships.

    PubMed

    Park, Yu Rang; Yoon, Young Jo; Kim, Hye Hyeon; Kim, Ju Han

    2013-01-01

    Achieving semantic interoperability is critical for biomedical data sharing between individuals, organizations and systems. The ISO/IEC 11179 MetaData Registry (MDR) standard has been recognized as one of the solutions for this purpose. The standard model, however, is limited: concepts that consist of two or more values, such as blood pressure with its systolic and diastolic components, cannot be represented. We addressed the structural limitations of ISO/IEC 11179 with an integrated metadata object model in our previous research. In the present study, we introduce semantic extensions for the model by defining three new types of semantic relationships: dependency, composite and variable relationships. To evaluate our extensions in a real-world setting, we measured the efficiency of metadata reduction achieved by mapping to existing elements. We extracted metadata from the College of American Pathologists Cancer Protocols and then evaluated our extensions. With no semantic loss, one third of the extracted metadata could be successfully eliminated, suggesting a better strategy for implementing clinical MDRs with improved efficiency and utility.

  15. What Information Does Your EHR Contain? Automatic Generation of a Clinical Metadata Warehouse (CMDW) to Support Identification and Data Access Within Distributed Clinical Research Networks.

    PubMed

    Bruland, Philipp; Doods, Justin; Storck, Michael; Dugas, Martin

    2017-01-01

    Data dictionaries provide structural meta-information about data definitions in health information technology (HIT) systems. In this regard, reusing healthcare data for secondary purposes offers several advantages (e.g. reduced documentation time or increased data quality). Prerequisites for data reuse are data quality, data availability and identical meaning of the data. In diverse projects, research data warehouses serve as core components between heterogeneous clinical databases and various research applications. Given the complexity (high number of data elements) and dynamics (regular updates) of electronic health record (EHR) data structures, we propose a clinical metadata warehouse (CMDW) based on a metadata registry standard. Metadata of two large hospitals were automatically inserted into two CMDWs containing 16,230 forms and 310,519 data elements. Automatic updates of metadata are possible, as are semantic annotations. A CMDW allows metadata discovery, data quality assessment and similarity analyses. Common data models for distributed research networks can be established based on similarity analyses.

  16. Separation of metadata and pixel data to speed DICOM tag morphing.

    PubMed

    Ismail, Mahmoud; Philbin, James

    2013-01-01

    The DICOM information model combines pixel data and metadata in a single DICOM object, so the metadata cannot be accessed separately from the pixel data. There are use cases where only the metadata are accessed, and the current DICOM object format increases their running time. Tag morphing is one of those use cases: it includes deletion, insertion or manipulation of one or more of the metadata attributes. It is typically used for order reconciliation on study acquisition, or to localize the issuer of patient ID (IPID) and the patient ID attributes when data from one domain are transferred to a different domain. In this work, we propose using Multi-Series DICOM (MSD) objects, which separate metadata from pixel data and remove duplicate attributes, to reduce the time required for tag morphing. The time required to update a set of study attributes in each format is compared. The results show that the MSD format significantly reduces the time required for tag morphing.
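
    As an illustration of what tag morphing involves in practice, the minimal sketch below uses the open-source pydicom library to read, modify, insert, and delete metadata attributes of a DICOM object; the file names and attribute values are hypothetical, and the sketch shows conventional single-object handling rather than the MSD format proposed by the authors.

      # Sketch of DICOM "tag morphing": edit metadata attributes only.
      # Assumes pydicom is installed; file names are hypothetical.
      import pydicom

      # Metadata-only read (skips pixel data) for fast inspection.
      meta = pydicom.dcmread("study/img0001.dcm", stop_before_pixels=True)
      print(meta.get("PatientID", "<missing>"), meta.get("IssuerOfPatientID", "<missing>"))

      # For an actual morph, the whole object must be read and rewritten,
      # which is the overhead the MSD format aims to avoid.
      ds = pydicom.dcmread("study/img0001.dcm")
      ds.PatientID = "LOCAL-0042"              # manipulate an attribute
      ds.IssuerOfPatientID = "HOSPITAL-B"      # insert/localize the IPID
      if "StudyDescription" in ds:
          del ds.StudyDescription              # delete an attribute
      ds.save_as("study/img0001_morphed.dcm")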

  17. Do Community Recommendations Improve Metadata?

    NASA Astrophysics Data System (ADS)

    Gordon, S.; Habermann, T.; Jones, M. B.; Leinfelder, B.; Mecum, B.; Powers, L. A.; Slaughter, P.

    2016-12-01

    Complete documentation of scientific data is the surest way to facilitate discovery and reuse. What is complete metadata? There are many metadata recommendations from communities like the OGC, FGDC, NASA, and LTER that can provide data documentation guidance for discovery, access, use and understanding. Often, the recommendations that communities develop are for a particular metadata dialect. Two examples of this are the LTER Completeness recommendation for EML and the FGDC Data Discovery recommendation for CSDGM. Can community adoption of a recommendation ensure that what is included in the metadata is understandable to the scientific community and beyond? By applying quantitative analysis to different LTER and USGS metadata collections in DataONE and ScienceBase, we show that community recommendations can improve the completeness of collections over time. Additionally, by comparing communities in DataONE that use the EML and CSDGM dialects but have not adopted the recommendations with the communities that have, the positive effects of recommendation adoption on documentation completeness can be measured.
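
    A rough, hedged sketch of the kind of completeness analysis described above is given below: a record is scored by the fraction of recommended fields it populates. The field list and records are invented for illustration and do not reproduce the LTER or FGDC recommendations.

      # Hypothetical completeness scoring of metadata records against a
      # community recommendation (expressed as a set of expected fields).
      RECOMMENDED_FIELDS = {"title", "abstract", "bounding_box", "temporal_extent",
                            "contact", "license", "methods"}   # illustrative only

      def completeness(record: dict) -> float:
          """Fraction of recommended fields that are present and non-empty."""
          present = {f for f in RECOMMENDED_FIELDS if record.get(f)}
          return len(present) / len(RECOMMENDED_FIELDS)

      records = [
          {"title": "Lake chemistry 1990-2010", "abstract": "...", "contact": "x@y.org"},
          {"title": "Soil cores", "abstract": "...", "license": "CC-BY",
           "bounding_box": "-120,35,-119,36", "temporal_extent": "2001/2005",
           "contact": "a@b.org", "methods": "field protocol v2"},
      ]
      for r in records:
          print(f"{r['title']}: {completeness(r):.0%} complete")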

  18. Metadata Sets for e-Government Resources: The Extended e-Government Metadata Schema (eGMS+)

    NASA Astrophysics Data System (ADS)

    Charalabidis, Yannis; Lampathaki, Fenareti; Askounis, Dimitris

    At the dawn of the Semantic Web era, metadata appear as a key enabler that assists management of e-Government resources related to the provision of personalized, efficient and proactive services oriented towards real citizens’ needs. Different authorities typically use different terms to describe their resources and publish them in various e-Government registries; these registries may enhance the access to and delivery of governmental knowledge, but they also need to communicate seamlessly at a national and pan-European level, so the need for a unified e-Government metadata standard emerges. This paper presents the creation of an ontology-based extended metadata set for e-Government resources that embraces services, documents, XML Schemas, code lists, public bodies and information systems. Such a metadata set formalizes the exchange of information between portals and registries and assists service transformation and simplification efforts, and it can be further taken into consideration when applying Web 2.0 techniques in e-Government.

  19. A Generic Metadata Editor Supporting System Using Drupal CMS

    NASA Astrophysics Data System (ADS)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in the Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Metadata Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience domain, such as the Biological Profile of the FGDC CSDGM, or profiles for geopolitical regions, such as the European Profile or the North American Profile in the ISO standards. It is therefore desirable to have a software foundation to support metadata creation and editing for multiple standards and profiles, without re-inventing the wheel. We have developed a generic, flexible software system to do just that: to facilitate the support of multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal inter-dependencies on other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as an XML blob. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form, for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on a metadata record before and after submission. It is trivial to store the metadata record as an actual XML file or in a storage/archive system. We are working on adding many features to help editor users, such as auto-completion, pre-population of forms, partial saving, and automatic schema validation. In this presentation we will demonstrate a few sample editors, including an FGDC editor and a bare-bones editor for ISO 19115/19139. We will also demonstrate the use of templates during the definition phase, with support for export and import functions. Form pre-population and input validation will also be covered. These modules are available as open-source software from the Islandora software foundation, as a component of a larger Drupal-based data archive system. They can be easily installed as a stand-alone system or plugged into other existing metadata platforms.
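
    The first of the two steps described above, deriving a form definition from an XML schema or instance, can be sketched in a language-agnostic way. The snippet below is a minimal Python illustration (not the Drupal/PHP module itself) that walks a sample metadata instance and emits one form field per leaf element; the element names are hypothetical.

      # Illustrative only: derive a flat form definition from a sample
      # XML metadata instance, roughly mirroring the editor's first step.
      import xml.etree.ElementTree as ET

      sample = """<metadata>
        <idinfo><citation><title>Example dataset</title></citation>
          <abstract>Short description</abstract></idinfo>
        <contact><email>someone@example.org</email></contact>
      </metadata>"""

      def form_definition(xml_text: str) -> list[dict]:
          """Return one form field per leaf element, keyed by its path."""
          root = ET.fromstring(xml_text)
          fields = []
          def walk(elem, path):
              children = list(elem)
              if not children:
                  fields.append({"path": path, "label": elem.tag,
                                 "default": (elem.text or "").strip()})
              for child in children:
                  walk(child, f"{path}/{child.tag}")
              # (a real editor would also honour the schema's types and cardinality)
          walk(root, root.tag)
          return fields

      for field in form_definition(sample):
          print(field)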

  20. Improvement of thermal radiation characteristic of AC servomotor using Al-CNT composite material

    NASA Astrophysics Data System (ADS)

    Kikuchi, Y.; Wakiwaka, H.; Yanagihara, M.

    2018-02-01

    This study deals with a high-thermal-conductivity aluminum-carbon nanotube (CNT) composite containing carbon fiber (CF) and the high heat-radiation performance of an AC servomotor using a stator made of this composite material. The composite was fabricated by melting a mixture of granular aluminum (particle size below 200 μm) and CNTs while simultaneously applying pressure. Two kinds of motors, made using aluminum and the composite, were evaluated to confirm the effect of thermal conductivity of the motor stator. A test rod of the composite with 14 wt% CF-7 wt% CNT-aluminum showed excellent thermal conductivity of 169 W/(m K) in the radial direction and 173 W/(m K) in the lengthwise direction. According to the obtained temperature radiation characteristic of the AC servomotor, the CNT composite stator decreased the energy consumption to 16% compared to the conventional one. As a result, the highly efficient motor with the CNT composite stator showed an improved radiation characteristic.

  1. Highly-stable and -flexible graphene/(CF3SO2)2NH/graphene transparent conductive electrodes for organic solar cells

    NASA Astrophysics Data System (ADS)

    Seo, Sang Woo; Lee, Ha Seung; Shin, Dong Hee; Kim, Ju Hwan; Jang, Chan Wook; Kim, Jong Min; Kim, Sung; Choi, Suk-Ho

    2017-10-01

    We first employ highly-stable and -flexible (CF3SO2)2NH-doped graphene (TFSA/GR) and GR-encapsulated TFSA/GR (GR/TFSA/GR) transparent conductive electrodes (TCEs) prepared on polyethylene terephthalate substrates for flexible organic solar cells (OSCs). Compared to conventional indium tin oxide (ITO) TCEs, the TFSA-doped-GR TCEs show higher optical transmittance and larger sheet resistance. The TFSA/GR and GR/TFSA/GR TCEs show work functions of 4.89 ± 0.16 and 4.97 ± 0.18 eV, respectively, which are not only larger than those of the ITO TCEs but also indicate p-type doping of GR, and are therefore more suitable for anode TCEs of OSCs. In addition, typical GR/TFSA/GR-TCE OSCs are much more mechanically flexible than the ITO-TCE ones with their photovoltaic parameters being similar, as proved by bending tests as functions of cycle and curvature.

  2. A metadata reporting framework (FRAMES) for synthesis of ecohydrological observations

    DOE PAGES

    Christianson, Danielle S.; Varadharajan, Charuleka; Christoffersen, Bradley; ...

    2017-06-20

    Metadata describe the ancillary information needed for data interpretation, comparison across heterogeneous datasets, and quality control and quality assessment (QA/QC). Metadata enable the synthesis of diverse ecohydrological and biogeochemical observations, an essential step in advancing a predictive understanding of earth systems. Environmental observations can be taken across a wide range of spatiotemporal scales in a variety of measurement settings and approaches, and saved in multiple formats. Thus, well-organized, consistent metadata are required to produce usable data products from diverse observations collected in disparate field sites. However, existing metadata reporting protocols do not support the complex data synthesis needs of interdisciplinary earth system research. We developed a metadata reporting framework (FRAMES) to enable predictive understanding of carbon cycling in tropical forests under global change. FRAMES adheres to best practices for data and metadata organization, enabling consistent data reporting and thus compatibility with a variety of standardized data protocols. We used an iterative scientist-centered design process to develop FRAMES. The resulting modular organization streamlines metadata reporting and can be expanded to incorporate additional data types. The flexible data reporting format incorporates existing field practices to maximize data-entry efficiency. With FRAMES’s multi-scale measurement position hierarchy, data can be reported at observed spatial resolutions and then easily aggregated and linked across measurement types to support model-data integration. FRAMES is in early use by both data providers and users. In this article, we describe FRAMES, identify lessons learned, and discuss areas of future development.

  3. Handling Metadata in a Neurophysiology Laboratory

    PubMed Central

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G.; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the preprocessing steps applied to the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often make it too burdensome for researchers to produce such detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework. PMID:27486397

  4. Handling Metadata in a Neurophysiology Laboratory.

    PubMed

    Zehl, Lyuba; Jaillet, Florent; Stoewer, Adrian; Grewe, Jan; Sobolev, Andrey; Wachtler, Thomas; Brochier, Thomas G; Riehle, Alexa; Denker, Michael; Grün, Sonja

    2016-01-01

    To date, non-reproducibility of neurophysiological research is a matter of intense discussion in the scientific community. A crucial component to enhance reproducibility is to comprehensively collect and store metadata, that is, all information about the experiment, the data, and the preprocessing steps applied to the data, such that they can be accessed and shared in a consistent and simple manner. However, the complexity of experiments, the highly specialized analysis workflows and a lack of knowledge on how to make use of supporting software tools often make it too burdensome for researchers to produce such detailed documentation. For this reason, the collected metadata are often incomplete, incomprehensible for outsiders or ambiguous. Based on our research experience in dealing with diverse datasets, we here provide conceptual and technical guidance to overcome the challenges associated with the collection, organization, and storage of metadata in a neurophysiology laboratory. Through the concrete example of managing the metadata of a complex experiment that yields multi-channel recordings from monkeys performing a behavioral motor task, we practically demonstrate the implementation of these approaches and solutions with the intention that they may be generalized to other projects. Moreover, we detail five use cases that demonstrate the resulting benefits of constructing a well-organized metadata collection when processing or analyzing the recorded data, in particular when these are shared between laboratories in a modern scientific collaboration. Finally, we suggest an adaptable workflow to accumulate, structure and store metadata from different sources using, by way of example, the odML metadata framework.

  5. A metadata reporting framework (FRAMES) for synthesis of ecohydrological observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christianson, Danielle S.; Varadharajan, Charuleka; Christoffersen, Bradley

    Metadata describe the ancillary information needed for data interpretation, comparison across heterogeneous datasets, and quality control and quality assessment (QA/QC). Metadata enable the synthesis of diverse ecohydrological and biogeochemical observations, an essential step in advancing a predictive understanding of earth systems. Environmental observations can be taken across a wide range of spatiotemporal scales in a variety of measurement settings and approaches, and saved in multiple formats. Thus, well-organized, consistent metadata are required to produce usable data products from diverse observations collected in disparate field sites. However, existing metadata reporting protocols do not support the complex data synthesis needs of interdisciplinary earth system research. We developed a metadata reporting framework (FRAMES) to enable predictive understanding of carbon cycling in tropical forests under global change. FRAMES adheres to best practices for data and metadata organization, enabling consistent data reporting and thus compatibility with a variety of standardized data protocols. We used an iterative scientist-centered design process to develop FRAMES. The resulting modular organization streamlines metadata reporting and can be expanded to incorporate additional data types. The flexible data reporting format incorporates existing field practices to maximize data-entry efficiency. With FRAMES’s multi-scale measurement position hierarchy, data can be reported at observed spatial resolutions and then easily aggregated and linked across measurement types to support model-data integration. FRAMES is in early use by both data providers and users. In this article, we describe FRAMES, identify lessons learned, and discuss areas of future development.

  6. Streamlining geospatial metadata in the Semantic Web

    NASA Astrophysics Data System (ADS)

    Fugazza, Cristiano; Pepe, Monica; Oggioni, Alessandro; Tagliolato, Paolo; Carrara, Paola

    2016-04-01

    In the geospatial realm, data annotation and discovery rely on a number of ad-hoc formats and protocols. These have been created to enable domain-specific use cases for which generalized search is not feasible. Metadata are at the heart of the discovery process; nevertheless, they are often neglected or encoded in formats that either are not aimed at efficient retrieval of resources or are plainly outdated. In particular, the quantum leap represented by the Linked Open Data (LOD) movement has so far not induced a consistent, interlinked baseline in the geospatial domain. In a nutshell, datasets, the scientific literature related to them, and ultimately the researchers behind these products are only loosely connected; the corresponding metadata are intelligible only to humans, duplicated on different systems, and seldom consistent. Instead, our workflow for metadata management envisages i) editing via customizable web-based forms, ii) encoding of records in any XML application profile, iii) translation into RDF (involving the semantic lift of metadata records), and finally iv) storage of the metadata as RDF and back-translation into the original XML format with added semantics-aware features. Phase iii) hinges on relating resource metadata to RDF data structures that represent keywords from code lists and controlled vocabularies, toponyms, researchers, institutes, and virtually any description one can retrieve (or directly publish) in the LOD Cloud. In the context of a distributed Spatial Data Infrastructure (SDI) built on free and open-source software, we detail phases iii) and iv) of our workflow for the semantics-aware management of geospatial metadata.
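
    Phase iii) above, the semantic lift of a metadata record into RDF, can be illustrated with a minimal sketch using the rdflib library; the dataset URI, vocabulary URI, and property choices are assumptions for illustration, not the authors' actual mapping.

      # Hypothetical sketch of the "semantic lift" step: keywords from an XML
      # metadata record are related to controlled-vocabulary concepts as RDF.
      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DCTERMS, RDF

      DCAT = Namespace("http://www.w3.org/ns/dcat#")
      g = Graph()

      dataset = URIRef("https://sdi.example.org/dataset/lake-temp-2015")  # placeholder
      g.add((dataset, RDF.type, DCAT.Dataset))
      g.add((dataset, DCTERMS.title, Literal("Lake temperature profiles 2015")))
      # Lift a free-text keyword to a vocabulary concept (URI is illustrative).
      g.add((dataset, DCAT.keyword, Literal("limnology")))
      g.add((dataset, DCTERMS.subject,
             URIRef("https://vocab.example.org/themes/limnology")))

      print(g.serialize(format="turtle"))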

  7. Multi-facetted Metadata - Describing datasets with different metadata schemas at the same time

    NASA Astrophysics Data System (ADS)

    Ulbricht, Damian; Klump, Jens; Bertelmann, Roland

    2013-04-01

    Inspired by the wish to re-use research data, a lot of work is being done to bring the data systems of the earth sciences together. Discovery metadata is disseminated to data portals to allow building of customized indexes of catalogued dataset items. Data that were once acquired in the context of a scientific project are open for reappraisal and can now be used by scientists who were not part of the original research team. To make data re-use easier, measurement methods and measurement parameters must be documented in an application metadata schema and described in a written publication. Linking datasets to publications - as DataCite [1] does - again requires a specific metadata schema, and every new use context of the measured data may require yet another metadata schema sharing only a subset of information with the meta information already present. To cope with the problem of metadata schema diversity in our common data repository at GFZ Potsdam, we established a solution to store file-based research data and describe these with an arbitrary number of metadata schemas. The core component of the data repository is an eSciDoc infrastructure that provides versioned container objects, called eSciDoc [2] "items". The eSciDoc content model allows assigning files to "items" and adding any number of metadata records to these "items". The eSciDoc items can be submitted, revised, and finally published, which makes the data and metadata available through the internet worldwide. GFZ Potsdam uses eSciDoc to support its scientific publishing workflow, including mechanisms for data review in peer review processes by providing temporary web links for external reviewers who do not have credentials to access the data. Based on the eSciDoc API, panMetaDocs [3] provides a web portal for data management in research projects. PanMetaDocs, which is based on panMetaWorks [4], is a PHP-based web application that allows data to be described with any XML-based schema. It uses the eSciDoc infrastructure's REST interface to store versioned dataset files and metadata in an XML format. The software is able to administer more than one eSciDoc metadata record per item and thus allows the description of a dataset according to its context. The metadata fields can be filled with static or dynamic content to reduce the number of fields that require manual entries to a minimum and, at the same time, make use of contextual information available in a project setting. Access rights can be adjusted to set the visibility of datasets to the required degree of openness. Metadata from separate instances of panMetaDocs can be syndicated to portals through RSS and OAI-PMH interfaces. The application architecture presented here allows storing file-based datasets and describing these datasets with any number of metadata schemas, depending on the intended use case. Data and metadata are stored in the same entity (eSciDoc items) and are managed by a software tool through the eSciDoc REST interface - in this case the application is panMetaDocs. Other software may re-use the produced items and modify the appropriate metadata records by accessing the web API of the eSciDoc data infrastructure. For presentation of the datasets in a web browser we are not bound to panMetaDocs; this is done by stylesheet transformation of the eSciDoc item. [1] http://www.datacite.org [2] http://www.escidoc.org , eSciDoc, FIZ Karlsruhe, Germany [3] http://panmetadocs.sf.net , panMetaDocs, GFZ Potsdam, Germany [4] http://metaworks.pangaea.de , panMetaWorks, Dr. R. Huber, MARUM, Univ. Bremen, Germany
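
    The storage pattern described above, one versioned item that carries data files together with several metadata records behind a REST interface, can be sketched as follows; the endpoint paths, payload structure, and schema labels are invented placeholders and do not reproduce the real eSciDoc or panMetaDocs APIs.

      # Hypothetical sketch: store one dataset item with two metadata records
      # (e.g. a DataCite-style record and a project-specific record) via REST.
      import requests  # assumed available

      REPO = "https://repository.example.org/api"   # placeholder endpoint

      item = {
          "files": [{"name": "profile_2012.csv", "href": "file:///data/profile_2012.csv"}],
          "metadata-records": [
              {"schema": "datacite-like", "xml": "<resource>...</resource>"},
              {"schema": "project-specific", "xml": "<obs>...</obs>"},
          ],
          "visibility": "project-only",   # adjust openness per dataset
      }

      resp = requests.post(f"{REPO}/items", json=item, timeout=30)
      resp.raise_for_status()
      item_id = resp.json()["id"]

      # Later: add or revise one metadata record without touching the data file.
      extra = {"schema": "iso19115-like", "xml": "<MD_Metadata>...</MD_Metadata>"}
      requests.post(f"{REPO}/items/{item_id}/metadata-records", json=extra, timeout=30)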

  8. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed

    2011-02-20

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  9. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    NASA Astrophysics Data System (ADS)

    Wollaber, Allan B.; Larsen, Edward W.

    2011-02-01

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  10. Oxidation Study of an Ultra High Temperature Ceramic Coatings Based on HfSiCN

    NASA Technical Reports Server (NTRS)

    Sacksteder, Dagny; Waters, Deborah L.; Zhu, Dongming

    2018-01-01

    High-temperature fiber-reinforced ceramic matrix composites (CMCs) are important for aerospace applications because of their low density, high strength, and significantly higher temperature capabilities compared to conventional metallic systems. The use of SiCf/SiC and Cf/SiC CMCs allows the design of lighter-weight, more fuel-efficient aircraft engines and also more advanced spacecraft airframe thermal protection systems. However, CMCs have to be protected with advanced environmental barrier coatings when they are incorporated into components for harsh environments such as aircraft engine or spacecraft applications. In this study, the high-temperature oxidation kinetics of an advanced HfSiCN coating on Cf/SiC CMC substrates were investigated at 1300 C, 1400 C, and 1500 C using thermogravimetric analysis (TGA). The parabolic rate constant and activation energy of the coating oxidation reaction were estimated from the experimental results. The oxidation studies showed that the coatings formed the most stable, predominant HfSiO4-HfO2 scales at 1400 C. A pre-oxidation test at 1400 C followed by subsequent oxidation tests at various temperatures also showed more adherent scales and slower scale growth, because of a reduced initial transient oxidation stage and an increased HfSiO4-HfO2 content in the scales formed on the HfSiCN coatings.

  11. Whole-gene CFTR sequencing combined with digital RT-PCR improves genetic diagnosis of cystic fibrosis.

    PubMed

    Straniero, Letizia; Soldà, Giulia; Costantino, Lucy; Seia, Manuela; Melotti, Paola; Colombo, Carla; Asselta, Rosanna; Duga, Stefano

    2016-12-01

    Despite extensive screening, 1-5% of cystic fibrosis (CF) patients lack a definite molecular diagnosis. Next-generation sequencing (NGS) is making genetic testing based on the identification of variants in extended genomic regions affordable. Within this framework, we analyzed 23 CF patients and one carrier by whole-gene CFTR resequencing: 4 were previously characterized and served as controls; 17 were cases lacking a complete diagnosis after a full conventional CFTR screening; 3 were consecutive subjects referred to our centers, not previously submitted to any screening. We also included in the custom NGS design the coding portions of the SCNN1A, SCNN1B and SCNN1G genes, encoding the subunits of the sodium channel ENaC, which have been found to be mutated in CF-like patients. Besides 2 novel SCNN1B missense mutations, we identified 22 previously known CFTR mutations, including 2 large deletions (whose breakpoints were precisely mapped), and novel deep-intronic variants, whose effect on splicing was excluded by ex-vivo analyses. Finally, for 2 patients, compound heterozygotes for a CFTR mutation and the intron 9 c.1210-34TG[11-12]T5 allele (known to be associated with decreased CFTR mRNA levels), the molecular diagnosis was implemented by measuring the residual level of wild-type transcript by digital reverse transcription polymerase chain reaction performed on RNA extracted from nasal brushing.

  12. Mucus-penetrating solid lipid nanoparticles for the treatment of cystic fibrosis: Proof of concept, challenges and pitfalls.

    PubMed

    Nafee, N; Forier, K; Braeckmans, K; Schneider, M

    2018-03-01

    Nanocarrier-mediated transmucosal drug delivery based on conventional mucoadhesive, muco-inert or mucus-penetrating nanoparticles (NPs) is a growing field, especially in challenging diseases like cystic fibrosis (CF). The efficacy of such systems dictates profound investigation of particle-mucus interaction and the factors governing the whole process. Although various techniques for studying particle diffusion in mucus have been introduced, standardized procedures are lacking. The study comprised different methods based on micro- and macro-displacement as well as colloidal stability and turbidimetric experiments. Artificial sputum medium (ASM), CF sputum and a mucus-secreting cell line (Calu-3 air interface culture, AIC) were applied. Solid lipid nanoparticles (SLNs) coated with variable hydrophilic sheaths (poloxamer, Tween 80 or PVA) represented the nanocarriers under investigation. Both micro-displacement studies based on single-particle tracking and macro-displacement experiments based on 3D time-lapse confocal imaging revealed diffusion decreasing in the order poloxamer- > Tween- > PVA-coated SLNs. Compared to ASM, CF sputum showed not only lower diffusion rates but also remarkable discrepancies in particle-mucus diffusion rate due to sputum heterogeneity. Meanwhile, in the case of Calu-3 AIC, the thickness of the mucosal layer as well as the density of the mucus network were key determinants in the diffusion process. The points emphasized in this study highlight the road towards in-vivo-relevant particle-mucus interaction research.

  13. 78 FR 67352 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ...-75-001. Applicants: Entergy Arkansas, Inc. Description: Metadata Correction--Sec. 1.01 Amendment to.... Description: Metadata Correction--Section 1.01 Amendment to be effective 12/31/9998. Filed Date: 10/25/13...: Entergy Louisiana, LLC. Description: Metadata Correction--Section 1.01 Amendment to be effective 12/31...

  14. Organizing Scientific Data Sets: Studying Similarities and Differences in Metadata and Subject Term Creation

    ERIC Educational Resources Information Center

    White, Hollie C.

    2012-01-01

    Background: According to Salo (2010), the metadata entered into repositories are "disorganized" and metadata schemes underlying repositories are "arcane". This creates a challenging repository environment in regards to personal information management (PIM) and knowledge organization systems (KOSs). This dissertation research is…

  15. Semantic Networks and Social Networks

    ERIC Educational Resources Information Center

    Downes, Stephen

    2005-01-01

    Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…

  16. EMERALD: A Flexible Framework for Managing Seismic Data

    NASA Astrophysics Data System (ADS)

    West, J. D.; Fouch, M. J.; Arrowsmith, R.

    2010-12-01

    The seismological community is challenged by the vast quantity of new broadband seismic data provided by large-scale seismic arrays such as EarthScope’s USArray. While this bonanza of new data enables transformative scientific studies of the Earth’s interior, it also illuminates limitations in the methods used to prepare and preprocess those data. At a recent seismic data processing focus group workshop, many participants expressed the need for better systems to minimize the time and tedium spent on data preparation in order to increase the efficiency of scientific research. Another challenge related to data from all large-scale transportable seismic experiments is that there currently exists no system for discovering and tracking changes in station metadata. This critical information, such as station location, sensor orientation, instrument response, and clock timing data, may change over the life of an experiment and/or be subject to post-experiment correction. Yet nearly all researchers utilize metadata acquired with the downloaded data, even though subsequent metadata updates might alter or invalidate results produced with older metadata. A third long-standing issue for the seismic community is the lack of easily exchangeable seismic processing codes. This problem stems directly from the storage of seismic data as individual time series files, and the history of each researcher developing his or her preferred data file naming convention and directory organization. Because most processing codes rely on the underlying data organization structure, such codes are not easily exchanged between investigators. To address these issues, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The goal of the EMERALD project is to provide seismic researchers with a unified, user-friendly, extensible system for managing seismic event data, thereby increasing the efficiency of scientific enquiry. EMERALD stores seismic data and metadata in a state-of-the-art open source relational database (PostgreSQL), and can, on a timed basis or on demand, download the most recent metadata, compare it with previously acquired values, and alert the user to changes. The backend relational database is capable of easily storing and managing many millions of records. The extensible, plug-in architecture of the EMERALD system allows any researcher to contribute new visualization and processing methods written in any of 12 programming languages, and a central Internet-enabled repository for such methods provides users with the opportunity to download, use, and modify new processing methods on demand. EMERALD includes data acquisition tools allowing direct importation of seismic data, and also imports data from a number of existing seismic file formats. Pre-processed clean sets of data can be exported as standard sac files with user-defined file naming and directory organization, for use with existing processing codes. The EMERALD system incorporates existing acquisition and processing tools, including SOD, TauP, GMT, and FISSURES/DHI, making much of the functionality of those tools available in a unified system with a user-friendly web browser interface. EMERALD is now in beta test. See emerald.asu.edu or contact john.d.west@asu.edu for more details.
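
    The metadata change tracking described above (periodically re-downloading station metadata and comparing it with previously acquired values) can be sketched as a simple field-by-field diff; the field names, values, and storage representation below are illustrative assumptions rather than EMERALD's actual PostgreSQL schema.

      # Hypothetical sketch of station-metadata change detection:
      # compare freshly downloaded values against previously stored ones.
      from typing import Any

      def diff_station_metadata(stored: dict[str, Any], fresh: dict[str, Any]) -> list[str]:
          """Return human-readable alerts for changed, added, or removed fields."""
          alerts = []
          for key in sorted(stored.keys() | fresh.keys()):
              old, new = stored.get(key), fresh.get(key)
              if old != new:
                  alerts.append(f"{key}: {old!r} -> {new!r}")
          return alerts

      stored = {"latitude": 34.9450, "longitude": -106.4572,
                "sensor_azimuth": 0.0, "clock_quality": "GPS"}
      fresh  = {"latitude": 34.9450, "longitude": -106.4572,
                "sensor_azimuth": 3.5, "clock_quality": "GPS"}  # corrected orientation

      for alert in diff_station_metadata(stored, fresh):
          print("METADATA CHANGE:", alert)   # e.g. trigger re-processing of results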

  17. Toward uniform implementation of parametric map Digital Imaging and Communication in Medicine standard in multisite quantitative diffusion imaging studies.

    PubMed

    Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C

    2018-01-01

    This paper reports on the results of a multisite collaborative project launched by the MRI subgroup of the Quantitative Imaging Network to assess current capability and provide future guidelines for generating standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) objects in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and with the true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources of detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, and ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.

  18. International Metadata Initiatives: Lessons in Bibliographic Control.

    ERIC Educational Resources Information Center

    Caplan, Priscilla

    This paper looks at a subset of metadata schemes, including the Text Encoding Initiative (TEI) header, the Encoded Archival Description (EAD), the Dublin Core Metadata Element Set (DCMES), and the Visual Resources Association (VRA) Core Categories for visual resources. It examines why they developed as they did, major point of difference from…

  19. 36 CFR 1235.48 - What documentation must agencies transfer with electronic records?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... digital geospatial data files can include metadata that conforms to the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, as specified in Executive Order 12906 of April... number (301) 837-2903 for digital photographs and metadata, or the National Archives and Records...

  20. 36 CFR 1235.48 - What documentation must agencies transfer with electronic records?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... digital geospatial data files can include metadata that conforms to the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, as specified in Executive Order 12906 of April... number (301) 837-2903 for digital photographs and metadata, or the National Archives and Records...

  1. Leveraging Metadata to Create Better Web Services

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2012-01-01

    Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…

  2. 36 CFR § 1235.48 - What documentation must agencies transfer with electronic records?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... digital geospatial data files can include metadata that conforms to the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, as specified in Executive Order 12906 of April... number (301) 837-2903 for digital photographs and metadata, or the National Archives and Records...

  3. 36 CFR 1235.48 - What documentation must agencies transfer with electronic records?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... digital geospatial data files can include metadata that conforms to the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, as specified in Executive Order 12906 of April... number (301) 837-2903 for digital photographs and metadata, or the National Archives and Records...

  4. 36 CFR 1235.48 - What documentation must agencies transfer with electronic records?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... digital geospatial data files can include metadata that conforms to the Federal Geographic Data Committee's Content Standards for Digital Geospatial Metadata, as specified in Executive Order 12906 of April... number (301) 837-2903 for digital photographs and metadata, or the National Archives and Records...

  5. Shared Geospatial Metadata Repository for Ontario University Libraries: Collaborative Approaches

    ERIC Educational Resources Information Center

    Forward, Erin; Leahey, Amber; Trimble, Leanne

    2015-01-01

    Successfully providing access to special collections of digital geospatial data in academic libraries relies upon complete and accurate metadata. Creating and maintaining metadata using specialized standards is a formidable challenge for libraries. The Ontario Council of University Libraries' Scholars GeoPortal project, which created a shared…

  6. 106-17 Telemetry Standards Metadata Configuration Chapter 23

    DTIC Science & Technology

    2017-07-01

    [Front-matter excerpt from the 106-17 Telemetry Standards, Chapter 23 (Metadata Configuration), July 2017.] Acronyms: HTML, Hypertext Markup Language; MDL, Metadata Description Language; PCM, pulse code modulation; TMATS, Telemetry Attributes Transfer Standard; W3C, World Wide Web Consortium; XML, eXtensible Markup Language; XSD, XML schema document.

  7. Digital Initiatives and Metadata Use in Thailand

    ERIC Educational Resources Information Center

    SuKantarat, Wichada

    2008-01-01

    Purpose: This paper aims to provide information about various digital initiatives in libraries in Thailand and especially use of Dublin Core metadata in cataloguing digitized objects in academic and government digital databases. Design/methodology/approach: The author began researching metadata use in Thailand in 2003 and 2004 while on sabbatical…

  8. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    PubMed

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-06-24

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  9. A novel metadata management model to capture consent for record linkage in longitudinal research studies.

    PubMed

    McMahon, Christiana; Denaxas, Spiros

    2017-11-06

    Informed consent is an important feature of longitudinal research studies as it enables the linking of baseline participant information with administrative data. The lack of standardized models to capture consent elements can lead to substantial challenges, which a structured approach to capturing consent-related metadata can address. Our objectives were to: a) explore the state of the art for recording consent; b) identify key elements of consent required for record linkage; and c) create and evaluate a novel metadata management model to capture consent-related metadata. The main methodological components of our work were: a) a systematic literature review and qualitative analysis of consent forms; and b) the development and evaluation of a novel metadata model. We qualitatively analyzed 61 manuscripts and 30 consent forms and extracted data elements related to obtaining consent for linkage. We created a novel metadata management model for consent and evaluated it by comparison with existing standards and by iteratively applying it to case studies. The developed model can facilitate the standardized recording of consent for linkage in longitudinal research studies and enable the linkage of external participant data. Furthermore, it can provide a structured way of recording consent-related metadata and facilitate the harmonization and streamlining of processes.

  10. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  11. Evaluating and Improving Metadata for Data Use and Understanding

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2013-12-01

    The last several decades have seen an extraordinary increase in the number and breadth of environmental data available to the scientific community and the general public. These increases have focused the environmental data community on creating metadata for discovering data and on the creation and population of catalogs and portals for facilitating discovery. This focus is reflected in the fields required by commonly used metadata standards and has resulted in collections populated with metadata that meet, but don't go far beyond, minimal discovery requirements. Discovery is the first step towards addressing scientific questions using data. As more data are discovered and accessed, users need metadata that 1) automates use and integration of these data in tools and 2) facilitates understanding the data when it is compared to similar datasets or as internal variations are observed. When data discovery is the primary goal, it is important to create records for as many datasets as possible. The content of these records is controlled by minimum requirements, and evaluation is generally limited to testing for required fields and counting records. As the use and understanding needs become more important, more comprehensive evaluation tools are needed. An approach is described for evaluating existing metadata in the light of these new requirements and for improving the metadata to meet them.

  12. EUDAT B2FIND : A Cross-Discipline Metadata Service and Discovery Portal

    NASA Astrophysics Data System (ADS)

    Widmann, Heinrich; Thiemann, Hannes

    2016-04-01

    The European Data Infrastructure (EUDAT) project aims at a pan-European environment that supports a variety of research communities and individuals in managing the rising tide of scientific data with advanced data management technologies. This led to the establishment of the community-driven Collaborative Data Infrastructure, which implements common data services and storage resources to tackle the basic requirements and the specific challenges of international and interdisciplinary research data management. The metadata service B2FIND plays a central role in this context by providing a simple and user-friendly discovery portal to find research data collections stored in EUDAT data centers or in other repositories. For this, we store the diverse metadata collected from heterogeneous sources in a comprehensive joint metadata catalogue and make them searchable in an open data portal. The implemented metadata ingestion workflow consists of three steps. First, the metadata records, provided either by various research communities or via other EUDAT services, are harvested. Afterwards, the raw metadata records are converted and mapped to unified key-value dictionaries as specified by the B2FIND schema. The semantic mapping of the non-uniform, community-specific metadata to homogeneous structured datasets is the most subtle and challenging task. To assure and improve the quality of the metadata, this mapping process is accompanied by iterative and intense exchange with the community representatives, the usage of controlled vocabularies and community-specific ontologies, and formal and semantic validation. Finally, the mapped and checked records are uploaded as datasets to the catalogue, which is based on the open-source data portal software CKAN. CKAN provides a rich RESTful JSON API and uses SOLR for dataset indexing, which enables users to query and search the catalogue. The homogenization of the community-specific data models and vocabularies enables not only the uniform presentation of these datasets as tables of field-value pairs but also faceted, spatial and temporal search in the B2FIND metadata portal. Furthermore, the service provides transparent access to the scientific data objects through the references and identifiers given in the metadata. B2FIND offers support for new communities interested in publishing their data within EUDAT. We present here the functionality and features of the B2FIND service and give an outlook on further developments, such as interfaces to external libraries and the use of Linked Data.
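
    A rough sketch of the harvest, map, and upload steps described above is given below; the mapping rules and target keys are assumptions for illustration rather than the actual B2FIND schema, while the upload call uses CKAN's generic package_create action API.

      # Rough sketch of a harvest -> map -> upload pipeline for a metadata catalogue.
      import requests  # assumed available

      CKAN_URL = "https://b2find.example.org"       # placeholder portal address
      API_KEY = "..."                               # catalogue credentials (placeholder)

      def map_record(raw: dict) -> dict:
          """Map a harvested, community-specific record to unified key-value pairs.
          The target keys below are illustrative, not the real B2FIND schema."""
          return {
              "name": raw["identifier"].lower().replace("/", "-"),
              "title": raw.get("title", "untitled"),
              "notes": raw.get("description", ""),
              "tags": [{"name": kw} for kw in raw.get("keywords", [])],
              "extras": [
                  {"key": "Discipline", "value": raw.get("discipline", "Unknown")},
                  {"key": "TemporalCoverage", "value": raw.get("temporal", "")},
              ],
          }

      harvested = {"identifier": "community/ds-001", "title": "Example dataset",
                   "description": "Harvested via OAI-PMH", "keywords": ["ocean", "CTD"]}

      dataset = map_record(harvested)
      resp = requests.post(f"{CKAN_URL}/api/3/action/package_create",
                           json=dataset, headers={"Authorization": API_KEY}, timeout=30)
      resp.raise_for_status()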

  13. Keeping Research Data from the Continental Deep Drilling Programme (KTB) Accessible and Taking First Steps Towards Digital Preservation

    NASA Astrophysics Data System (ADS)

    Klump, J. F.; Ulbricht, D.; Conze, R.

    2014-12-01

    The Continental Deep Drilling Programme (KTB) was a scientific drilling project from 1987 to 1995 near Windischeschenbach, Bavaria. The main super-deep borehole reached a depth of 9,101 meters into the Earth's continental crust. The project used the most current equipment for data capture and processing. After the end of the project key data were disseminated through the web portal of the International Continental Scientific Drilling Program (ICDP). The scientific reports were published as printed volumes. As similar projects have also experienced, it becomes increasingly difficult to maintain a data portal over a long time. Changes in software and underlying hardware make a migration of the entire system inevitable. Around 2009 the data presented on the ICDP web portal were migrated to the Scientific Drilling Database (SDDB) and published through DataCite using Digital Object Identifiers (DOI) as persistent identifiers. The SDDB portal used a relational database with a complex data model to store data and metadata. A PHP-based Content Management System with custom modifications made it possible to navigate and browse datasets using the metadata and then download datasets. The data repository software eSciDoc allows storing self-contained packages consistent with the OAIS reference model. Each package consists of binary data files and XML-metadata. Using a REST-API the packages can be stored in the eSciDoc repository and can be searched using the XML-metadata. During the last maintenance cycle of the SDDB the data and metadata were migrated into the eSciDoc repository. Discovery metadata was generated following the GCMD-DIF, ISO19115 and DataCite schemas. The eSciDoc repository allows to store an arbitrary number of XML-metadata records with each data object. In addition to descriptive metadata each data object may contain pointers to related materials, such as IGSN-metadata to link datasets to physical specimens, or identifiers of literature interpreting the data. Datasets are presented by XSLT-stylesheet transformation using the stored metadata. The presentation shows several migration cycles of data and metadata, which were driven by aging software systems. Currently the datasets reside as self-contained entities in a repository system that is ready for digital preservation.

  14. Effective use of metadata in the integration and analysis of multi-dimensional optical data

    NASA Astrophysics Data System (ADS)

    Pastorello, G. Z.; Gamon, J. A.

    2012-12-01

    Data discovery and integration relies on adequate metadata. However, creating and maintaining metadata is time consuming and often poorly addressed or avoided altogether, leading to problems in later data analysis and exchange. This is particularly true for research fields in which metadata standards do not yet exist or are under development, or within smaller research groups without enough resources. Vegetation monitoring using in-situ and remote optical sensing is an example of such a domain. In this area, data are inherently multi-dimensional, with spatial, temporal and spectral dimensions usually being well characterized. Other equally important aspects, however, might be inadequately translated into metadata. Examples include equipment specifications and calibrations, field/lab notes and field/lab protocols (e.g., sampling regimen, spectral calibration, atmospheric correction, sensor view angle, illumination angle), data processing choices (e.g., methods for gap filling, filtering and aggregation of data), quality assurance, and documentation of data sources, ownership and licensing. Each of these aspects can be important as metadata for search and discovery, but they can also be used as key data fields in their own right. If each of these aspects is also understood as an "extra dimension," it is possible to take advantage of them to simplify the data acquisition, integration, analysis, visualization and exchange cycle. Simple examples include selecting data sets of interest early in the integration process (e.g., only data collected according to a specific field sampling protocol) or applying appropriate data processing operations to different parts of a data set (e.g., adaptive processing for data collected under different sky conditions). More interesting scenarios involve guided navigation and visualization of data sets based on these extra dimensions, as well as partitioning data sets to highlight relevant subsets to be made available for exchange. The DAX (Data Acquisition to eXchange) Web-based tool uses a flexible metadata representation model and takes advantage of multi-dimensional data structures to translate metadata types into data dimensions, effectively reshaping data sets according to available metadata. With that, metadata is tightly integrated into the acquisition-to-exchange cycle, allowing for more focused exploration of data sets while also increasing the value of, and incentives for, keeping good metadata. The tool is being developed and tested with optical data collected in different settings, including laboratory, field, airborne, and satellite platforms.
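
    The idea of treating metadata fields as extra data dimensions can be illustrated with a small pandas sketch in which metadata columns such as the sampling protocol or sky condition become ordinary axes for selection and group-wise processing; the column names, values, and correction factor are invented for illustration and are unrelated to the DAX tool itself.

      # Illustrative only: metadata fields used as selection/grouping dimensions.
      import pandas as pd

      obs = pd.DataFrame({
          "wavelength_nm": [550, 680, 550, 680],
          "reflectance":   [0.12, 0.35, 0.14, 0.31],
          # metadata recorded alongside the measurements:
          "protocol":      ["field-v2", "field-v2", "field-v1", "field-v1"],
          "sky":           ["clear", "clear", "overcast", "overcast"],
      })

      # Select only data collected according to a specific field sampling protocol.
      subset = obs[obs["protocol"] == "field-v2"]

      # Apply different processing depending on a metadata "dimension".
      def correct(group: pd.DataFrame) -> pd.DataFrame:
          factor = 1.0 if group.name == "clear" else 1.1   # hypothetical adjustment
          return group.assign(reflectance=group["reflectance"] * factor)

      corrected = obs.groupby("sky", group_keys=False).apply(correct)
      print(subset)
      print(corrected)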

  15. Seeking the Path to Metadata Nirvana

    NASA Astrophysics Data System (ADS)

    Graybeal, J.

    2008-12-01

    Scientists have always found reusing other scientists' data challenging. Computers did not fundamentally change the problem, but enabled more and larger instances of it. In fact, by removing human mediation and time delays from the data sharing process, computers emphasize the contextual information that must be exchanged in order to exchange and reuse data. This requirement for contextual information has two faces: "interoperability" when talking about systems, and "the metadata problem" when talking about data. As much as any single organization, the Marine Metadata Interoperability (MMI) project has been tagged with the mission "Solve the metadata problem." Of course, if that goal is achieved, then sustained, interoperable data systems for interdisciplinary observing networks can be easily built -- pesky metadata differences, like which protocol to use for data exchange, or what the data actually measures, will be a thing of the past. Alas, as you might imagine, there will always be complexities and incompatibilities that are not addressed, and data systems that are not interoperable, even within a science discipline. So should we throw up our hands and surrender to the inevitable? Not at all. Rather, we try to minimize metadata problems as much as we can. In this we are making steady progress, despite natural forces that pull in the other direction. Computer systems let us work with more complexity, build community knowledge and collaborations, and preserve and publish our progress and (dis-)agreements. Funding organizations, science communities, and technologists see the importance of interoperable systems and metadata, and direct resources toward them. With the new approaches and resources, projects like IPY and MMI can simultaneously define, display, and promote effective strategies for sustainable, interoperable data systems. This presentation will outline the role metadata plays in durable interoperable data systems, for better or worse. It will describe times when "just choosing a standard" can work, and when it probably won't work. And it will point out signs that suggest a metadata storm is coming to your community project, and how you might avoid it. From these lessons we will seek a path to producing interoperable, interdisciplinary, metadata-enlightened environmental observing systems.

  16. Theoretical insight into OH- and Cl-initiated oxidation of CF3OCH(CF3)2 and CF3OCF2CF2H & fate of CF3OC(X•)(CF3)2 and CF3OCF2CF2X• radicals (X=O, O2)

    PubMed Central

    Bai, Feng-Yang; Ma, Yuan; Lv, Shuang; Pan, Xiu-Mei; Jia, Xiu-Juan

    2017-01-01

    In this study, the mechanistic and kinetic analysis for reactions of CF3OCH(CF3)2 and CF3OCF2CF2H with OH radicals and Cl atoms has been performed at the CCSD(T)//B3LYP/6-311++G(d,p) level. Kinetic isotope effects for reactions CF3OCH(CF3)2/CF3OCD(CF3)2 and CF3OCF2CF2H/CF3OCF2CF2D with OH and Cl were estimated to provide a theoretical reference for future laboratory investigations. All rate constants, computed by canonical variational transition state theory (CVT) with the small-curvature tunneling correction (SCT), are in reasonable agreement with the limited experimental data. Standard enthalpies of formation for the species were also calculated. Atmospheric lifetimes and global warming potentials (GWPs) of the reaction species were estimated; the long lifetimes and large GWPs show that their environmental impact cannot be ignored. Organic nitrates can be produced by the further oxidation of CF3OC(•)(CF3)2 and CF3OCF2CF2• in the presence of O2 and NO. The subsequent decomposition pathways of CF3OC(O•)(CF3)2 and CF3OCF2CF2O• radicals were studied in detail. The derived Arrhenius expressions for the rate coefficients over 230-350 K are: k1(T) = 5.00 × 10^-24 T^3.57 exp(-849.73/T), k2(T) = 1.79 × 10^-24 T^4.84 exp(-4262.65/T), k3(T) = 1.94 × 10^-24 T^4.18 exp(-884.26/T), and k4(T) = 9.44 × 10^-28 T^5.25 exp(-913.45/T) cm^3 molecule^-1 s^-1. PMID:28067283
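
    For readers who want a quick sanity check, the modified Arrhenius expressions quoted above can be evaluated directly. The Python sketch below computes the four rate coefficients at an assumed temperature of 298 K using the (A, n, B) parameters from the abstract; the temperature choice is illustrative.

      # Evaluate the modified Arrhenius form k(T) = A * T**n * exp(-B/T)
      # (units: cm^3 molecule^-1 s^-1) with the coefficients quoted above.
      import math

      CHANNELS = {
          "k1": (5.00e-24, 3.57, 849.73),
          "k2": (1.79e-24, 4.84, 4262.65),
          "k3": (1.94e-24, 4.18, 884.26),
          "k4": (9.44e-28, 5.25, 913.45),
      }

      def rate_coefficient(A, n, B, T):
          """Modified Arrhenius expression, valid roughly over 230-350 K."""
          return A * T**n * math.exp(-B / T)

      for name, (A, n, B) in CHANNELS.items():
          k = rate_coefficient(A, n, B, 298.0)
          print(f"{name}(298 K) = {k:.3e} cm^3 molecule^-1 s^-1")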

  17. Theoretical insight into OH- and Cl-initiated oxidation of CF3OCH(CF3)2 and CF3OCF2CF2H & fate of CF3OC(X•)(CF3)2 and CF3OCF2CF2X• radicals (X=O, O2)

    NASA Astrophysics Data System (ADS)

    Bai, Feng-Yang; Ma, Yuan; Lv, Shuang; Pan, Xiu-Mei; Jia, Xiu-Juan

    2017-01-01

    In this study, the mechanistic and kinetic analysis for reactions of CF3OCH(CF3)2 and CF3OCF2CF2H with OH radicals and Cl atoms has been performed at the CCSD(T)//B3LYP/6-311++G(d,p) level. Kinetic isotope effects for reactions CF3OCH(CF3)2/CF3OCD(CF3)2 and CF3OCF2CF2H/CF3OCF2CF2D with OH and Cl were estimated to provide a theoretical reference for future laboratory investigations. All rate constants, computed by canonical variational transition state theory (CVT) with the small-curvature tunneling correction (SCT), are in reasonable agreement with the limited experimental data. Standard enthalpies of formation for the species were also calculated. Atmospheric lifetimes and global warming potentials (GWPs) of the reaction species were estimated; the long lifetimes and large GWPs show that their environmental impact cannot be ignored. Organic nitrates can be produced by the further oxidation of CF3OC(•)(CF3)2 and CF3OCF2CF2• in the presence of O2 and NO. The subsequent decomposition pathways of CF3OC(O•)(CF3)2 and CF3OCF2CF2O• radicals were studied in detail. The derived Arrhenius expressions for the rate coefficients over 230-350 K are: k1(T) = 5.00 × 10^-24 T^3.57 exp(-849.73/T), k2(T) = 1.79 × 10^-24 T^4.84 exp(-4262.65/T), k3(T) = 1.94 × 10^-24 T^4.18 exp(-884.26/T), and k4(T) = 9.44 × 10^-28 T^5.25 exp(-913.45/T) cm^3 molecule^-1 s^-1.

  18. Atmospheric Chemistry of (CF3)2CHOCH3, (CF3)2CHOCHO, and CF3C(O)OCH3.

    PubMed

    Østerstrøm, Freja From; Wallington, Timothy J; Sulbaek Andersen, Mads P; Nielsen, Ole John

    2015-10-22

    Smog chambers with in situ FTIR detection were used to measure rate coefficients in 700 Torr of air and 296 ± 2 K of: k(Cl+(CF3)2CHOCH3) = (5.41 ± 1.63) × 10(-12), k(Cl+(CF3)2CHOCHO) = (9.44 ± 1.81) × 10(-15), k(Cl+CF3C(O)OCH3) = (6.28 ± 0.98) × 10(-14), k(OH+(CF3)2CHOCH3) = (1.86 ± 0.41) × 10(-13), and k(OH+(CF3)2CHOCHO) = (2.08 ± 0.63) × 10(-14) cm(3) molecule(-1) s(-1). The Cl atom initiated oxidation of (CF3)2CHOCH3 gives (CF3)2CHOCHO in a yield indistinguishable from 100%. The OH radical initiated oxidation of (CF3)2CHOCH3 gives the following products (molar yields): (CF3)2CHOCHO (76 ± 8)%, CF3C(O)OCH3 (16 ± 2)%, CF3C(O)CF3 (4 ± 1)%, and C(O)F2 (45 ± 5)%. The primary oxidation product (CF3)2CHOCHO reacts with Cl atoms to give secondary products (molar yields): CF3C(O)CF3 (67 ± 7)%, CF3C(O)OCHO (28 ± 3)%, and C(O)F2 (118 ± 12)%. CF3C(O)OCH3 reacts with Cl atoms to give: CF3C(O)OCHO (80 ± 8)% and C(O)F2 (6 ± 1)%. Atmospheric lifetimes of (CF3)2CHOCH3, (CF3)2CHOCHO, and CF3C(O)OCH3 were estimated to be 62 days, 1.5 years, and 220 days, respectively. The 100-year global warming potentials (GWPs) for (CF3)2CHOCH3, (CF3)2CHOCHO, and CF3C(O)OCH3 are estimated to be 6, 121, and 46, respectively. A comprehensive description of the atmospheric fate of (CF3)2CHOCH3 is presented.

  19. Building a semantic web-based metadata repository for facilitating detailed clinical modeling in cancer genome studies.

    PubMed

    Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian

    2017-06-05

    Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capture and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of the study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are very useful in support of building detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.
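
    A minimal sketch of the kind of RDF-based query this enables is shown below: data elements are loaded as triples and retrieved by item group with SPARQL, using rdflib. The namespace and property names are invented for the example and are not the actual caDSR or CIMI vocabularies.

      # Sketch: query a small RDF graph of common data elements by item group.
      # The ex: vocabulary is a placeholder, not the real caDSR/CIMI model.
      from rdflib import Graph

      ttl = """
      @prefix ex: <http://example.org/cde#> .
      ex:cde_2003853 ex:longName "Patient Age at Diagnosis" ;
                     ex:itemGroup ex:clinical_pharmaceutical .
      ex:cde_2179589 ex:longName "Therapy Regimen" ;
                     ex:itemGroup ex:clinical_pharmaceutical .
      """
      g = Graph()
      g.parse(data=ttl, format="turtle")

      query = """
      PREFIX ex: <http://example.org/cde#>
      SELECT ?cde ?name WHERE {
        ?cde ex:itemGroup ex:clinical_pharmaceutical ;
             ex:longName ?name .
      }
      """
      for row in g.query(query):
          print(row.cde, row.name)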

  20. System for Earth Sample Registration SESAR: Services for IGSN Registration and Sample Metadata Management

    NASA Astrophysics Data System (ADS)

    Chan, S.; Lehnert, K. A.; Coleman, R. J.

    2011-12-01

    SESAR, the System for Earth Sample Registration, is an online registry for physical samples collected for Earth and environmental studies. SESAR generates and administers the International Geo Sample Number (IGSN), a unique identifier for samples that is dramatically advancing interoperability amongst information systems for sample-based data. SESAR was developed to provide the complete range of registry services, including definition of IGSN syntax and metadata profiles, registration and validation of name spaces requested by users, tools for users to submit and manage sample metadata, validation of submitted metadata, generation and validation of the unique identifiers, archiving of sample metadata, and public or private access to the sample metadata catalog. With the development of SESAR v3, we placed particular emphasis on creating enhanced tools that make metadata submission easier and more efficient for users, and that provide superior functionality for users to manage metadata of their samples in their private workspace MySESAR. For example, SESAR v3 includes a module where users can generate custom spreadsheet templates to enter metadata for their samples, then upload these templates online for sample registration. Once the content of the template is uploaded, it is displayed online in an editable grid format. Validation rules are executed in real-time on the grid data to ensure data integrity. Other new features of SESAR v3 include the capability to transfer ownership of samples to other SESAR users, the ability to upload and store images and other files in a sample metadata profile, and the tracking of changes to sample metadata profiles. In the next version of SESAR (v3.5), we will further improve the discovery, sharing, and registration of samples. For example, we are developing a more comprehensive suite of web services that will allow discovery and registration access to SESAR from external systems. Both batch and individual registrations will be possible through web services. Based on valuable feedback from the user community, we will introduce enhancements that add greater flexibility to the system to accommodate the vast diversity of metadata that users want to store. Users will be able to create custom metadata fields and use these for the samples they register. Users will also be able to group samples into 'collections' to make retrieval for research projects or publications easier. An improved interface design will allow for better workflow transition and navigation throughout the application. To keep up with the demands of a growing community, SESAR has also made process changes to ensure efficiency in system development. For example, we have implemented a release cycle to better track enhancements and fixes to the system, and an API library that facilitates reusability of code. Usage tracking, metrics and surveys capture information to guide the direction of future developments. A new set of administrative tools allows greater control of system management.
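
    The real-time grid validation described above can be illustrated with a small Python sketch that checks rows of an uploaded sample-metadata template before registration. The required fields and the validation rules are assumptions for the example, not SESAR's actual metadata profile.

      # Illustrative row-level validation for a sample-metadata template.
      # Field names and rules are assumptions, not the real SESAR profile.
      REQUIRED = ("sample_name", "material", "latitude", "longitude")

      def validate_row(row):
          """Return a list of human-readable problems for one template row."""
          problems = [f"missing '{f}'" for f in REQUIRED if not row.get(f)]
          try:
              lat = float(row.get("latitude", ""))
              lon = float(row.get("longitude", ""))
              if not -90 <= lat <= 90:
                  problems.append("latitude out of range")
              if not -180 <= lon <= 180:
                  problems.append("longitude out of range")
          except ValueError:
              problems.append("coordinates are not numeric")
          return problems

      rows = [
          {"sample_name": "KTB-001", "material": "Rock", "latitude": "49.82", "longitude": "12.07"},
          {"sample_name": "", "material": "Rock", "latitude": "95", "longitude": "12.07"},
      ]
      for i, row in enumerate(rows, start=1):
          print(i, validate_row(row) or "ok")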

  1. 77 FR 33739 - Announcement of Requirements and Registration for “Health Data Platform Metadata Challenge”

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-07

    ... Information Technology. SUMMARY: As part of the HHS Open Government Plan, the HealthData.gov Platform (HDP) is... application of existing voluntary consensus standards for metadata common to all open government data, and... vocabulary recommendations for Linked Data publishers, defining cross domain semantic metadata of open...

  2. Inferring Metadata for a Semantic Web Peer-to-Peer Environment

    ERIC Educational Resources Information Center

    Brase, Jan; Painter, Mark

    2004-01-01

    Learning Object Metadata (LOM) aims at describing educational resources in order to allow better reusability and retrieval. In this article we show how additional inference rules allow us to derive additional metadata from existing metadata. Additionally, using these rules as integrity constraints helps us to define the constraints on LOM elements,…

  3. Characterization of Educational Resources in e-Learning Systems Using an Educational Metadata Profile

    ERIC Educational Resources Information Center

    Solomou, Georgia; Pierrakeas, Christos; Kameas, Achilles

    2015-01-01

    The ability to effectively administer educational resources in terms of accessibility, reusability and interoperability lies in the adoption of an appropriate metadata schema, capable of adequately describing them. A considerable number of different educational metadata schemas can be found in the literature, with the IEEE LOM being the most widely…

  4. Manifestations of Metadata: From Alexandria to the Web--Old is New Again

    ERIC Educational Resources Information Center

    Kennedy, Patricia

    2008-01-01

    This paper is a discussion of the use of metadata, in its various manifestations, to access information. Information management standards are discussed. The connection between the ancient world and the modern world is highlighted. Individual perspectives are paramount in fulfilling information seeking. Metadata is interpreted and reflected upon in…

  5. To Teach or Not to Teach: The Ethics of Metadata

    ERIC Educational Resources Information Center

    Barnes, Cynthia; Cavaliere, Frank

    2009-01-01

    Metadata is information about computer-generated documents that is often inadvertently transmitted to others. The problems associated with metadata have become more acute over time as word processing and other popular programs have become more receptive to the concept of collaboration. As more people become involved in the preparation of…

  6. Document Classification in Support of Automated Metadata Extraction From Heterogeneous Collections

    ERIC Educational Resources Information Center

    Flynn, Paul K.

    2014-01-01

    A number of federal agencies, universities, laboratories, and companies are placing their documents online and making them searchable via metadata fields such as author, title, and publishing organization. To enable this, every document in the collection must be catalogued using the metadata fields. Though time consuming, the task of identifying…

  7. Creating FGDC and NBII metadata with Metavist 2005.

    Treesearch

    David J. Rugg

    2004-01-01

    This report documents a computer program for creating metadata compliant with the Federal Geographic Data Committee (FGDC) 1998 metadata standard or the National Biological Information Infrastructure (NBII) 1999 Biological Data Profile for the FGDC standard. The software runs under the Microsoft Windows 2000 and XP operating systems, and requires the presence of...

  8. An Assistant for Loading Learning Object Metadata: An Ontology Based Approach

    ERIC Educational Resources Information Center

    Casali, Ana; Deco, Claudia; Romano, Agustín; Tomé, Guillermo

    2013-01-01

    In recent years, the development of different Repositories of Learning Objects has increased. Users can retrieve these resources for reuse and personalization through searches in web repositories. High-quality metadata is key for successful retrieval. Learning Objects are described with metadata usually in the standard…

  9. iLOG: A Framework for Automatic Annotation of Learning Objects with Empirical Usage Metadata

    ERIC Educational Resources Information Center

    Miller, L. D.; Soh, Leen-Kiat; Samal, Ashok; Nugent, Gwen

    2012-01-01

    Learning objects (LOs) are digital or non-digital entities used for learning, education or training commonly stored in repositories searchable by their associated metadata. Unfortunately, based on the current standards, such metadata is often missing or incorrectly entered making search difficult or impossible. In this paper, we investigate…

  10. Prediction of Solar Eruptions Using Filament Metadata

    NASA Astrophysics Data System (ADS)

    Aggarwal, Ashna; Schanche, Nicole; Reeves, Katharine K.; Kempton, Dustin; Angryk, Rafal

    2018-05-01

    We perform a statistical analysis of erupting and non-erupting solar filaments to determine the properties related to the eruption potential. In order to perform this study, we correlate filament eruptions documented in the Heliophysics Event Knowledgebase (HEK) with HEK filaments that have been grouped together using a spatiotemporal tracking algorithm. The HEK provides metadata about each filament instance, including values for length, area, tilt, and chirality. We add additional metadata properties such as the distance from the nearest active region and the magnetic field decay index. We compare trends in the metadata from erupting and non-erupting filament tracks to discover which properties present signs of an eruption. We find that a change in filament length over time is the most important factor in discriminating between erupting and non-erupting filament tracks, with erupting tracks being more likely to have decreasing length. We attempt to find an ensemble of predictive filament metadata using a Random Forest Classifier approach, but find the probability of correctly predicting an eruption with the current metadata is only slightly better than chance.
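
    A toy version of the Random Forest approach is sketched below using scikit-learn: each filament track is reduced to a feature vector of metadata properties and classified as erupting or non-erupting. The feature names and the synthetic training data are purely illustrative and are not the HEK metadata themselves.

      # Toy Random Forest classification of filament tracks from metadata features.
      # Synthetic data; the label is loosely tied to "decreasing length" plus noise.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 500
      # Illustrative features per track: change in length, area, tilt,
      # distance to nearest active region, magnetic field decay index.
      X = rng.normal(size=(n, 5))
      y = (X[:, 0] + 0.5 * rng.normal(size=n) < -0.2).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

      print("accuracy:", clf.score(X_te, y_te))
      print("feature importances:", clf.feature_importances_.round(3))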

  11. Representing Hydrologic Models as HydroShare Resources to Facilitate Model Sharing and Collaboration

    NASA Astrophysics Data System (ADS)

    Castronova, A. M.; Goodall, J. L.; Mbewe, P.

    2013-12-01

    The CUAHSI HydroShare project is a collaborative effort that aims to provide software for sharing data and models within the hydrologic science community. One of the early focuses of this work has been establishing metadata standards for describing models and model-related data as HydroShare resources. By leveraging this metadata definition, a prototype extension has been developed to create model resources that can be shared within the community using the HydroShare system. The extension uses a general model metadata definition to create resource objects, and was designed so that model-specific parsing routines can extract and populate metadata fields from model input and output files. The long-term goal is to establish a library of supported models where, for each model, the system has the ability to extract key metadata fields automatically, thereby establishing standardized model metadata that will serve as the foundation for model sharing and collaboration within HydroShare. The Soil and Water Assessment Tool (SWAT) is used to demonstrate this concept through a case study application.
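
    The model-specific parsing routine idea can be sketched as follows: a small parser scans a model input file for a few recognizable fields and maps them onto a general model-resource metadata record. The file layout and field names are assumptions for illustration and do not correspond to SWAT's actual input formats.

      # Sketch of a model-specific parser populating a general metadata record.
      # The KEY=VALUE layout and field names are hypothetical.
      import re

      GENERAL_FIELDS = ("model_name", "model_version", "simulation_start", "simulation_end")

      def parse_model_metadata(path):
          record = dict.fromkeys(GENERAL_FIELDS)
          patterns = {
              "model_version":    re.compile(r"^VERSION\s*=\s*(\S+)"),
              "simulation_start": re.compile(r"^START_DATE\s*=\s*(\S+)"),
              "simulation_end":   re.compile(r"^END_DATE\s*=\s*(\S+)"),
          }
          record["model_name"] = "SWAT"  # fixed by this particular parser
          with open(path, encoding="utf-8") as fh:
              for line in fh:
                  for field, pattern in patterns.items():
                      match = pattern.match(line.strip())
                      if match:
                          record[field] = match.group(1)
          return record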

  12. Solid-phase Microextraction (SPME) with Stable Isotope Calibration for Measuring Bioavailability of Hydrophobic Organic Contaminants

    PubMed Central

    Cui, Xinyi; Bao, Lianjun; Gan, Jay

    2014-01-01

    Solid-phase microextraction (SPME) is a biomimetic tool ideally suited for measuring bioavailability of hydrophobic organic compounds (HOCs) in sediment and soil matrices. However, conventional SPME sampling requires the attainment of equilibrium between the fiber and sample matrix, which may take weeks or months, greatly limiting its applicability. In this study, we explored the preloading of polydimethylsiloxane fiber with stable isotope labeled analogs (SI-SPME) to circumvent the need for long sampling time, and evaluated the performance of SI-SPME against the conventional equilibrium SPME (Eq-SPME) using a range of sediments and conditions. Desorption of stable isotope-labeled analogs and absorption of PCB-52, PCB-153, bifenthrin and cis-permethrin were isotropic, validating the assumption for SI-SPME. Highly reproducible preloading was achieved using acetone-water (1:4, v/v) as the carrier. Compared to Eq-SPME that required weeks or even months, the fiber concentrations (Cf) under equilibrium could be reliably estimated by SI-SPME in 1 d under agitated conditions or 20 d under static conditions in spiked sediments. The Cf values predicted by SI-SPME were statistically identical to those determined by Eq-SPME. The SI-SPME method was further applied successfully to field sediments contaminated with PCB 52, PCB 153, and bifenthrin. The increasing availability of stable isotope labeled standards and mass spectrometry nowadays makes SI-SPME highly feasible, allowing the use of SPME under non-equilibrium conditions with much shorter or flexible sampling time. PMID:23930601

  13. XAFS Data Interchange: A single spectrum XAFS data file format.

    PubMed

    Ravel, B; Newville, M

    We propose a standard data format for the interchange of XAFS data. The XAFS Data Interchange (XDI) standard is meant to encapsulate a single spectrum of XAFS along with relevant metadata. XDI is a text-based format with a simple syntax which clearly delineates metadata from the data table in a way that is easily interpreted both by a computer and by a human. The metadata header is inspired by the format of an electronic mail header, representing metadata names and values as an associative array. The data table is represented as columns of numbers. This format can be imported as is into most existing XAFS data analysis, spreadsheet, or data visualization programs. Along with a specification and a dictionary of metadata types, we provide an application-programming interface written in C and bindings for programming dynamic languages.
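
    A simplified reader for this kind of layout, an email-style header of name/value pairs followed by columns of numbers, might look like the Python sketch below. It illustrates the idea only and does not implement the full XDI specification.

      # Simplified reader: header lines become an associative array of metadata,
      # numeric lines become rows of the data table. Not the full XDI format.
      def read_simple_spectrum(path):
          metadata, rows = {}, []
          with open(path, encoding="utf-8") as fh:
              for raw in fh:
                  line = raw.strip()
                  if not line:
                      continue
                  try:
                      rows.append([float(tok) for tok in line.split()])
                  except ValueError:
                      # Not a numeric row: treat it as a "name: value" header line.
                      name, _, value = line.partition(":")
                      metadata[name.strip()] = value.strip()
          return metadata, rows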

  14. XAFS Data Interchange: A single spectrum XAFS data file format

    NASA Astrophysics Data System (ADS)

    Ravel, B.; Newville, M.

    2016-05-01

    We propose a standard data format for the interchange of XAFS data. The XAFS Data Interchange (XDI) standard is meant to encapsulate a single spectrum of XAFS along with relevant metadata. XDI is a text-based format with a simple syntax which clearly delineates metadata from the data table in a way that is easily interpreted both by a computer and by a human. The metadata header is inspired by the format of an electronic mail header, representing metadata names and values as an associative array. The data table is represented as columns of numbers. This format can be imported as is into most existing XAFS data analysis, spreadsheet, or data visualization programs. Along with a specification and a dictionary of metadata types, we provide an application-programming interface written in C and bindings for programming dynamic languages.

  15. Metadata and network API aspects of a framework for storing and retrieving civil infrastructure monitoring data

    NASA Astrophysics Data System (ADS)

    Wong, John-Michael; Stojadinovic, Bozidar

    2005-05-01

    A framework has been defined for storing and retrieving civil infrastructure monitoring data over a network. The framework consists of two primary components: metadata and network communications. The metadata component provides the descriptions and data definitions necessary for cataloging and searching monitoring data. The communications component provides Java classes for remotely accessing the data. Packages of Enterprise JavaBeans and data handling utility classes are written to use the underlying metadata information to build real-time monitoring applications. The utility of the framework was evaluated using wireless accelerometers on a shaking table earthquake simulation test of a reinforced concrete bridge column. The NEESgrid data and metadata repository services were used as a backend storage implementation. A web interface was created to demonstrate the utility of the data model and provides an example health monitoring application.

  16. EnviroAtlas Tree Cover Configuration and Connectivity, Water Background Web Service

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The 1-meter resolution tree cover configuration and connectivity map categorizes tree cover into structural elements (e.g. core, edge, connector, etc.). Source imagery varies by community. For specific information about methods and accuracy of each community's tree cover configuration and connectivity classification, consult their individual metadata records: Austin, TX (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B29D2B039-905C-4825-B0B4-9315122D6A9F%7D); Cleveland, OH (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B03cd54e1-4328-402e-ba75-e198ea9fbdc7%7D); Des Moines, IA (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B350A83E6-10A2-4D5D-97E6-F7F368D268BB%7D); Durham, NC (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7BC337BA5F-8275-4BA8-9647-F63C443F317D%7D); Fresno, CA (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B84B98749-9C1C-4679-AE24-9B9C0998EBA5%7D); Green Bay, WI (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7B69E48A44-3D30-4E84-A764-38FBDCCAC3D0%7D); Memphis, TN (https://edg.epa.gov/metadata/catalog/search/resource/details.page?uuid=%7BB7313ADA-04F7-4D80-ABBA-77E753AAD002%7D); Milwaukee, WI (https://edg.epa.gov/metadata/catalog/search/resource/details.page?u

  17. OlyMPUS - The Ontology-based Metadata Portal for Unified Semantics

    NASA Astrophysics Data System (ADS)

    Huffer, E.; Gleason, J. L.

    2015-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support data consumers and data providers, enabling the latter to register their data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS leverages the semantics and reasoning capabilities of ODISEES to provide data producers with a semi-automated interface for producing the semantically rich metadata needed to support ODISEES' data discovery and access services. It integrates the ODISEES metadata search system with multiple NASA data delivery tools to enable data consumers to create customized data sets for download to their computers, or for NASA Advanced Supercomputing (NAS) facility registered users, directly to NAS storage resources for access by applications running on NAS supercomputers. A core function of NASA's Earth Science Division is research and analysis that uses the full spectrum of data products available in NASA archives. Scientists need to perform complex analyses that identify correlations and non-obvious relationships across all types of Earth System phenomena. Comprehensive analytics are hindered, however, by the fact that many Earth science data products are disparate and hard to synthesize. Variations in how data are collected, processed, gridded, and stored, create challenges for data interoperability and synthesis, which are exacerbated by the sheer volume of available data. Robust, semantically rich metadata can support tools for data discovery and facilitate machine-to-machine transactions with services such as data subsetting, regridding, and reformatting. Such capabilities are critical to enabling the research activities integral to NASA's strategic plans. However, as metadata requirements increase and competing standards emerge, metadata provisioning becomes increasingly burdensome to data producers. The OlyMPUS system helps data providers produce semantically rich metadata, making their data more accessible to data consumers, and helps data consumers quickly discover and download the right data for their research.

  18. GeneLab Analysis Working Group Kick-Off Meeting

    NASA Technical Reports Server (NTRS)

    Costes, Sylvain V.

    2018-01-01

    Meeting goals for the GeneLab AWG: present the GeneLab vision; review the GeneLab AWG charter, timeline, and milestones for 2018; cover logistics (monthly meetings, workshop, internship, ASGSR); introduce the team leads and the goals of each group; introduce all members; Q/A. Topics include the three-tier client strategy to democratize data; physiological changes, pathway enrichment, differential expression, normalization, processing metadata, and reproducibility; and data federation/integration with heterogeneous external bioinformatics databases. The GLDS currently serves over 100 omics investigations to the biomedical community via open access. In order to expand the scope of metadata record searches via the GLDS, we designed a metadata warehouse that collects and updates metadata records from external systems housing similar data. To demonstrate the capabilities of federated search and retrieval of these data, we imported metadata records from three open-access data systems into the GLDS metadata warehouse: NCBI's Gene Expression Omnibus (GEO), EBI's PRoteomics IDEntifications (PRIDE) repository, and the Metagenomics Analysis server (MG-RAST). Each of these systems defines metadata for omics data sets differently. One solution to bridge such differences is to employ a common object model (COM) to which each system's representation of metadata can be mapped. Warehoused metadata records are then transformed at ETL time to this single, common representation. Queries generated via the GLDS are then executed against the warehouse, and matching records are shown in the COM representation (Fig. 1). While this approach is relatively straightforward to implement, the volume of data in the omics domain presents challenges in dealing with latency and currency of records. Until now there has been no coordinated, federated search for and retrieval of these kinds of data across other open-access systems that allows users to conduct biological meta-investigations using data from a variety of sources. Such meta-investigations are key to corroborating findings from many kinds of assays and translating them into systems biology knowledge and, eventually, therapeutics.
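
    The common object model mapping can be sketched as a pair of small transformation functions applied at ETL time, as in the Python sketch below. The source field names and the COM fields are assumptions chosen for illustration, not the actual GEO, PRIDE, or GLDS schemas.

      # Sketch: map heterogeneous source metadata onto one common object model
      # (COM) so a single query sees all records in the same representation.
      COM_FIELDS = ("accession", "title", "organism", "assay_type", "source_system")

      def from_geo(rec):
          return {"accession": rec.get("geo_accession"), "title": rec.get("title"),
                  "organism": rec.get("organism_ch1"), "assay_type": "transcriptomics",
                  "source_system": "GEO"}

      def from_pride(rec):
          return {"accession": rec.get("projectAccession"), "title": rec.get("projectTitle"),
                  "organism": rec.get("species"), "assay_type": "proteomics",
                  "source_system": "PRIDE"}

      warehouse = [
          from_geo({"geo_accession": "GSE1", "title": "Spaceflight study",
                    "organism_ch1": "Mus musculus"}),
          from_pride({"projectAccession": "PXD0001", "projectTitle": "Muscle proteome",
                      "species": "Mus musculus"}),
      ]

      # A single query over the warehouse now spans both source systems.
      hits = [r for r in warehouse if r["organism"] == "Mus musculus"]
      print(hits)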

  19. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.

  20. Constructing a Cross-Domain Resource Inventory: Key Components and Results of the EarthCube CINERGI Project.

    NASA Astrophysics Data System (ADS)

    Zaslavsky, I.; Richard, S. M.; Malik, T.; Hsu, L.; Gupta, A.; Grethe, J. S.; Valentine, D. W., Jr.; Lehnert, K. A.; Bermudez, L. E.; Ozyurt, I. B.; Whitenack, T.; Schachne, A.; Giliarini, A.

    2015-12-01

    While many geoscience-related repositories and data discovery portals exist, finding information about available resources remains a pervasive problem, especially when searching across multiple domains and catalogs. Inconsistent and incomplete metadata descriptions, disparate access protocols and semantic differences across domains, and troves of unstructured or poorly structured information that is hard to discover and use are major hindrances toward discovery, while metadata compilation and curation remain manual and time-consuming. We report on methodology, main results and lessons learned from an ongoing effort to develop a geoscience-wide catalog of information resources, with consistent metadata descriptions, traceable provenance, and automated metadata enhancement. Developing such a catalog is the central goal of CINERGI (Community Inventory of EarthCube Resources for Geoscience Interoperability), an EarthCube building block project (earthcube.org/group/cinergi). The key novel technical contributions of the project include: a) development of a metadata enhancement pipeline and a set of document enhancers to automatically improve various aspects of metadata descriptions, including keyword assignment and definition of spatial extents; b) Community Resource Viewers: online applications for crowdsourcing community resource registry development, curation and search, and channeling metadata to the unified CINERGI inventory; c) metadata provenance, validation and annotation services; d) user interfaces for advanced resource discovery; and e) geoscience-wide ontology and machine learning to support automated semantic tagging and faceted search across domains. We demonstrate these CINERGI components in three types of user scenarios: (1) improving existing metadata descriptions maintained by government and academic data facilities, (2) supporting work of several EarthCube Research Coordination Network projects in assembling information resources for their domains, and (3) enhancing the inventory and the underlying ontology to address several complicated data discovery use cases in hydrology, geochemistry, sedimentology, and critical zone science. Support from the US National Science Foundation under award ICER-1343816 is gratefully acknowledged.

  1. Evaluating and Evolving Metadata in Multiple Dialects

    NASA Technical Reports Server (NTRS)

    Kozimore, John; Habermann, Ted; Gordon, Sean; Powers, Lindsay

    2016-01-01

    Despite many long-term homogenization efforts, communities continue to develop focused metadata standards along with related recommendations and (typically) XML representations (aka dialects) for sharing metadata content. Different representations easily become obstacles to sharing information because each representation generally requires a set of tools and skills that are designed, built, and maintained specifically for that representation. In contrast, community recommendations are generally described, at least initially, at a more conceptual level and are more easily shared. For example, most communities agree that dataset titles should be included in metadata records although they write the titles in different ways.
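
    The dialect problem can be made concrete with a small sketch: the same conceptual item, a dataset title, lives at different locations in different XML representations, so each dialect needs its own extraction rule. Below, one path follows the ISO 19115/19139 gmd/gco layout in abbreviated form, and the other belongs to a hypothetical in-house dialect; both paths are simplified illustrations rather than complete mappings for real standards.

      # Sketch: per-dialect extraction rules for the same conceptual field.
      import xml.etree.ElementTree as ET

      TITLE_PATHS = {
          # Abbreviated ISO-style location (real gmd/gco namespaces, shortened path).
          "iso": ".//{http://www.isotc211.org/2005/gmd}title/"
                 "{http://www.isotc211.org/2005/gco}CharacterString",
          # Entirely hypothetical in-house dialect.
          "inhouse": ".//{http://example.org/meta}datasetTitle",
      }

      def extract_title(xml_text, dialect):
          elem = ET.fromstring(xml_text).find(TITLE_PATHS[dialect])
          return None if elem is None else (elem.text or "").strip()

      iso_doc = (
          '<MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd" '
          'xmlns:gco="http://www.isotc211.org/2005/gco">'
          '<gmd:title><gco:CharacterString>Sea surface temperature</gco:CharacterString></gmd:title>'
          '</MD_Metadata>'
      )
      print(extract_title(iso_doc, "iso"))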

  2. Metadata Evaluation and Improvement: Evolving Analysis and Reporting

    NASA Technical Reports Server (NTRS)

    Habermann, Ted; Kozimor, John; Gordon, Sean

    2017-01-01

    ESIP Community members create and manage a large collection of environmental datasets that span multiple decades, the entire globe, and many parts of the solar system. Metadata are critical for discovering, accessing, using and understanding these data effectively and ESIP community members have successfully created large collections of metadata describing these data. As part of the White House Big Earth Data Initiative (BEDI), ESDIS has developed a suite of tools for evaluating these metadata in native dialects with respect to recommendations from many organizations. We will describe those tools and demonstrate evolving techniques for sharing results with data providers.

  3. Metadata registry and management system based on ISO 11179 for cancer clinical trials information system

    PubMed Central

    Park, Yu Rang; Kim, Ju Han

    2006-01-01

    Standardized management of data elements (DEs) for Case Report Forms (CRFs) is crucial in a Clinical Trials Information System (CTIS). Traditional CTISs utilize organization-specific definitions and storage methods for DEs and CRFs. We developed a metadata-based DE management system for clinical trials, the Clinical and Histopathological Metadata Registry (CHMR), using the international standard for metadata registries (ISO 11179) for the management of cancer clinical trials information. CHMR was evaluated in cancer clinical trials with 1625 DEs extracted from the College of American Pathologists Cancer Protocols for 20 major cancers. PMID:17238675

  4. Metadata to Support Data Warehouse Evolution

    NASA Astrophysics Data System (ADS)

    Solodovnikova, Darja

    The focus of this chapter is the metadata necessary to support data warehouse evolution. We present a data warehouse framework that is able to track the evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss a significant part of the framework, the metadata repository, which stores information about the data warehouse, its logical and physical schemata, and their versions. We propose a physical implementation of a multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and to the database.

  5. Asymmetric programming: a highly reliable metadata allocation strategy for MLC NAND flash memory-based sensor systems.

    PubMed

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-10-10

    While NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it is critical to enhance the reliability of metadata, which occupies only a small portion of the storage space but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages, which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.
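
    A toy model of the allocation idea is sketched below in Python: writes flagged as metadata are steered to the more reliable MSB pages of a block, while ordinary data may land on any free page. The page geometry and the even/odd MSB pairing are simplified assumptions, not the paper's actual implementation.

      # Toy page-allocation policy: metadata only on (assumed more reliable) MSB pages.
      PAGES_PER_BLOCK = 128

      def is_msb_page(page_index):
          """Assume even-indexed pages are MSB pages in this simplified pairing."""
          return page_index % 2 == 0

      class Block:
          def __init__(self):
              self.used = [False] * PAGES_PER_BLOCK

          def allocate(self, is_metadata):
              for idx in range(PAGES_PER_BLOCK):
                  if self.used[idx]:
                      continue
                  if is_metadata and not is_msb_page(idx):
                      continue  # metadata writes skip LSB pages
                  self.used[idx] = True
                  return idx
              return None  # block is full (for this request type)

      blk = Block()
      meta_page = blk.allocate(is_metadata=True)   # lands on an MSB page
      data_page = blk.allocate(is_metadata=False)  # may use an LSB page
      print(meta_page, data_page)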

  6. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    PubMed Central

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-01-01

    While NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it is critical to enhance the reliability of metadata, which occupies only a small portion of the storage space but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages, which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. PMID:25310473

  7. Stir bar sorptive extraction approaches with a home-made portable electric stirrer for the analysis of polycyclic aromatic hydrocarbon compounds in environmental water.

    PubMed

    Mao, Xiangju; Hu, Bin; He, Man; Fan, Wenying

    2012-10-19

    In this study, novel off/on-site stir bar sorptive extraction (SBSE) approaches with a home-made portable electric stirrer have been developed for the analysis of polycyclic aromatic hydrocarbon compounds (PAHs). In these approaches, a miniature battery-operated electric stirrer was employed to provide agitation of sample solutions instead of the commonly used large size magnetic stirrer powered by alternating current in conventional SBSE process, which could extend the SBSE technique from the conventional off-site analysis to the on-site sampling. The applicability of the designed off/on-site SBSE sampling approaches was evaluated by polydimethylsiloxane (PDMS) coating SBSE-high performance liquid chromatography-fluorescence detection (HPLC-FLD) analysis of six target PAHs in environmental water. The home-made portable electric stirrer is simple, easy-to-operate, user friendly, low cost, easy-to-be-commercialized, and can be processed in direct immersion SBSE, headspace sorptive extraction (HSSE) and continuous flow (CF)-SBSE modes. Since the stir bar was fixed onto the portable device by magnetic force, it is very convenient to install, remove and replace the stir bar, and the coating friction loss which occurred frequently in conventional SBSE process could be avoided. The parameters affecting the extraction of six target PAHs by the home-made portable SBSE sampling device with different sampling modes were studied. Under the optimum extraction conditions, good linearity was obtained by all of three SBSE extraction modes with correlation coefficient (R) higher than 0.9971. The limits of detection (LODs, S/N=3) were 0.05-3.41 ng L(-1) for direct immersion SBSE, 0.03-2.23 ng L(-1) for HSSE and 0.09-3.75 ng L(-1) for CF-SBSE, respectively. The proposed portable PDMS-SBSE-HPLC-FLD method was applied for the analysis of six target PAHs in East Lake water, and the analytical results obtained by on-site SBSE sampling were in good agreement with that obtained by off-site SBSE sampling. The accuracy of the developed method was evaluated by recovery test and the recoveries for the spiked sample were found to be in the range of 87.1-122.8% for off-site CF-SBSE, 88.8-114.3% for on-site sampling, and 87.7-123.6% for off-site SBSE, respectively. The developed method is one of the most sensitive methods for PAHs determination and the home-designed SBSE system is feasible for the field sampling. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. 75 FR 22693 - Airworthiness Directives; General Electric Company (GE) CF34-1A, CF34-3A, and CF34-3B Series...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-30

    ..., Aerospace Engineer, Engine Certification Office, FAA, Engine & Propeller Directorate, 12 New England... Directives; General Electric Company (GE) CF34-1A, CF34-3A, and CF34-3B Series Turbofan Engines; Correction... to GE CF34-1A, CF34-3A, and CF34-3B series turbofan engines. The docket number is incorrect in all...

  9. An Examination of the Adoption of Preservation Metadata in Cultural Heritage Institutions: An Exploratory Study Using Diffusion of Innovations Theory

    ERIC Educational Resources Information Center

    Alemneh, Daniel Gelaw

    2009-01-01

    Digital preservation is a significant challenge for cultural heritage institutions and other repositories of digital information resources. Recognizing the critical role of metadata in any successful digital preservation strategy, the Preservation Metadata Implementation Strategies (PREMIS) has been extremely influential on providing a "core" set…

  10. Inconsistencies between Academic E-Book Platforms: A Comparison of Metadata and Search Results

    ERIC Educational Resources Information Center

    Wiersma, Gabrielle; Tovstiadi, Esta

    2017-01-01

    This article presents the results of a study of academic e-books that compared the metadata and search results from major academic e-book platforms. The authors collected data and performed a series of test searches designed to produce the same result regardless of platform. Testing, however, revealed metadata-related errors and significant…

  11. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    ERIC Educational Resources Information Center

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  12. Biofilm Filtrates of Pseudomonas aeruginosa Strains Isolated from Cystic Fibrosis Patients Inhibit Preformed Aspergillus fumigatus Biofilms via Apoptosis

    PubMed Central

    Shirazi, Fazal; Ferreira, Jose A. G.; Stevens, David A.; Clemons, Karl V.; Kontoyiannis, Dimitrios P.

    2016-01-01

    Pseudomonas aeruginosa (Pa) and Aspergillus fumigatus (Af) colonize cystic fibrosis (CF) patient airways. Pa culture filtrates inhibit Af biofilms, and Pa non-CF, mucoid (Muc-CF) and nonmucoid CF (NMuc-CF) isolates form an ascending inhibitory hierarchy. We hypothesized this activity is mediated through apoptosis induction. One Af and three Pa (non-CF, Muc-CF, NMuc-CF) reference isolates were studied. Af biofilm was formed in 96 well plates for 16 h ± Pa biofilm filtrates. After 24 h, apoptosis was characterized by viability dye DiBAc, reactive oxygen species (ROS) generation, mitochondrial membrane depolarization, DNA fragmentation and metacaspase activity. Muc-CF and NMuc-CF filtrates inhibited and damaged Af biofilm (p<0.0001). Intracellular ROS levels were elevated (p<0.001) in NMuc-CF-treated Af biofilms (3.7- fold) compared to treatment with filtrates from Muc-CF- (2.5- fold) or non-CF Pa (1.7- fold). Depolarization of mitochondrial potential was greater upon exposure to NMuc-CF (2.4-fold) compared to Muc-CF (1.8-fold) or non-CF (1.25-fold) (p<0.0001) filtrates. Exposure to filtrates resulted in more DNA fragmentation in Af biofilm, compared to control, mediated by metacaspase activation. In conclusion, filtrates from CF-Pa isolates were more inhibitory against Af biofilms than from non-CF. The apoptotic effect involves mitochondrial membrane damage associated with metacaspase activation. PMID:26930399

  13. Biofilm Filtrates of Pseudomonas aeruginosa Strains Isolated from Cystic Fibrosis Patients Inhibit Preformed Aspergillus fumigatus Biofilms via Apoptosis.

    PubMed

    Shirazi, Fazal; Ferreira, Jose A G; Stevens, David A; Clemons, Karl V; Kontoyiannis, Dimitrios P

    2016-01-01

    Pseudomonas aeruginosa (Pa) and Aspergillus fumigatus (Af) colonize cystic fibrosis (CF) patient airways. Pa culture filtrates inhibit Af biofilms, and Pa non-CF, mucoid (Muc-CF) and nonmucoid CF (NMuc-CF) isolates form an ascending inhibitory hierarchy. We hypothesized this activity is mediated through apoptosis induction. One Af and three Pa (non-CF, Muc-CF, NMuc-CF) reference isolates were studied. Af biofilm was formed in 96 well plates for 16 h ± Pa biofilm filtrates. After 24 h, apoptosis was characterized by viability dye DiBAc, reactive oxygen species (ROS) generation, mitochondrial membrane depolarization, DNA fragmentation and metacaspase activity. Muc-CF and NMuc-CF filtrates inhibited and damaged Af biofilm (p<0.0001). Intracellular ROS levels were elevated (p<0.001) in NMuc-CF-treated Af biofilms (3.7- fold) compared to treatment with filtrates from Muc-CF- (2.5- fold) or non-CF Pa (1.7- fold). Depolarization of mitochondrial potential was greater upon exposure to NMuc-CF (2.4-fold) compared to Muc-CF (1.8-fold) or non-CF (1.25-fold) (p<0.0001) filtrates. Exposure to filtrates resulted in more DNA fragmentation in Af biofilm, compared to control, mediated by metacaspase activation. In conclusion, filtrates from CF-Pa isolates were more inhibitory against Af biofilms than from non-CF. The apoptotic effect involves mitochondrial membrane damage associated with metacaspase activation.

  14. Improvements to the Ontology-based Metadata Portal for Unified Semantics (OlyMPUS)

    NASA Astrophysics Data System (ADS)

    Linsinbigler, M. A.; Gleason, J. L.; Huffer, E.

    2016-12-01

    The Ontology-based Metadata Portal for Unified Semantics (OlyMPUS), funded by the NASA Earth Science Technology Office Advanced Information Systems Technology program, is an end-to-end system designed to support Earth Science data consumers and data providers, enabling the latter to register data sets and provision them with the semantically rich metadata that drives the Ontology-Driven Interactive Search Environment for Earth Sciences (ODISEES). OlyMPUS complements the ODISEES data discovery system with an intelligent tool to enable data producers to auto-generate semantically enhanced metadata and upload it to the metadata repository that drives ODISEES. Like ODISEES, the OlyMPUS metadata provisioning tool leverages robust semantics, a NoSQL database and query engine, an automated reasoning engine that performs first- and second-order deductive inferencing, and uses a controlled vocabulary to support data interoperability and automated analytics. The ODISEES data discovery portal leverages this metadata to provide a seamless data discovery and access experience for data consumers who are interested in comparing and contrasting the multiple Earth science data products available across NASA data centers. OlyMPUS will support scientists' services and tools for performing complex analyses and identifying correlations and non-obvious relationships across all types of Earth System phenomena using the full spectrum of NASA Earth Science data available. By providing an intelligent discovery portal that supplies users - both human users and machines - with detailed information about data products, their contents and their structure, ODISEES will reduce the level of effort required to identify and prepare large volumes of data for analysis. This poster will explain how OlyMPUS leverages deductive reasoning and other technologies to create an integrated environment for generating and exploiting semantically rich metadata.

  15. Chapter 35: Describing Data and Data Collections in the VO

    NASA Astrophysics Data System (ADS)

    Kent, B. R.; Hanisch, R. J.; Williams, R. D.

    The list of numbers: 19.22, 17.23, 18.11, 16.98, and 15.11, is of little intrinsic interest without information about the context in which they appear. For instance, are these daily closing stock prices for your favorite investment, or are they hourly photometric measurements of an increasingly bright quasar? The information needed to define this context is called metadata. Metadata are data about data. Astronomers are familiar with metadata through the headers of FITS files and the names and units associated with columns in a table or database. In the VO, metadata describe the contents of tables, images, and spectra, as well as aggregate collections of data (archives, surveys) and computational services. Moreover, VO metadata are constructed according to rules that avoid ambiguity and make it clear whether, in the example above, the stock prices are in dollars or euros, or the photometry is Johnson V or Sloan g. Organization of data is important in any scientific discipline. Equally crucial are the descriptions of that data: the organization publishing the data, its creator or the person making it available, what instruments were used, units assigned to measurement, calibration status, and data quality assessment. The Virtual Observatory metadata scheme not only applies to datasets, but to resources as well, including data archive facilities, searchable web forms, and online analysis and display tools. Since the scientific output flowing from large datasets depends greatly on how well the data are described, it is important for users to understand the basics of the metadata scheme in order to locate the data that they want and use it correctly. Metadata are the key to data discovery and data and service interoperability in the Virtual Observatory.

  16. Sensor metadata blueprints and computer-aided editing for disciplined SensorML

    NASA Astrophysics Data System (ADS)

    Tagliolato, Paolo; Oggioni, Alessandro; Fugazza, Cristiano; Pepe, Monica; Carrara, Paola

    2016-04-01

    The need for continuous, accurate, and comprehensive environmental knowledge has led to an increase in sensor observation systems and networks. The Sensor Web Enablement (SWE) initiative has been promoted by the Open Geospatial Consortium (OGC) to foster interoperability among sensor systems. The provision of metadata according to the prescribed SensorML schema is a key component for achieving this; nevertheless, the availability of correct and exhaustive metadata cannot be taken for granted. On the one hand, it is awkward for users to provide sensor metadata because of the lack of user-oriented, dedicated tools. On the other, the specification of invariant information for a given sensor category or model (e.g., observed properties and units of measurement, manufacturer information, etc.) can be labor- and time-consuming. Moreover, the provision of these details is error-prone and subjective, i.e., it may differ greatly across distinct descriptions for the same system. We provide a user-friendly, template-driven metadata authoring tool composed of a backend web service and an HTML5/JavaScript client. This results in a form-based user interface that conceals the high complexity of the underlying format. This tool also allows for plugging in external data sources providing authoritative definitions for the aforementioned invariant information. Leveraging these functionalities, we compiled a set of SensorML profiles, that is, sensor metadata blueprints allowing end users to focus only on the metadata items that are related to their specific deployment. The natural extension of this scenario is the involvement of end users and sensor manufacturers in the crowd-sourced evolution of this collection of prototypes. We describe the components and workflow of our framework for computer-aided management of sensor metadata.
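
    The blueprint idea can be sketched as a merge of invariant, model-level information with deployment-specific details supplied by the user, as in the Python sketch below. The sensor model, its template fields, and the XML produced are illustrative assumptions and not schema-valid SensorML.

      # Sketch: merge a per-model blueprint with deployment details and emit
      # an indicative (non-SensorML) XML description.
      from xml.sax.saxutils import escape

      TEMPLATES = {
          "thermo-T100": {                      # hypothetical sensor model
              "manufacturer": "ExampleCorp",
              "observed_property": "water_temperature",
              "uom": "Cel",
          },
      }

      def build_description(model, deployment):
          merged = {**TEMPLATES[model], **deployment}  # deployment values win
          fields = "\n".join(
              f'  <field name="{escape(k)}">{escape(str(v))}</field>'
              for k, v in sorted(merged.items())
          )
          return f'<SensorDescription model="{escape(model)}">\n{fields}\n</SensorDescription>'

      print(build_description(
          "thermo-T100",
          {"deployment_site": "Lake Maggiore buoy", "serial_number": "T100-042"},
      ))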

  17. Towards a semantic medical Web: HealthCyberMap's tool for building an RDF metadata base of health information resources based on the Qualified Dublin Core Metadata Set.

    PubMed

    Boulos, Maged N; Roudsari, Abdul V; Carson, Ewart R

    2002-07-01

    HealthCyberMap (http://healthcybermap.semanticweb.org/) aims at mapping Internet health information resources in novel ways for enhanced retrieval and navigation. This is achieved by collecting appropriate resource metadata in an unambiguous form that preserves semantics. We modelled a qualified Dublin Core (DC) metadata set ontology with extra elements for resource quality and geographical provenance in Protégé-2000. A metadata collection form helps acquire resource instance data within Protégé. The DC subject field is populated with UMLS terms imported directly from the UMLS Knowledge Source Server using the UMLS tab, a Protégé-2000 plug-in. The project is saved in RDFS/RDF. The ontology and associated form serve as a free tool for building and maintaining an RDF medical resource metadata base. The UMLS tab enables browsing and searching for concepts that best describe a resource, and importing them into DC subject fields. The resultant metadata base can be used with a search and inference engine, and can have textual and/or visual navigation interface(s) applied to it, to ultimately build a medical Semantic Web portal. Different ways of exploiting Protégé-2000 RDF output are discussed. By making the context and semantics of resources, not merely their raw text and formatting, amenable to computer 'understanding', we can build a Semantic Web that is more useful to humans than the current Web. This requires proper use of metadata and ontologies. Clinical codes can reliably describe the subjects of medical resources, establish the semantic relationships (as defined by the underlying coding scheme) between related resources, and automate their topical categorisation.
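
    To make the qualified-DC-in-RDF approach concrete, here is a minimal sketch assuming the rdflib library (not mentioned in the record). The resource URI, the extension namespace, and the concept identifier are hypothetical examples, not HealthCyberMap's actual vocabulary.

    # A minimal sketch, assuming rdflib: describing a health information resource
    # with qualified Dublin Core terms, a coded subject, and extra project-specific
    # elements for quality and geographical provenance.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS

    HCM = Namespace("http://example.org/healthcybermap#")   # hypothetical extension namespace
    UMLS = Namespace("http://example.org/umls/")            # hypothetical UMLS concept URIs

    g = Graph()
    g.bind("dcterms", DCTERMS)
    g.bind("hcm", HCM)

    resource = URIRef("http://example.org/resources/asthma-guide")
    g.add((resource, DCTERMS.title, Literal("Patient Guide to Asthma Management")))
    g.add((resource, DCTERMS.subject, UMLS["C0004096"]))    # illustrative UMLS concept identifier
    g.add((resource, DCTERMS.language, Literal("en")))
    g.add((resource, HCM.qualityRating, Literal(4)))        # extra element for resource quality
    g.add((resource, HCM.geographicProvenance, Literal("GB")))

    print(g.serialize(format="turtle"))

    A search or inference engine can then query such a graph by coded subject rather than by free text, which is the semantic gain the abstract describes.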

  18. Case Studies of Ecological Integrative Information Systems: The Luquillo and Sevilleta Information Management Systems

    NASA Astrophysics Data System (ADS)

    San Gil, Inigo; White, Marshall; Melendez, Eda; Vanderbilt, Kristin

    The thirty-year-old United States Long Term Ecological Research Network has developed extensive metadata to document its scientific data. Standard and interoperable metadata is a core component of the data-driven analytical solutions developed by this research network. Content management systems offer an affordable solution for rapid deployment of metadata-centered information management systems. We developed a customized integrative metadata management system based on the Drupal content management system technology. Building on knowledge and experience with the Sevilleta and Luquillo Long Term Ecological Research sites, we successfully deployed the first two medium-scale customized prototypes. In this paper, we describe the vision behind our Drupal-based information management instances and list the features offered through these Drupal-based systems. We also outline plans to expand the information services offered through these metadata-centered management systems. We conclude with the growing list of participants deploying similar instances.

  19. Data, Metadata - Who Cares?

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

    There is a traditional saying that metadata are understandable, semantic-rich, and searchable. Data, on the other hand, are big, with no accessible semantics, and just downloadable. Not only has this led to an imbalance in search support from a user perspective, but also, underneath, to a deep technology divide, often with relational databases used for metadata and bespoke archive solutions for data. Our vision is that this barrier will be overcome and that data and metadata will become searchable alike, leveraging the potential of semantic technologies in combination with scalability technologies. Ultimately, in this vision, ad hoc processing and filtering will no longer distinguish between the two, forming a uniformly accessible data universe. In the European EarthServer initiative, we work towards this vision by federating database-style raster query languages with metadata search and geo broker technology. We present the approach taken, how it can leverage OGC standards, the benefits envisaged, and first results.
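
    The following is a hedged sketch of the federation idea only, not the EarthServer implementation: a metadata search first narrows the candidate coverages, and an array-level query then runs only on the matches. The catalogue entries and the service URL are hypothetical, and the query string is loosely modeled on OGC WCPS syntax.

    # Hedged sketch: combine a metadata search step with a raster query step.
    import requests

    CATALOGUE = [
        {"id": "AvgTemperature_Europe", "theme": "temperature", "region": "Europe"},
        {"id": "Precipitation_Africa", "theme": "precipitation", "region": "Africa"},
    ]

    WCPS_ENDPOINT = "http://example.org/rasdaman/ows"  # hypothetical service URL

    def coverages_matching(theme: str):
        """Metadata search step: keep only coverages whose description matches."""
        return [entry["id"] for entry in CATALOGUE if entry["theme"] == theme]

    def mean_over_region(coverage_id: str, lat=(30, 40), lon=(10, 20)) -> float:
        """Data processing step: ask the server to compute a spatial mean."""
        query = (f"for c in ({coverage_id}) "
                 f"return avg(c[Lat({lat[0]}:{lat[1]}), Long({lon[0]}:{lon[1]})])")
        response = requests.get(WCPS_ENDPOINT, params={"query": query})
        response.raise_for_status()
        return float(response.text)

    for cov in coverages_matching("temperature"):
        print(cov, mean_over_region(cov))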

  20. Distributed metadata in a high performance computing environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising: receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value; determining which of the one or more burst buffers stores the requested metadata; and, upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
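
    A simplified sketch of the general idea follows. It is illustrative only and not the patented system: block metadata live in a key-value store partitioned across burst buffers, and a lookup first determines which buffer owns the key before reading from that buffer's shard. All names and fields are invented for the example.

    # Illustrative only: hash-based placement of metadata keys across burst buffers.
    import hashlib

    class BurstBuffer:
        """Each burst buffer holds one shard of the distributed key-value store."""
        def __init__(self, name: str):
            self.name = name
            self.kv_store: dict[str, dict] = {}

    BUFFERS = [BurstBuffer(f"bb{i}") for i in range(4)]

    def owner(key: str) -> BurstBuffer:
        """Deterministically map a metadata key to the burst buffer that stores it."""
        digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return BUFFERS[digest % len(BUFFERS)]

    def put_metadata(block_id: str, metadata: dict) -> None:
        owner(block_id).kv_store[block_id] = metadata

    def get_metadata(block_id: str) -> dict:
        buf = owner(block_id)            # determine which buffer stores the key
        return buf.kv_store[block_id]    # locate the key-value in that buffer's shard

    put_metadata("block-0017", {"size": 4096, "checksum": "ab12", "offset": 0})
    print(get_metadata("block-0017"))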
