Sample records for the netCDF format

  1. Implementing Network Common Data Form (netCDF) for the 3DWF Model

    DTIC Science & Technology

    2016-02-01

    format. In addition, data extraction from netCDF-formatted Weather Research and Forecasting (WRF) model results necessary for the 3DWF model's wind… Contents: Requirement for the 3DWF Model; Implementing netCDF to the 3DWF Model; Weather Research and Forecasting (WRF) domain and results; Extracting Variables from netCDF-Formatted WRF Data File; Converting the 3DWF's Results into netCDF; Conclusion; References; Appendix.

  2. Users' Manual and Installation Guide for the EverVIEW Slice and Dice Tool (Version 1.0 Beta)

    USGS Publications Warehouse

    Roszell, Dustin; Conzelmann, Craig; Chimmula, Sumani; Chandrasekaran, Anuradha; Hunnicut, Christina

    2009-01-01

    Network Common Data Form (NetCDF) is a self-describing, machine-independent file format for storing array-oriented scientific data. Over the past few years, there has been a growing movement within the community of natural resource managers in The Everglades, Fla., to use NetCDF as the standard data container for datasets based on multidimensional arrays. As a consequence, a need arose for additional tools to view and manipulate NetCDF datasets, specifically to create subsets of large NetCDF files. To address this need, we created the EverVIEW Slice and Dice Tool to allow users to create subsets of grid-based NetCDF files. The major functions of this tool are (1) to subset NetCDF files both spatially and temporally; (2) to view the NetCDF data in table form; and (3) to export filtered data to a comma-separated value file format.
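
    As a minimal sketch of the kind of spatial and temporal subsetting plus CSV export described above, the following uses the standard netCDF4-python library; the file name, variable names, and bounding box are hypothetical, not taken from the tool itself.

      import csv
      import numpy as np
      from netCDF4 import Dataset, num2date

      # Hypothetical grid-based file with dimensions (time, lat, lon).
      with Dataset("everglades_stage.nc") as nc:
          times = nc.variables["time"]
          lats = nc.variables["lat"][:]
          lons = nc.variables["lon"][:]
          stage = nc.variables["stage"]

          # Spatial subset: indices falling inside a bounding box.
          lat_idx = np.where((lats >= 25.0) & (lats <= 26.0))[0]
          lon_idx = np.where((lons >= -81.0) & (lons <= -80.0))[0]

          # Temporal subset: the first ten time steps.
          subset = stage[0:10, lat_idx, lon_idx]
          dates = num2date(times[0:10], times.units)

          # Export the filtered values to CSV, one row per (time, lat, lon).
          with open("subset.csv", "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["time", "lat", "lon", "stage"])
              for t, date in enumerate(dates):
                  for i, la in enumerate(lats[lat_idx]):
                      for j, lo in enumerate(lons[lon_idx]):
                          writer.writerow([date, la, lo, subset[t, i, j]])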

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, James

    Atmospheric Radiation Measurement (ARM) Program standard data format is NetCDF 3 (Network Common Data Form). The object of this tutorial is to provide a basic introduction to NetCDF with an emphasis on aspects of the ARM application of NetCDF. The goal is to provide basic instructions for reading and visualizing ARM NetCDF data with the expectation that these examples can then be applied to more complex applications.
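
    In the spirit of that tutorial, here is a minimal sketch for reading and plotting one time series from an ARM-style netCDF file with netCDF4-python and matplotlib; the file and variable names are assumptions and vary by ARM datastream.

      import matplotlib.pyplot as plt
      from netCDF4 import Dataset, num2date

      # Hypothetical ARM datastream file; adjust names for the real product.
      with Dataset("sgpmetE13.b1.20160201.000000.cdf") as nc:
          time = nc.variables["time"]
          temp = nc.variables["temp_mean"]            # assumed variable name
          dates = num2date(time[:], time.units,
                           only_use_cftime_datetimes=False)

          plt.plot(dates, temp[:])
          plt.xlabel("Time (UTC)")
          plt.ylabel(getattr(temp, "units", ""))      # units from metadata
          plt.title("Mean temperature (illustrative ARM series)")
          plt.savefig("temp_series.png")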

  4. Visualizing NetCDF Files by Using the EverVIEW Data Viewer

    USGS Publications Warehouse

    Conzelmann, Craig; Romañach, Stephanie S.

    2010-01-01

    Over the past few years, modelers in South Florida have started using Network Common Data Form (NetCDF) as the standard data container format for storing hydrologic and ecologic modeling inputs and outputs. With its origins in the meteorological discipline, NetCDF was created by the Unidata Program Center at the University Corporation for Atmospheric Research, in conjunction with the National Aeronautics and Space Administration and other organizations. NetCDF is a portable, scalable, self-describing, binary file format optimized for storing array-based scientific data. Despite attributes which make NetCDF desirable to the modeling community, many natural resource managers have few desktop software packages which can consume NetCDF and unlock the valuable data contained within. The U.S. Geological Survey and the Joint Ecosystem Modeling group, an ecological modeling community of practice, are working to address this need with the EverVIEW Data Viewer. Available for several operating systems, this desktop software currently supports graphical displays of NetCDF data as spatial overlays on a three-dimensional globe and views of grid-cell values in tabular form. An included Open Geospatial Consortium-compliant web-mapping service client and charting interface allow the user to view web-available spatial data as additional map overlays and provide simple charting visualizations of NetCDF grid values.

  5. Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.

    2008-01-01

    In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph Analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common Data form Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
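
    As a rough illustration of the CDL-to-NetCDF workflow described above, the Python sketch below writes a greatly simplified sounding-like CDL file and invokes the standard ncgen and ncdump utilities; the dimension and variable names are invented for the example and are not the actual AWIPS sounding template.

      import subprocess

      # A minimal CDL description (greatly simplified relative to the real
      # AWIPS sounding template, which defines many more variables).
      cdl = """netcdf test_sounding {
      dimensions:
          level = 5 ;
      variables:
          float pressure(level) ;
              pressure:units = "hPa" ;
          float temperature(level) ;
              temperature:units = "degC" ;
      data:
          pressure = 1000, 850, 700, 500, 300 ;
          temperature = 25.0, 15.2, 5.1, -12.3, -40.5 ;
      }
      """
      with open("test_sounding.cdl", "w") as f:
          f.write(cdl)

      # ncgen creates a binary NetCDF file from the CDL text ...
      subprocess.run(["ncgen", "-o", "test_sounding.nc", "test_sounding.cdl"],
                     check=True)
      # ... and ncdump converts it back to CDL for inspection.
      subprocess.run(["ncdump", "test_sounding.nc"], check=True)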

  6. Filtering NetCDF Files by Using the EverVIEW Slice and Dice Tool

    USGS Publications Warehouse

    Conzelmann, Craig; Romañach, Stephanie S.

    2010-01-01

    Network Common Data Form (NetCDF) is a self-describing, machine-independent file format for storing array-oriented scientific data. It was created to provide a common interface between applications and real-time meteorological and other scientific data. Over the past few years, there has been a growing movement within the community of natural resource managers in The Everglades, Fla., to use NetCDF as the standard data container for datasets based on multidimensional arrays. As a consequence, a need surfaced for additional tools to view and manipulate NetCDF datasets, specifically to filter the files by creating subsets of large NetCDF files. The U.S. Geological Survey (USGS) and the Joint Ecosystem Modeling (JEM) group are working to address these needs with applications like the EverVIEW Slice and Dice Tool, which allows users to filter grid-based NetCDF files, thus targeting those data most important to them. The major functions of this tool are as follows: (1) to create subsets of NetCDF files temporally, spatially, and by data value; (2) to view the NetCDF data in table form; and (3) to export the filtered data to a comma-separated value (CSV) file format. The USGS and JEM will continue to work with scientists and natural resource managers across The Everglades to solve complex restoration problems through technological advances.

  7. Public-domain-software solution to data-access problems for numerical modelers

    USGS Publications Warehouse

    Jenter, Harry; Signell, Richard

    1992-01-01

    Unidata's network Common Data Form, netCDF, provides users with an efficient set of software for scientific-data storage, retrieval, and manipulation. The netCDF file format is machine-independent, direct-access, self-describing, and in the public domain, thereby alleviating many problems associated with accessing output from large hydrodynamic models. NetCDF has programming interfaces in both the Fortran and C computer languages, with an interface to C++ planned for future release. NetCDF also has an abstract data type that relieves users from understanding details of the binary file structure; data are written and retrieved by an intuitive, user-supplied name rather than by file position. Users are aided further by Unidata's inclusion of the Common Data Language, CDL, a printable text equivalent of the contents of a netCDF file. Unidata provides numerous operators and utilities for processing netCDF files. In addition, a number of public-domain and proprietary netCDF utilities from other sources are available at this time or will be available later this year. The U.S. Geological Survey has produced and is producing a number of public-domain netCDF utilities.

  8. Collaborative Sharing of Multidimensional Space-time Data Using HydroShare

    NASA Astrophysics Data System (ADS)

    Gan, T.; Tarboton, D. G.; Horsburgh, J. S.; Dash, P. K.; Idaszak, R.; Yi, H.; Blanton, B.

    2015-12-01

    HydroShare is a collaborative environment being developed for sharing hydrological data and models. It includes capability to upload data in many formats as resources that can be shared. The HydroShare data model for resources uses a specific format for the representation of each type of data and specifies metadata common to all resource types as well as metadata unique to specific resource types. The Network Common Data Form (NetCDF) was chosen as the format for multidimensional space-time data in HydroShare. NetCDF is widely used in hydrological and other geoscience modeling because it contains self-describing metadata and supports the creation of array-oriented datasets that may include three spatial dimensions, a time dimension and other user-defined dimensions. For example, NetCDF may be used to represent precipitation or surface air temperature fields that have two dimensions in space and one dimension in time. This presentation will illustrate how NetCDF files are used in HydroShare. When a NetCDF file is loaded into HydroShare, header information is extracted using the "ncdump" utility. Python functions, developed for the Django web framework on which HydroShare is based, extract science metadata present in the NetCDF file, saving the user from having to enter it. Where the file follows the Climate and Forecast (CF) convention and Attribute Convention for Dataset Discovery (ACDD) standards, metadata is thus automatically populated. Users also have the ability to add metadata to the resource that may not have been present in the original NetCDF file. HydroShare's metadata editing functionality then writes this science metadata back into the NetCDF file to maintain consistency between the science metadata in HydroShare and the metadata in the NetCDF file. This further helps researchers easily add metadata information following the CF and ACDD conventions. Additional data inspection and subsetting functions were developed, taking advantage of Python and command line libraries for working with NetCDF files. We describe the design and implementation of these features and illustrate how NetCDF files from a modeling application may be curated in HydroShare and thus enhance reproducibility of the associated research. We also discuss future development planned for multidimensional space-time data in HydroShare.
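
    A sketch of the kind of metadata harvesting described above, assuming a file carrying CF/ACDD-style attributes; the attribute list and file name are illustrative, not HydroShare's actual Django code.

      from netCDF4 import Dataset

      # ACDD global attributes commonly used for discovery metadata.
      ACDD_KEYS = ["title", "summary", "keywords", "creator_name", "license"]

      def extract_science_metadata(path):
          """Harvest CF/ACDD-style discovery metadata from a NetCDF file."""
          meta = {}
          with Dataset(path) as nc:
              for key in ACDD_KEYS:
                  if key in nc.ncattrs():
                      meta[key] = getattr(nc, key)
              # Variable names with their CF standard names, when present.
              meta["variables"] = {
                  name: getattr(var, "standard_name", None)
                  for name, var in nc.variables.items()
              }
          return meta

      print(extract_science_metadata("precipitation.nc"))  # hypothetical file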

  9. NetCDF4/HDF5 and Linked Data in the Real World - Enriching Geoscientific Metadata without Bloat

    NASA Astrophysics Data System (ADS)

    Ip, Alex; Car, Nicholas; Druken, Kelsey; Poudjom-Djomani, Yvette; Butcher, Stirling; Evans, Ben; Wyborn, Lesley

    2017-04-01

    NetCDF4 has become the dominant generic format for many forms of geoscientific data, leveraging (and constraining) the versatile HDF5 container format, while providing metadata conventions for interoperability. However, the encapsulation of detailed metadata within each file can lead to metadata "bloat", and difficulty in maintaining consistency where metadata is replicated to multiple locations. Complex conceptual relationships are also difficult to represent in simple key-value netCDF metadata. Linked Data provides a practical mechanism to address these issues by associating the netCDF files and their internal variables with complex metadata stored in Semantic Web vocabularies and ontologies, while complying with and complementing existing metadata conventions. One of the stated objectives of the netCDF4/HDF5 formats is that they should be self-describing: containing metadata sufficient for cataloguing and using the data. However, this objective can be regarded as only partially-met where details of conventions and definitions are maintained externally to the data files. For example, one of the most widely used netCDF community standards, the Climate and Forecasting (CF) Metadata Convention, maintains standard vocabularies for a broad range of disciplines across the geosciences, but this metadata is currently neither readily discoverable nor machine-readable. We have previously implemented useful Linked Data and netCDF tooling (ncskos) that associates netCDF files, and individual variables within those files, with concepts in vocabularies formulated using the Simple Knowledge Organization System (SKOS) ontology. NetCDF files contain Uniform Resource Identifier (URI) links to terms represented as SKOS Concepts, rather than plain-text representations of those terms, so we can use simple, standardised web queries to collect and use rich metadata for the terms from any Linked Data-presented SKOS vocabulary. Geoscience Australia (GA) manages a large volume of diverse geoscientific data, much of which is being translated from proprietary formats to netCDF at NCI Australia. This data is made available through the NCI National Environmental Research Data Interoperability Platform (NERDIP) for programmatic access and interdisciplinary analysis. The netCDF files contain both scientific data variables (e.g. gravity, magnetic or radiometric values), but also domain-specific operational values (e.g. specific instrument parameters) best described fully in formal vocabularies. Our ncskos codebase provides access to multiple stores of detailed external metadata in a standardised fashion. Geophysical datasets are generated from a "survey" event, and GA maintains corporate databases of all surveys and their associated metadata. It is impractical to replicate the full source survey metadata into each netCDF dataset so, instead, we link the netCDF files to survey metadata using public Linked Data URIs. These URIs link to Survey class objects which we model as a subclass of Activity objects as defined by the PROV Ontology, and we provide URI resolution for them via a custom Linked Data API which draws current survey metadata from GA's in-house databases. We have demonstrated that Linked Data is a practical way to associate netCDF data with detailed, external metadata. This allows us to ensure that catalogued metadata is kept consistent with metadata points-of-truth, and we can infer complex conceptual relationships not possible with netCDF key-value attributes alone.
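
    The general pattern is sketched below, assuming a variable attribute holding a concept URI; the attribute name, file, and variable are invented for illustration and are not the ncskos tooling's actual interface.

      import requests
      from netCDF4 import Dataset

      # Illustrative attribute name only; the real convention may differ.
      with Dataset("gravity_survey.nc") as nc:           # hypothetical file
          var = nc.variables["bouguer_anomaly"]          # hypothetical variable
          uri = getattr(var, "concept_uri", None)

      if uri:
          # Ask the vocabulary service for a machine-readable representation
          # of the SKOS concept behind the URI.
          resp = requests.get(uri, headers={"Accept": "application/ld+json"})
          resp.raise_for_status()
          print(resp.json())   # e.g. skos:prefLabel, definitions, relations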

  10. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    USGS Publications Warehouse

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development, by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of some current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
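
    As a rough Python analogue of the binary-to-NetCDF conversion NCWin performs, the sketch below packs a flat band-sequential (BSQ) float32 grid into a netCDF file; the dimensions and file names are assumptions.

      import numpy as np
      from netCDF4 import Dataset

      # Hypothetical BSQ layout: all of band 1, then band 2, and so on.
      nbands, nrows, ncols = 3, 180, 360
      raw = np.fromfile("raw_grid.bsq", dtype=np.float32)
      raw = raw.reshape(nbands, nrows, ncols)

      with Dataset("raw_grid.nc", "w") as nc:
          nc.createDimension("band", nbands)
          nc.createDimension("row", nrows)
          nc.createDimension("col", ncols)
          var = nc.createVariable("data", "f4", ("band", "row", "col"))
          var[:] = raw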

  11. Linking netCDF Data with the Semantic Web - Enhancing Data Discovery Across Domains

    NASA Astrophysics Data System (ADS)

    Biard, J. C.; Yu, J.; Hedley, M.; Cox, S. J. D.; Leadbetter, A.; Car, N. J.; Druken, K. A.; Nativi, S.; Davis, E.

    2016-12-01

    Geophysical data communities are publishing large quantities of data across a wide variety of scientific domains which increasingly overlap. Whilst netCDF is a common format for many of these communities, it is only one of a large number of data storage and transfer formats. One of the major challenges ahead is finding ways to leverage these diverse data sets to advance our understanding of complex problems. We describe a methodology for incorporating Resource Description Framework (RDF) triples into netCDF files, called netCDF-LD (netCDF Linked Data). NetCDF-LD explicitly connects the contents of netCDF files - both data and metadata - with external web-based resources, including vocabularies, standards definitions, and data collections, and through them, a whole host of related information. This approach also preserves and enhances the self-describing essence of the netCDF format and its metadata, whilst addressing the challenge of integrating various conventions into files. We present a case study illustrating how reasoning over RDF graphs can empower researchers to discover datasets across domain boundaries.

  12. Challenges to Standardization: A Case Study Using Coastal and Deep-Ocean Water Level Data

    NASA Astrophysics Data System (ADS)

    Sweeney, A. D.; Stroker, K. J.; Mungov, G.; McLean, S. J.

    2015-12-01

    Sea levels recorded at coastal stations and inferred from deep-ocean pressure observations at the seafloor are submitted for archive in multiple data and metadata formats. These formats include two forms of schema-less XML and a custom binary format accompanied by metadata in a spreadsheet. The authors report on efforts to use existing standards to make this data more discoverable and more useful beyond their initial use in detecting tsunamis. An initial review of data formats for sea level data around the globe revealed heterogeneity in presentation and content. In the absence of a widely-used domain-specific format, we adopted the general model for structuring data and metadata expressed by the Network Common Data Form (netCDF). netCDF has been endorsed by the Open Geospatial Consortium and has the advantages of small size when compared to equivalent plain text representation and provides a standard way of embedding metadata in the same file. We followed the orthogonal time-series profile of the Climate and Forecast discrete sampling geometries as the convention for structuring the data and describing metadata relevant for use. We adhered to the Attribute Convention for Data Discovery for capturing metadata to support user search. Beyond making it possible to structure data and metadata in a standard way, netCDF is supported by multiple software tools in providing programmatic cataloging, access, subsetting, and transformation to other formats. We will describe our successes and failures in adhering to existing standards and provide requirements for either augmenting existing conventions or developing new ones. Some of these enhancements are specific to sea level data, while others are applicable to time-series data in general.
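
    A minimal sketch of the orthogonal time-series layout from the CF discrete sampling geometries, with ACDD-style discovery attributes, is given below; the station names, sizes, and values are invented for the example.

      import numpy as np
      from netCDF4 import Dataset

      # Orthogonal time series: every station shares one time coordinate.
      with Dataset("sea_level.nc", "w") as nc:
          nc.Conventions = "CF-1.6, ACDD-1.3"
          nc.featureType = "timeSeries"                 # CF DSG declaration
          nc.title = "Coastal water levels (illustrative)"

          nc.createDimension("station", 2)
          nc.createDimension("time", 24)

          time = nc.createVariable("time", "f8", ("time",))
          time.units = "hours since 2015-01-01 00:00:00"
          time[:] = np.arange(24)

          name = nc.createVariable("station_name", str, ("station",))
          name.cf_role = "timeseries_id"                # CF station identifier
          name[0], name[1] = "stationA", "stationB"

          level = nc.createVariable("water_level", "f4", ("station", "time"))
          level.standard_name = "sea_surface_height_above_geoid"
          level.units = "m"
          level[:] = np.random.default_rng(0).normal(0.0, 0.2, (2, 24))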

  13. Serving Real-Time Point Observation Data in netCDF using Climate and Forecasting Discrete Sampling Geometry Conventions

    NASA Astrophysics Data System (ADS)

    Ward-Garrison, C.; May, R.; Davis, E.; Arms, S. C.

    2016-12-01

    NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. The Climate and Forecasting (CF) metadata conventions for netCDF foster the ability to work with netCDF files in general and useful ways. These conventions include metadata attributes for physical units, standard names, and spatial coordinate systems. While these conventions have been successful in easing the use of working with netCDF-formatted output from climate and forecast models, their use for point-based observation data has been less so. Unidata has prototyped using the discrete sampling geometry (DSG) CF conventions to serve, using the THREDDS Data Server, the real-time point observation data flowing across the Internet Data Distribution (IDD). These data originate in text format reports for individual stations (e.g. METAR surface data or TEMP upper air data) and are converted and stored in netCDF files in real-time. This work discusses the experiences and challenges of using the current CF DSG conventions for storing such real-time data. We also test how parts of netCDF's extended data model can address these challenges, in order to inform decisions for a future version of CF (CF 2.0) that would take advantage of features of the netCDF enhanced data model.

  14. A Prototype Web-based system for GOES-R Space Weather Data

    NASA Astrophysics Data System (ADS)

    Sundaravel, A.; Wilkinson, D. C.

    2010-12-01

    The Geostationary Operational Environmental Satellite-R Series (GOES-R) makes use of advanced instruments and technologies to monitor the Earth's surface and provide accurate space weather data. The first GOES-R series satellite is scheduled to be launched in 2015. The data from the satellite will be widely used by scientists for space weather modeling and predictions. This project looks into ways these datasets can be made available to scientists on the Web and assist them in their research. We are working to develop a prototype web-based system that allows users to browse, search and download these data. The GOES-R datasets will be archived in NetCDF (Network Common Data Form) and CSV (Comma Separated Values) formats. NetCDF is a self-describing data format that contains both the metadata information and the data, with the data stored in an array-oriented fashion. The web-based system will offer services in two ways: via a web application (portal) and via web services. Using the web application, users can download data in NetCDF or CSV format and can also plot a graph of the data. The web page displays the various categories of data and the time intervals for which the data are available. The web application (client) sends the user query to the server, which then connects to the data sources to retrieve the data and delivers it to the users. Data access will also be provided via SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) web services. These provide functions which can be used by other applications to fetch data and use it for further processing. To build the prototype system, we are making use of proxy data from existing GOES and POES space weather datasets. Java is the programming language used to develop tools that format data to NetCDF and CSV. For the web technology we have chosen Grails to develop both the web application and the services. Grails is an open source web application framework based on the Groovy language. We are also making use of the THREDDS (Thematic Realtime Environmental Distributed Data Services) server to publish and access the NetCDF files. We have completed developing software tools to generate NetCDF and CSV data files, as well as tools to translate NetCDF to CSV. The current phase of the project involves designing and developing the web interface.
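
    The project's converters were written in Java; the sketch below shows the same NetCDF-to-CSV translation idea in Python for a 1-D time-series variable, with hypothetical file and variable names.

      import csv
      from netCDF4 import Dataset, num2date

      def netcdf_to_csv(nc_path, var_name, csv_path):
          """Dump one time-series variable from a NetCDF file to CSV."""
          with Dataset(nc_path) as nc:
              time = nc.variables["time"]
              data = nc.variables[var_name][:]
              dates = num2date(time[:], time.units)
          with open(csv_path, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["time", var_name])
              writer.writerows(zip(dates, data))

      netcdf_to_csv("goes_proton_flux.nc", "proton_flux", "proton_flux.csv")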

  15. Using GDAL to Convert NetCDF 4 CF 1.6 to GeoTIFF: Interoperability Problems and Solutions for Data Providers and Distributors

    NASA Astrophysics Data System (ADS)

    Haran, T. M.; Brodzik, M. J.; Nordgren, B.; Estilow, T.; Scott, D. J.

    2015-12-01

    An increasing number of new Earth science datasets are being produced by data providers in self-describing, machine-independent file formats including Hierarchical Data Format version 5 (HDF5) and Network Common Data Form version 4 (netCDF-4). Furthermore, data providers may be producing netCDF-4 files that follow the conventions for Climate and Forecast metadata version 1.6 (CF 1.6) which, for datasets mapped to a projected raster grid covering all or a portion of the earth, includes the Coordinate Reference System (CRS) used to define how latitude and longitude are mapped to grid coordinates, i.e. columns and rows, and vice versa. One problem that users may encounter is that their preferred visualization and analysis tool may not yet include support for one of these newer formats. Moreover, data distributors such as NASA's NSIDC DAAC may not yet include support for on-the-fly conversion of data files for all data sets produced in a new format to a preferred older distributed format. There do exist open source solutions to this dilemma in the form of software packages that can translate files in one of the new formats to one of the preferred formats. However, these software packages require that the file to be translated conform to the specifications of its respective format. Although an online CF-Convention compliance checker is available from cfconventions.org, a recent NSIDC user services incident described here in detail involved an NSIDC-supported data set that passed the (then current) CF Checker Version 2.0.6, but was in fact lacking two variables necessary for conformance. This problem was not detected until GDAL, a software package which relied on the missing variables, was employed by a user in an attempt to translate the data into a different file format, namely GeoTIFF. This incident indicates that testing a candidate data product with one or more software products written to accept the advertised conventions is a practice which improves interoperability. Differences between data file contents and software package expectations are exposed, affording an opportunity to improve conformance of software, data or both. The incident can also serve as a demonstration that data providers, distributors, and users can work together to improve data product quality and interoperability.
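
    For reference, the netCDF-to-GeoTIFF translation at the center of this incident can be driven from GDAL's Python bindings; the file and variable names below are hypothetical. GDAL locates the CRS through the CF grid_mapping variable, so a missing grid_mapping is exactly the kind of nonconformance that surfaces here.

      from osgeo import gdal

      gdal.UseExceptions()

      # A CF-style netCDF variable is addressed as a GDAL subdataset using
      # the NETCDF:file:variable syntax.
      src = 'NETCDF:"sea_ice_concentration.nc":ice_conc'
      gdal.Translate("ice_conc.tif", src, format="GTiff")

      # If the CF grid_mapping information was missing or incomplete, the
      # GeoTIFF will lack georeferencing; inspecting it makes that visible.
      print(gdal.Info("ice_conc.tif"))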

  16. Rosetta: Ensuring the Preservation and Usability of ASCII-based Data into the Future

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Arms, S. C.

    2015-12-01

    Field data obtained from dataloggers often take the form of comma separated value (CSV) ASCII text files. While ASCII based data formats have positive aspects, such as the ease of accessing the data from disk and the wide variety of tools available for data analysis, there are some drawbacks, especially when viewing the situation through the lens of data interoperability and stewardship. The Unidata data translation tool, Rosetta, is a web-based service that provides an easy, wizard-based interface for data collectors to transform their datalogger generated ASCII output into Climate and Forecast (CF) compliant netCDF files following the CF-1.6 discrete sampling geometries. These files are complete with metadata describing what data are contained in the file, the instruments used to collect the data, and other critical information that otherwise may be lost in one of many README files. The choice of the machine readable netCDF data format and data model, coupled with the CF conventions, ensures long-term preservation and interoperability, and that future users will have enough information to responsibly use the data. However, with the understanding that the observational community appreciates the ease of use of ASCII files, methods for transforming the netCDF back into a CSV or spreadsheet format are also built-in. One benefit of translating ASCII data into a machine readable format that follows open community-driven standards is that they are instantly able to take advantage of data services provided by the many open-source data server tools, such as the THREDDS Data Server (TDS). While Rosetta is currently a stand-alone service, this talk will also highlight efforts to couple Rosetta with the TDS, thus allowing self-publishing of thoroughly documented datasets by the data producers themselves.
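
    A rough sketch of the ASCII-to-CF translation (and the round trip back) follows, using pandas and xarray rather than Rosetta itself; it assumes a CSV with "timestamp" and "air_temperature" columns, both invented for the example.

      import pandas as pd
      import xarray as xr

      # Hypothetical datalogger output with a timestamp column.
      df = pd.read_csv("datalogger.csv", parse_dates=["timestamp"],
                       index_col="timestamp")

      ds = xr.Dataset.from_dataframe(df)
      ds.attrs["Conventions"] = "CF-1.6"
      ds.attrs["featureType"] = "timeSeries"
      ds["air_temperature"].attrs.update(
          {"standard_name": "air_temperature", "units": "degC"})
      ds.to_netcdf("datalogger.nc")

      # The spreadsheet-friendly round trip is one line.
      xr.open_dataset("datalogger.nc").to_dataframe().to_csv("roundtrip.csv")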

  17. Developing a Hadoop-based Middleware for Handling Multi-dimensional NetCDF

    NASA Astrophysics Data System (ADS)

    Li, Z.; Yang, C. P.; Schnase, J. L.; Duffy, D.; Lee, T. J.

    2014-12-01

    Climate observations and model simulations are generating vast amounts of climate data, and these data are ever-increasing and being accumulated at a rapid pace. Effectively managing and analyzing these data are essential for climate change studies. Hadoop, a distributed storage and processing framework for large data sets, has attracted increasing attention for dealing with the Big Data challenge. The maturity of Infrastructure as a Service (IaaS) cloud computing further accelerates the adoption of Hadoop for solving Big Data problems. However, Hadoop is designed to process unstructured data such as texts, documents and web pages, and cannot effectively handle scientific data formats such as array-based NetCDF files and other binary formats. In this paper, we propose to build a Hadoop-based middleware for transparently handling big NetCDF data by 1) designing a distributed climate data storage mechanism based on a POSIX-enabled parallel file system to enable parallel big data processing with MapReduce, as well as to support data access by other systems; 2) modifying the Hadoop framework to transparently process NetCDF data in parallel without sequencing or converting the data into other file formats, or loading them into HDFS; and 3) seamlessly integrating Hadoop, cloud computing and climate data in a highly scalable and fault-tolerant framework.

  18. Moving from HDF4 to HDF5/netCDF-4

    NASA Technical Reports Server (NTRS)

    Pourmal, Elena; Yang, Kent; Lee, Joe

    2017-01-01

    In this presentation, we will go over the major differences between the two file formats and libraries, and will talk about the HDF5 features that users should consider when designing new products in HDF5/netCDF-4. We will also discuss the h4h5tools toolkit that can facilitate conversion of data in existing HDF4 files to HDF5 and netCDF-4, and we will engage the participants in a discussion of how The HDF Group can help with the transition to and adoption of HDF5 and netCDF-4.

  19. Hydratools, a MATLAB® based data processing package for Sontek Hydra data

    USGS Publications Warehouse

    Martini, M.; Lightsom, F.L.; Sherwood, C.R.; Xu, Jie; Lacy, J.R.; Ramsey, A.; Horwitz, R.

    2005-01-01

    The U.S. Geological Survey (USGS) has developed a set of MATLAB tools to process and convert data collected by Sontek Hydra instruments to netCDF, which is a format used by the USGS to process and archive oceanographic time-series data. The USGS makes high-resolution current measurements within 1.5 meters of the bottom. These data are used in combination with other instrument data from sediment transport studies to develop sediment transport models. Instrument manufacturers provide software which outputs unique binary data formats. Multiple data formats are cumbersome. The USGS solution is to translate data streams into a common data format: netCDF. The Hydratools toolbox is written to create netCDF format files following EPIC conventions, complete with embedded metadata. Data are accepted from both the ADV and the PCADP. The toolbox will detect and remove bad data, substitute other sources of heading and tilt measurements if necessary, apply ambiguity corrections, calculate statistics, return information about data quality, and organize metadata. Standardized processing and archiving makes these data more easily and routinely accessible locally and over the Internet. In addition, documentation of the techniques used in the toolbox provides a baseline reference for others utilizing the data.

  20. Situational Lightning Climatologies for Central Florida: Phase III

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III

    2008-01-01

    This report describes work done by the Applied Meteorology Unit (AMU) to add composite soundings to the Advanced Weather Interactive Processing System (AWIPS). This allows National Weather Service (NWS) forecasters to compare the current atmospheric state with climatology. In a previous phase, the AMU created composite soundings for four rawinsonde observation stations in Florida, for each of eight flow regimes. The composite soundings were delivered to the NWS Melbourne (MLB) office for display using the NSHARP software program. NWS MLB requested that the AMU make the composite soundings available for display in AWIPS. The AMU first created a procedure to customize AWIPS so composite soundings could be displayed. A unique four-character identifier was created for each of the 32 composite soundings. The AMU wrote a Tool Command Language/Tool Kit (Tcl/Tk) software program to convert the composite soundings from NSHARP to Network Common Data Form (NetCDF) format. The NetCDF files were then displayable by AWIPS.

  1. NetCDF-U - Uncertainty conventions for netCDF datasets

    NASA Astrophysics Data System (ADS)

    Bigagli, Lorenzo; Nativi, Stefano; Domenico, Ben

    2013-04-01

    To facilitate the automated processing of uncertain data (e.g. uncertainty propagation in modeling applications), we have proposed a set of conventions for expressing uncertainty information within the netCDF data model and format: the NetCDF Uncertainty Conventions (NetCDF-U). From a theoretical perspective, it can be said that no dataset is a perfect representation of the reality it purports to represent. Inevitably, errors arise from the observation process, including the sensor system and subsequent processing, differences in scales of phenomena and the spatial support of the observation mechanism, and lack of knowledge about the detailed conversion between the measured quantity and the target variable. This means that, in principle, all data should be treated as uncertain. The most natural representation of an uncertain quantity is in terms of random variables, with a probabilistic approach. However, it must be acknowledged that almost all existing data resources are not treated in this way. Most datasets come simply as a series of values, often without any uncertainty information. If uncertainty information is present, then it is typically within the metadata, as a data quality element. This is typically a global (dataset-wide) representation of uncertainty, often derived through some form of validation process. Typically, it is a statistical measure of spread, for example the standard deviation of the residuals. The introduction of a mechanism by which such descriptions of uncertainty can be integrated into existing geospatial applications is considered a practical step towards a more accurate modeling of our uncertain understanding of any natural process. Given the generality and flexibility of the netCDF data model, conventions on naming, syntax, and semantics have been adopted by several communities of practice, as a means of improving data interoperability. Some of the existing conventions include provisions on uncertain elements and concepts, but, to our knowledge, no general convention on the encoding of uncertainty has been proposed to date. In particular, the netCDF Climate and Forecast Conventions (NetCDF-CF), a de-facto standard for a large amount of data in Fluid Earth Sciences, mention the issue and provide limited support for uncertainty representation. NetCDF-U is designed to be fully compatible with NetCDF-CF, where possible adopting the same mechanisms (e.g. using the same attribute names with compatible semantics). The rationale for this is that a probabilistic description of scientific quantities is a crosscutting aspect, which may be modularized (note that a netCDF dataset may be compliant with more than one convention). The scope of NetCDF-U is to extend and qualify the netCDF classic data model (also known as netCDF3), to capture the uncertainty related to geospatial information encoded in that format. In the future, a netCDF4 approach for uncertainty encoding will be investigated. The NetCDF-U Conventions have the following rationale: • Compatibility with netCDF-CF Conventions 1.5. • Human-readability of conforming dataset structures. • Minimal difference between certain/agnostic and uncertain representations of data (e.g. with respect to dataset structure). NetCDF-U is based on a generic mechanism for annotating netCDF data variables with probability theory semantics. The Uncertainty Markup Language (UncertML) 2.0 is used as a controlled conceptual model and vocabulary for NetCDF-U annotations. The proposed mechanism anticipates generalized support for semantic annotations in netCDF. NetCDF-U defines syntactical conventions for encoding samples, summary statistics, and distributions, along with mechanisms for expressing dependency relationships among variables. The conventions were accepted as an Open Geospatial Consortium (OGC) Discussion Paper (OGC 11-163); related discussions are conducted on a public forum hosted by the OGC. NetCDF-U may have implications for future work directed at communicating geospatial data provenance and uncertainty in contexts other than netCDF. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 248488.
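
    The flavor of such an annotation is sketched below; the attribute names are illustrative of the approach (an UncertML distribution URI plus a link between mean and spread variables) and are not quoted from the OGC discussion paper.

      import numpy as np
      from netCDF4 import Dataset

      with Dataset("temperature_u.nc", "w") as nc:
          nc.createDimension("x", 100)

          mean = nc.createVariable("temp_mean", "f4", ("x",))
          sd = nc.createVariable("temp_sd", "f4", ("x",))

          # Describe the pair as one uncertain quantity: a normal
          # distribution identified by an UncertML vocabulary URI.
          mean.ref = "http://www.uncertml.org/distributions/normal"
          mean.ancillary_variables = "temp_sd"

          mean[:] = 20.0 + np.random.default_rng(1).normal(0.0, 1.0, 100)
          sd[:] = 0.5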

  2. A Study of NetCDF as an Approach for High Performance Medical Image Storage

    NASA Astrophysics Data System (ADS)

    Magnus, Marcone; Coelho Prado, Thiago; von Wangenhein, Aldo; de Macedo, Douglas D. J.; Dantas, M. A. R.

    2012-02-01

    The spread of telemedicine systems increases every day. Systems and PACS based on DICOM images have become common. This rise reflects the need to develop new storage systems that are more efficient and have lower computational costs. With this in mind, this article discusses a study of the NetCDF data format as the basic platform for storage of DICOM images. The case study compares an ordinary database, HDF5, and NetCDF for storing the medical images. Empirical results, using a real set of images, indicate that the time to retrieve large-scale images from NetCDF has a higher latency compared to the other two methods. In addition, the latency is proportional to the file size, which represents a drawback for a telemedicine system that is characterized by a large number of large image files.

  3. CMGTooL user's manual

    USGS Publications Warehouse

    Xu, Jingping; Lightsom, Fran; Noble, Marlene A.; Denham, Charles

    2002-01-01

    During the past several years, the sediment transport group in the Coastal and Marine Geology Program (CMGP) of the U.S. Geological Survey has made major revisions to its methodology of processing, analyzing, and maintaining the variety of oceanographic time-series data. First, CMGP completed the transition of its oceanographic time-series database to the self-documenting NetCDF (Rew et al., 1997) data format. Second, CMGP's oceanographic data variety and complexity have been greatly expanded from traditional 2-dimensional, single-point time-series measurements (e.g., electromagnetic current meters, transmissometers) to more advanced 3-dimensional and profiling time-series measurements, due to many new acquisitions of modern instruments such as the Acoustic Doppler Current Profiler (RDI, 1996), Acoustic Doppler Velocimeter, Pulse-Coherent Acoustic Doppler Profiler (SonTek, 2001), and Acoustic Backscatter Sensor (Aquatec). In order to accommodate the NetCDF format of data from the new instruments, a software package for processing, analyzing, and visualizing time-series oceanographic data was developed. It is named CMGTooL. The CMGTooL package contains two basic components: a user-friendly GUI for NetCDF file analysis, processing and manipulation; and a data analyzing program library. Most of the routines in the library are stand-alone programs suitable for batch processing. CMGTooL is written in the MATLAB computing language (The Mathworks, 1997), therefore users must have MATLAB installed on their computer in order to use this software package. In addition, MATLAB's Signal Processing Toolbox is also required by some CMGTooL routines. Like most MATLAB programs, all CMGTooL codes are compatible with different computing platforms including PC, MAC, and UNIX machines (note: CMGTooL has been tested on platforms that run MATLAB 5.2 (Release 10) or lower versions; some of the commands related to MAC may not be compatible with later releases of MATLAB). The GUI and some of the library routines call low-level NetCDF file I/O, variable and attribute functions. These NetCDF-exclusive functions are supported by a MATLAB toolbox named NetCDF, created by Dr. Charles Denham. This toolbox has to be installed in order to use the CMGTooL GUI. The CMGTooL GUI calls several routines that were initially developed by others. The authors would like to acknowledge the following scientists for their ideas and codes: Dr. Rich Signell (USGS), Dr. Chris Sherwood (USGS), and Dr. Bob Beardsley (WHOI). Many terms that carry special meanings in either MATLAB or the NetCDF Toolbox are used in this manual. Users are encouraged to read the MATLAB and NetCDF documentation for reference.

  4. MK3TOOLS & NetCDF - storing VLBI data in a machine independent array oriented data format

    NASA Astrophysics Data System (ADS)

    Hobiger, T.; Koyama, Y.; Kondo, T.

    2007-07-01

    In the beginning of 2002, the International VLBI Service (IVS) agreed to introduce a Platform-independent VLBI exchange format (PIVEX), which permits the exchange of observational data and stimulates research across different analysis groups. Unfortunately, PIVEX has never been implemented, and many analysis software packages still depend on prior processing (e.g. ambiguity resolution and computation of ionosphere corrections) done by CALC/SOLVE. Thus MK3TOOLS, which handles MK3 databases without CALC/SOLVE being installed, has been developed. It uses the NetCDF format to store the data, and since interfaces exist for a variety of programming languages (FORTRAN, C/C++, JAVA, Perl, Python), it can be easily incorporated into existing and upcoming analysis software packages.

  5. Sharing electronic structure and crystallographic data with ETSF_IO

    NASA Astrophysics Data System (ADS)

    Caliste, D.; Pouillon, Y.; Verstraete, M. J.; Olevano, V.; Gonze, X.

    2008-11-01

    We present a library of routines whose main goal is to read and write exchangeable files (NetCDF file format) storing electronic structure and crystallographic information. It is based on the specification agreed inside the European Theoretical Spectroscopy Facility (ETSF). Accordingly, this library is nicknamed ETSF_IO. The purpose of this article is to give both an overview of the ETSF_IO library and a closer look at its usage. ETSF_IO is designed to be robust and easy to use, close to Fortran read and write routines. To facilitate its adoption, a complete documentation of the input and output arguments of the routines is available in the package, as well as six tutorials explaining in detail various possible uses of the library routines.
    Program summary:
    Catalogue identifier: AEBG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBG_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Lesser General Public License
    No. of lines in distributed program, including test data, etc.: 63 156
    No. of bytes in distributed program, including test data, etc.: 363 390
    Distribution format: tar.gz
    Programming language: Fortran 95
    Computer: All systems with a Fortran 95 compiler
    Operating system: All systems with a Fortran 95 compiler
    Classification: 7.3, 8
    External routines: NetCDF, http://www.unidata.ucar.edu/software/netcdf
    Nature of problem: Store and exchange electronic structure and crystallographic data independently of the computational platform, language and generating software.
    Solution method: Implement a library based on both the NetCDF file format and an open specification (http://etsf.eu/index.php?page=standardization).

  6. Geospatial Analysis Tool Kit for Regional Climate Datasets (GATOR) : An Open-source Tool to Compute Climate Statistic GIS Layers from Argonne Climate Modeling Results

    DTIC Science & Technology

    2017-08-01

    This large repository of climate model results for North America (Wang and Kotamarthi 2013, 2014, 2015) is stored in Network Common Data Form (NetCDF)… [cited reference: Network Common Data Form (NetCDF). UCAR/Unidata Program Center, Boulder, CO. Available at: http://www.unidata.ucar.edu/software/netcdf]… emissions diverge from each other regarding fossil fuel use, technology, and other socioeconomic factors. As a result, the estimated emissions for each of…

  7. Development of Web GIS for complex processing and visualization of climate geospatial datasets as an integral part of dedicated Virtual Research Environment

    NASA Astrophysics Data System (ADS)

    Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander

    2017-04-01

    For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. Currently, it is generally accepted that the development of client applications as integrated elements of such infrastructure should be based on modern web and GIS technologies. The paper describes the Web GIS for complex processing and visualization of geospatial (mainly in NetCDF and PostGIS formats) datasets as an integral part of the dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change and analysis of its implications, providing full information and computing support for the study of economic, political and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts: 1. A server-side part comprising PHP applications of the SDI geoportal, realizing the functionality of interaction with the computational core backend and WMS/WFS/WPS cartographical services, as well as implementing an open API for browser-based client software; this part provides a limited set of procedures accessible via a standard HTTP interface. 2. A front-end part representing the Web GIS client, developed as a "single page application" based on the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs), and GeoExt (http://geoext.org/); it implements the application business logic and provides an intuitive user interface similar to that of popular desktop GIS applications such as uDig and QuantumGIS. The Boundless/OpenGeo architecture was used as a basis for the Web GIS client development. In line with general INSPIRE requirements for data visualization, the Web GIS provides standard functionality such as data overview, image navigation, scrolling, scaling and graphical overlay, and display of map legends and corresponding metadata. The specialized Web GIS client contains three basic tiers: a tier of NetCDF metadata in JSON format; a middleware tier of JavaScript objects implementing methods to work with NetCDF metadata, the XML file of the selected calculation configuration (XML task), and WMS/WFS/WPS cartographical services; and a graphical user interface tier of JavaScript objects realizing the general application business logic. The Web GIS provides launching of computational processing services to support tasks in the area of environmental monitoring, as well as presentation of calculation results in the form of WMS/WFS cartographical layers in raster (PNG, JPG, GeoTIFF), vector (KML, GML, Shape), and binary (NetCDF) formats. It has shown its effectiveness in solving real climate change research problems and disseminating investigation results in cartographical formats. The work is supported by the Russian Science Foundation grant No 16-19-10257.
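
    As an illustration of the cartographical services such a client consumes, the sketch below requests a map image from a WMS endpoint with OWSLib; the server URL, layer name, and bounding box are hypothetical.

      from owslib.wms import WebMapService

      # Any OGC-compliant WMS answers the same calls; the URL is invented.
      wms = WebMapService("http://climate.example.org/geoserver/wms",
                          version="1.1.1")
      print(list(wms.contents))              # layers advertised by the server

      img = wms.getmap(layers=["air_temperature_anomaly"],
                       srs="EPSG:4326",
                       bbox=(60.0, 40.0, 120.0, 75.0),  # lon/lat box
                       size=(800, 600),
                       format="image/png",
                       transparent=True)
      with open("anomaly.png", "wb") as f:
          f.write(img.read())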

  8. Investigating the feasibility of Visualising Complex Space Weather Data in a CAVE

    NASA Astrophysics Data System (ADS)

    Loughlin, S.; Habash Krause, L.

    2013-12-01

    The purpose of this study was to investigate the feasibility of visualising complex space weather data in a Cave Automatic Virtual Environment (CAVE). Space weather increasingly causes disruptions on Earth, such as power outages and interruptions to satellite communication. We wanted to display space weather data within the CAVE because the data from instruments, models and simulations are typically too complex to understand on their own, especially when they have as many as 7 dimensions. To accomplish this, I created a VTK-to-NetCDF converter. NetCDF is a scientific data format which stores array-oriented scientific data. The format is maintained by the University Corporation for Atmospheric Research and is used extensively by the atmospheric and space communities.

  9. CfRadial - CF NetCDF for Radar and Lidar Data in Polar Coordinates.

    NASA Astrophysics Data System (ADS)

    Dixon, M. J.; Lee, W. C.; Michelson, D.; Curtis, M.

    2016-12-01

    Since 1990, NCAR has supported over 20 different data formats for radar and lidar data in polar coordinates. Researchers, students and operational users spend unnecessary time handling a multitude of unique formats. CfRadial grew out of the need to simplify the use of these data and thereby to improve efficiency in research and operations. CfRadial adopts the well-known NetCDF framework, along with the Climate and Forecasting (CF) conventions such that data and metadata are accurately represented. Mobile platforms are also supported. The first major release, CfRadial version 1.1, occurred in February 2011, followed by minor updates. CfRadial has been adopted by NCAR as well as other agencies in the US and the UK. CfRadial development was boosted in 2015 through a two-year NSF EarthCube grant to improve CF in general. Version 1.4 was agreed upon in May 2016, adding explicit support for quality control fields and spectra. In Europe and Australia, EUMETNET OPERA's HDF5-based ODIM_H5 standard has been rapidly embraced as the modern standard for exchanging weather radar data for operations. ODIM_H5 exploits data groups, hierarchies, and built-in compression, characteristics that have been added to NetCDF4. A meeting of the WMO Task Team on Weather Radar Data Exchange (TT-WRDE) was held at NCAR in Boulder in July 2016, with a goal of identifying a single global standard for radar and lidar data in polar coordinates. CfRadial and ODIM_H5 were considered alongside the older and more rigid table-driven WMO BUFR and GRIB2 formats. TT-WRDE recommended that CfRadial 1.4 be merged with the sweep-oriented structure of ODIM_H5, making use of NetCDF groups, to produce a single format that will encompass the best ideas of both formats. That has led to the emergence of the CfRadial 2.0 standard. This format should meet the objectives of both the NSF EarthCube CF 2.0 initiative and the WMO TT-WRDE. It has the added benefit of improving data exchange between operational and research users, making operational data more readily available to researchers, and research algorithms more accessible to operational agencies.

  10. Data Publishing and Sharing Via the THREDDS Data Repository

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Caron, J.; Davis, E.; Baltzer, T.

    2007-12-01

    The terms "Team Science" and "Networked Science" have been coined to describe a virtual organization of researchers tied via some intellectual challenge, but often located in different organizations and locations. A critical component to these endeavors is publishing and sharing of content, including scientific data. Imagine pointing your web browser to a web page that interactively lets you upload data and metadata to a repository residing on a remote server, which can then be accessed by others in a secure fashion via the web. While any content can be added to this repository, it is designed particularly for storing and sharing scientific data and metadata. Server support includes uploading of data files that can subsequently be subsetted, aggregated, and served in NetCDF or other scientific data formats. Metadata can be associated with the data and interactively edited. The THREDDS Data Repository (TDR) is a server that provides client-initiated, on-demand, location-transparent storage for data of any type that can then be served by the THREDDS Data Server (TDS). The TDR provides functionality to: * securely store and "own" data files and associated metadata * upload files via HTTP and GridFTP * upload a collection of data as a single file * modify and restructure repository contents * incorporate metadata provided by the user * generate additional metadata programmatically * edit individual metadata elements. The TDR can exist separately from a TDS, serving content via HTTP. Also, it can work in conjunction with the TDS, which includes functionality to provide: * access to data in a variety of formats via -- OPeNDAP -- OGC Web Coverage Service (for gridded datasets) -- bulk HTTP file transfer * a NetCDF view of datasets in NetCDF, OPeNDAP, HDF-5, GRIB, and NEXRAD formats * serving of very large volume datasets, such as NEXRAD radar * aggregation into virtual datasets * subsetting via OPeNDAP and NetCDF subsetting services. This talk will discuss TDR/TDS capabilities as well as how users can install this software to create their own repositories.
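
    For a sense of how TDS-served data look to a client, the sketch below opens a dataset over OPeNDAP with netCDF4-python; the URL is hypothetical, though the /thredds/dodsC/ path segment is the TDS convention.

      from netCDF4 import Dataset

      url = ("https://tds.example.edu/thredds/dodsC/"
             "repository/ocean_model_output.nc")

      # OPeNDAP makes the remote dataset look like a local file; only the
      # requested slab crosses the network, not the whole file.
      with Dataset(url) as nc:
          sst = nc.variables["sst"]
          slab = sst[0, 100:200, 100:200]
          print(slab.shape, slab.mean())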

  11. Informatic infrastructure for Climatological and Oceanographic data based on THREDDS technology in a Grid environment

    NASA Astrophysics Data System (ADS)

    Tronconi, C.; Forneris, V.; Santoleri, R.

    2009-04-01

    CNR-ISAC-GOS is responsible for the Mediterranean Sea satellite operational system in the framework of the MOON Partnership. This Observing System acquires satellite data and produces Near Real Time, Delayed Time and Re-analysis Ocean Colour and Sea Surface Temperature products covering the Mediterranean and Black Seas and regional basins. In the framework of several projects (MERSEA, PRIMI, Adricosm Star, SeaDataNet, MyOcean, ECOOP), GOS is producing Climatological/Satellite datasets based on optimal interpolation and specific regional algorithms for chlorophyll, updated in Near Real Time and in Delayed mode. GOS has built: • an informatic infrastructure for data repository and delivery based on THREDDS technology; the datasets are generated in NETCDF format, compliant with both the CF convention and international satellite-oceanographic specifications, as prescribed by GHRSST (for SST), and all data produced are made available to users through a THREDDS server catalog; • a LAS, installed in order to exploit the potential of NETCDF data and the OPENDAP URL, providing flexible access to geo-referenced scientific data; • a Grid environment based on Globus Technologies (GT4) connecting more than one institute; in particular, exploiting CNR and ESA clusters makes it possible to reprocess 12 years of chlorophyll data in less than one month (estimated processing time on a single-core PC: 9 months). In the poster we will give an overview of: • the features of the THREDDS catalogs, pointing out the powerful characteristics of this new middleware that has replaced the "old" OPENDAP server; • the importance of adopting a common format (such as NETCDF) for data exchange; • the tools (e.g. LAS) connected with THREDDS and the NETCDF format; • the Grid infrastructure at ISAC. We will also present specific basin-scale High Resolution products and Ultra High Resolution regional/coastal products available on these catalogs.

  12. IVS Working Group 4: VLBI Data Structures

    NASA Astrophysics Data System (ADS)

    Gipson, J.

    2012-12-01

    I present an overview of the "openDB format" for storing, archiving, and processing VLBI data. In this scheme, most VLBI data is stored in NetCDF files. NetCDF has the advantage that there are interfaces to most common computer languages, including Fortran, Fortran-90, C, C++, Perl, etc., and the most common operating systems, including Linux, Windows, and Mac. The data files for a particular session are organized by special ASCII "wrapper" files which contain pointers to the data files. This allows great flexibility in the processing and analysis of VLBI data. For example, it allows you to easily change subsets of the data used in the analysis, such as troposphere modeling, ionospheric calibration, editing, and ambiguity resolution. It also allows for extending the types of data used, e.g., source maps. I present a roadmap for transitioning to this new format. The new format can already be used by VieVS and by the global mode of solve. Plans are in the works for other software packages to be able to use the new format.
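    The wrapper indirection is easy to picture in code. A hedged Python sketch with entirely hypothetical file and variable names (real wrapper files carry more structure than a bare file list):

        from netCDF4 import Dataset

        # Hypothetical wrapper: an ASCII file listing one NetCDF file per line
        with open("session.wrp") as wrapper:
            parts = [line.strip() for line in wrapper if line.strip()]

        for path in parts:
            with Dataset(path) as ds:   # e.g. "GroupDelay.nc", "Cal-Cable.nc"
                print(path, list(ds.variables))

        # Swapping, say, an ionosphere calibration means editing one line of
        # the wrapper; the underlying data files are untouched.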

  13. Unleashing Geophysics Data with Modern Formats and Services

    NASA Astrophysics Data System (ADS)

    Ip, Alex; Brodie, Ross C.; Druken, Kelsey; Bastrakova, Irina; Evans, Ben; Kemp, Carina; Richardson, Murray; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    Geoscience Australia (GA) is the national steward of large volumes of geophysical data extending over the entire Australasian region and spanning many decades. The volume and variety of data which must be managed, coupled with the increasing need to support machine-to-machine data access, mean that the old "click-and-ship" model delivering data as downloadable files for local analysis is rapidly becoming unviable - a "big data" problem not unique to geophysics. The Australian Government, through the Research Data Services (RDS) Project, recently funded the Australian National Computational Infrastructure (NCI) to organize a wide range of Earth Systems data from diverse collections including geoscience, geophysics, environment, climate, weather, and water resources onto a single High Performance Data (HPD) Node. This platform, which now contains over 10 petabytes of data, is called the National Environmental Research Data Interoperability Platform (NERDIP), and is designed to facilitate broad user access, maximise reuse, and enable integration. GA has contributed several hundred terabytes of geophysical data to the NERDIP. Historically, geophysical datasets have been stored in a range of formats, with metadata of varying quality and accessibility, and without standardised vocabularies. This has made it extremely difficult to aggregate original data from multiple surveys (particularly un-gridded geophysics point/line data) into standard formats suited to High Performance Computing (HPC) environments. To address this, it was decided to use the NERDIP-preferred Hierarchical Data Format (HDF) 5, which is a proven, standard, open, self-describing and high-performance format supported by extensive software tools, libraries and data services. The Network Common Data Form (NetCDF) 4 API facilitates the use of data in HDF5, whilst the NetCDF Climate & Forecasting conventions (NetCDF-CF) further constrain NetCDF4/HDF5 data so as to provide greater inherent interoperability. The first geophysical data collection selected for transformation by GA was Airborne ElectroMagnetics (AEM) data which was held in proprietary-format files, with associated ISO 19115 metadata held in a separate relational database. Existing NetCDF-CF metadata profiles were enhanced to cover AEM and other geophysical data types, and work is underway to formalise the new geophysics vocabulary as a proposed extension to the Climate & Forecasting conventions. The richness and flexibility of HDF5's internal indexing mechanisms has allowed lossless restructuring of the AEM data for efficient storage, subsetting and access via either the NetCDF4/HDF5 APIs or Open-source Project for a Network Data Access Protocol (OPeNDAP) data services. This approach not only supports large-scale HPC processing, but also interactive access to a wide range of geophysical data in user-friendly environments such as iPython notebooks and more sophisticated cloud-enabled portals such as the Virtual Geophysics Laboratory (VGL). As multidimensional AEM datasets are relatively complex compared to other geophysical data types, the general approach employed in this project for modernizing AEM data is likely to be applicable to other geophysics data types. When combined with the use of standards-based data services and APIs, a coordinated, systematic modernisation will result in vastly improved accessibility to, and usability of, geophysical data in a wide range of computational environments both within and beyond the geophysics community.
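    The practical payoff is that a single self-describing API reaches the restructured data. A hedged sketch of subsetting line-oriented AEM data through the netCDF4-python API; the file, variable names, and line numbering are hypothetical:

        from netCDF4 import Dataset

        # Hypothetical NetCDF4/HDF5 file of restructured AEM point/line data
        with Dataset("aem_survey.nc") as ds:
            line = ds.variables["line_index"][:]         # assumed per-sample line id
            conductivity = ds.variables["conductivity"]  # samples x layers
            subset = conductivity[line == 2001, :]       # one flight line only
            print(subset.shape)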

  14. Satellite Level 3 & 4 Data Subsetting at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Huwe, Paul; Su, Jian; Loeser, Carlee; Ostrenga, Dana; Rui, Hualan; Vollmer, Bruce

    2017-01-01

    Earth Science data are available in many file formats (NetCDF, HDF, GRB, etc.) and in a wide range of sizes, from kilobytes to gigabytes. These properties have become a challenge to users if they are not familiar with these formats or only want a small region of interest (ROI) from a specific dataset. At NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), we have developed and implemented a multipurpose subset service to ease user access to Earth Science data. Our Level 3 & 4 Regridder is capable of subsetting across multiple parameters (spatially, temporally, by level, and by variable) as well as having additional beneficial features (temporal means, regridding to target grids, and file conversion to other data formats). In this presentation, we will demonstrate how users can use this service to better access only the data they need in the form they require.
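    The same kinds of Level 3/4 operations (spatial/temporal subsetting and temporal means) can be sketched client-side with xarray. This is an illustration only, not the GES DISC service itself; the file, variable, and coordinate names are hypothetical:

        import xarray as xr

        ds = xr.open_dataset("level3_product.nc")        # hypothetical L3 granule
        roi = ds["precipitation"].sel(
            lat=slice(-10, 10), lon=slice(30, 60),       # spatial subset
            time=slice("2017-01-01", "2017-01-31"),      # temporal subset
        )
        roi.mean(dim="time").to_netcdf("roi_mean.nc")    # temporal mean, saved out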

  15. Satellite Level 3 & 4 Data Subsetting at NASA GES DISC

    NASA Astrophysics Data System (ADS)

    Huwe, P.; Su, J.; Loeser, C. F.; Ostrenga, D.; Rui, H.; Vollmer, B.

    2017-12-01

    Earth Science data are available in many file formats (NetCDF, HDF, GRB, etc.) and in a wide range of sizes, from kilobytes to gigabytes. These properties have become a challenge to users if they are not familiar with these formats or only want a small region of interest (ROI) from a specific dataset. At NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), we have developed and implemented a multipurpose subset service to ease user access to Earth Science data. Our Level 3 & 4 Regridder is capable of subsetting across multiple parameters (spatially, temporally, by level, and by variable) as well as having additional beneficial features (temporal means, regridding to target grids, and file conversion to other data formats). In this presentation, we will demonstrate how users can use this service to better access only the data they need in the form they require.

  16. Which products are available for subsetting?

    Atmospheric Science Data Center

    2014-12-08

    ... users to create smaller files (subsets) of the original data by selecting desired parameters, parameter criterion, or latitude and ... fluxes, where the net flux is constrained to the global heat storage in netCDF format. Single Scanner Footprint TOA/Surface Fluxes ...

  17. A data model of the Climate and Forecast metadata conventions (CF-1.6) with a software implementation (cf-python v2.1)

    NASA Astrophysics Data System (ADS)

    Hassell, David; Gregory, Jonathan; Blower, Jon; Lawrence, Bryan N.; Taylor, Karl E.

    2017-12-01

    The CF (Climate and Forecast) metadata conventions are designed to promote the creation, processing, and sharing of climate and forecasting data using Network Common Data Form (netCDF) files and libraries. The CF conventions provide a description of the physical meaning of data and of their spatial and temporal properties, but they depend on the netCDF file encoding which can currently only be fully understood and interpreted by someone familiar with the rules and relationships specified in the conventions documentation. To aid in development of CF-compliant software and to capture with a minimal set of elements all of the information contained in the CF conventions, we propose a formal data model for CF which is independent of netCDF and describes all possible CF-compliant data. Because such data will often be analysed and visualised using software based on other data models, we compare our CF data model with the ISO 19123 coverage model, the Open Geospatial Consortium CF netCDF standard, and the Unidata Common Data Model. To demonstrate that this CF data model can in fact be implemented, we present cf-python, a Python software library that conforms to the model and can manipulate any CF-compliant dataset.
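    cf-python exposes this data model directly. A minimal sketch, assuming cf-python is installed; the file name is hypothetical, and the subspace call follows the library's documented condition-based style:

        import cf

        fields = cf.read("temperature.nc")    # hypothetical CF-compliant file
        f = fields[0]
        print(f)                              # described via the data model
        # Subspace by physical coordinate conditions, not array indices
        g = f.subspace(latitude=cf.wi(-30.0, 30.0))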

  18. Playing the Metadata Game: Technologies and Strategies Used by Climate Diagnostics Center for Cataloging and Distributing Climate Data.

    NASA Astrophysics Data System (ADS)

    Schweitzer, R. H.

    2001-05-01

    The Climate Diagnostics Center maintains a collection of gridded climate data, primarily for use by local researchers. Because these data are available on fast digital storage and have been converted to netCDF using a standard metadata convention (called COARDS), we recognize that this data collection is also useful to the community at large. At CDC we try to use technology and metadata standards to reduce the costs associated with making these data available to the public. The World Wide Web has been an excellent technology platform for meeting that goal. Specifically, we have developed Web-based user interfaces that allow users to search, plot, and download subsets from the data collection. We have also been exploring use of the Pacific Marine Environmental Laboratory's Live Access Server (LAS) as an engine for this task. This would result in further savings by allowing us to concentrate on customizing the LAS where needed, rather than developing and maintaining our own system. One such customization currently under development is the use of Java Servlets and JavaServer Pages in conjunction with a metadata database to produce a hierarchical user interface to LAS. In addition to these Web-based user interfaces, all of our data are available via the Distributed Oceanographic Data System (DODS). This allows other sites using LAS, and individuals using DODS-enabled clients, to use our data as if they were local files. All of these technology systems are driven by metadata. When we began to create netCDF files, we collaborated with several other agencies to develop a netCDF convention (COARDS) for metadata. At CDC we have extended that convention to incorporate additional metadata elements to make the netCDF files as self-describing as possible. Part of the local metadata is a set of controlled names for the variable, the level in the atmosphere or ocean, the statistic, and the data set for each netCDF file. To allow searching and easy reorganization of these metadata, we loaded the metadata from the netCDF files into a MySQL database. The combination of the MySQL database and the controlled names makes it possible to automate the construction of user interfaces and standard-format metadata descriptions, like the Federal Geographic Data Committee (FGDC) and Directory Interchange Format (DIF) standards. These standard descriptions also include an association between our controlled names and standard keywords such as those developed by the Global Change Master Directory (GCMD). This talk will give an overview of each of these technologies and metadata standards as they apply to work at the Climate Diagnostics Center. The talk will also discuss the pros and cons of each approach and areas for future development.
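    The harvesting step described here is straightforward to sketch. The following hedged example scans netCDF files and loads selected attributes into a relational database, using Python's built-in sqlite3 as a stand-in for MySQL; the directory layout and attribute choices are illustrative only:

        import glob
        import sqlite3
        from netCDF4 import Dataset

        con = sqlite3.connect("metadata.db")
        con.execute("""CREATE TABLE IF NOT EXISTS var_meta
                       (path TEXT, variable TEXT, long_name TEXT, units TEXT)""")

        # Hypothetical directory of COARDS-style netCDF files
        for path in glob.glob("data/*.nc"):
            with Dataset(path) as ds:
                for name, var in ds.variables.items():
                    con.execute(
                        "INSERT INTO var_meta VALUES (?, ?, ?, ?)",
                        (path, name,
                         getattr(var, "long_name", None),
                         getattr(var, "units", None)))
        con.commit()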

  19. OpenClimateGIS - A Web Service Providing Climate Model Data in Commonly Used Geospatial Formats

    NASA Astrophysics Data System (ADS)

    Erickson, T. A.; Koziol, B. W.; Rood, R. B.

    2011-12-01

    The goal of the OpenClimateGIS project is to make climate model datasets readily available in commonly used, modern geospatial formats used by GIS software, browser-based mapping tools, and virtual globes. The climate modeling community typically stores climate data in multidimensional gridded formats capable of efficiently storing large volumes of data (such as netCDF or GRIB), while the geospatial community typically uses flexible vector and raster formats that are capable of storing small volumes of data (relative to the multidimensional gridded formats). OpenClimateGIS seeks to address this difference in data formats by clipping climate data to user-specified vector geometries (i.e., areas of interest) and translating the gridded data on the fly into multiple vector formats. The OpenClimateGIS system does not store climate data archives locally, but rather works in conjunction with external climate archives that expose climate data via the OPeNDAP protocol. OpenClimateGIS provides a RESTful API web service for accessing climate data resources via HTTP, allowing a wide range of applications to access the climate data. The OpenClimateGIS system has been developed using open source development practices, and the source code is publicly available. The project integrates libraries from several other open source projects (including Django, PostGIS, numpy, Shapely, and netcdf4-python). OpenClimateGIS development is supported by a grant from NOAA's Climate Program Office.
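    A RESTful API of this kind is exercised with plain HTTP. The endpoint, path scheme, and parameters below are entirely hypothetical (the real OpenClimateGIS URL grammar may differ); the sketch only shows the general pattern of requesting a clipped, translated subset:

        import requests

        # Hypothetical request: clip a variable to an area of interest and
        # return the translated result as GeoJSON
        resp = requests.get(
            "http://example.org/api/archive/temperature.geojson",
            params={
                "bbox": "-105,35,-100,40",          # area of interest
                "time": "2000-01-01/2000-12-31",    # temporal range
            },
        )
        resp.raise_for_status()
        features = resp.json()["features"]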

  20. Carolinas Coastal Change Processes Project data report for observations near Diamond Shoals, North Carolina, January-May 2009

    USGS Publications Warehouse

    Armstrong, Brandy N.; Warner, John C.; Voulgaris, George; List, Jeffrey H.; Thieler, E. Robert; Martini, Marinna A.; Montgomery, Ellyn T.

    2011-01-01

    This Open-File Report provides information collected for an oceanographic field study that occurred during January - May 2009 to investigate processes that control the sediment transport dynamics at Diamond Shoals, North Carolina. The objective of this report is to make the data available in digital form and to provide information to facilitate further analysis of the data. The report describes the background, experimental setup, equipment, and locations of the sensor deployments. The edited data are presented in time-series plots for rapid visualization of the data set, and in data files that are in the Network Common Data Form (NetCDF) format. Supporting observational data are also included.

  1. Improving the Accessibility and Use of NASA Earth Science Data

    NASA Technical Reports Server (NTRS)

    Tisdale, Matthew; Tisdale, Brian

    2015-01-01

    Many of the NASA Langley Atmospheric Science Data Center (ASDC) Distributed Active Archive Center (DAAC) multidimensional tropospheric and atmospheric chemistry data products are stored in HDF4, HDF5, or NetCDF format, which traditionally have been difficult to analyze and visualize with geospatial tools. With the rising demand from diverse end-user communities for geospatial tools that handle multidimensional products, several applications, such as ArcGIS, have refined their software. Many geospatial applications now have functionalities that enable the end user to:
    • store, serve, and perform analysis on each individual variable, its time dimension, and its vertical dimension;
    • use NetCDF, GRIB, and HDF raster data formats directly across applications;
    • publish output within REST image services or WMS for time- and space-enabled web application development.
    During this webinar, participants will learn how to leverage geospatial applications such as ArcGIS, OPeNDAP, and ncWMS in the production of Earth science information, and in increasing data accessibility and usability.

  2. Global Ocean Currents Database

    NASA Astrophysics Data System (ADS)

    Boyer, T.; Sun, L.

    2016-02-01

    NOAA's National Centers for Environmental Information has released an ocean currents database portal that aims 1) to integrate global ocean currents observations from a variety of instruments, with different resolutions, accuracies, and responses to spatial and temporal variability, into a uniform Network Common Data Form (NetCDF) format, and 2) to provide dedicated online data discovery and access to NCEI-hosted and distributed data sources for ocean currents data. The portal provides a tailored web application that allows users to search for ocean currents data by platform type and by spatial/temporal ranges of interest. The dedicated web application is available at http://www.nodc.noaa.gov/gocd/index.html. The NetCDF format supports widely used data access protocols and catalog services such as OPeNDAP (Open-source Project for a Network Data Access Protocol) and THREDDS (Thematic Real-time Environmental Distributed Data Services), so GOCD users can work with the data files in their favorite analysis and visualization client software without downloading them to a local machine. The potential users of the ocean currents database include, but are not limited to, 1) ocean modelers for model skill assessment, 2) scientists and researchers studying the impact of ocean circulation on climate variability, 3) the ocean shipping industry for safe navigation and for finding optimal routes for ship fuel efficiency, 4) ocean resources managers planning optimal sites for waste and sewage dumping and for renewable hydrokinetic energy, and 5) state and federal governments, for which historical (analyzed) ocean circulations are an aid for search and rescue.

  3. Report on IVS-WG4

    NASA Astrophysics Data System (ADS)

    Gipson, John

    2011-07-01

    I describe the proposed data structure for storing, archiving, and processing VLBI data. In this scheme, most VLBI data is stored in NetCDF files. NetCDF has the advantage that there are interfaces to most common computer languages, including Fortran, Fortran-90, C, C++, Perl, etc., and the most common operating systems, including Linux, Windows, and Mac. The data files for a particular session are organized by special ASCII "wrapper" files which contain pointers to the data files. This allows great flexibility in the processing and analysis of VLBI data, and also allows for extending the types of data used, e.g., source maps. I discuss the use of the new format in calc/solve and other VLBI analysis packages. I also discuss plans for transitioning to the new structure.

  4. Advances in a distributed approach for ocean model data interoperability

    USGS Publications Warehouse

    Signell, Richard P.; Snowden, Derrick P.

    2014-01-01

    An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
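    Client-side, the aggregation idea can be approximated with xarray's multi-file open. A hedged sketch (hypothetical file pattern and variable name; assumes the dask package is installed and the files share consistent coordinates):

        import xarray as xr

        # Aggregate per-day model output files into one virtual dataset
        ds = xr.open_mfdataset("ocean_his_*.nc", combine="by_coords")
        print(ds["salt"])   # hypothetical salinity variable spanning all files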

  5. 'Best' Practices for Aggregating Subset Results from Archived Datasets

    NASA Astrophysics Data System (ADS)

    Baskin, W. E.; Perez, J.

    2013-12-01

    In response to the exponential growth in science data analysis and visualization capabilities, data centers have been developing new delivery mechanisms to package and deliver large volumes of aggregated subsets of archived data. New standards are evolving to help data providers and application programmers deal with the growing needs of the science community; these standards evolve from the best practices gleaned from new products and capabilities. The NASA Atmospheric Sciences Data Center (ASDC) has developed and deployed production provider-specific search and subset web applications for the CALIPSO, CERES, TES, and MOPITT missions. This presentation explores several use cases that leverage aggregated subset results and examines the standards and formats ASDC developers applied to the delivered files, as well as the implementation strategies for subsetting and processing the aggregated products. The following topics will be addressed:
    - applications of NetCDF CF conventions to aggregated level 2 satellite subsets
    - data-provider-specific format requirements vs. generalized standards
    - organization of the file structure of aggregated NetCDF subset output
    - global attributes of individual subsetted files vs. aggregated results
    - specific applications and framework used for subsetting and delivering derivative data files

  6. Climate Prediction Center - NCEP Global Ocean Data Assimilation System:

    Science.gov Websites

    Ocean analyses from the NCEP Global Ocean Data Assimilation System are available monthly in NetCDF and other formats and are a valuable community asset for monitoring different aspects of ocean climate.

  7. Software reuse example and challenges at NSIDC

    NASA Astrophysics Data System (ADS)

    Billingsley, B. W.; Brodzik, M.; Collins, J. A.

    2009-12-01

    NSIDC has created a new data discovery and access system, Searchlight, to provide users with the data they want in the format they want. NSIDC Searchlight supports discovery and access to disparate data types with on-the-fly reprojection, regridding, and reformatting. Architected both to reuse open source systems and to be reused itself, Searchlight reuses GDAL and Proj4 for manipulating data and for format conversions, the netCDF Java library for creating netCDF output, MapServer and OpenLayers for defining spatial criteria, and the JTS Topology Suite (JTS) in conjunction with Hibernate Spatial for database interaction and rich OGC-compliant spatial objects. The application reuses popular Java and JavaScript libraries, including Struts 2, Spring, JPA (Hibernate), Sitemesh, JFreeChart, jQuery, and DOJO, together with a PostGIS-enabled PostgreSQL database. Future reuse of Searchlight components is supported at varying architecture levels, ranging from the database and model components to web services. We present the tools, libraries, and programs that Searchlight has reused. We describe the architecture of Searchlight and explain the strategies deployed for reusing existing software and how Searchlight is built for reuse. We will discuss NSIDC reuse of the Searchlight components to support rapid development of new data delivery systems.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosler, Peter

    Stride Search provides a flexible tool for detecting storms or other extreme climate events in high-resolution climate data sets saved on uniform latitude-longitude grids in standard NetCDF format. Users provide the software a quantitative description of a meteorological event they are interested in; the software searches a data set for locations in space and time that meet the user's description. In its first stage, Stride Search performs a spatial search of the data set at each timestep by dividing a search domain into circular sectors of constant geodesic radius. Data from a netCDF file is read into memory for each circular search sector. If the data meet or exceed a set of storm identification criteria (defined by the user), a storm is recorded to a linked list. Finally, the linked list is examined, duplicate detections of the same storm are removed, and the results are written to an output file. The first stage's output file is read by a second program that builds storm tracks. Additional identification criteria may be applied at this stage to further classify storms. Storm tracks are the software's ultimate output, and routines are provided for formatting that output for various external software libraries for plotting and tabulating data.
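    The first stage is easy to sketch in simplified form. The following is a Python analogue, not the product's own code; the data file, variable layout, sector stride, and threshold are all hypothetical:

        import numpy as np
        from netCDF4 import Dataset

        RADIUS_KM = 500.0     # geodesic radius of each circular search sector
        THRESHOLD = 8.5e-4    # hypothetical storm criterion on vorticity

        # Hypothetical file with a vorticity variable dimensioned (time, lat, lon)
        with Dataset("climate.nc") as ds:
            lat = ds.variables["lat"][:]
            lon = ds.variables["lon"][:]
            vort = ds.variables["vorticity"][0, :, :]   # one timestep

        la, lo = np.meshgrid(np.radians(lat), np.radians(lon), indexing="ij")

        detections = []
        # March sector centers across the domain; a real implementation widens
        # the longitude stride toward the poles to keep sectors overlapping.
        for clat in np.arange(-80.0, 81.0, 4.0):
            for clon in np.arange(0.0, 360.0, 4.0):
                # Great-circle distance (km) from the sector center to each point
                cosd = (np.sin(np.radians(clat)) * np.sin(la)
                        + np.cos(np.radians(clat)) * np.cos(la)
                        * np.cos(lo - np.radians(clon)))
                d = 6371.0 * np.arccos(np.clip(cosd, -1.0, 1.0))
                sector = vort[d <= RADIUS_KM]
                if sector.size and sector.max() >= THRESHOLD:
                    detections.append((clat, clon, float(sector.max())))
        # A second pass would merge duplicate detections and link storm tracks.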

  9. Web-based Altimeter Service

    NASA Astrophysics Data System (ADS)

    Callahan, P. S.; Wilson, B. D.; Xing, Z.; Raskin, R. G.

    2010-12-01

    We have developed a web-based system to allow updating and subsetting of TOPEX data. The Altimeter Service will be operated by PODAAC along with their other provision of oceanographic data. The Service could be easily expanded to other mission data. An Altimeter Service is crucial to the improvement and expanded use of altimeter data. A service is necessary for altimetry because the result of most interest - sea surface height anomaly (SSHA) - is composed of several components that are updated individually and irregularly by specialized experts. This makes it difficult for projects to provide the most up-to-date products. Some components are the subject of ongoing research, so the ability for investigators to make products for comparison or sharing is important. The service will allow investigators/producers to get their component models or processing into widespread use much more quickly. For coastal altimetry, the ability to subset the data to the area of interest and insert specialized models (e.g., tides) or data processing results is crucial. A key part of the Altimeter Service is having data producers provide updated or local models and data. In order for this to succeed, producers need to register their products with the Altimeter Service and to provide the product in a form consistent with the service update methods. We will describe the capabilities of the web service and the methods for providing new components. Currently the Service is providing TOPEX GDRs with Retracking (RGDRs) in netCDF format that has been coordinated with Jason data. Users can add new orbits, tide models, gridded geophysical fields such as mean sea surface, and along-track corrections as they become available and are installed by PODAAC. The updated fields are inserted into the netCDF files while the previous values are retained for comparison. The Service will also generate SSH and SSHA. In addition, the Service showcases a feature that plots any variable from files in netCDF. The research described here was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
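    Inserting an updated component while retaining the previous values is a small netCDF operation. A hedged sketch with hypothetical file and variable names (the placeholder assignment stands in for a real recomputation):

        from netCDF4 import Dataset

        # Hypothetical RGDR granule opened for in-place update
        with Dataset("rgdr_cycle042.nc", "r+") as ds:
            old = ds.variables["tide_model"]
            new = ds.createVariable("tide_model_v2", old.dtype, old.dimensions)
            new[:] = old[:]   # placeholder: a real update inserts recomputed values
            new.comment = "updated tide model; previous values retained in tide_model"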

  10. The Value of Data and Metadata Standardization for Interoperability in Giovanni Or: Why Your Product's Metadata Causes Us Headaches!

    NASA Technical Reports Server (NTRS)

    Smit, Christine; Hegde, Mahabaleshwara; Strub, Richard; Bryant, Keith; Li, Angela; Petrenko, Maksym

    2017-01-01

    Giovanni is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted with the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as the parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization.

  11. The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV

    NASA Astrophysics Data System (ADS)

    Ho, Y.; Weber, J.

    2017-12-01

    WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude, and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, which is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point-type display for rendering a large number of points. The one problem we have is that data I/O can become a bottleneck when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF-compliant netCDF point data format for the community.

  12. An open source Java web application to build self-contained Web GIS sites

    NASA Astrophysics Data System (ADS)

    Zavala Romero, O.; Ahmed, A.; Chassignet, E.; Zavala-Hidalgo, J.

    2014-12-01

    This work describes OWGIS, an open source Java web application that creates Web GIS sites by automatically writing HTML and JavaScript code. OWGIS is configured by XML files that define which layers (geographic datasets) will be displayed on the websites. The project uses several Open Geospatial Consortium standards to request data from typical map servers, such as GeoServer, and is also able to request data from ncWMS servers. The latter allows displaying 4D data stored in the NetCDF file format (widely used for storing environmental model datasets). Some of the features available on sites built with OWGIS are multiple languages, animations, vertical profiles and vertical transects, color palettes, color ranges, and the ability to download data. OWGIS's main users are scientists, such as oceanographers or climate scientists, who store their data in NetCDF files and want to analyze, visualize, share, or compare their data using a website.

  13. Tool to assess contents of ARM surface meteorology network netCDF files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staudt, A.; Kwan, T.; Tichler, J.

    The Atmospheric Radiation Measurement (ARM) Program, supported by the US Department of Energy, is a major program of atmospheric measurement and modeling designed to improve the understanding of processes and properties that affect atmospheric radiation, with a particular focus on the influence of clouds and the role of cloud radiative feedback in the climate system. The ARM Program will use three highly instrumented primary measurement sites. Deployment of instrumentation at the first site, located in the Southern Great Plains of the United States, began in May of 1992. The first phase of deployment at the second site, in the Tropical Western Pacific, is scheduled for late in 1995. The third site will be in the North Slope of Alaska and the adjacent Arctic Ocean. To meet the scientific objectives of ARM, observations from the ARM sites are combined with data from other sources; these are called external data. Among these external data sets are surface meteorological observations from the Oklahoma Mesonet, a Kansas automated weather network, the Wind Profiler Demonstration Network (WPDN), and the National Weather Service (NWS) surface stations. Before combining these data with the Surface Meteorological Observations Station (SMOS) ARM data, it was necessary to assess the contents and quality of both the ARM and the external data sets. Since these data sets had previously been converted to netCDF format for use by the ARM Science Team, a tool was written to assess the contents of the netCDF files.

  14. Web mapping system for complex processing and visualization of environmental geospatial datasets

    NASA Astrophysics Data System (ADS)

    Titov, Alexander; Gordov, Evgeny; Okladnikov, Igor

    2016-04-01

    Environmental geospatial datasets (meteorological observations, modeling and reanalysis results, etc.) are used in numerous research applications. Due to a number of objective reasons, such as the inherent heterogeneity of environmental datasets, large dataset volumes, the complexity of the data models used, and the syntactic and semantic differences that complicate the creation and use of a unified terminology, the development of environmental geodata access, processing, and visualization services, as well as of client applications, turns out to be quite a sophisticated task. According to general INSPIRE requirements for data visualization, geoportal web applications have to provide such standard functionality as data overview, image navigation, scrolling, scaling, and graphical overlay, as well as displaying map legends and corresponding metadata information. It should be noted that modern web mapping systems, as integrated geoportal applications, are developed based on a service-oriented architecture and might be considered complexes of interconnected software tools for working with geospatial data. In the report, a complex web mapping system is presented, including a GIS web client and corresponding OGC services for working with a geospatial (NetCDF, PostGIS) dataset archive. The GIS web client has three basic tiers:
    1. a tier of geospatial metadata, retrieved from a central MySQL repository and represented in JSON format;
    2. a tier of JavaScript objects implementing methods for handling NetCDF metadata, the task XML object for configuring user calculations and input and output formats, and OGC WMS/WFS cartographical services;
    3. a graphical user interface (GUI) tier of JavaScript objects implementing the web application business logic.
    The metadata tier consists of a number of JSON objects containing technical information describing the geospatial datasets (such as spatio-temporal resolution, meteorological parameters, valid processing methods, etc.). The middleware tier of JavaScript objects interconnects the metadata and GUI tiers. Its methods include such procedures as downloading and updating JSON metadata, launching and tracking calculation tasks running on remote servers, and working with WMS/WFS cartographical services: obtaining the list of available layers, visualizing layers on the map, and exporting layers in graphical (PNG, JPG, GeoTIFF), vector (KML, GML, Shape), and digital (NetCDF) formats. The GUI tier is based on a bundle of JavaScript libraries (OpenLayers, GeoExt, and ExtJS) and represents a set of software components implementing the web mapping application business logic (complex menus, toolbars, wizards, event handlers, etc.). The GUI provides two basic capabilities for the end user: configuring the task XML object and visualizing cartographical information. The web interface developed is similar to the interfaces of such popular desktop GIS applications as uDIG and QuantumGIS. The web mapping system developed has shown its effectiveness in solving real climate change research problems and in disseminating investigation results in cartographical form. The work is supported by SB RAS Basic Program Projects VIII.80.2.1 and IV.38.1.7.

  15. The Hierarchical Data Format as a Foundation for Community Data Sharing

    NASA Astrophysics Data System (ADS)

    Habermann, T.

    2017-12-01

    Hierarchical Data Format (HDF) formats and libraries have been used by individual researchers and major science programs across many Earth and Space Science disciplines and sectors to provide high-performance information storage and access for several decades. Generic group, dataset, and attribute objects in HDF have been combined in many ways to form domain objects that scientists understand and use. Well-known applications of HDF in the Earth Sciences include thousands of global satellite observations and products produced by NASA's Earth Observing System using the HDF-EOS conventions, navigation-quality bathymetry produced as Bathymetric Attributed Grids (BAGs) by the OpenNavigationSurface project and others, seismic wave collections written into the Adaptable Seismic Data Format (ASDF), and many oceanographic and atmospheric products produced using the Climate and Forecast conventions with the netCDF4 data model and API to HDF5. This is the modus operandi of these communities: 1) develop a model of the scientific data objects and associated metadata used in a domain, 2) implement that model using HDF, 3) develop software libraries that connect that model to tools, and 4) encourage adoption of those tools in the community. Understanding these domain object implementations and facilitating communication across communities is an important goal of The HDF Group. We will discuss these examples and approaches to community outreach during this session.

  16. CMEMS (Copernicus Marine Environment Monitoring Service) In Situ Thematic Assembly Centre: A service for operational Oceanography

    NASA Astrophysics Data System (ADS)

    Manzano Muñoz, Fernando; Pouliquen, Sylvie; Petit de la Villeon, Loic; Carval, Thierry; Loubrieu, Thomas; Wedhe, Henning; Sjur Ringheim, Lid; Hammarklint, Thomas; Tamm, Susanne; De Alfonso, Marta; Perivoliotis, Leonidas; Chalkiopoulos, Antonis; Marinova, Veselka; Tintore, Joaquin; Troupin, Charles

    2016-04-01

    Copernicus, previously known as GMES (Global Monitoring for Environment and Security), is the European Programme for the establishment of a European capacity for Earth Observation and Monitoring. Copernicus aims to provide a sustainable service for ocean monitoring and forecasting, validated and commissioned by users. Since May 2015, the Copernicus Marine Environment Monitoring Service (CMEMS) has been working in operational mode through a contract with a service engagement (the result being regular data provision). Within CMEMS, the In Situ Thematic Assembly Centre (INSTAC) distributed service integrates in situ data from different sources for operational oceanography needs. CMEMS INSTAC collects and carries out quality control in a homogeneous manner on data from providers outside Copernicus (national and international networks), to fit the needs of internal and external users. CMEMS INSTAC has been organized into 7 regional Dissemination Units (DUs) that rely on the EuroGOOS ROOSes. Each DU aggregates data and metadata provided by a series of Production Units (PUs) acting as interfaces to providers. Homogeneity and standardization are key features in ensuring a coherent and efficient service. All DUs provide data in the OceanSITES NetCDF format 1.2 (based on NetCDF 3.6), which is CF compliant, relies on SeaDataNet vocabularies, and is able to handle profile and time-series measurements. All the products, both near real-time (NRT) and multi-year (REP), are available online for every registered CMEMS user through an FTP service. On top of the FTP service, INSTAC products are available through Oceanotron, an open-source data server dedicated to the dissemination of marine observations. It provides services such as aggregation on spatio-temporal coordinates and observed parameters, and subsetting on observed parameters and metadata. The accuracy of the data is checked at various levels. Quality control procedures are applied for the validity of the data, and correctness tests for the metadata of each NetCDF file. The quality control procedures include different routines for NRT and REP products. Key Performance Indicators (KPIs) are also used in Copernicus for monitoring purposes. They allow periodic monitoring of the availability, quantity, and quality of the INSTAC data integrated in the NRT products. Statistical reports are generated on a quarterly and yearly basis to provide more visibility into the coverage in space and time of the INSTAC NRT and REP products, as well as information on their quality. These reports are generated using Java and Python procedures developed within the INSTAC group. One of the most critical tasks for the DUs is to generate NetCDF files compliant with the agreed format. Many tools and programming libraries have been developed for that purpose, for instance the Unidata Java library, which provides NetCDF data management capabilities including creation, reading, and modification. Some DUs have also developed regional data portals which offer useful information to users, including data charts, platform availability through interactive maps, KPIs, statistical figures, and direct access to the FTP service. The proposed presentation will detail the Copernicus in situ data service and the monitoring tools that have been developed by the INSTAC group.

  17. Wave data processing toolbox manual

    USGS Publications Warehouse

    Sullivan, Charlene M.; Warner, John C.; Martini, Marinna A.; Lightsom, Frances S.; Voulgaris, George; Work, Paul

    2006-01-01

    Researchers routinely deploy oceanographic equipment in estuaries, coastal nearshore environments, and shelf settings. These deployments usually include tripod-mounted instruments to measure a suite of physical parameters such as currents, waves, and pressure. Instruments such as the RD Instruments Acoustic Doppler Current Profiler (ADCP(tm)), the Sontek Argonaut, and the Nortek Aquadopp(tm) Profiler (AP) can measure these parameters. The data from these instruments must be processed using proprietary software unique to each instrument to convert measurements to real physical values. These processed files are then available for dissemination and scientific evaluation. For example, the proprietary program used to process data from the RD Instruments ADCP for wave information is called WavesMon. Depending on the length of the deployment, WavesMon will typically produce thousands of processed data files. These files are difficult to archive, and further analysis of the data becomes cumbersome. More important, these files alone do not include sufficient information pertinent to the deployment (metadata), which could hinder future scientific interpretation. This open-file report describes a toolbox developed to compile, archive, and disseminate the processed wave measurement data from an RD Instruments ADCP, a Sontek Argonaut, or a Nortek AP; it will be referred to as the Wave Data Processing Toolbox. The Wave Data Processing Toolbox consolidates the processed files output by the proprietary software into two NetCDF files: one containing the statistics of the burst data and the other containing the raw burst data (additional details are described below). One important advantage of this toolbox is that it converts the data into NetCDF format. Data in NetCDF format are easy to disseminate, portable to any computer platform, and viewable with freely available public-domain software. Another important advantage is that a metadata structure is embedded with the data to document pertinent information regarding the deployment and the parameters used to process the data. Using this format ensures that the relevant information about how the data were collected and converted to physical units is maintained with the actual data. EPIC-standard variable names have been utilized where appropriate. These standards, developed by the NOAA Pacific Marine Environmental Laboratory (PMEL) (http://www.pmel.noaa.gov/epic/), provide a universal vernacular allowing researchers to share data without translation.

  18. Tools and strategies for instrument monitoring, data mining and data access

    NASA Astrophysics Data System (ADS)

    van Hees, R. M., ,, Dr

    2009-04-01

    The ever-growing size of the data sets produced by various satellite instruments creates a challenge in data management. Three main tasks were identified: instrument performance monitoring, data mining by users, and data deployment. In this presentation, I will discuss the three tasks and our solution. As a practical example to illustrate the problem and make the discussion less abstract, I will use Sciamachy on board the ESA satellite Envisat. Since the launch of Envisat in March 2002, Sciamachy has performed nearly a billion science measurements and daily calibration measurements. The total size of the data set (not including reprocessed data) is over 30 TB, distributed over 150,000 files. [Instrument Monitoring] Most instruments produce house-keeping data, which may include time, geo-location, temperature of different parts of the instrument, and instrument settings and configuration. In addition, many instruments perform calibration measurements. Instrument performance monitoring requires automated analyses of critical parameters for events, and the option to inspect off-line the behavior of various parameters over time. We chose to extract the necessary information from the SCIAMACHY data products and store everything in one file, where we separated house-keeping data from calibration measurements. Due to the large volume and the need for quick random access, the Hierarchical Data Format (HDF5) was our obvious choice. The HDF5 format is self-describing and designed to organize different types of data in one file. For example, one data set may contain the metadata of the calibration measurements (time, geo-location, instrument settings, and quality parameters such as the temperature of the instrument), while a second, large data set contains the actual measurements. The HDF5 high-level packet table API is ideal for tables that only grow (by appending rows), while the HDF5 table API is better suited for tables where rows need to be updated, inserted, or replaced. In particular, the packet table API allows very compact storage of compound data sets and very fast read/write access. Details about this implementation and its pitfalls will be given in the presentation. [Data Mining] The ability to select relevant data is a requirement that all data centers have to offer. The NL-SCIA-DC allows users to select data using several criteria, including time, geo-location, type of observation, and data quality. The results of a query are [i] the locations and names of relevant data products (files), [ii] a listing of the metadata of the relevant measurements, or [iii] a listing of the measurements themselves (level 2 or higher). For this application, we need the power of a relational database, the SQL language, and the availability of spatial functions. PostgreSQL, extended with PostGIS support, turned out to be a good choice. Common queries on tables with millions of rows can be executed within seconds. [Data Deployment] The dissemination of scientific data is often hampered by the use of many different formats to store the products, so time-consuming and inefficient conversions are needed to use data products of different origins. Within the Atmospheric Data Access for the Geospatial User Community (ADAGUC) project, we provide selected space-borne atmospheric and land data sets in the same data format and with a consistent internal structure, so that users can easily use and combine data. The common format for storage is HDF5, but the netCDF-4 API is used to create the data sets. The standard for metadata and dataset attributes follows the netCDF Climate and Forecast conventions; in addition, metadata complying with the ISO 19115:2003 INSPIRE profile is added. The advantage of netCDF-4 is that the API is essentially equal to netCDF-3 (with a few extensions), while the data format is HDF5 (recognized by many scientific tools). The added metadata ensures product traceability. Details will be given in the presentation and in several posters.
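    The ADAGUC choice, writing HDF5 files through the netCDF-4 API with CF attributes, is compact to sketch. A minimal hedged example with hypothetical variable names and units (not ADAGUC's actual product layout):

        import numpy as np
        from netCDF4 import Dataset

        # format="NETCDF4" means the container on disk is HDF5
        with Dataset("adaguc_style.nc", "w", format="NETCDF4") as ds:
            ds.Conventions = "CF-1.6"
            ds.createDimension("time", None)            # unlimited dimension
            t = ds.createVariable("time", "f8", ("time",))
            t.units = "hours since 2009-01-01 00:00:00"
            t.standard_name = "time"
            no2 = ds.createVariable("no2_column", "f4", ("time",), zlib=True)
            no2.long_name = "tropospheric NO2 column"   # hypothetical variable
            no2.units = "1e15 molecules cm-2"
            t[:] = np.arange(24.0)
            no2[:] = np.linspace(1.0, 2.0, 24)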

  19. Enabling data-driven provenance in NetCDF, via OGC WPS operations. Climate Analysis services use case.

    NASA Astrophysics Data System (ADS)

    Mihajlovski, A.; Spinuso, A.; Plieger, M.; Som de Cerff, W.

    2016-12-01

    Modern climate analysis platforms provide generic and standardized ways of accessing data and processing services, typically supported by a wide range of OGC formats and interfaces. However, the problem of instrumentally tracing the lineage of the transformations occurring on a dataset, and its provenance, remains an open challenge. It requires standards-driven and interoperable solutions to facilitate understanding and the sharing of self-describing data products, fostering collaboration among peers. The CLIPC portal provided us with a real use case, where the need for instrumented provenance management is fundamental. CLIPC provides a single point of access for scientific information on climate change. The data about the physical environment which is used to inform climate change policy and adaptation measures comes from several categories: satellite measurements, terrestrial observing systems, model projections and simulations, and re-analyses. This is made possible through the Copernicus Earth Observation Programme for Europe. With a backbone combining WPS and OPeNDAP services, CLIPC has two themes: 1. harmonized access to climate datasets derived from models, observations, and re-analyses; 2. a climate impact tool kit to evaluate, rank, and aggregate indicators. The climate impact tool kit is realised through the orchestration of a number of WPS instances that ingest, normalize, and combine NetCDF files. The WPS instances enabling this specific computation are hosted by the climate4impact portal, which is a more generic climate data access and processing service. In this context, guaranteeing the validation and reproducibility of results is a clearly stated requirement to improve the quality of the results obtained by the combined analysis. Two core contributions are the enabling of a provenance wrapper around WPS services and the enabling of provenance tracing within the NetCDF format, which adopts and extends the W3C PROV model. To disseminate indicator data and create transformed data products, a standardized provenance, metadata, and processing infrastructure is being researched for CLIPC. These efforts will lead towards the provision of tools for further web service processing development and optimisation, opening up possibilities to scale and to administer abstract user- and data-driven workflows.

  20. Obtaining and processing Daymet data using Python and ArcGIS

    USGS Publications Warehouse

    Bohms, Stefanie

    2013-01-01

    This set of scripts was developed to automate the process of downloading and mosaicking daily Daymet data to a user-defined extent using ArcGIS and the Python programming language. The three steps are downloading the needed Daymet tiles for the study area extent, converting the NetCDF files to the TIF raster format, and mosaicking those rasters into one file. The set of scripts is intended for all levels of experience with the Python programming language and requires no scripting by the user.
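    The second and third steps (format conversion and mosaicking) can be sketched with the GDAL Python bindings; this is an illustration, not the report's own scripts, and the tile file names and the "prcp" variable selection are hypothetical:

        from osgeo import gdal

        # Hypothetical downloaded Daymet tiles holding a "prcp" variable
        tiles = ["tile_11207_prcp.nc", "tile_11208_prcp.nc"]
        tifs = []
        for nc in tiles:
            tif = nc.replace(".nc", ".tif")
            # GDAL's NetCDF subdataset syntax selects one variable from the file
            gdal.Translate(tif, 'NETCDF:"%s":prcp' % nc)
            tifs.append(tif)

        # Mosaic the converted rasters into a single file
        gdal.Warp("prcp_mosaic.tif", tifs)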

  1. An Innovative Open Data-driven Approach for Improved Interpretation of Coverage Data at NASA JPL's PO.DAAC

    NASA Astrophysics Data System (ADS)

    McGibbney, L. J.; Armstrong, E. M.

    2016-12-01

    Figuratively speaking, Scientific Datasets (SD) are shared by data producers in a multitude of shapes, sizes, and flavors. Primarily, however, they exist as machine-independent manifestations supporting the creation, access, and sharing of array-oriented SD that can on occasion be spread across multiple files. Within the Earth Sciences, the most notable general examples include the HDF family, NetCDF, etc., with other formats such as GRIB being used pervasively within specific domains such as the oceanographic, atmospheric, and meteorological sciences. Such file formats contain coverage data, i.e., a digital representation of some spatio-temporal phenomenon. A challenge for large data producers such as NASA and NOAA, as well as for consumers of coverage datasets (particularly surrounding visualization and interactive use within web clients), is that this is still not a straightforward issue due to size, serialization, and inherent complexity. Additionally, existing data formats are either unsuitable for the Web (like netCDF files) or hard to interpret independently due to missing standard structures and metadata (e.g., the OPeNDAP protocol). Therefore, alternative, Web-friendly manifestations of such datasets are required. CoverageJSON is an emerging data format for publishing coverage data to the web in a web-friendly way which fits in with the linked data publication paradigm, hence lowering the barrier for interpretation by consumers via mobile devices and client applications, etc., as well as by data producers, who can build next-generation Web-friendly Web services around datasets. This work will detail how CoverageJSON is being evaluated at NASA JPL's PO.DAAC as an enabling data representation format for publishing SD as Linked Open Data, embedded within SD landing pages as well as via semantic data repositories. We are currently evaluating how utilization of CoverageJSON within SD landing pages addresses the long-standing acknowledgement that SD producers are not currently addressing content-based optimization within their SD landing pages for better crawlability by commercial search engines.

  2. Exposing Coverage Data to the Semantic Web within the MELODIES project: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Riechert, Maik; Blower, Jon; Griffiths, Guy

    2016-04-01

    Coverage data, typically big in data volume, assigns values to a given set of spatiotemporal positions, together with metadata on how to interpret those values. Existing storage formats like netCDF, HDF, and GeoTIFF all have various restrictions that prevent them from being preferred formats for use over the web, especially the semantic web. Factors that are relevant here are the processing complexity, the semantic richness of the metadata, and the ability to request partial information, such as a subset or just the appropriate metadata. Making coverage data available within web browsers opens the door to new ways of working with such data, including new types of visualization and on-the-fly processing. As part of the European project MELODIES (http://melodiesproject.eu) we look into the challenges of exposing such coverage data in an interoperable and web-friendly way, and propose solutions using a host of emerging technologies like JSON-LD, the DCAT and GeoDCAT-AP ontologies, the CoverageJSON format, and new approaches to REST APIs for coverage data. We developed the CoverageJSON format within the MELODIES project as an additional way to expose coverage data to the web, alongside simple rendered images available using standards like OGC's WMS. CoverageJSON partially incorporates JSON-LD but does not encode individual data values as semantic resources, making use of the technology in a practical manner. The development also considered its potential as an output format for OGC WCS. We will demonstrate how existing netCDF data can be exposed as CoverageJSON resources on the web, together with a REST API that allows users to explore the data and run operations such as spatiotemporal subsetting. We will show various use cases from the MELODIES project, including reclassification of a land cover dataset client-side within the browser, with the ability for the user to influence the reclassification result, by making use of the above technologies.
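    For orientation, the shape of a CoverageJSON document is compact enough to show inline. This abridged sketch, built as a Python dict, invents the values and omits the referencing section that a conformant document would carry:

        import json

        coverage = {
            "type": "Coverage",
            "domain": {
                "type": "Domain",
                "domainType": "Grid",
                # referencing information omitted for brevity
                "axes": {
                    "x": {"values": [10.0, 10.5]},
                    "y": {"values": [50.0, 50.5]},
                },
            },
            "parameters": {
                "TEMP": {
                    "type": "Parameter",
                    "observedProperty": {"label": {"en": "Air temperature"}},
                },
            },
            "ranges": {
                "TEMP": {
                    "type": "NdArray",
                    "dataType": "float",
                    "axisNames": ["y", "x"],
                    "shape": [2, 2],
                    "values": [280.1, 280.4, 279.8, 280.0],
                },
            },
        }
        print(json.dumps(coverage, indent=2))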

  3. Using OPeNDAP's Data-Services Framework to Lift Mash-Ups above Blind Dates

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Fulker, D. W.

    2015-12-01

    OPeNDAP's data-as-service framework (Hyrax) matches diverse sources with many end-user tools and contexts. Keys to its flexibility include:
    * a data model embracing tabular data alongside n-dimensional arrays and other structures useful in geoinformatics;
    * a REST-like protocol that supports, via suffix notation, a growing set of output forms (netCDF, XML, etc.) plus a query syntax for subsetting, where subsetting applies to tabular data (via constraints on column values) or to array-style data (via constraints on indices or coordinates);
    * a handler-style architecture that admits a growing set of input types; community members may contribute handlers, making Hyrax effective as middleware, where N sources are mapped to M outputs with order N+M effort (not NxM);
    * virtual aggregations of source data, enabling granularity aimed at users, not data collectors;
    * OPeNDAP access libraries in multiple languages, including Python, Java, and C++.
    Recent enhancements are increasing this framework's interoperability (i.e., its mash-up) potential. Extensions implemented as servlets, running adjacent to Hyrax, are enriching the forms of aggregation and enabling new protocols:
    * user-specified aggregations, namely applying a query to (huge) lists of source granules and receiving one (large) table or zipped netCDF file;
    * OGC (Open Geospatial Consortium) protocols, WMS and WCS;
    * a Webification (W10n) protocol that returns JavaScript Object Notation (JSON).
    Extensions to OPeNDAP's query language are reducing transfer volumes and enabling new forms of inspection. Advances underway include:
    * functions that, for triangular-mesh sources, return sub-meshes specified via geospatial bounding boxes;
    * functions that, for data from multiple satellite-borne sensors (with differing orbits), select observations based on coincidence;
    * calculations of means, histograms, etc. that greatly reduce output volumes;
    * paths for communities to contribute new server functions (in Python, e.g.) that data providers may incorporate into Hyrax via installation parameters.
    One could say Hyrax itself is a mash-up, but we suggest it as an instrument for a mash-up artist's toolbox. This instrument can support mash-ups built on netCDF files, OGC protocols, JavaScript Web pages, and/or programs written in Python, Java, C, or C++.
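    The suffix-and-constraint convention is simple to demonstrate over plain HTTP. A hedged sketch; the server URL, dataset path, and variable are hypothetical:

        import requests

        base = "http://example.org/opendap/hyrax/sst_monthly.nc"

        # The suffix selects the response form; the constraint subsets server-side
        metadata = requests.get(base + ".das")                      # attributes
        structure = requests.get(base + ".dds")                     # shape/types
        subset = requests.get(base + ".ascii?sst[0][0:9][0:9]")     # 10x10 slab
        print(structure.text)
        print(subset.text[:200])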

  4. The Ocean Observatories Initiative: Data Acquisition Functions and Its Built-In Automated Python Modules

    NASA Astrophysics Data System (ADS)

    Smith, M. J.; Vardaro, M.; Crowley, M. F.; Glenn, S. M.; Schofield, O.; Belabbassi, L.; Garzio, L. M.; Knuth, F.; Fram, J. P.; Kerfoot, J.

    2016-02-01

    The Ocean Observatories Initiative (OOI), funded by the National Science Foundation, provides users with access to long-term datasets from a variety of oceanographic sensors. The Endurance Array in the Pacific Ocean consists of two separate lines off the coasts of Oregon and Washington. The Oregon line consists of seven moorings, two cabled benthic experiment packages, and six underwater gliders. The Washington line comprises six moorings and six gliders. Each mooring is outfitted with a variety of instrument packages. The raw data from these instruments are sent to shore via satellite communication and, in some cases, via fiber-optic cable. The raw data are then sent to the cyberinfrastructure (CI) group at Rutgers, where they are aggregated, parsed into thousands of different data streams, and integrated into a software package called uFrame. The OOI CI delivers the data to the general public via a web interface that outputs data into commonly used scientific data file formats such as JSON, netCDF, and CSV. The Rutgers data management team has developed a series of command-line Python tools that streamline data acquisition in order to facilitate the QA/QC review process. The first step in the process is querying the uFrame database for a list of all available platforms. From this list, a user can choose a specific platform and automatically download all available datasets from the specified platform. The downloaded dataset is plotted using a generalized Python netCDF plotting routine that utilizes a data visualization toolbox called matplotlib. This routine loads each netCDF file separately and outputs plots for each available parameter. These Python tools have been uploaded to a GitHub repository that is openly available to help facilitate OOI data access and visualization.
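
    The plotting routine itself is not reproduced in the abstract; the sketch below shows the general shape such a tool might take, plotting every one-dimensional time-series variable in a netCDF file with matplotlib. File name and variable layout are hypothetical, and this is not the Rutgers code:

        # Illustrative sketch: plot each 1-D time-series parameter in a
        # netCDF data stream. File and variable names are hypothetical.
        import matplotlib.pyplot as plt
        import netCDF4
        from netCDF4 import num2date

        nc = netCDF4.Dataset("deployment0001_ctd.nc")   # hypothetical file
        time = nc.variables["time"]
        dates = num2date(time[:], units=time.units)

        for name, var in nc.variables.items():
            if name == "time" or var.dimensions != ("time",):
                continue                                # 1-D time series only
            fig, ax = plt.subplots()
            ax.plot(dates, var[:], ".", markersize=2)
            ax.set_xlabel("time")
            ax.set_ylabel(getattr(var, "units", ""))
            ax.set_title(name)
            fig.savefig(f"{name}.png")
            plt.close(fig)
        nc.close()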

  5. Data Container Study for Handling Array-based Data Using Rasdaman, Hive, Spark, and MongoDB

    NASA Astrophysics Data System (ADS)

    Xu, M.; Hu, F.; Yu, M.; Scheele, C.; Liu, K.; Huang, Q.; Yang, C. P.; Little, M. M.

    2016-12-01

    Geoscience communities have come up with various big data storage solutions, such as Rasdaman and Hive, to address the grand challenges of massive Earth observation data management and processing. To examine the readiness of current solutions to support big Earth observation data, we propose to investigate and compare four popular data container solutions: Rasdaman, Hive, Spark, and MongoDB. Using different types of spatial and non-spatial queries, datasets stored in common scientific data formats (e.g., NetCDF and HDF), and two applications (i.e., dust storm simulation data mining and MERRA data analytics), we systematically compare and evaluate the features and performance of these four data containers in terms of data discovery and access. The computing resources (e.g., CPU, memory, hard drive, network) consumed while performing various queries and operations are monitored and recorded for the performance evaluation. The initial results show that 1) Rasdaman has the best performance for queries on statistical and operational functions, and supports the NetCDF data format better than HDF; 2) Rasdaman's clustering configuration is more complex than the others'; 3) Hive performs better on single-pixel extraction from multiple images; and 4) except for the single-pixel extractions, Spark performs better than Hive, and its performance is close to Rasdaman's. A comprehensive report will detail the experimental results and compare their pros and cons regarding system performance, ease of use, accessibility, scalability, compatibility, and flexibility.

  6. JADDS - towards a tailored global atmospheric composition data service for CAMS forecasts and reanalysis

    NASA Astrophysics Data System (ADS)

    Stein, Olaf; Schultz, Martin G.; Rambadt, Michael; Saini, Rajveer; Hoffmann, Lars; Mallmann, Daniel

    2017-04-01

    Global model data of atmospheric composition produced by the Copernicus Atmosphere Monitoring Service (CAMS) have been collected since 2010 at FZ Jülich and serve as boundary conditions for Regional Air Quality (RAQ) modellers world-wide. RAQ models need time-resolved meteorological as well as chemical lateral boundary conditions for their individual model domains. While the meteorological data usually come from well-established global forecast systems, the chemical boundary conditions are not always well defined. In the past, many models used 'climatic' boundary conditions for the tracer concentrations, which can lead to significant concentration biases, particularly for tracers with longer lifetimes that can be transported over long distances (e.g., over the whole northern hemisphere) with the mean wind. The Copernicus approach utilizes extensive near-realtime assimilation of atmospheric composition data observed from space, which gives additional reliability to the global modelling data and is well received by the RAQ communities. An existing Web Coverage Service (WCS) for sharing these individually tailored model results is currently being re-engineered to make use of a modern, scalable database technology in order to improve performance, enhance flexibility, and allow the operation of catalogue services. The new Jülich Atmospheric Data Distributions Server (JADDS) adheres to the Web Coverage Service WCS 2.0 standard as defined by the Open Geospatial Consortium (OGC). This enables user groups to flexibly define the datasets they need by selecting a subset of chemical species or restricting the geographical boundaries or the length of the time series. The data are made available in the form of different catalogues stored locally on our server. In addition, the Jülich OWS Interface (JOIN) provides interoperable web services allowing for easy download and visualization of datasets delivered from WCS servers via the internet. We will present the prototype JADDS server and address the major issues identified when relocating large four-dimensional datasets into a RASDAMAN raster array database. So far, RASDAMAN support for data available in netCDF format is limited with respect to metadata related to variables and axes. For community-wide accepted solutions, selected data coverages shall result in downloadable netCDF files including metadata complying with the netCDF CF Metadata Conventions standard (http://cfconventions.org/). This can be achieved by adding custom metadata elements for RASDAMAN bands (model levels) on data ingestion. Furthermore, an optimization strategy for ingesting several TB of 4D model output data will be outlined.
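
    For readers unfamiliar with WCS 2.0, a GetCoverage request with spatiotemporal trimming looks roughly like the following; the server URL, coverage identifier, and output format string are hypothetical, while the KVP keys and subset syntax follow the OGC WCS 2.0 standard:

        # Sketch of a WCS 2.0 KVP GetCoverage request, as an RAQ modeller
        # might issue to obtain lateral boundary conditions. Server URL,
        # coverage id, and format string are hypothetical.
        import requests

        params = {
            "service": "WCS",
            "version": "2.0.1",
            "request": "GetCoverage",
            "coverageId": "cams_o3_forecast",         # hypothetical id
            "subset": ["Lat(30,60)", "Long(-15,35)",  # domain trim
                       'ansi("2017-01-01T00:00","2017-01-02T00:00")'],
            "format": "application/netcdf",
        }
        resp = requests.get("https://example.org/wcs", params=params)
        resp.raise_for_status()
        with open("o3_boundaries.nc", "wb") as f:
            f.write(resp.content)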

  7. The Value of Data and Metadata Standardization for Interoperability in Giovanni

    NASA Astrophysics Data System (ADS)

    Smit, C.; Hegde, M.; Strub, R. F.; Bryant, K.; Li, A.; Petrenko, M.

    2017-12-01

    Giovanni (https://giovanni.gsfc.nasa.gov/giovanni/) is a data exploration and visualization tool at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). It has been around in one form or another for more than 15 years. Giovanni calculates simple statistics and produces 22 different visualizations for more than 1600 geophysical parameters from more than 90 satellite and model products. Giovanni relies on external data format standards to ensure interoperability, including the NetCDF CF Metadata Conventions. Unfortunately, these standards were insufficient to make Giovanni's internal data representation truly simple to use. Finding and working with dimensions can be convoluted under the CF Conventions. Furthermore, the CF Conventions are silent on machine-friendly descriptive metadata such as a parameter's source product and product version. In order to simplify analyzing disparate earth science data parameters in a unified way, we developed Giovanni's internal standard. First, the format standardizes parameter dimensions and variables so they can be easily found. Second, the format adds all the machine-friendly metadata Giovanni needs to present our parameters to users in a consistent and clear manner. At a glance, users can grasp all the pertinent information about parameters both during parameter selection and after visualization. This poster gives examples of how our metadata and data standards, both external and internal, have simplified our code base and improved our users' experiences.
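
    The poster abstract does not spell out the internal schema; purely as an illustration of the idea, machine-friendly descriptive attributes can be attached alongside CF attributes with netCDF4-python. The attribute names below are invented for illustration and are not Giovanni's actual standard:

        # Hypothetical sketch: add machine-readable provenance attributes
        # next to CF metadata so code need not guess a parameter's origin.
        from netCDF4 import Dataset

        nc = Dataset("parameter.nc", "a")
        var = nc.variables["Temperature"]       # hypothetical variable
        var.standard_name = "air_temperature"   # CF-governed attribute
        var.product_short_name = "AIRS3STD"     # invented attribute name
        var.product_version = "006"             # invented attribute name
        nc.close()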

  8. Coupling West WRF to GSSHA with GSSHApy

    NASA Astrophysics Data System (ADS)

    Snow, A. D.

    2017-12-01

    The West WRF output is gridded NetCDF containing the forcing data required to run a GSSHA simulation. These data include precipitation, pressure, temperature, relative humidity, cloud cover, wind speed, and solar radiation. Tools to reproject, resample, and reformat the data for GSSHA have recently been added to the open-source Python library GSSHApy (https://github.com/ci-water/gsshapy). These tools have created a connection that makes it possible to run forecasts using the West WRF forcing data with GSSHA to produce both streamflow and lake-level predictions.
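
    GSSHApy's own API is not shown in the abstract; the generic xarray sketch below illustrates just the extract-and-reformat step for one forcing variable. The file name and the WRF "Time" dimension handling are assumptions:

        # Generic illustration with xarray, not the GSSHApy API: pull the
        # accumulated WRF precipitation field and de-accumulate it.
        import xarray as xr

        ds = xr.open_dataset("westwrf_forecast.nc")  # hypothetical file
        acc = ds["RAINNC"]                   # accumulated precipitation (mm)
        per_step = acc.diff("Time")          # per-interval amounts
        per_step.to_netcdf("precip_steps.nc")   # reprojection still needed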

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, Michael J.

    SchemaOnRead provides tools for implementing schema-on-read including a single function call (e.g., schemaOnRead("filename")) that reads text (TXT), comma separated value (CSV), raster image (BMP, PNG, GIF, TIFF, and JPG), R data (RDS), HDF5, NetCDF, spreadsheet (XLS, XLSX, ODS, and DIF), Weka Attribute-Relation File Format (ARFF), Epi Info (REC), Pajek network (PAJ), R network (NET), Hypertext Markup Language (HTML), SPSS (SAV), Systat (SYS), and Stata (DTA) files. It also recursively reads folders (e.g., schemaOnRead("folder")), returning a nested list of the contained elements.

  10. NASA Briefing for Unidata

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher

    2016-01-01

    The NASA representative to the Unidata Strategic Committee presented a semiannual update on NASA's work with and use of Unidata technologies. The talk covered the program of cloud computing prototypes being undertaken for the Earth Observing System Data and Information System (EOSDIS). Also discussed were dataset interoperability recommendations ratified via the EOSDIS Standards Office, and the HDF Product Designer tool with respect to its possible applicability to data in Network Common Data Form (NetCDF) version 4.

  11. Community Intercomparison Suite (CIS) v1.4.0: a tool for intercomparing models and observations

    NASA Astrophysics Data System (ADS)

    Watson-Parris, Duncan; Schutgens, Nick; Cook, Nicholas; Kipling, Zak; Kershaw, Philip; Gryspeerdt, Edward; Lawrence, Bryan; Stier, Philip

    2016-09-01

    The Community Intercomparison Suite (CIS) is an easy-to-use command-line tool which has been developed to allow the straightforward intercomparison of remote sensing, in situ and model data. While there are a number of tools available for working with climate model data, the large diversity of sources (and formats) of remote sensing and in situ measurements necessitated a novel software solution. Developed by a professional software company, CIS supports a large number of gridded and ungridded data sources "out-of-the-box", including climate model output in NetCDF or the UK Met Office pp file format, CloudSat, CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization), MODIS (MODerate resolution Imaging Spectroradiometer), Cloud and Aerosol CCI (Climate Change Initiative) level 2 satellite data and a number of in situ aircraft and ground station data sets. The open-source architecture also supports user-defined plugins to allow many other sources to be easily added. Many of the key operations required when comparing heterogeneous data sets are provided by CIS, including subsetting, aggregating, collocating and plotting the data. Output data are written to CF-compliant NetCDF files to ensure interoperability with other tools and systems. The latest documentation, including a user manual and installation instructions, can be found on our website (http://cistools.net). Here, we describe the need which this tool fulfils, followed by descriptions of its main functionality (as at version 1.4.0) and the plugin architecture which makes it unique in the field.

  12. Rescue, Archival and Discovery of Tsunami Events on Marigrams

    NASA Astrophysics Data System (ADS)

    Eble, M. C.; Wright, L. M.; Stroker, K. J.; Sweeney, A.; Lancaster, M.

    2017-12-01

    The Big Earth Data Initiative made possible the reformatting of paper marigram records on which were recorded measurements of the 1946, 1952, 1960, and 1964 tsunamis generated in the Pacific Ocean. Data contained within each record were determined to be invaluable for tsunami researchers and operational agencies with a responsibility for issuing warnings during a tsunami event. All marigrams were carefully digitized, and metadata were generated to form numerical datasets in order to provide the tsunami and other research and application-driven communities with quality data. Data were then packaged as CF-compliant netCDF data files and submitted to the NOAA Centers for Environmental Information for long-term stewardship, archival, and public discovery of both the original scanned images and the data in digital netCDF and CSV formats. PNG plots of each time series were generated and included with the data packages to provide a visual representation of the numerical data sets. ISO-compliant metadata were compiled for the collection at the event level, and individual DOIs were minted for each of the four events included in this project. The procedure followed to reformat each record in this four-event subset of the larger NCEI scanned marigram inventory is presented and discussed. The practical use of these data is presented to highlight that even infrequent measurements of tsunamis hold information that may potentially help constrain earthquake rupture area, provide estimates of earthquake co-seismic slip distribution, identify subsidence or uplift, and significantly increase the holdings of in situ data available for tsunami model validation. These same data may also prove valuable to the broader global tide community for validation and further development of tide models and for investigation into the stability of tidal harmonic constants. Data reformatted as part of this project are PARR compliant and meet the requirements for Data Management, Discoverability, Accessibility, Documentation, Readability, and Data Preservation and Stewardship as per the Big Earth Data Initiative.

  13. A NetCDF version of the two-dimensional energy balance model based on the full multigrid algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Kelin; North, Gerald R.; Stevens, Mark J.

    A NetCDF version of the two-dimensional energy balance model based on the full multigrid method in Fortran is introduced for both pedagogical and research purposes. Based on the land-sea-ice distribution, orbital elements, greenhouse gas concentrations, and albedo, the code calculates the global seasonal surface temperature. A step-by-step guide with examples is provided for practice.
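
    As an example of the research use, the seasonal NetCDF output can be inspected with netCDF4-python; the file and variable names below are hypothetical, since the abstract does not list them:

        # Sketch: read the model's NetCDF output and compute an
        # area-weighted global mean per season. Names are hypothetical.
        import numpy as np
        from netCDF4 import Dataset

        nc = Dataset("ebm_output.nc")            # hypothetical file
        temp = nc.variables["temperature"][:]    # (season, lat, lon)
        lat = nc.variables["lat"][:]

        w = np.cos(np.deg2rad(lat))[None, :, None]   # latitude weights
        gmean = (temp * w).sum(axis=(1, 2)) / (w.sum() * temp.shape[2])
        print(gmean)
        nc.close()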

  14. The Weather Radar Toolkit, National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center's support of interoperability and the Global Earth Observation System of Systems (GEOSS)

    NASA Astrophysics Data System (ADS)

    Ansari, S.; Del Greco, S.

    2006-12-01

    In February 2005, 61 countries around the world agreed on a 10-year plan to work towards building open systems for sharing geospatial data and services across different platforms worldwide. This system is known as the Global Earth Observation System of Systems (GEOSS). The objective of GEOSS focuses on easy access to environmental data and interoperability across different systems, allowing participating countries to measure the "pulse" of the planet in an effort to advance society. In support of GEOSS goals, NOAA's National Climatic Data Center (NCDC) has developed radar visualization and data exporter tools in an open systems environment. The NCDC Weather Radar Toolkit (WRT) loads Weather Surveillance Radar 1988 Doppler (WSR-88D) volume scan (S-band) data, known as Level-II, and derived products, known as Level-III, into an Open Geospatial Consortium (OGC) compliant environment. The application is written entirely in Java and will run on any Java-supported platform including Windows, Macintosh and Linux/Unix. The application is launched via Java Web Start and runs on the client machine while accessing these data locally or remotely from the NCDC archive, a NOAA FTP server, or any URL or THREDDS Data Server. The WRT allows the data to be manipulated to create custom mosaics, composites and precipitation estimates. The WRT Viewer provides tools for custom data overlays, Web Map Service backgrounds, animations and basic filtering. The export of images and movies is provided in multiple formats. The WRT Data Exporter allows for data export in both vector polygon (Shapefile, Well-Known Text) and raster (GeoTIFF, ESRI Grid, VTK, NetCDF, GrADS) formats. By decoding the various radar formats into the NetCDF Common Data Model, the exported NetCDF data become interoperable with existing software packages including the THREDDS Data Server and the Integrated Data Viewer (IDV). The NCDC recently partnered with NOAA's National Severe Storms Lab (NSSL) to decode Sigmet C-band Doppler radar data, providing the NCDC Viewer/Data Exporter the functionality to read C-band data. This also supports a bilateral agreement between the United States and Canada for data sharing, and supports interoperability between the US WSR-88D and Environment Canada radar networks. In addition, the NCDC partnered with the University of Oklahoma to develop decoders to read a test bed of distributed X-band radars that are funded through the Collaborative Adaptive Sensing of the Atmosphere (CASA) project. The NCDC is also archiving the National Mosaic and Next Generation QPE (Q2) products from NSSL, which provide products such as three-dimensional reflectivity, composite reflectivity and precipitation estimates at a 1-km resolution. These three sources of radar data are also supported in the WRT.

  15. Xray: N-dimensional, labeled arrays for analyzing physical datasets in Python

    NASA Astrophysics Data System (ADS)

    Hoyer, S.

    2015-12-01

    Efficient analysis of geophysical datasets requires tools that both preserve and utilize metadata, and that transparently scale to process large datasets. Xray is such a tool, in the form of an open-source Python library for analyzing the labeled, multi-dimensional array (tensor) datasets that are ubiquitous in the Earth sciences. Xray's approach pairs Python data structures based on the data model of the netCDF file format with the proven design and user interface of pandas, the popular Python data analysis library for labeled tabular data. On top of the NumPy array, xray adds labeled dimensions (e.g., "time") and coordinate values (e.g., "2015-04-10"), which it uses to enable a host of operations powered by these labels: selection, aggregation, alignment, broadcasting, split-apply-combine, interoperability with pandas, and serialization to netCDF/HDF5. Many of these operations are enabled by xray's tight integration with pandas. Finally, to allow for easy parallelism and to enable its labeled data operations to scale to datasets that do not fit into memory, xray integrates with the parallel processing library dask.
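
    A few of the label-powered operations listed above, in the form the library's successor (since renamed xarray) spells them; file and variable names are hypothetical:

        # Label-based selection, split-apply-combine, broadcasting, and
        # netCDF serialization with xarray. Names are hypothetical.
        import xarray as xr

        ds = xr.open_dataset("surface_temps.nc")     # hypothetical file
        t = ds["t2m"]                                # hypothetical variable

        day = t.sel(time="2015-04-10")               # select by label
        clim = t.groupby("time.month").mean("time")  # split-apply-combine
        anom = t.groupby("time.month") - clim        # broadcast by label
        anom.to_netcdf("t2m_anomaly.nc")             # serialize to netCDF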

  16. Observations from the GOES Space Environment Monitor and Solar X-ray Imager are now available in a whole new way!

    NASA Astrophysics Data System (ADS)

    Wilkinson, D. C.

    2012-12-01

    NOAA's Geostationary Operational Environmental Satellites (GOES) have been observing the environment in near-Earth space for over 37 years. Those data are down-linked and processed by the Space Weather Prediction Center (SWPC) and form the cornerstone of its alert and forecast services. At the close of each UT day, these data are ingested by the National Geophysical Data Center (NGDC), where they are merged into the national archive and made available to the user community in a uniform manner. In 2012, NGDC unveiled a RESTful web service for accessing these data. What does this mean? Users can now build a web-like URL using simple predefined constructs that allow their browser or custom software to directly access the relational archives and bundle the requested data into a variety of popular formats. The user can select precisely the data they need, and the results are delivered immediately. NGDC understands that many users are perfectly happy retrieving data via pre-generated files and will continue to provide internally documented NetCDF and CSV files far into the future.

  17. Damsel: A Data Model Storage Library for Exascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok; Liao, Wei-keng

    Computational science applications have been described as having one of seven motifs (the "seven dwarfs"), each having a particular pattern of computation and communication. From a storage and I/O perspective, these applications can also be grouped into a number of data model motifs describing the way data is organized and accessed during simulation, analysis, and visualization. Major storage data model efforts of the 1990s, such as the Network Common Data Form (netCDF) and Hierarchical Data Format (HDF) projects, created support for more complex data models. Development of both netCDF and HDF5 was influenced by multi-dimensional dataset storage requirements, but their access models and formats were designed with sequential storage in mind (e.g., a POSIX I/O model). Although these and other high-level I/O libraries have had a beneficial impact on large parallel applications, they do not always attain a high percentage of peak I/O performance due to fundamental design limitations, and they do not address the full range of current and future computational science data models. The goal of this project is to enable exascale computational science applications to interact conveniently and efficiently with storage through abstractions that match their data models. The project consists of three major activities: (1) identifying major data model motifs in computational science applications and developing representative benchmarks; (2) developing a data model storage library, called Damsel, that supports these motifs, provides efficient storage data layouts, incorporates optimizations to enable exascale operation, and is tolerant to failures; and (3) productizing Damsel and working with computational scientists to encourage adoption of this library by the scientific community. The product of this project, the Damsel library, is openly available for download from http://cucis.ece.northwestern.edu/projects/DAMSEL. Several case studies and an application programming interface reference are also available to assist new users in learning to use the library.

  18. Autoplot: a Browser for Science Data on the Web

    NASA Astrophysics Data System (ADS)

    Faden, J.; Weigel, R. S.; West, E. E.; Merka, J.

    2008-12-01

    Autoplot (www.autoplot.org) is software for plotting data from many different sources and in many different file formats. Data from CDF, CEF, FITS, NetCDF, and OPeNDAP can be plotted, along with many other sources such as ASCII tables and Excel spreadsheets. This is done by adapting these various data formats and APIs into a common data model that borrows from the netCDF and CDF data models. Autoplot uses a web browser metaphor to simplify use. The user specifies a parameter URL, for example a CDF file accessible via HTTP with a parameter name appended, and the file resource is downloaded and the parameter is rendered in a scientifically meaningful way. When data span multiple files, the user can use a file name template in the URL to aggregate (combine) a set of remote files. So the problem of aggregating data across file boundaries is handled on the client side, allowing simple web servers to be used. The das2 graphics library provides rich controls for exploring the data. Scripting is supported through Python, providing not just programmatic control but also the ability to calculate new parameters in a language that will look familiar to IDL and Matlab users. Autoplot is Java-based software and will run on most computers without a burdensome installation process. It can also be used as an applet or as a servlet that serves static images. Autoplot was developed as part of the Virtual Radiation Belt Observatory (ViRBO) project, and is also being used for the Virtual Magnetospheric Observatory (VMO). It is expected that this flexible, general-purpose plotting tool will be useful for allowing a data provider to add instant visualization capabilities to a directory of files or for general use in the Virtual Observatory environment.

  19. The UGRID Reader - A ParaView Plugin for the Visualization of Unstructured Climate Model Data in NetCDF Format

    NASA Astrophysics Data System (ADS)

    Brisc, Felicia; Vater, Stefan; Behrens, Joern

    2016-04-01

    We present the UGRID Reader, a visualization software component that implements the UGRID Conventions in ParaView. It currently supports the reading and visualization of 2D unstructured triangular, quadrilateral and mixed triangle/quadrilateral meshes, while the data can be defined per cell or per vertex. The Climate and Forecast Metadata Conventions (CF Conventions) have for many years been the standard framework for climate data written in NetCDF format. While they allow storing unstructured data simply as data defined at a series of points, they do not currently address the topology of the underlying unstructured mesh. However, it is often necessary to have additional mesh topology information, i.e., whether the mesh is a one-dimensional network, a 2D triangular mesh or a flexible mixed triangle/quadrilateral mesh, a 2D mesh with vertical layers, or a fully unstructured 3D mesh. The UGRID Conventions proposed by the UGRID Interoperability group attempt to fill this void by extending the CF Conventions with topology specifications. As the UGRID Conventions are increasingly popular with an important subset of the CF community, they warrant the development of a customized tool for the visualization and exploration of UGRID-conforming data. The implementation of the UGRID Reader has been designed according to the ParaView plugin architecture. This approach allowed us to tap into the powerful reading and rendering capabilities of ParaView, while keeping the reader easy to install. We aim to support parallelism in order to process large data sets. Furthermore, our current application of the reader is the visualization of higher-order simulation output, which demands a special representation of the data within a cell.
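
    For orientation, the topology information that UGRID adds on top of CF can be written with netCDF4-python roughly as follows; this is a simplified two-triangle sketch following the UGRID conventions, not output of the reader itself:

        # Simplified UGRID sketch: a dummy "mesh" variable whose attributes
        # point to node coordinates and face-node connectivity.
        from netCDF4 import Dataset

        nc = Dataset("tri_mesh.nc", "w")
        nc.createDimension("nNodes", 4)
        nc.createDimension("nFaces", 2)
        nc.createDimension("nMaxFaceNodes", 3)

        mesh = nc.createVariable("mesh", "i4")   # holds topology metadata
        mesh.cf_role = "mesh_topology"
        mesh.topology_dimension = 2
        mesh.node_coordinates = "node_x node_y"
        mesh.face_node_connectivity = "face_nodes"

        nc.createVariable("node_x", "f8", ("nNodes",))[:] = [0., 1., 1., 0.]
        nc.createVariable("node_y", "f8", ("nNodes",))[:] = [0., 0., 1., 1.]
        faces = nc.createVariable("face_nodes", "i4", ("nFaces", "nMaxFaceNodes"))
        faces.start_index = 0
        faces[:] = [[0, 1, 2], [0, 2, 3]]        # two triangles share an edge
        nc.close()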

  1. Extending netCDF and CF conventions to support enhanced Earth Observation Ontology services: the Prod-Trees project

    NASA Astrophysics Data System (ADS)

    Mazzetti, Paolo; Valentin, Bernard; Koubarakis, Manolis; Nativi, Stefano

    2013-04-01

    Access to Earth Observation products remains far from straightforward for end users in most domains. Semantically enabled search engines, generally accessible through Web portals, have been developed. They allow searching for products by selecting application-specific terms and specifying basic geographical and temporal filtering criteria. Although this mostly suits the needs of the general public, the scientific communities require more advanced and controlled means to find products. Ranges of validity, traceability (e.g., origin, applied algorithms), accuracy, and uncertainty are concepts that are typically taken into account in research activities. The Prod-Trees (Enriching Earth Observation Ontology Services using Product Trees) project will enhance the CF-netCDF product format and vocabulary to allow storing metadata that better describe the products, in particular EO products. The project will bring a standardized solution that permits annotating EO products in such a manner that official and third-party software libraries and tools will be able to search for products using advanced tags and controlled parameter names. Annotated EO products will be automatically supported by all compatible software. Because the entire product information will come from the annotations and the standards, there will be no need to integrate extra components and data structures that have not been standardized. In the course of the project, the most important and popular open-source software libraries and tools will be extended to support the proposed extensions of CF-netCDF. The results will be provided back to the respective owners and maintainers to ensure the best dissemination and adoption of the extended format. The project, funded by ESA, started in December 2012 and will end in May 2014. It is coordinated by Space Applications Services, and the Consortium includes CNR-IIA and the National and Kapodistrian University of Athens. The first activities included the elicitation of user requirements in order to identify gaps in the current CF and netCDF specifications regarding extended support for the discovery of EO data. To this end, a Validation Group has been established, including members from organizations actively using the netCDF and CF standards. A questionnaire was prepared and submitted to the Validation Group; it was designed to be filled in online but also to guide interviews. The presentation will focus on the project objectives, the first achievements (with particular reference to the results of the requirements analysis), and future plans.

  2. A Climate Statistics Tool and Data Repository

    NASA Astrophysics Data System (ADS)

    Wang, J.; Kotamarthi, V. R.; Kuiper, J. A.; Orr, A.

    2017-12-01

    Researchers at Argonne National Laboratory and collaborating organizations have generated regional-scale, dynamically downscaled climate model output using Weather Research and Forecasting (WRF) version 3.3.1 at a 12-km horizontal spatial resolution over much of North America. The WRF model is driven by boundary conditions obtained from three independent global-scale climate models and two different future greenhouse gas emission scenarios, named representative concentration pathways (RCPs). The repository of results has a temporal resolution of three hours for all the simulations, includes more than 50 variables, is stored in Network Common Data Form (NetCDF) files, and has a data volume of nearly 600 TB. A condensed 800 GB set of NetCDF files was made for selected variables most useful for climate-related planning, including daily precipitation, relative humidity, solar radiation, maximum temperature, minimum temperature, and wind. The WRF model simulations are conducted for three 10-year time periods (1995-2004, 2045-2054, and 2085-2094) and two future scenarios (RCP4.5 and RCP8.5). An open-source tool was coded using Python 2.7.8 and ESRI ArcGIS 10.3.1 programming libraries to parse the NetCDF files, compute summary statistics, and output results as GIS layers. Eight sets of summary statistics were generated as examples for the contiguous U.S. states and much of Alaska, including number of days over 90°F, number of days with a heat index over 90°F, heat waves, monthly and annual precipitation, drought, extreme precipitation, multi-model averages, and model bias. This paper will provide an overview of the project to generate the main and condensed data repositories, describe the Python tool and how to use it, present the GIS results of the computed examples, and discuss some of the ways they can be used for planning. The condensed climate data, Python tool, computed GIS results, and documentation of the work are shared on the Internet.
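
    As a flavor of the summary statistics, the sketch below counts days over 90°F per grid cell from a daily maximum-temperature file, streaming one time step at a time; it is an illustration with hypothetical file and variable names, not the project's actual tool:

        # Count days with Tmax > 90 F per cell, reading one day at a time
        # to bound memory use. File and variable names are hypothetical.
        import numpy as np
        from netCDF4 import Dataset

        THRESHOLD_K = (90.0 - 32.0) * 5.0 / 9.0 + 273.15   # 90 F in kelvin

        nc = Dataset("tmax_daily_1995-2004.nc")
        tmax = nc.variables["tmax"]            # (day, y, x), kelvin
        ndays, ny, nx = tmax.shape

        hot_days = np.zeros((ny, nx), dtype=np.int32)
        for d in range(ndays):
            hot_days += tmax[d, :, :] > THRESHOLD_K
        print("domain-mean hot days:", hot_days.mean())
        nc.close()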

  3. The GEON Integrated Data Viewer (IDV) and IRIS DMC Services Illustrate CyberInfrastructure Support for Seismic Data Visualization and Interpretation

    NASA Astrophysics Data System (ADS)

    Meertens, C.; Wier, S.; Ahern, T.; Casey, R.; Weertman, B.; Laughbon, C.

    2008-12-01

    UNAVCO and the IRIS DMC are data service partners for seismic visualization, particularly for hypocentral data and tomography. UNAVCO provides the GEON Integrated Data Viewer (IDV), an extension of the Unidata IDV: a free, interactive, research-level software display and analysis tool for data in 3D (latitude, longitude, depth) and 4D (with time), located on or inside the Earth. The GEON IDV is designed to meet the challenge of investigating complex, multi-variate, time-varying, three-dimensional geoscience data in the context of new remote and shared data sources. The GEON IDV supports data access from data sources using HTTP and FTP servers, OPeNDAP servers, THREDDS catalogs, RSS feeds, and WMS (web map) servers. The IRIS DMC (Data Management Center) has developed web services providing earthquake hypocentral data and seismic tomography model grids. These services can be called by the GEON IDV to access data at IRIS without copying files. The IRIS Earthquake Browser (IEB) is a web-based query tool for hypocentral data. The IEB combines the DMC's large database of more than 1,900,000 earthquakes with the Google Maps web interface. With the IEB you can quickly find earthquakes in any region of the globe and then import this information into the GEON Integrated Data Viewer, where the hypocenters may be visualized. You can select earthquakes by location region, time, depth, and magnitude. The IEB gives the IDV a URL to the selected data. The IDV then shows the data as maps or 3D displays, with interactive control of vertical scale, area, and map projection, and with symbol size and color controlled by magnitude or depth. The IDV can show progressive time animation of, for example, aftershocks filling a source region. The IRIS Tomoserver converts seismic tomography model output grids to NetCDF for use in the IDV. The Tomoserver accepts a tomographic model file as input from a user and provides an equivalent NetCDF file as output. The service supports NA04, S3D, A1D and CUB input file formats, contributed by their respective creators. The NetCDF file is saved to a location that can be referenced with a URL on an IRIS server. The URL for the NetCDF file is provided to the user. The user can download the data from IRIS, or copy the URL directly into the IDV for interpretation, and the IDV will access the data at IRIS. The Tomoserver conversion software was developed by Instrumental Software Technologies, Inc. Use cases with the GEON IDV and IRIS DMC data services will be shown.

  4. The survey on data format of Earth observation satellite data at JAXA.

    NASA Astrophysics Data System (ADS)

    Matsunaga, M.; Ikehata, Y.

    2017-12-01

    JAXA's Earth observation satellite data are distributed by a portal web site for search and delivery called "G-Portal". Users can download the satellite data of GPM, TRMM, Aqua, ADEOS-II, ALOS (search only), ALOS-2 (search only), MOS-1, MOS-1b, ERS-1 and JERS-1 from G-Portal. However, the data formats differ by satellite (HDF4, HDF5, NetCDF4, CEOS, etc.), and these formats are not familiar to new data users. Although HDF-type self-describing formats are very convenient and useful for large datasets, the old-type format products are not readable by open GIS tools, nor do they follow OGC standards. Recently, satellite data have been widely applied to various needs such as disaster response, earth resources, monitoring of the global environment, Geographic Information Systems (GIS), and so on. In order to remove a barrier to using Earth satellite data for new community users, JAXA has been providing format-converted products such as GeoTIFF or KMZ. In addition, JAXA provides the format conversion tool itself. We investigate the trends in data formats for data archiving, dissemination, and utilization; study how to improve the current product formats for users in various application fields; and make a recommendation for a new product.

  5. Data Access Services that Make Remote Sensing Data Easier to Use

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher

    2010-01-01

    This slide presentation reviews some of the processes that NASA uses to make remote sensing data easy to use over the World Wide Web. This work involves much research into data formats, geolocation structures, and quality indicators, often followed by coding a preprocessing program. Only then are the data usable within the analysis tool of choice. The Goddard Earth Sciences Data and Information Services Center is deploying a variety of data access services that are designed to dramatically shorten the time consumed in the data preparation step. On-the-fly conversion to the standard Network Common Data Form (netCDF) format with Climate and Forecast (CF) conventions imposes a standard coordinate system framework that makes data instantly readable through several tools, such as the Integrated Data Viewer, the Grid Analysis and Display System, Panoply, and Ferret. A similar benefit is achieved by serving data through the Open Source Project for a Network Data Access Protocol (OPeNDAP), which also provides subsetting. The Data Quality Screening Service goes a step further, filtering out data points based on quality control flags, following science team recommendations or user-specified criteria. Further still is the Giovanni online analysis system, which goes beyond handling formatting and quality to provide visualization and basic statistics of the data. This general approach of automating the preparation steps has the important added benefit of enabling use of the data by non-human users (i.e., computer programs), which often make sub-optimal use of the available data due to the need to hard-code data preparation on the client side.

  6. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
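
    The alternating shave/set quantization can be sketched in a few lines of numpy for float32 data; this is an illustration of the idea only, and the actual NCO implementation differs in detail (bit budgeting, special values, per-variable options):

        # Minimal numpy sketch of the Bit Grooming idea for float32 data:
        # alternately zero and set trailing mantissa bits, then let a
        # lossless compressor (e.g. DEFLATE) shrink the result.
        import numpy as np

        def bit_groom(values, nsd=3):
            keep = int(np.ceil(nsd * np.log2(10))) + 1   # mantissa bits kept
            drop = 23 - keep                   # float32 has 23 explicit bits
            if drop <= 0:
                return np.asarray(values, dtype=np.float32)
            flat = np.ascontiguousarray(values, dtype=np.float32).ravel().copy()
            bits = flat.view(np.uint32)        # reinterpret bit patterns
            mask = np.uint32((0xFFFFFFFF >> drop) << drop)
            bits[0::2] &= mask                             # shave: zeros
            bits[1::2] |= np.uint32(~mask & 0xFFFFFFFF)    # set: ones
            return flat.reshape(np.shape(values))

        x = np.linspace(0.0, 1.0, 8, dtype=np.float32)
        print(bit_groom(x, nsd=2))

    Alternating shave and set is what removes the low bias of plain Bit Shaving: the quantization errors of consecutive values tend to cancel in mean statistics.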

  7. Common Data Format (CDF) and Coordinated Data Analysis Web (CDAWeb)

    NASA Technical Reports Server (NTRS)

    Candey, Robert M.

    2010-01-01

    The Coordinated Data Analysis Web (CDAWeb) data browsing system provides plotting, listing and open access via FTP, HTTP, and web services (REST, SOAP, OPeNDAP) for data from most NASA Heliophysics missions and is heavily used by the community. Combining data from many instruments and missions enables broad research analysis and correlation and coordination with other experiments and missions. Crucial to its effectiveness is the use of a standard self-describing data format, in this case the Common Data Format (CDF), also developed at the Space Physics Data Facility, and the use of metadata standards (easily edited with SKTeditor). CDAWeb is based on a set of IDL routines, CDAWlib. The CDF project also maintains software and services for translating between many standard formats (CDF, netCDF, HDF, FITS, XML).

  8. MetaQuant: a tool for the automatic quantification of GC/MS-based metabolome data.

    PubMed

    Bunk, Boyke; Kucklick, Martin; Jonas, Rochus; Münch, Richard; Schobert, Max; Jahn, Dieter; Hiller, Karsten

    2006-12-01

    MetaQuant is a Java-based program for the automatic and accurate quantification of GC/MS-based metabolome data. In contrast to other programs, MetaQuant is able to quantify hundreds of substances simultaneously with minimal manual intervention. The integration of a self-acting calibration function allows fast, parallel calibration for several metabolites. Finally, MetaQuant is able to import GC/MS data in the common NetCDF format and to export the results of the quantification into Systems Biology Markup Language (SBML), Comma Separated Values (CSV) or Microsoft Excel (XLS) format. MetaQuant is written in Java and is available under an open source license. Precompiled packages for installation on Windows or Linux operating systems are freely available for download. The source code as well as the installation packages are available at http://bioinformatics.org/metaquant

  9. Task 28: Web Accessible APIs in the Cloud Trade Study

    NASA Technical Reports Server (NTRS)

    Gallagher, James; Habermann, Ted; Jelenak, Aleksandar; Lee, Joe; Potter, Nathan; Yang, Muqun

    2017-01-01

    This study explored three candidate architectures for serving NASA Earth Science Hierarchical Data Format Version 5 (HDF5) data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance of each architecture using several representative use cases. The objectives of the project are to: (1) conduct a trade study to identify one or more high-performance integrated solutions for storing and retrieving NASA HDF5 and Network Common Data Format Version 4 (netCDF4) data in a cloud (web object store) environment, with the target environment being the Amazon Web Services (AWS) Simple Storage Service (S3); (2) conduct the level of software development needed to properly evaluate solutions in the trade study and to obtain the benchmarking metrics required as input to a government decision on potential follow-on prototyping; and (3) develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades.

  10. GOME/ERS-2: New Homogeneous Level 1B Data from an Old Instrument

    NASA Astrophysics Data System (ADS)

    Slijkhuis, S.; Aberle, B.; Coldewey-Egbers, M.; Loyola, D.; Dehn, A.; Fehr, T.

    2015-11-01

    In the framework of ESA's "GOME Evolution Project", a reprocessing will be made of the entire 16-year GOME Level 1 dataset. The GOME Evolution Project further includes the generation of a new GOME water vapour product and a public outreach programme. In this paper we will describe the reprocessing of the Level 1 data, carried out with the latest version of the GOME Data Processor at DLR. The change most visible to the user will be the new product format in NetCDF, plus supporting documentation (ATBD and PUM). Full-mission reprocessed L1b data are expected to be released in the fourth quarter of 2015.

  11. Web-based CERES Clouds QC Property Viewing Tool

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Minnis, P.

    2014-12-01

    This presentation will display the capabilities of a web-based CERES cloud property viewer, using Terra data for its examples. It will demonstrate viewing of cloud properties in gridded global maps, histograms, time series displays, latitudinal zonal images, binned data charts, data frequency graphs, and ISCCP plots. Users can manipulate images to narrow the map boundaries, adjust color bars and value ranges, compare datasets, view data values, and more. Other atmospheric studies groups will be encouraged to put their data into the underlying NetCDF data format and view it with the tool. A laptop should be available to allow conference attendees to try navigating the tool.

  12. Construction of Hierarchical Models for Fluid Dynamics in Earth and Planetary Sciences : DCMODEL project

    NASA Astrophysics Data System (ADS)

    Takahashi, Y. O.; Takehiro, S.; Sugiyama, K.; Odaka, M.; Ishiwatari, M.; Sasaki, Y.; Nishizawa, S.; Ishioka, K.; Nakajima, K.; Hayashi, Y.

    2012-12-01

    Toward the understanding of fluid motions of planetary atmospheres and planetary interiors through multiple numerical experiments with multiple models, we are now proceeding with the ``dcmodel project'', in which a series of hierarchical numerical models of various complexity is developed and maintained. In the ``dcmodel project'', the numerical models are developed with attention to the following points: 1) a common ``style'' of program codes assuring readability of the software, 2) open-source codes available to the public, 3) scalability of the models assuring execution on various scales of computational resources, 4) stress on the importance of documentation and a method for writing reference manuals. The lineup of the models and utility programs of the project is as follows: Gtool5, ISPACK/SPML, SPMODEL, Deepconv, Dcpam, and Rdoc-f95. In the following, the features of each component are briefly described. Gtool5 (Ishiwatari et al., 2012) is a Fortran90 library which provides data input/output interfaces and various utilities commonly used in the models of the dcmodel project. The self-descriptive data format netCDF is adopted as the IO format of Gtool5. The interfaces of the gtool5 library can reduce the number of operation steps for data IO in the program code of the models compared with the interfaces of the raw netCDF library. Further, by use of the gtool5 library, procedures for data IO and the addition of metadata for post-processing can be easily implemented in program codes in a consolidated form independent of the size and complexity of the models. ``ISPACK'' is a spectral transformation library and ``SPML (SPMODEL library)'' (Takehiro et al., 2006) is its wrapper library. The most prominent feature of SPML is a series of array-handling functions with systematic function-naming rules, which enables us to write codes in a form easily deduced from the mathematical expressions of the governing equations. ``SPMODEL'' (Takehiro et al., 2006) is a collection of various sample programs using ``SPML''. These sample programs provide the base kit for simple numerical experiments of geophysical fluid dynamics. For example, SPMODEL includes a 1-dimensional KdV equation model; 2-dimensional barotropic, shallow-water, and Boussinesq models; and 3-dimensional MHD dynamo models in rotating spherical shells. These models are written in a common style in harmony with SPML functions. ``Deepconv'' (Sugiyama et al., 2010) and ``Dcpam'' are a cloud-resolving model and a general circulation model for application to planetary atmospheres, respectively. ``Deepconv'' includes several physical processes appropriate for simulations of the Jupiter and Mars atmospheres, while ``Dcpam'' does for simulations of Earth, Mars, and Venus-like atmospheres. ``Rdoc-f95'' is an automatic generator of reference manuals for Fortran90/95 programs, an extension of the Ruby documentation toolkit ``rdoc''. It analyzes the dependencies of modules, functions, and subroutines in multiple program source codes. At the same time, it can list the namelist variables in the programs.

  13. Workflow-Oriented Cyberinfrastructure for Sensor Data Analytics

    NASA Astrophysics Data System (ADS)

    Orcutt, J. A.; Rajasekar, A.; Moore, R. W.; Vernon, F.

    2015-12-01

    Sensor streams comprise an increasingly large part of Earth Science data. Analytics based on sensor data require an easy way to perform operations such as acquisition, conversion to physical units, metadata linking, sensor fusion, analysis and visualization on distributed sensor streams. Furthermore, embedding real-time sensor data into scientific workflows is of growing interest. We have implemented a scalable networked architecture that can be used to dynamically access packets of data in a stream from multiple sensors, and perform synthesis and analysis across a distributed network. Our system is based on the integrated Rule Oriented Data System (irods.org), which accesses sensor data from the Antelope Real Time Data System (brtt.com), and provides virtualized access to collections of data streams. We integrate real-time data streaming from different sources, collected for different purposes, on different time and spatial scales, and sensed by different methods. iRODS, noted for its policy-oriented data management, brings to sensor processing features and facilities such as single sign-on, third-party access control lists (ACLs), location transparency, logical resource naming, and server-side modeling capabilities, while reducing the burden on sensor network operators. Rich integrated metadata support also makes it straightforward to discover data streams of interest and maintain data provenance. The workflow support in iRODS readily integrates sensor processing into any analytical pipeline. The system is developed as part of the NSF-funded DataNet Federation Consortium (datafed.org). APIs for selecting, opening, reaping and closing sensor streams are provided, along with other helper functions to associate metadata and convert sensor packets into NetCDF and JSON formats. Near-real-time sensor data, including seismic sensors, environmental sensors, LIDAR and video streams, are available through this interface. A system for archiving sensor data and metadata in NetCDF format has been implemented and will be demonstrated at AGU.
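
    The helper APIs themselves are not spelled out in the abstract; a generic sketch of the packet-conversion step (hypothetical packet layout, and assuming an existing NetCDF file with an unlimited time dimension) might look like:

        # Generic illustration, not the actual DFC/iRODS helper API:
        # serialize one sensor packet to JSON and append its samples to a
        # NetCDF time-series file with an unlimited "time" dimension.
        import json
        from netCDF4 import Dataset

        packet = {"station": "B001", "t0": 1436400000.0, "rate_hz": 1.0,
                  "samples": [3.2, 3.1, 3.4]}     # hypothetical layout

        with open("packet.json", "w") as f:
            json.dump(packet, f)

        nc = Dataset("B001_stream.nc", "a")       # pre-existing file assumed
        v, t = nc.variables["value"], nc.variables["time"]
        n = len(t)                                # append past current end
        for i, s in enumerate(packet["samples"]):
            t[n + i] = packet["t0"] + i / packet["rate_hz"]
            v[n + i] = s
        nc.close()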

  14. The Ophidia framework: toward cloud-based data analytics for climate change

    NASA Astrophysics Data System (ADS)

    Fiore, Sandro; D'Anca, Alessandro; Elia, Donatello; Mancini, Marco; Mariello, Andrea; Mirto, Maria; Palazzo, Cosimo; Aloisio, Giovanni

    2015-04-01

    The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in the climate change domain. It provides parallel (server-side) data analysis, an internal storage model, and a hierarchical data organization to manage large amounts of multidimensional scientific data. The Ophidia analytics platform provides several MPI-based parallel operators to manipulate large datasets (data cubes) and array-based primitives to perform data analysis on large arrays of scientific data. The most relevant data analytics use cases implemented in national and international projects target fire danger prevention (OFIDIA), interactions between climate change and biodiversity (EUBrazilCC), climate indicators and remote data analysis (CLIP-C), sea situational awareness (TESSA), and large-scale data analytics on CMIP5 data in NetCDF format, compliant with the Climate and Forecast (CF) convention (ExArch). Two use cases, from the EU FP7 EUBrazil Cloud Connect and the INTERREG OFIDIA projects, will be presented during the talk. In the former (EUBrazilCC), the Ophidia framework is being extended to integrate scalable VM-based solutions for the management of large volumes of scientific data (both climate and satellite data) in a cloud-based environment, to study how climate change affects biodiversity. In the latter (OFIDIA), the data analytics framework is being exploited to provide operational support for processing chains devoted to fire danger prevention. To tackle the project challenges, data analytics workflows consisting of about 130 operators perform, among other things, parallel data analysis, metadata management, virtual file system tasks, map generation, rolling of datasets, and import/export of datasets in NetCDF format. Finally, the entire Ophidia software stack has been deployed at CMCC on 24 nodes (16 cores/node) of the Athena HPC cluster. Moreover, a cloud-based release tested with OpenNebula is also available and running in the private cloud infrastructure of the CMCC Supercomputing Centre.

  15. ESA Atmospheric Toolbox

    NASA Astrophysics Data System (ADS)

    Niemeijer, Sander

    2017-04-01

    The ESA Atmospheric Toolbox (BEAT) is one of the ESA Sentinel Toolboxes. It consists of a set of software components to read, analyze, and visualize a wide range of atmospheric data products. In addition to the upcoming Sentinel-5P mission, it supports a wide range of other atmospheric data products, including those of previous ESA missions, ESA Third Party Missions, the Copernicus Atmosphere Monitoring Service (CAMS), ground-based data, etc. The toolbox consists of three main components, called CODA, HARP and VISAN. CODA provides interfaces for direct reading of data from Earth observation data files. These interfaces consist of command-line applications, libraries, direct interfaces to scientific applications (IDL and MATLAB), and direct interfaces to programming languages (C, Fortran, Python, and Java). CODA provides a single interface to access data in a wide variety of data formats, including ASCII, binary, XML, netCDF, HDF4, HDF5, CDF, GRIB, RINEX, and SP3. HARP is a toolkit for reading, processing and inter-comparing satellite remote sensing data, model data, in-situ data, and ground-based remote sensing data. The main goal of HARP is to assist in the inter-comparison of datasets. By appropriately chaining calls to HARP command-line tools, one can pre-process datasets such that two datasets that need to be compared end up having the same temporal/spatial grid, the same data format/structure, and the same physical units. The toolkit comes with its own data format conventions, the HARP format, which is based on netCDF/HDF. Ingestion routines (based on CODA) allow conversion from a wide variety of atmospheric data products to this common format. In addition, the toolbox provides a wide range of operations to perform conversions on the data, such as unit conversions, quantity conversions (e.g., number density to volume mixing ratios), regridding, vertical smoothing using averaging kernels, collocation of two datasets, etc. VISAN is a cross-platform visualization and analysis application for atmospheric data and can be used to visualize and analyze the data that you retrieve using the CODA and HARP interfaces. The application uses the Python language as the means through which you provide commands to the application. The Python interfaces for CODA and HARP are included, so you can directly ingest product data from within VISAN. Powerful visualization functionality for 2D plots and geographical plots in VISAN allows you to directly visualize the ingested data. All components of the ESA Atmospheric Toolbox are open source and freely available. Software packages can be downloaded from the BEAT website: http://stcorp.nl/beat/

  16. Comparing NetCDF and SciDB on managing and querying 5D hydrologic dataset

    NASA Astrophysics Data System (ADS)

    Liu, Haicheng; Xiao, Xiao

    2016-11-01

    Efficiently extracting information from high-dimensional hydro-meteorological modelling datasets requires smart solutions. Traditional methods are mostly based on files, which can be edited and accessed handily but suffer from efficiency problems due to their contiguous storage structure. Others propose databases as an alternative, for advantages such as native functionality for manipulating multidimensional (MD) arrays, smart caching strategies, and scalability. In this research, NetCDF file-based solutions and the multidimensional array database management system (DBMS) SciDB, which applies a chunked storage structure, are benchmarked to determine the best solution for storing and querying a large 5D hydrologic modelling dataset. The effect of data storage configurations, including chunk size, dimension order, and compression, on query performance is explored. Results indicate that the dimension order used to organize storage of the 5D data has a significant influence on query performance if the chunk size is very large, but the effect becomes insignificant when the chunk size is properly set. SciDB's compression mostly has a negative influence on query performance. Caching is an advantage but may be influenced by the execution of different query processes. On the whole, the NetCDF solution without compression is generally more efficient than the SciDB DBMS.
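
    A small netCDF4-python sketch of the kind of chunked storage configuration benchmarked above; the dimension sizes, chunk shape, and variable name are invented for illustration.

        import numpy as np
        from netCDF4 import Dataset

        with Dataset("hydro5d.nc", "w", format="NETCDF4") as nc:
            for name, size in [("run", 4), ("time", 48), ("lev", 8),
                               ("lat", 50), ("lon", 50)]:
                nc.createDimension(name, size)
            v = nc.createVariable(
                "soil_moisture", "f4", ("run", "time", "lev", "lat", "lon"),
                chunksizes=(1, 24, 8, 25, 25),  # align chunks with expected queries
                zlib=False)                     # compression often hurt query times here
            v[:] = np.random.rand(4, 48, 8, 50, 50).astype("f4")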

  17. LASP Time Series Server (LaTiS): Overcoming Data Access Barriers via a Common Data Model in the Middle Tier (Invited)

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Wilson, A.

    2010-12-01

    The Laboratory for Atmospheric and Space Physics at the University of Colorado has developed an open source, OPeNDAP-compliant, Java Servlet-based, RESTful web service to serve time series data. In addition to handling OPeNDAP-style requests and returning standard responses, existing modules for alternate output formats can be reused or customized. It is also simple to reuse or customize modules to directly read various native data sources and even to perform some processing on the server. The server is built around a common data model based on the Unidata Common Data Model (CDM), which merges the NetCDF, HDF, and OPeNDAP data models. The server framework features a modular architecture that supports pluggable Readers, Writers, and Filters via the common interface to the data, enabling a workflow that reads data from its native form, performs some processing on the server, and presents the results to the client in the client's preferred form. The service is currently being used operationally to serve time series data for the LASP Interactive Solar Irradiance Data Center (LISIRD, http://lasp.colorado.edu/lisird/) and as part of the Time Series Data Server (TSDS, http://tsds.net/). I will present the data model and how it enables reading, writing, and processing concerns to be separated into loosely coupled components. I will also share thoughts on evolving beyond the time series abstraction and providing a general purpose data service that can be orchestrated into larger workflows.
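
    Because the service is OPeNDAP compliant, generic clients can read from it directly; a hedged sketch follows, in which the dataset URL and variable names are hypothetical, and which requires a netCDF library built with DAP support.

        from netCDF4 import Dataset

        # open a remote OPeNDAP dataset (hypothetical LISIRD-style URL)
        ds = Dataset("https://lasp.colorado.edu/lisird/latis/dap/example_tsi_dataset")
        time = ds.variables["time"][:]          # hypothetical variable names
        irradiance = ds.variables["irradiance"][:]
        ds.close()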

  18. Development of a Multilayer MODIS IST-Albedo Product of Greenland

    NASA Technical Reports Server (NTRS)

    Hall, D. K.; Comiso, J. C.; Cullather, R. I.; Digirolamo, N. E.; Nowicki, S. M.; Medley, B. C.

    2017-01-01

    A new multilayer ice surface temperature (IST) and albedo Moderate Resolution Imaging Spectroradiometer (MODIS) product of Greenland was developed to meet the needs of the ice sheet modeling community. The multiple layers of the product enable the relationship between IST and albedo to be evaluated easily. Surface temperature is a fundamental input for dynamical ice sheet models because it is a component of the ice sheet radiation budget and mass balance. Albedo influences absorption of incoming solar radiation. The daily product combines the existing standard MODIS Collection-6 ice-surface temperature, derived melt maps, snow albedo, and water vapor products. The new product is available in a polar stereographic projection in NetCDF format. The product will ultimately extend from March 2000 through the end of 2017.

  19. Bit Grooming: Statistically accurate precision-preserving quantization with compression, evaluated in the netCDF operators (NCO, v4.4.8+)

    DOE PAGES

    Zender, Charles S.

    2016-09-19

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80% and 5–65%, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
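
    A minimal NumPy sketch of the shave/set idea described above, simplified from the published algorithm: real implementations skip missing values and handle NaN/Inf, and the mapping from decimal digits to mantissa bits is not shown here.

        import numpy as np

        def bit_groom(values, keep_bits):
            """Alternately shave (zero) and set (one) the trailing mantissa
            bits of consecutive float32 values, so quantization errors
            largely cancel in means. keep_bits must be in 1..23."""
            f = np.ascontiguousarray(values, dtype=np.float32).copy()
            bits = f.reshape(-1).view(np.uint32)
            mask = np.uint32((1 << (23 - keep_bits)) - 1)  # trailing mantissa bits
            even = np.arange(bits.size) % 2 == 0
            bits[even] &= ~mask    # shave: biases these values slightly low
            bits[~even] |= mask    # set: biases these values slightly high
            return f

        # ~12 kept mantissa bits corresponds to roughly 3-4 decimal digits
        groomed = bit_groom(np.array([273.151234, 273.152468, 273.153579]), 12)

    The groomed array then compresses much better under DEFLATE (e.g., zlib in netCDF4) because the trailing bits are no longer random.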

  20. Long-term oceanographic observations in Massachusetts Bay, 1989-2006

    USGS Publications Warehouse

    Butman, Bradford; Alexander, P. Soupy; Bothner, Michael H.; Borden, Jonathan; Casso, Michael A.; Gutierrez, Benjamin T.; Hastings, Mary E.; Lightsom, Frances L.; Martini, Marinna A.; Montgomery, Ellyn T.; Rendigs, Richard R.; Strahle, William S.

    2009-01-01

    This data report presents long-term oceanographic observations made in western Massachusetts Bay at long-term site A (LT-A) (42 deg 22.6' N., 70 deg 47.0' W.; nominal water depth 32 meters) from December 1989 through February 2006 and long-term site B (LT-B) (42 deg 9.8' N., 70 deg 38.4' W.; nominal water depth 22 meters) from October 1997 through February 2004 (fig. 1). The observations were collected as part of a U.S. Geological Survey (USGS) study designed to understand the transport and long-term fate of sediments and associated contaminants in Massachusetts Bay. The observations include time-series measurements of current, temperature, salinity, light transmission, pressure, oxygen, fluorescence, and sediment-trapping rate. About 160 separate mooring or tripod deployments were made on about 90 research cruises to collect these long-term observations. This report presents a description of the 16-year field program and the instrumentation used to make the measurements, an overview of the data set, more than 2,500 pages of statistics and plots that summarize the data, and the digital data in Network Common Data Form (NetCDF) format. This research was conducted by the USGS in cooperation with the Massachusetts Water Resources Authority and the U.S. Coast Guard.

  1. GIS Services, Visualization Products, and Interoperability at the National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center (NCDC)

    NASA Astrophysics Data System (ADS)

    Baldwin, R.; Ansari, S.; Reid, G.; Lott, N.; Del Greco, S.

    2007-12-01

    The main goal in developing and deploying Geographic Information System (GIS) services at NOAA's National Climatic Data Center (NCDC) is to provide users with simple access to data archives while integrating new and informative climate products. Several systems at NCDC provide a variety of climatic data in GIS formats and/or map viewers. The Online GIS Map Services provide users with data discovery options which flow into detailed product selection maps, which may be queried using standard "region finder" tools or gazetteer (geographical dictionary search) functions. Each tabbed selection offers steps to help users progress through the systems. A series of additional base map layers or data types have been added to provide companion information. New map services include: Severe Weather Data Inventory, Local Climatological Data, Divisional Data, Global Summary of the Day, and Normals/Extremes products. THREDDS Data Server technology is utilized to provide access to gridded multidimensional datasets such as Model, Satellite, and Radar. This access allows users to download data as a gridded NetCDF file, which is readable by ArcGIS. In addition, users may subset the data for a specific geographic region, time period, height range, or variable prior to download. The NCDC Weather Radar Toolkit (WRT) is a client tool which accesses Weather Surveillance Radar 1988 Doppler (WSR-88D) data locally or remotely from the NCDC archive, the NOAA FTP server, or any URL or THREDDS Data Server. The WRT Viewer provides tools for custom data overlays, Web Map Service backgrounds, animations, and basic filtering. The export of images and movies is provided in multiple formats. The WRT Data Exporter allows for data export in both vector polygon (Shapefile, Well-Known Text) and raster (GeoTIFF, ESRI Grid, VTK, NetCDF, GrADS) formats. As more users become accustomed to GIS, questions of better, cheaper, faster access soon follow. Expanding use and availability can best be accomplished through standards which promote interoperability. Our GIS-related products provide Open Geospatial Consortium (OGC) compliant Web Map Services (WMS), Web Feature Services (WFS), Web Coverage Services (WCS), and Federal Geographic Data Committee (FGDC) metadata as a complement to the map viewers. KML/KMZ data files (soon to be compliant OGC specifications) also provide access.

  2. An Adaptable Seismic Data Format for Modern Scientific Workflows

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Bozdag, E.; Krischer, L.; Lefebvre, M.; Lei, W.; Podhorszki, N.; Tromp, J.

    2013-12-01

    Data storage, exchange, and access play a critical role in modern seismology. Current seismic data formats, such as SEED, SAC, and SEG-Y, were designed with specific applications in mind and are frequently a major bottleneck in implementing efficient workflows. We propose a new, modern parallel format that can be adapted for a variety of seismic workflows. The Adaptable Seismic Data Format (ASDF) features high-performance parallel read and write support and the ability to store an arbitrary number of traces of varying sizes. Provenance information is stored inside the file so that users know the origin of the data as well as the precise operations that have been applied to the waveforms. The design of the new format is based on several real-world use cases, including earthquake seismology and seismic interferometry. The metadata is based on the proven XML schemas StationXML and QuakeML. Existing time-series analysis toolkits are easily interfaced with this new format so that seismologists can use robust, previously developed software packages, such as ObsPy and the SAC library. ADIOS, netCDF4, and HDF5 can be used as the underlying container format. At Princeton University, we have chosen to use ADIOS as the container format because it has shown superior scalability for certain applications, such as dealing with big data on HPC systems. In the context of high-performance computing, we have implemented ASDF into the global adjoint tomography workflow on Oak Ridge National Laboratory's supercomputer Titan.
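
    As a hedged sketch of how a reader/writer for this format can look in practice, the later open-source pyasdf library (by one of the authors) writes waveforms and station metadata into a single ASDF/HDF5 container; ObsPy's built-in example data keep the snippet self-contained.

        import obspy
        import pyasdf

        ds = pyasdf.ASDFDataSet("example_asdf.h5")  # creates the container file
        st = obspy.read()                   # ObsPy's bundled example waveforms
        ds.add_waveforms(st, tag="raw_recording")
        inv = obspy.read_inventory()        # bundled example StationXML inventory
        ds.add_stationxml(inv)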

  3. SysSon - A Framework for Systematic Sonification Design

    NASA Astrophysics Data System (ADS)

    Vogt, Katharina; Goudarzi, Visda; Holger Rutz, Hanns

    2015-04-01

    SysSon is a research approach for systematically introducing sonification to a scientific community where it is not yet commonly used, e.g., climate science. Both technical and socio-cultural barriers have to be addressed. The approach was developed further with climate scientists, who participated in contextual inquiries, usability tests, and a collaborative design workshop. Our final software framework resulted from these extensive user tests. As a frontend, a graphical user interface allows climate scientists to parametrize standard sonifications with their own data sets. Additionally, an interactive shell allows users competent in sound design to code new sonifications. The framework is a standalone desktop application, available as open source (for details see http://sysson.kug.ac.at/), and works with data in NetCDF format.

  4. The PEcAn Project: Accessible Tools for On-demand Ecosystem Modeling

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Kooper, R.; LeBauer, D.; Desai, A. R.; Mantooth, J.; Dietze, M.

    2014-12-01

    Ecosystem models play a critical role in understanding the terrestrial biosphere and forecasting changes in the carbon cycle; however, current forecasts have considerable uncertainty. The amount of data being collected and produced is increasing on a daily basis as we enter the "big data" era, but only a fraction of this data is being used to constrain models. Until we can improve the problems of model accessibility and model-data communication, none of these resources can be used to their full potential. The Predictive Ecosystem Analyzer (PEcAn) is an ecoinformatics toolbox and a set of workflows that wrap around an ecosystem model and manage the flow of information in and out of regional-scale terrestrial biosphere models (TBMs). Here we present new modules developed in PEcAn to manage the processing of meteorological data, one of the primary driver dependencies for ecosystem models. The module downloads, reads, extracts, and converts meteorological observations to the Climate and Forecast (CF) NetCDF community standard, a convention used for most climate forecast and weather models. The module also automates the conversion from NetCDF to model-specific formats, including basic merging, gap-filling, and downscaling procedures. PEcAn currently supports tower-based micrometeorological observations at Ameriflux and FluxNET sites, site-level CSV-formatted data, and regional and global reanalysis products such as the North American Regional Reanalysis and CRU-NCEP. The workflow is easily extensible to additional products and processing algorithms. These meteorological workflows have been coupled with the PEcAn web interface and now allow anyone to run multiple ecosystem models for any location on the Earth by simply clicking on an intuitive Google-map based interface. This will allow users to more readily compare models to observations at those sites, leading to better calibration and validation. Current work is extending these workflows to also process field, remotely sensed, and historical observations of vegetation composition and structure. The processing of heterogeneous met and veg data within PEcAn is made possible using the Brown Dog cyberinfrastructure tools for unstructured data.
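
    A hedged sketch of the CSV-to-CF conversion step described above (not PEcAn's actual code): the input file, column names, and variable choices are invented for illustration.

        import pandas as pd
        from netCDF4 import Dataset, date2num

        met = pd.read_csv("tower_met.csv", parse_dates=["timestamp"])  # hypothetical file

        with Dataset("met_cf.nc", "w") as nc:
            nc.createDimension("time", len(met))
            t = nc.createVariable("time", "f8", ("time",))
            t.units = "hours since 2000-01-01 00:00:00"
            t.calendar = "standard"
            t[:] = date2num(met["timestamp"].dt.to_pydatetime(), t.units, t.calendar)

            tas = nc.createVariable("air_temperature", "f4", ("time",))
            tas.standard_name = "air_temperature"  # CF standard name
            tas.units = "K"                        # CF canonical unit
            tas[:] = met["temp_c"].to_numpy() + 273.15  # hypothetical column, degC to K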

  5. Usability and Interoperability Improvements for an EASE-Grid 2.0 Passive Microwave Data Product Using CF Conventions

    NASA Astrophysics Data System (ADS)

    Hardman, M.; Brodzik, M. J.; Long, D. G.

    2017-12-01

    Beginning in 1978, the satellite passive microwave data record has been a mainstay of remote sensing of the cryosphere, providing twice-daily, near-global spatial coverage for monitoring changes in hydrologic and cryospheric parameters that include precipitation, soil moisture, surface water, vegetation, snow water equivalent, sea ice concentration, and sea ice motion. Historical versions of the gridded passive microwave data sets were produced as flat binary files described in human-readable documentation. This format is error-prone and makes it difficult to reliably include all processing and provenance. Funded by NASA MEaSUREs, we have completely reprocessed the gridded data record that includes SMMR, SSM/I-SSMIS, and AMSR-E. The new Calibrated Enhanced-Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) files are self-describing. Our approach to the new data set was to create netCDF4 files that use standard metadata conventions and best practices to incorporate file-level, machine- and human-readable contents, geolocation, processing, and provenance metadata. We followed the flexible and adaptable Climate and Forecast (CF-1.6) Conventions with respect to their coordinate conventions and map projection parameters. Additionally, we made use of the Attribute Conventions for Dataset Discovery (ACDD-1.3), which provide file-level conventions with spatio-temporal bounds that enable indexing software to search for coverage. Our CETB files also include temporal coverage and spatial resolution in the file-level metadata for human readability. We made use of the JPL CF/ACDD Compliance Checker to guide this work. We tested our file format with real software, for example, the netCDF Operators (NCO) power tools for unlimited control of spatio-temporal subsetting and concatenation of files. The GDAL tools understand the CF metadata and produce fully compliant GeoTIFF files from our data. ArcMap can then reproject the GeoTIFF files on the fly and work with other geolocated data such as coastlines, with no special work required. We expect this combination of standards and well-tested interoperability to significantly improve the usability of this important ESDR for the Earth Science community.
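
    A sketch of the file-level metadata pattern described above, appending CF and ACDD attributes to an existing file with netCDF4-python; the file name and all attribute values are invented examples, not the real CETB metadata.

        from netCDF4 import Dataset

        with Dataset("cetb_example.nc", "a") as nc:   # hypothetical existing file
            nc.Conventions = "CF-1.6, ACDD-1.3"
            nc.title = "Example enhanced-resolution brightness temperature grid"
            nc.summary = "Illustrative ACDD attributes only."
            nc.geospatial_lat_min = -90.0
            nc.geospatial_lat_max = 90.0
            nc.geospatial_lon_min = -180.0
            nc.geospatial_lon_max = 180.0
            nc.time_coverage_start = "2003-01-01T00:00:00Z"
            nc.time_coverage_end = "2003-01-01T23:59:59Z"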

  6. A Columnar Storage Strategy with Spatiotemporal Index for Big Climate Data

    NASA Astrophysics Data System (ADS)

    Hu, F.; Bowen, M. K.; Li, Z.; Schnase, J. L.; Duffy, D.; Lee, T. J.; Yang, C. P.

    2015-12-01

    Large collections of observational, reanalysis, and climate model output data may grow to as large as 100 PB in the coming years, so climate datasets are in the Big Data domain, and various distributed computing frameworks have been utilized to address the challenges of big climate data analysis. However, due to their binary data formats (NetCDF, HDF) with high spatial and temporal dimensionality, the computing frameworks in the Apache Hadoop ecosystem are not originally suited for big climate data. In order to make the computing frameworks in the Hadoop ecosystem directly support big climate data, we propose a columnar storage format with a spatiotemporal index to store climate data, which will support any project in the Apache Hadoop ecosystem (e.g., MapReduce, Spark, Hive, Impala). With this approach, the climate data are converted into the binary Parquet data format, a columnar storage format, and a spatial and temporal index is built and attached to the end of the Parquet files to enable real-time data query. Such climate data in Parquet format then become available to any computing framework in the Hadoop ecosystem. The proposed approach is evaluated using the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. Experimental results show that this approach can efficiently bridge the gap between big climate data and the distributed computing frameworks, and that the spatiotemporal index significantly accelerates data querying and processing.
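
    An illustrative sketch of the NetCDF-to-Parquet conversion described above (file and variable names are hypothetical, and the paper's spatiotemporal index is not shown): a gridded variable is flattened to a table and written as columnar Parquet.

        import xarray as xr

        ds = xr.open_dataset("merra_subset.nc")       # hypothetical MERRA extract
        df = ds["T2M"].to_dataframe().reset_index()   # columns: time, lat, lon, T2M
        df.to_parquet("merra_subset.parquet", engine="pyarrow")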

  7. Development of an Operational TS Dataset Production System for the Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Kim, Sung Dae; Park, Hyuk Min; Kim, Young Ho; Park, Kwang Soon

    2017-04-01

    An operational TS (temperature and salinity) dataset production system was developed to provide near real-time data to the data assimilation system periodically. It collects the latest 15 days of TS data for the northwestern Pacific area (20°N - 55°N, 110°E - 150°E), applies QC tests to the archived data, and supplies the results to numerical prediction models of KIOST (Korea Institute of Ocean Science and Technology). The latest real-time TS data are collected from the Argo GDAC and the GTSPP data server every week. Argo data are downloaded from the /latest_data directory of the Argo GDAC. Because many duplicated data exist when all profile data are extracted from all Argo NetCDF files, a database system is used to avoid duplication: all metadata (float ID, location, observation date and time, etc.) of all Argo floats are stored in the database, and a Matlab program was developed to manipulate the database records, check for duplication, and exclude duplicated data. GTSPP data are downloaded from the /realtime directory of the GTSPP data service, and the latest data except Argo data are extracted from the original data. Another Matlab program was coded to inspect all collected data using 10 QC tests and produce the final dataset used by the assimilation system. Three regional range tests to inspect annual, seasonal, and monthly variations are included in the QC procedures. A C program was developed to provide regional ranges to data managers; it can calculate the upper and lower limits of temperature and salinity at depths from 0 to 1550 m. The final TS dataset contains the latest 15 days of TS data in NetCDF format. It is updated every week and transmitted to numerical modelers at KIOST for operational use.
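
    As a generic, hedged illustration of a range test of the kind described above: the thresholds are invented, and the flag values follow common GTSPP-style QC codes (1 = good, 4 = bad) rather than the system's actual limits.

        import numpy as np

        def range_test(values, lower, upper):
            """Return QC flags: 1 where values fall inside [lower, upper], else 4."""
            return np.where((values >= lower) & (values <= upper), 1, 4)

        temps = np.array([12.3, 29.8, -3.0])   # hypothetical temperatures (deg C)
        print(range_test(temps, lower=-2.0, upper=33.0))   # -> [1 1 4]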

  8. Web-based CERES Clouds QC Property Viewing Tool

    NASA Astrophysics Data System (ADS)

    Smith, R. A.

    2015-12-01

    Churngwei Chu (1), Rita Smith (1), Sunny Sun-Mack (1), Yan Chen (1), Elizabeth Heckert (1), Patrick Minnis (2); (1) Science Systems and Applications, Inc., Hampton, Virginia; (2) NASA Langley Research Center, Hampton, Virginia. This presentation will display the capabilities of a web-based CERES cloud property viewer. Aqua/Terra/NPP data will be chosen for examples. It will demonstrate viewing of cloud properties in gridded global maps, histograms, time series displays, latitudinal zonal images, binned data charts, data frequency graphs, and ISCCP plots. Images can be manipulated by the user to narrow the boundaries of the map as well as color bars and value ranges, compare datasets, view data values, and more. Other atmospheric studies groups will be encouraged to put their data into the underlying NetCDF data format and view their data with the tool.

  9. Atmospheric data access for the geospatial user community

    NASA Astrophysics Data System (ADS)

    van de Vegte, John; Som de Cerff, Wim-Jan; van den Oord, Gijsbertus H. J.; Sluiter, Raymond; van der Neut, Ian A.; Plieger, Maarten; van Hees, Richard M.; de Jeu, Richard A. M.; Schaepman, Michael E.; Hoogerwerf, Marc R.; Groot, Nikée E.; Domenico, Ben; Nativi, Stefano; Wilhelmi, Olga V.

    2007-10-01

    Historically, the atmospheric and meteorological communities have been separate worlds with their own data formats and tools for data handling, making sharing of data difficult and cumbersome. On the other hand, these information sources are becoming increasingly interesting outside these communities because of the continuously improving spatial and temporal resolution of, e.g., model and satellite data, and because of the interest in historical datasets. New user communities that use geographically based datasets in a cross-domain manner are emerging. This development is supported by the progress made in Geographical Information System (GIS) software. Current GIS software is not yet ready for the wealth of atmospheric data, although the faint outlines of a new generation of software are already visible: support for HDF and NetCDF and an increasing understanding of temporal issues are only a few of the hints.

  10. Design of FastQuery: How to Generalize Indexing and Querying System for Scientific Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jerry; Wu, Kesheng

    2011-04-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies such as FastBit are critical for facilitating interactive exploration of large datasets. These technologies rely on adding auxiliary information to existing datasets to accelerate query processing. To use these indices, we need to match the relational data model used by the indexing systems with the array data model used by most scientific data, and to provide an efficient input and output layer for reading and writing the indices. In this work, we present a flexible design that can be easily applied to most scientific data formats. We demonstrate this flexibility by applying it to two of the most commonly used scientific data formats, HDF5 and NetCDF. We present two case studies using simulation data from the particle accelerator and climate simulation communities. To demonstrate the effectiveness of the new design, we also present a detailed performance study using both synthetic and real scientific workloads.

  11. Kelly et al. (2016): Simulating the phase partitioning of NH3, HNO3, and HCl with size-resolved particles over northern Colorado in winter

    EPA Pesticide Factsheets

    In this study, modeled gas- and aerosol-phase ammonia, nitric acid, and hydrogen chloride are compared to measurements taken during a field campaign conducted in northern Colorado in February and March 2011. We compare the modeled and observed gas-particle partitioning and assess potential reasons for discrepancies between the model and the measurements. This data set contains scripts and data used for each figure in the associated manuscript. Figures are generated using the R project statistical programming language. Data files are in either comma-separated value (CSV) format or netCDF, a standard self-describing binary data format commonly used in the earth and atmospheric sciences. This dataset is associated with the following publication: Kelly, J., K. Baker, C. Nolte, S. Napelenok, W.C. Keene, and A.A.P. Pszenny. Simulating the phase partitioning of NH3, HNO3, and HCl with size-resolved particles over northern Colorado in winter. ATMOSPHERIC ENVIRONMENT. Elsevier Science Ltd, New York, NY, USA, 131: 67-77, (2016).

  12. Climate Data Provenance Tracking for Just-In-Time Computation

    NASA Astrophysics Data System (ADS)

    Fries, S.; Nadeau, D.; Doutriaux, C.; Williams, D. N.

    2016-12-01

    The "Climate Data Management System" (CDMS) was created in 1996 as part of the Climate Data Analysis Tools suite of software. It provides a simple interface into a wide variety of climate data formats, and creates NetCDF CF-Compliant files. It leverages the NumPy framework for high performance computation, and is an all-in-one IO and computation package. CDMS has been extended to track manipulations of data, and trace that data all the way to the original raw data. This extension tracks provenance about data, and enables just-in-time (JIT) computation. The provenance for each variable is packaged as part of the variable's metadata, and can be used to validate data processing and computations (by repeating the analysis on the original data). It also allows for an alternate solution for sharing analyzed data; if the bandwidth for a transfer is prohibitively expensive, the provenance serialization can be passed in a much more compact format and the analysis rerun on the input data. Data provenance tracking in CDMS enables far-reaching and impactful functionalities, permitting implementation of many analytical paradigms.

  13. Figure4

    EPA Pesticide Factsheets

    NetCDF files of PBL height (m), shortwave radiation, 10 m wind speed from WRF, and ozone from CMAQ. The data are the standard deviations of these variables for each hour of the 4-day simulation. Figure 4 shows only one of the time periods: June 8, 2100 UTC. The NetCDF files have a time stamp (Times) that can be used to find this time in order to reproduce Figure 4. Also included is a data dictionary that describes the domain and all other attributes of the model simulation. This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES. American Geophysical Union, Washington, DC, USA, 120(23): 12,259-12,280, (2015).

  14. eWaterCycle visualisation. combining the strength of NetCDF and Web Map Service: ncWMS

    NASA Astrophysics Data System (ADS)

    Hut, R.; van Meersbergen, M.; Drost, N.; Van De Giesen, N.

    2016-12-01

    As a result of the eWaterCycle global hydrological forecast, we have created Cesium-ncWMS, a web application based on ncWMS and Cesium. ncWMS is a server-side application capable of reading any NetCDF file written using the Climate and Forecast (CF) conventions and making the data available as a Web Map Service (WMS). ncWMS automatically determines the available variables in a file and creates maps colored according to the map data and a user-selected color scale. Cesium is a JavaScript 3D virtual globe library. It uses WebGL for rendering, which makes it very fast, and it is capable of displaying a wide variety of data types such as vectors, 3D models, and 2D maps. The forecast results are automatically uploaded to our web server running ncWMS. In turn, the web application can be used to change the settings for color maps and displayed data. The server uses the settings provided by the web application, together with the data in NetCDF, to provide WMS image tiles, time series data, and legend graphics to the Cesium-ncWMS web application. The user can simultaneously zoom in to the very high resolution forecast results anywhere in the world and get time series data for any point on the globe. The Cesium-ncWMS visualisation combines a global overview with locally relevant information in any browser. See the visualisation live at forecast.ewatercycle.org
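
    A hedged sketch of the WMS traffic behind such a viewer: a standard WMS 1.3.0 GetMap request against an ncWMS endpoint. The endpoint path, layer name, bounding box, and time value below are hypothetical.

        import requests

        params = {
            "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
            "LAYERS": "discharge/forecast",    # hypothetical ncWMS layer name
            "STYLES": "", "CRS": "CRS:84",
            "BBOX": "-10,45,10,60", "WIDTH": 512, "HEIGHT": 512,
            "FORMAT": "image/png",
            "TIME": "2016-06-01T00:00:00Z",
        }
        # hypothetical endpoint on the server mentioned above
        tile = requests.get("https://forecast.ewatercycle.org/ncWMS/wms", params=params)
        with open("tile.png", "wb") as f:
            f.write(tile.content)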

  15. Interoperability Using Lightweight Metadata Standards: Service & Data Casting, OpenSearch, OPM Provenance, and Shared SciFlo Workflows

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E.

    2011-12-01

    Under several NASA grants, we are generating multi-sensor merged atmospheric datasets to enable the detection of instrument biases and studies of climate trends over decades of data. For example, under a NASA MEaSUREs grant we are producing a water vapor climatology from the A-Train instruments, stratified by the Cloudsat cloud classification for each geophysical scene. The generation and proper use of such multi-sensor climate data records (CDRs) requires a high level of openness, transparency, and traceability. To make the datasets self-documenting and provide access to full metadata and traceability, we have implemented a set of capabilities and services using known, interoperable protocols. These protocols include OpenSearch, OPeNDAP, the Open Provenance Model, service and data casting technologies using Atom feeds, and REST-callable analysis workflows implemented as SciFlo (XML) documents. We advocate that our approach can serve as a blueprint for how to openly "document and serve" complex, multi-sensor CDRs with full traceability. The capabilities and services provided include:
    - Discovery of the collections by keyword search, exposed using the OpenSearch protocol;
    - Space/time query across the CDRs' granules and all of the input datasets via OpenSearch;
    - User-level configuration of the production workflows so that scientists can select additional physical variables from the A-Train to add to the next iteration of the merged datasets;
    - Efficient data merging using on-the-fly OPeNDAP variable slicing and spatial subsetting of data out of input netCDF and HDF files (without moving the entire files);
    - Self-documenting CDRs published in a highly usable netCDF4 format with groups used to organize the variables, CF-style attributes for each variable, numeric array compression, and links to OPM provenance;
    - Recording of processing provenance and data lineage into a queryable provenance trail in Open Provenance Model (OPM) format, auto-captured by the workflow engine;
    - Open publishing of all of the workflows used to generate products as machine-callable REST web services, using the capabilities of the SciFlo workflow engine;
    - Advertising of the metadata (e.g., physical variables provided, space/time bounding box, etc.) for our prepared datasets as "datacasts" using the Atom feed format;
    - Publishing of all datasets via our "DataDrop" service, which exploits the WebDAV protocol to enable scientists to access remote data directories as local files on their laptops;
    - Rich "web browse" of the CDRs with full metadata and the provenance trail one click away;
    - Advertising of all services as Google-discoverable "service casts" using the Atom format.
    The presentation will describe our use of the interoperable protocols and demonstrate the capabilities and service GUIs.

  16. Recommendations resulting from the SPDS Community-Wide Workshop

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Data Systems Panel identified three critical functionalities of a Space Physics Data System (SPDS): the delivery of self-documenting data, the existence of a matrix of translators between various standard formats (IDFS, CDF, netCDF, HDF, TENNIS, UCLA flat file, and FITS), and a network-based capability for browsing and examining inventory records for the system's data holdings. The recommendations resulting from the workshop include the philosophy, funding, and objectives of an SPDS. Access to quality data is seen as the most important objective by the Policy Panel, with curation and information about the data being integral parts of any accessible data set. The Data Issues Panel concluded that the SPDS can supply encouragement, guidelines, and ultimately provide a mechanism for financial support for data archiving, restoration, and curation. The Software Panel of the SPDS focused on defining the requirements and priorities for SPDS to support common data analysis and data visualization tools and packages.

  17. HYDRA Hyperspectral Data Research Application

    NASA Astrophysics Data System (ADS)

    Rink, T.; Whittaker, T.

    2005-12-01

    HYDRA is a freely available, easy-to-install tool for visualization and analysis of large local or remote hyper/multi-spectral datasets. HYDRA is implemented on top of the open source VisAD Java library via Jython, the Java implementation of the user-friendly Python programming language. VisAD provides data integration through its generalized data model, user-display interaction, and display rendering. Jython has an easy-to-read, concise, scripting-like syntax which eases software development. HYDRA allows data sharing of large datasets through its support of the OpenDAP and OpenADDE server-client protocols. Users can explore and interrogate data, and subset in physical and/or spectral space to isolate key areas of interest for further analysis without having to download an entire dataset. It also has an extensible data input architecture to recognize new instruments and understand different local file formats; currently NetCDF and HDF4 are supported.

  18. Woods Hole Image Processing System Software implementation; using NetCDF as a software interface for image processing

    USGS Publications Warehouse

    Paskevich, Valerie F.

    1992-01-01

    The Branch of Atlantic Marine Geology has been involved in the collection, processing, and digital mosaicking of high-, medium-, and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. With the need to process side-scan data in the field, and with the increased power and reduced cost of workstations, a need was identified for an image-processing package on a UNIX-based computer system that could be utilized in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.

  19. Scientific Data Storage for Cloud Computing

    NASA Astrophysics Data System (ADS)

    Readey, J.

    2014-12-01

    Traditionally, data storage for geophysical software systems has centered on file-based formats and libraries such as NetCDF and HDF5. In contrast, cloud-based infrastructure providers such as Amazon AWS, Microsoft Azure, and the Google Cloud Platform generally provide storage technologies based on an object-based storage service (for large binary objects) complemented by a database service (for small objects that can be represented as key-value pairs). These systems have been shown to be highly scalable, reliable, and cost effective. We will discuss a proposed system that leverages these cloud-based storage technologies to provide an API-compatible library for traditional NetCDF and HDF5 applications. This system will enable cloud storage suitable for geophysical applications that can scale up to petabytes of data and thousands of users. We'll also cover other advantages of this system such as enhanced metadata search.
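
    An illustrative sketch of the storage mapping discussed above (the bucket name and key scheme are invented, and this is not the proposed library's actual layout): array chunks become individual objects, and dataset metadata becomes a small key-value entry.

        import json
        import boto3
        import numpy as np

        s3 = boto3.client("s3")
        chunk = np.random.rand(100, 100).astype("f4")

        # one object per array chunk, keyed by variable name and chunk index
        s3.put_object(Bucket="example-science-data",
                      Key="tas/chunk_0004_0007",
                      Body=chunk.tobytes())

        # dataset-level metadata stored as a small JSON object
        meta = {"name": "tas", "dtype": "float32", "shape": [7200, 3600],
                "chunks": [100, 100], "units": "K"}
        s3.put_object(Bucket="example-science-data",
                      Key="tas/.meta",
                      Body=json.dumps(meta).encode())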

  20. EverVIEW: a visualization platform for hydrologic and Earth science gridded data

    USGS Publications Warehouse

    Romañach, Stephanie S.; McKelvy, James M.; Suir, Kevin J.; Conzelmann, Craig

    2015-01-01

    The EverVIEW Data Viewer is a cross-platform desktop application that combines and builds upon multiple open source libraries to help users explore spatially explicit gridded data stored in Network Common Data Form (NetCDF). Datasets are displayed across multiple side-by-side geographic or tabular displays, showing colorized overlays on an Earth globe or grid-cell values, respectively. Time-series datasets can be animated to see how water surface elevation changes through time or how habitat suitability for a particular species might change over time under a given scenario. Initially targeted toward Florida's Everglades restoration planning, EverVIEW has been flexible enough to address the varied needs of large-scale planning beyond Florida, and is currently being used in biological planning efforts nationally and internationally.

  1. 'tomo_display' and 'vol_tools': IDL VM Packages for Tomography Data Reconstruction, Processing, and Visualization

    NASA Astrophysics Data System (ADS)

    Rivers, M. L.; Gualda, G. A.

    2009-05-01

    One of the challenges in tomography is the availability of suitable software for image processing and analysis in 3D. We present here 'tomo_display' and 'vol_tools', two packages created in IDL that enable reconstruction, processing, and visualization of tomographic data. They complement in many ways the capabilities offered by Blob3D (Ketcham 2005 - Geosphere, 1: 32-41, DOI: 10.1130/GES00001.1) and, in combination, allow users without programming knowledge to perform all steps necessary to obtain qualitative and quantitative information from tomographic data. The package 'tomo_display' was created and is maintained by Mark Rivers. It allows the user to: (1) preprocess and reconstruct parallel-beam tomographic data, including removal of anomalous pixels, ring artifact reduction, and automated determination of the rotation center, and (2) visualize both raw and reconstructed data, either as individual frames or as a series of sequential frames. The package 'vol_tools' consists of a series of small programs created and maintained by Guilherme Gualda to perform specific tasks not included in other packages. Existing modules include simple tools for cropping volumes, generating histograms of intensity, sample volume measurement (useful for porous samples like pumice), and computation of volume differences (for differential absorption tomography). The module 'vol_animate' can be used to generate 3D animations using rendered isosurfaces around objects. Both packages use the same NetCDF-format '.volume' files created using code written by Mark Rivers. Currently, only 16-bit integer volumes are created and read by the packages, but floating-point and 8-bit data can easily be stored in the NetCDF format as well. A simple GUI to convert sequences of TIFFs into '.volume' files is available within 'vol_tools'. Both 'tomo_display' and 'vol_tools' include options to (1) generate onscreen output that allows for dynamic visualization in 3D, (2) save sequences of TIFFs to disk, and (3) generate MPEG movies for inclusion in presentations, publications, websites, etc. Both are freely available as run-time ('.sav') versions that can be run using the free IDL Virtual Machine, available from ITT Visual Information Solutions: http://www.ittvis.com/ProductServices/IDL/VirtualMachine.aspx The run-time versions of 'tomo_display' and 'vol_tools' can be downloaded from: http://cars.uchicago.edu/software/idl/tomography.html http://sites.google.com/site/voltools/

  2. Interactive Visualization and Analysis of Geospatial Data Sets - TrikeND-iGlobe

    NASA Astrophysics Data System (ADS)

    Rosebrock, Uwe; Hogan, Patrick; Chandola, Varun

    2013-04-01

    The visualization of scientific datasets is becoming an ever-increasing challenge as advances in computing technologies have enabled scientists to build high-resolution climate models that have produced petabytes of climate data. To interrogate and analyze these large datasets in real time is a task that pushes the boundaries of computing hardware and software. But integration of climate datasets with geospatial data requires a considerable amount of effort and close familiarity with various data formats and projection systems, which has prevented widespread utilization outside of the climate community. TrikeND-iGlobe is a sophisticated software tool that bridges this gap, allows easy integration of climate datasets with geospatial datasets, and provides sophisticated visualization and analysis capabilities. The objective of TrikeND-iGlobe is the continued building of an open source 4D virtual globe application using NASA World Wind technology that integrates analysis of climate model outputs with remote sensing observations as well as demographic and environmental data sets. This will facilitate a better understanding of global and regional phenomena, and the impact analysis of climate extreme events. The critical aim is real-time interactive interrogation. At the data-centric level the primary aim is to enable the user to interact with the data in real time for the purpose of analysis, locally or remotely. TrikeND-iGlobe provides the basis for the incorporation of modular tools that provide extended interactions with the data, including sub-setting, aggregation, re-shaping, time series analysis methods, and animation to produce publication-quality imagery. TrikeND-iGlobe may be run locally or can be accessed via a web interface supported by high-performance visualization compute nodes placed close to the data. It supports visualizing heterogeneous data formats: traditional geospatial datasets along with scientific data sets with geographic coordinates (NetCDF, HDF, etc.). It also supports multiple data access mechanisms, including HTTP, FTP, WMS, WCS, and THREDDS Data Server (for NetCDF data). For scientific data, TrikeND-iGlobe supports various visualization capabilities, including animations, vector field visualization, etc. TrikeND-iGlobe is a collaborative open-source project; contributors include NASA (ARC-PX), ORNL (Oak Ridge National Laboratory), Unidata, Kansas University, CSIRO CMAR Australia, and Geoscience Australia.

  3. Global Land Data Assimilation System (GLDAS) Products from NASA Hydrology Data and Information Services Center (HDISC)

    NASA Technical Reports Server (NTRS)

    Fang, Hongliang; Hrubiak, Patricia; Kato, Hiroko; Rodell, Matthew; Teng, William L.; Vollmer, Bruce E.

    2008-01-01

    The Global Land Data Assimilation System (GLDAS) is generating a series of land surface state (e.g., soil moisture and surface temperature) and flux (e.g., evaporation and sensible heat flux) products simulated by four land surface models (CLM, Mosaic, Noah, and VIC). These products are now accessible at the Hydrology Data and Information Services Center (HDISC), a component of the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). Current data holdings include a set of 1.0-degree resolution data products from the four models, covering 1979 to the present, and a 0.25-degree data product from the Noah model, covering 2000 to the present. The products are in Gridded Binary (GRIB) format and can be accessed through a number of interfaces. New data formats (e.g., netCDF), temporal averaging, and spatial subsetting will be available in the future. The HDISC has the capability to support more hydrology data products and more advanced analysis tools. The goal is to develop HDISC as a data and services portal that supports weather and climate forecasting and water and energy cycle research.

  4. SciSpark's SRDD : A Scientific Resilient Distributed Dataset for Multidimensional Data

    NASA Astrophysics Data System (ADS)

    Palamuttam, R. S.; Wilson, B. D.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; McGibbney, L. J.; Ramirez, P.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive sizes locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We have developed SciSpark, a robust Big Data framework that extends Apache Spark for scaling scientific computations. Apache Spark improves the map-reduce implementation in Apache Hadoop for parallel computing on a cluster by emphasizing in-memory computation, "spilling" to disk only as needed, and relying on lazy evaluation. Central to Spark is the Resilient Distributed Dataset (RDD), an in-memory distributed data structure that extends the functional paradigm provided by the Scala programming language. However, RDDs are ideal for tabular or unstructured data, not for highly dimensional data. The SciSpark project introduces the Scientific Resilient Distributed Dataset (sRDD), a distributed-computing array structure which supports iterative scientific algorithms for multidimensional data. SciSpark processes data stored in NetCDF and HDF files by partitioning them across time or space and distributing the partitions among a cluster of compute nodes. We show the usability and extensibility of SciSpark by implementing distributed algorithms for geospatial operations on large collections of multi-dimensional grids. In particular we address the problem of scaling an automated method for finding Mesoscale Convective Complexes. SciSpark provides a tensor interface to support the pluggability of different matrix libraries. We evaluate the performance of the various matrix libraries in distributed pipelines, such as Nd4j and Breeze. We detail the architecture and design of SciSpark, our efforts to integrate climate science algorithms, and parallel ingest and partitioning (sharding) of A-Train satellite observations from model grids. These solutions are encompassed in SciSpark, an open-source software framework for distributed computing on scientific data.

  5. netCDF Operators for Rapid Analysis of Measured and Modeled Swath-like Data

    NASA Astrophysics Data System (ADS)

    Zender, C. S.

    2015-12-01

    Swath-like data (hereafter SLD) are defined by non-rectangular and/or time-varying spatial grids in which one or more coordinates are multi-dimensional. It is often challenging and time-consuming to work with SLD, including all Level 2 satellite-retrieved data, non-rectangular subsets of Level 3 data, and model data on curvilinear grids. Researchers and data centers want user-friendly, fast, and powerful methods to specify, extract, serve, manipulate, and thus analyze, SLD. To meet these needs, large research-oriented agencies and modeling centers such as NASA, DOE, and NOAA increasingly employ the netCDF Operators (NCO), an open-source scientific data analysis software package applicable to netCDF and HDF data. NCO includes extensive, fast, parallelized regridding features to facilitate analysis and intercomparison of SLD and model data. Remote sensing, weather and climate modeling, and analysis communities face similar problems in handling SLD, including how to easily:
    1. Specify and mask irregular regions such as ocean basins and political boundaries in SLD (and rectangular) grids.
    2. Bin, interpolate, average, or re-map SLD to regular grids.
    3. Derive secondary data from given quality levels of SLD.
    These common tasks require a data extraction and analysis toolkit that is SLD-friendly and, like NCO, familiar in all these communities. With NCO users can:
    1. Quickly project SLD onto the most useful regular grids for intercomparison.
    2. Access sophisticated statistical and regridding functions that are robust to missing data and allow easy specification of quality control metrics.
    These capabilities improve interoperability and software reuse and, because they apply to SLD, minimize transmission, storage, and handling of unwanted data. While SLD analysis still poses many challenges compared to regularly gridded, rectangular data, the custom analysis scripts SLD once required are now shorter, more powerful, and user-friendly. A brief illustration of this style of workflow follows below.
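
    As a hedged illustration of this kind of NCO workflow, driven here from Python for consistency with the other sketches in this document: ncks and ncremap are real NCO tools, but the file names, the hyperslab bounds, and the precomputed map file are hypothetical.

        import subprocess

        # extract a lat/lon hyperslab from a swath-like file with ncks
        subprocess.run(["ncks", "-d", "lat,30.,40.", "-d", "lon,-120.,-110.",
                        "swath_in.nc", "subset.nc"], check=True)

        # regrid the subset to a regular grid using a precomputed weight map
        subprocess.run(["ncremap", "-m", "map_swath_to_1x1.nc",
                        "subset.nc", "regridded.nc"], check=True)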

  6. OceanSITES format and Ocean Observatory Output harmonisation: past, present and future

    NASA Astrophysics Data System (ADS)

    Pagnani, Maureen; Galbraith, Nan; Diggs, Stephen; Lankhorst, Matthias; Hidas, Marton; Lampitt, Richard

    2015-04-01

    The Global Ocean Observing System (GOOS) initiative was launched in 1991 and was the first step in creating a global view of ocean observations. In 1999, oceanographers at the OceanObs conference envisioned a 'global system of eulerian observatories', which evolved into the OceanSITES project. OceanSITES has been generously supported by individual oceanographic institutes and agencies across the globe, as well as by the WMO-IOC Joint Technical Commission for Oceanography and Marine Meteorology (under JCOMMOPS). The project is directed by the needs of research scientists, but has a strong data management component, with an international team developing content standards, metadata specifications, and NetCDF templates for many types of in situ oceanographic data. The OceanSITES NetCDF format specification is intended as a robust data exchange and archive format specifically for time-series observatory data from the deep ocean. First released in February 2006, it has evolved to build on and extend internationally recognised standards such as the Climate and Forecast (CF) standard, BODC vocabularies, ISO formats and vocabularies, and, in version 1.3, released in 2014, ACDD (Attribute Convention for Dataset Discovery). The success of the OceanSITES format has inspired other observational groups, such as autonomous vehicles and ships of opportunity, to also use the format, and today it is fulfilling the original concept of providing a coherent set of data from eulerian observatories. Data in the OceanSITES format are served by two Global Data Assembly Centres (GDACs): one at Coriolis, in France, at ftp://ftp.ifremer.fr/ifremer/oceansites/ and one at the US NDBC, at ftp://data.ndbc.noaa.gov/data/oceansites/. These two centres serve over 26,800 OceanSITES format data files from 93 moorings. The use of standardised and controlled features enables the files held at the OceanSITES GDACs to be electronically discoverable and ensures the widest access to the data. The OceanSITES initiative has always been truly international, and in Europe the first project to include OceanSITES as part of its outputs was ANIMATE (2002-2005), in which 3 moorings and 5 partners shared equipment, methods, and analysis effort and produced their final outputs in OceanSITES format. Subsequent European projects, MERSEA (2004-2008) and EuroSITES (2008-2011), built on that early success, and the current European project FixO3 encompasses 23 moorings and 29 partners, all of whom are committed to producing data in OceanSITES format. The global OceanSITES partnership continues to grow: in 2014 the Australian Integrated Marine Observing System (IMOS) started delivering data files to the OceanSITES FTP, and India, South Korea, and Japan are also active members of the OceanSITES community. As illustrated in figure 1, the OceanSITES sites cover the entire globe, and the format has now matured enough to be taken up by other user groups. GO-SHIP, a global, ship-based hydrographic program, shares technical management with OceanSITES through JCOMMOPS and has its roots in WOCE Hydrography. This program complements OceanSITES and directly contributes to the mooring data holdings by providing repeated CTD and bottle profiles at specific locations. GO-SHIP hydrographic data add a source of time-series profiles and are provided in the OceanSITES file structure to facilitate full data interoperability.
    GO-SHIP has worked closely with the OceanSITES program, and this interaction has produced an unexpected side benefit: all data in the GO-SHIP database will be offered in the robust and CF-compliant OceanSITES format beginning in 2015. The MyOcean European ocean monitoring and forecasting project has been in existence since 2009 and has successfully used the OceanSITES format as a unifying paradigm. MyOcean daily receives hundreds of data files from across Europe and distributes the data from drifter buoys, moorings, and tide gauges in OceanSITES format. These in-situ data are essential both as model verification points and for assimilation into the models. The use of the OceanSITES format now exceeds the hopes and expectations of the original OceanObs vision in 1999, and the stewardship of the format's development, extension, and documentation is in the expert care of the international OceanSITES Data Management Team.

  7. wsacrvpthrc.a1

    DOE Data Explorer

    Gaustad, Krista; Hardin, Joseph

    2015-12-14

    The wsacr PCM process executed by the sacr3 binary reads in wsacr.00 data and produces CF/Radial compliant NetCDF files for each of the radar operational scanning modes. This incorporates raw data from the radar, as well as scientifically important base derived parameters that affect interpretation of the data.

  8. wsacrppivh.a1

    DOE Data Explorer

    Gaustad, Krista; Hardin, Joseph

    2015-07-22

    The wsacr PCM process executed by the sacr3 binary reads in wsacr.00 data and produces CF/Radial compliant NetCDF files for each of the radar operational scanning modes. This incorporates raw data from the radar, as well as scientifically important base derived parameters that affect interpretation of the data.

  9. wsacrzrhiv.a1

    DOE Data Explorer

    Gaustad, Krista; Hardin, Joseph

    2015-07-22

    The wsacr PCM process executed by the sacr3 binary reads in wsacr.00 data and produces CF/Radial compliant NetCDF files for each of the radar operational scanning modes. This incorporates raw data from the radar, as well as scientifically important base derived parameters that affect interpretation of the data.

  10. kasacrvpthrc.a1

    DOE Data Explorer

    Gaustad, Krista; Hardin, Joseph

    2015-07-22

    The kasacr PCM process executed by the sacr3 binary reads in kasacr.00 data and produces CF/Radial-compliant NetCDF files for each of the radar's operational scanning modes. These files incorporate the raw data from the radar, as well as the scientifically important derived base parameters that affect interpretation of the data.

  11. Can ASCII data files be standardized for Earth Science?

    NASA Astrophysics Data System (ADS)

    Evans, K. D.; Chen, G.; Wilson, A.; Law, E.; Olding, S. W.; Krotkov, N. A.; Conover, H.

    2015-12-01

    NASA's Earth Science Data Systems Working Groups (ESDSWG) were created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems based on user experiences. Each group works independently, focusing on a unique topic. Participants in ESDSWG groups come from a variety of NASA-funded science and technology projects such as MEaSUREs, from NASA information technology experts, affiliated contractors and staff, and from other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long-term data products. Each year, the ESDSWG has a face-to-face meeting to discuss recommendations and future efforts. In 2014, the ASCII for Science Data Working Group (ASCII WG) completed its goals and made recommendations on the minimum set of information needed to make ASCII files at least human-readable and usable for the foreseeable future. The 2014 ASCII WG created a table of ASCII files and their components as a means of understanding what kinds of ASCII formats exist and what components they have in common. Using this table, and adding information from other ASCII file formats, we will discuss the advantages and disadvantages of a standardized format. For instance, space geodesy scientists have been using the same RINEX/SINEX ASCII formats for decades, and astronomers mostly archive their data in the FITS format, yet Earth scientists seem to have a slew of ASCII formats, such as ICARTT, netCDF (as an ASCII dump) and the IceBridge ASCII format. The 2015 Working Group is focusing on promoting the extendibility and machine readability of ASCII data. Questions that have been posed include: Can we have a standardized ASCII file format? Can it be machine-readable and simultaneously human-readable? We will present a summary of currently used ASCII formats in terms of their advantages and shortcomings, as well as potential improvements.
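    To make the working group's concern concrete, the sketch below writes a small column-oriented ASCII file whose header carries the kind of minimal self-describing metadata the 2014 group argued for. The header fields shown are illustrative assumptions, not the ESDSWG recommendation itself.

    ```python
    # Sketch: a minimal self-describing ASCII data file. The header fields
    # are illustrative assumptions, not the ESDSWG-recommended set.
    rows = [("2015-12-01T00:00:00Z", 288.15), ("2015-12-01T01:00:00Z", 287.90)]

    with open("example_t2m.csv", "w") as f:
        # Human- and machine-readable metadata precede the data block.
        f.write("# title: Example 2-m air temperature series\n")
        f.write("# variables: time (ISO 8601 UTC), t2m (kelvin)\n")
        f.write("# missing_value: -9999\n")
        f.write("# source: illustrative example only\n")
        f.write("time,t2m\n")
        for t, v in rows:
            f.write(f"{t},{v}\n")
    ```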

  12. metAlignID: a high-throughput software tool set for automated detection of trace level contaminants in comprehensive LECO two-dimensional gas chromatography time-of-flight mass spectrometry data.

    PubMed

    Lommen, Arjen; van der Kamp, Henk J; Kools, Harrie J; van der Lee, Martijn K; van der Weg, Guido; Mol, Hans G J

    2012-11-09

    A new alternative data processing tool set, metAlignID, has been developed for automated pre-processing and library-based identification and concentration estimation of target compounds after analysis by comprehensive two-dimensional gas chromatography with mass spectrometric detection. The tool set has been developed for and tested on LECO data. The software runs multi-threaded (one thread per processor core) on a standard PC under different operating systems, and as such is capable of processing multiple data sets simultaneously. Raw data files are converted into netCDF (network Common Data Form) format using a fast conversion tool. They are then preprocessed using previously developed algorithms originating from the metAlign software. Next, the resulting reduced data files are searched against a user-composed library (derived from user or commercial NIST-compatible libraries, where NIST is the National Institute of Standards and Technology), and the identified compounds, including an indicative concentration, are reported in Excel format. Data can be processed batch-wise. The overall time needed to convert, process and search 30 raw data sets for 560 compounds is routinely within an hour. The screening performance is evaluated for the detection of pesticides and contaminants in raw data obtained from the analysis of soil and plant samples. Results are compared to the existing data-handling routine based on proprietary software (LECO, ChromaTOF). The developed software tool set, which is freely downloadable at www.metalign.nl, greatly accelerates data analysis and offers more options for fine-tuning automated identification toward specific application needs. The quality of the results obtained is slightly better than with the standard processing, and a quantitative estimate is added. The software tool set, in combination with two-dimensional gas chromatography coupled to time-of-flight mass spectrometry, shows great potential as a highly automated and fast multi-residue instrumental screening method.

  13. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1994-01-01

    Envision is an interactive environment that provides researchers in the Earth sciences with convenient ways to manage, browse, and visualize large observed or model data sets. Its main features are support for the netCDF and HDF file formats, an easy-to-use X/Motif user interface, a client-server configuration, and portability to many UNIX workstations. The Envision package also provides new ways to view and change metadata in a set of data files. It permits a scientist to conveniently and efficiently manage large data sets consisting of many data files, and it provides links to popular visualization tools so that data can be quickly browsed. Envision is a public-domain package, freely available to the scientific community. Envision software (binaries and source code) and documentation can be obtained from either of these servers: ftp://vista.atmos.uiuc.edu/pub/envision/ and ftp://csrp.tamu.edu/pub/envision/. Detailed descriptions of Envision capabilities and operations can be found in the User's Guide and Reference Manuals distributed with the Envision software.

  14. A comparison of data interoperability approaches of fusion codes with application to synthetic diagnostics

    NASA Astrophysics Data System (ADS)

    Kruger, Scott; Shasharina, S.; Vadlamani, S.; McCune, D.; Holland, C.; Jenkins, T. G.; Candy, J.; Cary, J. R.; Hakim, A.; Miah, M.; Pletzer, A.

    2010-11-01

    As various efforts to integrate fusion codes proceed worldwide, standards for sharing data have emerged. In the U.S., the SWIM project has pioneered the development of the Plasma State, which has a flat hierarchy and is dominated by its use within 1.5D transport codes. The European Integrated Tokamak Modeling effort has developed a more ambitious data interoperability effort organized around the concept of Consistent Physical Objects (CPOs). CPOs have deep hierarchies, as needed by an effort that seeks to encompass all of fusion computing. Here, we discuss ideas for implementing data interoperability in a way that is complementary to both the Plasma State and CPOs. By making use of attributes within the netCDF and HDF5 binary file formats, the goals of data interoperability can be achieved with a more informal approach. In addition, a file can be simultaneously interoperable with several standards at once, as sketched below. As an illustration of this approach, we discuss its application to the development of synthetic diagnostics that can be used for multiple codes.
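    A minimal sketch of the attribute-based approach, assuming netCDF via the Python netCDF4 library: a single file declares conformance to CF alongside a second, hypothetical convention name, so the same data can be read by tools expecting either standard.

    ```python
    # Sketch: using netCDF attributes to mark one file as interoperable with
    # several standards at once. The non-CF convention name is hypothetical.
    from netCDF4 import Dataset

    ds = Dataset("synthetic_diagnostic.nc", "w")
    ds.createDimension("rho", 64)
    ne = ds.createVariable("electron_density", "f8", ("rho",))

    # A comma-separated Conventions attribute is the usual informal mechanism.
    ds.Conventions = "CF-1.6, PlasmaState-1.0"     # second entry is hypothetical
    ne.units = "m-3"                                # CF-style units
    ne.setncattr("plasma_state_name", "ne")         # hypothetical mapping attribute
    ds.close()
    ```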

  15. National Centers for Environmental Prediction

    Science.gov Websites


  16. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    NASA Astrophysics Data System (ADS)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media, based on the finite difference method at local to regional scales. The code is equipped with a frequency-independent attenuation model based on the generalized Zener body and with an efficient perfectly matched layer absorbing boundary condition. A hybrid-style parallelization using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, delivering excellent performance on systems ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and for different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC) format, are adopted for the input of the heterogeneous structure model and for the output of the simulation results, so users can easily handle the input/output datasets; a reading sketch follows below. All codes are written in Fortran 2003 and are available with detailed documentation in a public repository.
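    As a rough illustration of handling the NetCDF outputs, the sketch below inspects a snapshot file with the Python netCDF4 library. The file and variable names are assumptions; the actual names are defined by the OpenSWPC documentation and the user's parameter file.

    ```python
    # Sketch: inspecting an OpenSWPC NetCDF snapshot with the netCDF4 library.
    # File and variable names below are assumptions; check the actual output
    # of your simulation (e.g., with ncdump) first.
    from netCDF4 import Dataset

    with Dataset("swpc_snapshot.nc") as ds:
        print(ds.variables.keys())           # discover what the file contains
        # Hypothetical: a velocity snapshot dimensioned (time, y, x)
        if "Vx" in ds.variables:
            vx = ds.variables["Vx"][0, :, :]
            print(vx.shape, vx.min(), vx.max())
    ```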

  17. Acoustic Doppler Current Profiler (ADCP) Data Processing System manual

    USGS Publications Warehouse

    Cote, Jessica M.; Hotchkiss, Frances S.; Martini, Marinna A.; Denham, Charles R.; revisions by Ramsey, Andree L.; Ruane, Stephen

    2000-01-01

    This open-file report describes the data processing software currently in use by the U.S. Geological Survey (USGS), Woods Hole Coastal and Marine Science Center (WHCMSC), to process time series of acoustic Doppler current data obtained by Teledyne RD Instruments Workhorse-model ADCPs. The Sediment Transport Instrumentation Group (STG) at the WHCMSC has a long-standing commitment to providing scientists with high-quality oceanographic data published in a timely manner. To meet this commitment, STG has created this software to aid personnel in processing and reviewing data as well as evaluating hardware for signs of instrument malfunction. The output format for the data is network Common Data Form (netCDF), which meets USGS publication standards. Typically, ADCP data are recorded in beam coordinates. This conforms to the USGS philosophy of post-processing rather than internally processing data. By preserving the original data quality indicators as well as the initial data set, data can be evaluated and reprocessed for different types of analyses; beam-coordinate data are desirable for internal and surface wave experiments, for example. All the code in this software package is intended to run using the MATLAB program available from The MathWorks, Inc. As such, it is platform independent and can be adapted by the USGS and others for specialized experiments with non-standard requirements. The software is continuously being updated and revised as improvements are required. The most recent revision may be downloaded from: http://woodshole.er.usgs.gov/operations/stg/Pubs/ADCPtools/adcp_index.htm The USGS makes this software available at the user's discretion and responsibility.

  18. Climate Data Guide - Modern Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2)

    NASA Technical Reports Server (NTRS)

    Cullather, Richard; Bosilovich, Michael

    2017-01-01

    The Modern-Era Retrospective analysis for Research and Applications, version 2 (MERRA-2) is a global atmospheric reanalysis produced by the NASA Global Modeling and Assimilation Office (GMAO). It spans the satellite observing era from 1980 to the present. The goals of MERRA-2 are to provide a regularly gridded, homogeneous record of the global atmosphere, and to incorporate additional aspects of the climate system, including trace gas constituents (stratospheric ozone), improved land surface representation, and cryospheric processes. MERRA-2 is also the first satellite-era global reanalysis to assimilate space-based observations of aerosols and represent their interactions with other physical processes in the climate system. The inclusion of these additional components is consistent with the overall objectives of an Integrated Earth System Analysis (IESA). MERRA-2 is intended to replace the original MERRA product, and reflects recent advances in atmospheric modeling and data assimilation. Modern hyperspectral radiance and microwave observations, along with GPS radio occultation and NASA ozone datasets, are now assimilated in MERRA-2. Much of the structure of the data files remains the same in MERRA-2. While the original MERRA data format was HDF-EOS, MERRA-2 data are now supplied in NetCDF-4 format (with lossy compression to save space).
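    For readers who want to check the format change, a minimal sketch of opening a MERRA-2 granule with the Python xarray package follows. The file name follows the GMAO naming pattern but is illustrative; T2M is one of the hourly single-level diagnostics.

    ```python
    # Sketch: opening a MERRA-2 NetCDF-4 granule with xarray.
    # The file name is illustrative; MERRA-2 granules follow a dated
    # naming pattern set by GMAO.
    import xarray as xr

    ds = xr.open_dataset("MERRA2_400.tavg1_2d_slv_Nx.20170101.nc4")
    t2m = ds["T2M"]                      # 2-m air temperature, hourly means
    print(t2m.attrs.get("units"))        # units recorded in the file metadata
    print(t2m.isel(time=0).mean().values)
    ```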

  19. ReOBS: a new approach to synthesize long-term multi-variable dataset and application to the SIRTA supersite

    NASA Astrophysics Data System (ADS)

    Chiriaco, Marjolaine; Dupont, Jean-Charles; Bastin, Sophie; Badosa, Jordi; Lopez, Julio; Haeffelin, Martial; Chepfer, Helene; Guzman, Rodrigo

    2018-05-01

    A scientific approach is presented to aggregate and harmonize a set of 60 geophysical variables at an hourly timescale over a decade, allowing multiannual and multi-variable studies that combine atmospheric dynamics and thermodynamics, radiation, clouds and aerosols from ground-based observations. Many datasets from ground-based observations are currently in use worldwide. They are very valuable because they contain complete and precise information due to their spatio-temporal co-localization over more than a decade. These datasets, and in particular the synergy between different types of observations, are under-used because of their complexity and diversity in calibration, quality control, treatment, format, temporal averaging, metadata, etc. Two main results are presented in this article: (1) a set of methods, available to the community, for robustly and reliably processing ground-based data at an hourly timescale over a decade is described, and (2) a single netCDF file is provided based on the SIRTA supersite observations. This file contains approximately 60 geophysical variables (atmospheric and in-ground) hourly averaged over a decade (for the longest-running variables). The netCDF file is available and easy to use for the community. In this article, observations are re-analyzed. The prefix "re" refers to six main steps: calibration, quality control, treatment, hourly averaging, homogenization of the formats and associated metadata, as well as expertise on more than a decade of observations. In contrast, previous studies (i) took only some of these six steps into account for each variable, (ii) did not aggregate all variables together in a single file and (iii) did not offer an hourly resolution for about 60 variables over a decade. The approach described in this article can be applied to different supersites and to additional variables. The main implication of this work is that complex atmospheric observations are made readily available for scientists who are not experts in measurements. The dataset from SIRTA observations can be downloaded at http://sirta.ipsl.fr/reobs.html (last access: April 2017) (Downloads tab, no password required) under https://doi.org/10.14768/4F63BAD4-E6AF-4101-AD5A-61D4A34620DE.

  20. FLASH_SSF_Aqua-FM3-MODIS_Version3C

    Atmospheric Science Data Center

    2018-04-04

    Order data via the CERES Order Tool (netCDF) or Earthdata Search; subset data via the CERES Search and Subset Tool (HDF4 & netCDF). Parameters include cloud layer area, cloud infrared emissivity, cloud base pressure, surface (radiative) flux, TOA flux, surface types, TOT and SW filtered radiance, and LW flux. Guide documents are available.

  1. FLASH_SSF_Terra-FM1-MODIS_Version3C

    Atmospheric Science Data Center

    2018-04-04

    Order data via the CERES Order Tool (netCDF) or Earthdata Search; subset data via the CERES Search and Subset Tool (HDF4 & netCDF). Parameters include cloud layer area, cloud infrared emissivity, cloud base pressure, surface (radiative) flux, TOA flux, surface types, TOT and SW filtered radiance, and LW flux. Guide documents are available.

  2. Trade Study: Storing NASA HDF5/netCDF-4 Data in the Amazon Cloud and Retrieving Data Via the Hyrax Data Server

    NASA Technical Reports Server (NTRS)

    Habermann, Ted; Gallagher, James; Jelenak, Aleksandar; Potter, Nathan; Lee, Joe; Yang, Kent

    2017-01-01

    This study explored three candidate architectures, with different types of objects and access paths, for serving NASA Earth Science HDF5 data via Hyrax running on Amazon Web Services (AWS). We studied the cost and performance of each architecture using several representative use cases. The objectives of the study were to: (1) conduct a trade study to identify one or more high-performance integrated solutions for storing and retrieving NASA HDF5 and netCDF-4 data in a cloud (web object store) environment, the target environment being the AWS Simple Storage Service (S3); (2) conduct the level of software development needed to properly evaluate solutions in the trade study and to obtain the benchmarking metrics required as input to a government decision on potential follow-on prototyping; and (3) develop a cloud cost model for the preferred data storage solution (or solutions) that accounts for different granulation and aggregation schemes as well as cost and performance trades. We describe the three architectures and the use cases, along with performance results and recommendations for further work.
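    As a rough sketch of the access path under study, the snippet below reads a remote granule through an OPeNDAP endpoint with the Python netCDF4 library (assuming the library was built with DAP support). The URL is a placeholder, not the study's actual deployment.

    ```python
    # Sketch: reading a remote granule through a Hyrax OPeNDAP endpoint with
    # the netCDF4 library, which lets the server do the subsetting. The URL
    # is a placeholder; the snippet assumes a 1-D or larger first variable.
    from netCDF4 import Dataset

    url = "https://hyrax.example.org/opendap/granule.h5"   # placeholder endpoint
    with Dataset(url) as ds:
        var = ds.variables[list(ds.variables)[0]]
        print(var.shape)
        # Only the requested slice crosses the network, not the whole file.
        print(var[0:10])
    ```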

  3. MyOcean Internal Information System (Dial-P)

    NASA Astrophysics Data System (ADS)

    Blanc, Frederique; Jolibois, Tony; Loubrieu, Thomas; Manzella, Giuseppe; Mazzetti, Paolo; Nativi, Stefano

    2010-05-01

    MyOcean is a three-year project (2008-2011) whose goal is the development and pre-operational validation of the GMES Marine Core Service for ocean monitoring and forecasting. It is a transition project that will lead the European "operational oceanography" community towards the operational phase of a GMES European service, which demands more European integration, more operationality, and more service. Observations, model-based data, and added-value products will be generated - and enhanced thanks to dedicated expertise - by the following production units: • Five Thematic Assembly Centers, each of them dealing with a specific set of observation data: Sea Level, Ocean Colour, Sea Surface Temperature, Sea Ice & Wind, and In Situ data; • Seven Monitoring and Forecasting Centers to serve the Global Ocean, the Arctic area, the Baltic Sea, the Atlantic North-West shelves area, the Atlantic Iberian-Biscay-Ireland area, the Mediterranean Sea and the Black Sea. Intermediate and final users will discover, view and get the products by means of a central web desk, a central re-active manned service desk and thematic experts distributed across Europe. The MyOcean Information System (MIS) considers the various aspects of an interoperable, federated information system. Data models support data and computer systems by providing the definition and format of data. The possibility of including the information in the data file depends on the data model adopted. In general there is little effort in the current project to develop a 'generic' data model. A strong push to develop a common model is provided by the EU INSPIRE Directive. At present, there is no single de facto data format for storing observational data. Data formats are still evolving, with their underlying data models moving towards the concept of Feature Types based on ISO/TC211 standards. For example, Unidata is developing the Common Data Model, which can represent scientific data types such as point, trajectory, station, grid, etc., and which will be implemented in netCDF format. SeaDataNet recommends the ODV and NetCDF formats. Another problem related to data curation and interoperability is the possibility of using common vocabularies. Common vocabularies are developed in many international initiatives, such as GEMET (promoted by INSPIRE as a multilingual thesaurus), UNIDATA, SeaDataNet, and the Marine Metadata Initiative (MMI). The MIS is considering the SeaDataNet vocabulary as a base for interoperability. Four layers of interoperability, at different abstraction levels, can be defined: - Technical/basic: this layer is implemented at each TAC or MFC through an internet connection and basic services for data transfer and browsing (e.g., FTP, HTTP). - Syntactic: allowing the interchange of metadata and protocol elements. This layer corresponds to the definition of a Core Metadata Set, the format of exchange/delivery for the data and associated metadata, and possible software; it is implemented by the DIAL-P logical interface (e.g., adoption of an INSPIRE-compliant metadata set and common data formats). - Functional/pragmatic: based on a common set of functional primitives or on a common set of service definitions. This layer refers to the definition of services based on Web services standards and is implemented by the DIAL-P logical interface (e.g., adoption of INSPIRE-compliant network services). - Semantic: allowing access to similar classes of objects and services across multiple sites, with multilinguality of content as one specific aspect.
This layer corresponds to the MIS interface, terminology and thesaurus. Given the above requirements, the proposed solution is a federation of systems, where the individual participants are self-contained autonomous systems, but together form a consistent wider picture. A mid-tier integration layer mediates between existing systems, adapting their data and service model schemas to the MIS. The developed MIS is a read-only system, i.e., it does not allow updating (or inserting) data into the participant resource systems. The main advantages of the proposed approach are: • to enable information sources to join the MIS and publish their data and metadata in a secure way, without any modification to their existing resources and procedures and without any restriction to their autonomy; • to enable users to browse and query the MIS, receiving an aggregated result incorporating relevant data and metadata from across different sources; • to accommodate the growth of such a MIS, either in terms of its clients or of its information resources, as well as the evolution of the underlying data model.

  4. SAR Altimetry Processing on Demand Service for CryoSat-2 and Sentinel-3 at ESA G-POD

    NASA Astrophysics Data System (ADS)

    Dinardo, Salvatore; Lucas, Bruno; Benveniste, Jerome

    2015-12-01

    The scope of this work is to feature the new ESA service (SARvatore) for the exploitation of CryoSat-2 data, designed and developed entirely by the Altimetry Team at ESA-ESRIN EOP-SER (Earth Observation - Exploitation, Research and Development). The G-POD service SARvatore (SAR Versatile Altimetric Toolkit for Ocean Research & Exploitation) for CryoSat-2 is a web platform that provides the capability to process, on-line and on demand, CryoSat-2 SAR/SARIN data, from L1a (FBR) data products up to SAR/SARIN Level-2 geophysical data products. The processor makes use of the G-POD (Grid-Processing On Demand) distributed computing platform to deliver the output data products in a timely manner. These output data products are generated in standard NetCDF format (using the CF Convention), and they are compatible with BRAT (Basic Radar Altimetry Toolbox) and other NetCDF tools. Using the G-POD graphical interface, it is easy to select the geographical area of interest along with the time frame of interest, based on the CryoSat-2 SAR/SARIN FBR data products available in the service's catalogue. After task submission, users can follow the status of the processing task in real time. The processor prototype is versatile in the sense that users can customize and adapt the processing according to their specific requirements by setting a list of configurable options. The processing service is meant to be used for research and development experiments, to support awarded development contracts by comparing their deliverables against ESA products, for on-site demonstrations and training in courses and workshops, for cross-comparison against third-party products (the CLS/CNES CPP products, for instance), in preparation for the Sentinel-3 Topographic mission, for producing data and graphics for publications, etc. So far, the processing has been designed and optimized for open-ocean studies and is fully functional only over this kind of surface, but there are plans to augment this processing capacity over the coastal zone, inland water and land, in view of maximizing the exploitation of the upcoming Sentinel-3 Topographic mission over all surfaces. The service is open and free of charge.

  5. A data delivery system for IMOS, the Australian Integrated Marine Observing System

    NASA Astrophysics Data System (ADS)

    Proctor, R.; Roberts, K.; Ward, B. J.

    2010-09-01

    The Integrated Marine Observing System (IMOS, www.imos.org.au), an AUD 150 m, 7-year project (2007-2013), is a distributed set of equipment and data-information services which, among many applications, collectively contribute to meeting the needs of marine climate research in Australia. The observing system provides data in the open oceans around Australia out to a few thousand kilometres, as well as in the coastal oceans, through 11 facilities which effectively observe and measure the 4-dimensional ocean variability and the physical and biological response of coastal and shelf seas around Australia. Through a national science rationale, IMOS is organized as five regional nodes (Western Australia - WAIMOS, South Australia - SAIMOS, Tasmania - TASIMOS, New South Wales - NSWIMOS and Queensland - QIMOS) surrounded by an oceanic node (Blue Water and Climate). Operationally, IMOS is organized as 11 facilities (Argo Australia, Ships of Opportunity, Southern Ocean Automated Time Series Observations, Australian National Facility for Ocean Gliders, Autonomous Underwater Vehicle Facility, Australian National Mooring Network, Australian Coastal Ocean Radar Network, Australian Acoustic Tagging and Monitoring System, Facility for Automated Intelligent Monitoring of Marine Systems, eMarine Information Infrastructure and Satellite Remote Sensing) delivering data. IMOS data are freely available to the public. The data, a combination of near real-time and delayed-mode streams, are made available to researchers through the electronic Marine Information Infrastructure (eMII). eMII utilises the Australian Academic Research Network (AARNET) to support a distributed database on OPeNDAP/THREDDS servers hosted by regional computing centres. IMOS instruments are described through the OGC specification SensorML and, wherever possible, data are in CF-compliant netCDF format. Metadata conforming to the ISO 19115 standard are automatically harvested from the netCDF files, and the metadata records are catalogued in the OGC GeoNetwork Metadata Entry and Search Tool (MEST). Data discovery, access and download occur via web services through the IMOS Ocean Portal (http://imos.aodn.org.au), and tools for the display and integration of near real-time data are in development.
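    The harvesting step eMII describes, reading discovery metadata out of a file's global attributes, is straightforward to sketch. The following is a minimal illustration using the netCDF4 Python library; the file name and the attribute names checked are assumptions, not the actual eMII implementation.

    ```python
    # Sketch: harvesting discovery metadata from a netCDF file's global
    # attributes, the kind of step automated before cataloguing records in
    # GeoNetwork. File and attribute names are illustrative.
    from netCDF4 import Dataset

    with Dataset("IMOS_example.nc") as ds:
        record = {name: ds.getncattr(name) for name in ds.ncattrs()}

    # Typical discovery fields, if present in the file:
    for key in ("title", "geospatial_lat_min", "geospatial_lat_max", "date_created"):
        print(key, "=", record.get(key, "<absent>"))
    ```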

  6. SAR Altimetry Processing on Demand Service for CryoSat-2 and Sentinel-3 at ESA G-POD

    NASA Astrophysics Data System (ADS)

    Benveniste, J.; Dinardo, S.; Lucas, B.

    2014-12-01

    The scope of this work is to show the new ESA service (SARvatore) for the exploitation of CryoSat-2 data and upcoming Sentinel-3 data, designed and developed entirely by the Altimetry Team at ESRIN EOP-SER. The G-POD (Grid-Processing On Demand) service SARvatore (SAR Versatile Altimetric Toolkit for Ocean Research & Exploitation) for CryoSat-2 is a web platform that provides the capability to process, on-line and on demand, CryoSat-2 SAR data, starting from L1a (FBR) data up to SAR Level-2 geophysical data products. The service is based on the SARvatore Processor Prototype. The output data products are generated in standard NetCDF format (using the CF Convention), and they are compatible with BRAT (Basic Radar Altimetry Toolbox) and its successor, the upcoming Sentinel-3 Altimetry Toolbox, as well as other NetCDF tools. Using the G-POD graphical interface, it is possible to easily select the geographical area of interest along with the time of interest. As of August 2014, the service allows the user to select data for most of 2013 and part of 2014, with no geographical restriction on these data. It is expected that before fall 2014 the whole mission dataset (when available) will be at the disposal of users. The processor prototype is versatile in the sense that users can customize and adapt the processing according to their specific requirements by setting a list of configurable options. The processing service is meant to be used for research and development purposes, supporting development contracts, on-site demonstrations and training for selected users, cross-comparison against third-party products, preparation for the Sentinel-3 mission, publications, etc. So far, the processing has been designed and optimized for open-ocean studies and is fully functional only over this kind of surface, but there are plans to augment this processing capacity over coastal zones, inland waters and land, with a view to maximizing the exploitation of the upcoming Sentinel-3 Topographic mission over all surfaces.

  7. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Mattmann, C. A.; Waliser, D. E.; Kim, J.; Loikith, P.; Lee, H.; McGibbney, L. J.; Whitehall, K. D.

    2014-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive size locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing, since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark. Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed; it thereby outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk, and makes iterative algorithms feasible. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 100 to 1000 compute nodes. This second-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning (ML) based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. The goals of SciSpark are to: (1) decrease the time to compute comparison statistics and plots from minutes to seconds; (2) allow for interactive exploration of time-series properties over seasons and years; (3) decrease the time for satellite data ingestion into RCMES to hours; (4) allow for Level-2 comparisons with higher-order statistics or PDFs in minutes to hours; and (5) move RCMES towards a near-real-time decision-making platform. We will report on: the architecture and design of SciSpark; our efforts to integrate climate science algorithms in Python and Scala; parallel ingest and partitioning (sharding) of A-Train satellite observations from HDF files and model grids from netCDF files, a pattern sketched below; first parallel runs to compute comparison statistics and PDFs; and first metrics quantifying parallel speedups and memory and disk usage.
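    A minimal sketch of the sharding pattern described above, expressed with PySpark and the netCDF4 library rather than SciSpark's own Scala API; the file list and variable name are placeholders.

    ```python
    # Sketch: map-reduce over a list of netCDF files with PySpark. Each task
    # opens one file and reduces it to a per-file statistic. The shard list
    # and variable name are hypothetical.
    from pyspark import SparkContext
    from netCDF4 import Dataset
    import numpy as np

    def mean_of_var(path, varname="T2M"):     # hypothetical variable name
        with Dataset(path) as ds:
            return float(np.mean(ds.variables[varname][:]))

    sc = SparkContext(appName="netcdf-stats")
    paths = ["model_001.nc", "model_002.nc"]  # placeholder shard list
    means = sc.parallelize(paths).map(mean_of_var).collect()
    print(means)
    sc.stop()
    ```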

  8. A web portal for accessing, viewing and comparing in situ observations, EO products and model output data

    NASA Astrophysics Data System (ADS)

    Vines, Aleksander; Hamre, Torill; Lygre, Kjetil

    2014-05-01

    The GreenSeas project (Development of global plankton data base and model system for eco-climate early warning) aims to advance the knowledge and predictive capacities of how marine ecosystems will respond to global change. A main task has been to set up a data delivery and monitoring core service following the open and free data access policy implemented in the Global Monitoring for the Environment and Security (GMES) programme. A key feature of the system is its ability to compare data from different datasets, including an option to upload one's own netCDF files. The user can, for example, search an in situ database for different variables (such as temperature, salinity, different elements, light, specific plankton types or rate measurements) with different criteria (bounding box, date/time, depth, Longhurst region, cruise/transect) and compare the data with model data. The user can choose model data or Earth observation data from a list, or upload his/her own netCDF files to use in the comparison. The data can be visualized on a map, as graphs and plots (e.g. time series and property-property plots), or downloaded in various formats. The aim is to ensure open and free access to historical plankton data, new data (EO products and in situ measurements), model data (including estimates of simulation error) and biological, environmental and climatic indicators for a range of stakeholders, such as scientists, policy makers and environmental managers. We have implemented a web-based GIS (Geographical Information System) and want to demonstrate its use. The tool is designed for a wide range of users: novice users, who want a simple way to get basic information about the current state of the marine planktonic ecosystem by utilizing predefined queries and comparisons with models; intermediate-level users, who want to explore the database on their own and customize the predefined setups; and advanced users, who want to perform complex queries and inventory searches and to compare the data in their own way or with their own models.

  9. A User's Guide to the Tsunami Datasets at NOAA's National Data Buoy Center

    NASA Astrophysics Data System (ADS)

    Bouchard, R. H.; O'Neil, K.; Grissom, K.; Garcia, M.; Bernard, L. J.; Kern, K. J.

    2013-12-01

    The National Data Buoy Center (NDBC) has maintained and operated the National Oceanic and Atmospheric Administration's (NOAA) tsunameter network since 2003. The tsunameters employ the NOAA-developed Deep-ocean Assessment and Reporting of Tsunamis (DART) technology. The technology measures the pressure and temperature every 15 seconds on the ocean floor and transforms them into equivalent water-column height observations. A complex series of subsampled observations is transmitted acoustically in real time to a moored buoy or marine autonomous vehicle (MAV) at the ocean surface. The surface platform uses its satellite communications to relay the observations to NDBC. NDBC places the observations onto the Global Telecommunication System (GTS) for relay to NOAA's Tsunami Warning Centers (TWCs) in Hawai'i and Alaska and to the international community. It takes less than three minutes to speed the observations from the ocean floor to the TWCs. NDBC can retrieve limited amounts of the 15-s measurements from the instrumentation on the ocean floor using the technology's two-way communications. NDBC recovers the full-resolution 15-s measurements about every 2 years and forwards the datasets and metadata to the National Geophysical Data Center for permanent archive. Meanwhile, NDBC retains the real-time observations on its website. The type of real-time observation depends on the operating mode of the tsunameter. NDBC provides the observations through a variety of traditional and innovative methods and formats that include descriptors of the operating mode. Datasets, organized by station, are available from the NDBC website as text files and from the NDBC THREDDS server in netCDF format. The website provides alerts and lists of events that allow users to focus on the information relevant for tsunami hazard analysis. In addition, NDBC developed a basic web service to query station information and observations to support the Short-term Inundation Forecasting for Tsunamis (SIFT) model. NDBC and NOAA's Integrated Ocean Observing System have fielded the innovative Sensor Observation Service (SOS), which allows users access to observations by station or by groups of stations that have been organized into Features of Interest, such as the 2011 Honshu Tsunami. The user can elect to receive the SOS observations in several different formats, such as Sensor Web Enablement (SWE) or delimiter-separated values. Recently, NDBC's coastal and offshore buoys provided meteorological observations used in analyzing possible meteotsunamis on the U.S. East Coast. However, many of these observations are some distance away from the tsunameters. In a demonstration project, NDBC has added sensors to a tsunameter's surface buoy and a MAV to support program requirements for meteorological observations. All these observations are available from NDBC's website in text files, netCDF, and SOS. To aid users in obtaining information relevant to their applications, the presentation documents, in detail, the characteristics of the different types of real-time observations and the availability and organization of the resulting datasets at NDBC.

  10. Tackling the 2nd V: Big Data, Variety and the Need for Representation Consistency

    NASA Astrophysics Data System (ADS)

    Clune, T.; Kuo, K. S.

    2016-12-01

    While Big Data technologies are transforming our ability to analyze ever larger volumes of Earth science data, practical constraints continue to limit our ability to compare data across datasets from different sources in an efficient and robust manner. Within a single data collection, invariants such as file format, grid type, and spatial resolution greatly simplify many types of analysis (often implicitly). However, when analysis combines data across multiple data collections, researchers are generally required to implement data transformations (i.e., "data preparation") to provide the appropriate invariants. These transformations include changing file formats, ingesting into a database, and/or regridding to a common spatial representation, and they can either be performed once, statically, or each time the data are accessed. At the very least, this process is inefficient from the perspective of the community, as each team selects its own representation and privately implements the appropriate transformations. No doubt there are disadvantages to any "universal" representation, but we posit that major benefits would be obtained if a suitably flexible spatial representation could be standardized, along with tools for transforming to/from that representation. We regard this as part of the historic trend in data publishing. Early datasets used ad hoc formats and lacked metadata. As better tools evolved, published data began to use standardized formats (e.g., HDF and netCDF) with attached metadata. We propose that the modern need to perform analysis across data sets should drive a new generation of tools that support a standardized spatial representation. More specifically, we propose the hierarchical triangular mesh (HTM), sketched below, as a suitable "generic" representation that permits standard transformations to/from the native representations in use today, as well as tools to convert/regrid existing datasets onto that representation.
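    To make the proposal concrete, the sketch below shows the recursive midpoint subdivision that generates HTM trixels from one face of an octahedron inscribed in the unit sphere. It illustrates the mesh construction only, not a full HTM indexing or regridding toolchain.

    ```python
    # Sketch: recursive midpoint subdivision behind a hierarchical triangular
    # mesh (HTM). Each level splits a spherical triangle into four children
    # whose new vertices are re-projected onto the unit sphere.
    import numpy as np

    def midpoint(a, b):
        m = (a + b) / 2.0
        return m / np.linalg.norm(m)      # re-project onto the unit sphere

    def subdivide(tri, depth):
        if depth == 0:
            return [tri]
        a, b, c = tri
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        children = [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        return [t for child in children for t in subdivide(child, depth - 1)]

    # One face of the octahedron used to seed the mesh.
    tri0 = tuple(np.array(v, dtype=float) for v in [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
    print(len(subdivide(tri0, 3)))        # 4**3 = 64 trixels at depth 3
    ```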

  11. Experimenter's laboratory for visualized interactive science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.

    1992-01-01

    The science activities of the 1990's will require the analysis of complex phenomena and large diverse sets of data. In order to meet these needs, we must take advantage of advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high performance graphics workstations; and interoperable data standards and translators. To meet these needs, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. A Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general purpose applications. A 3D visualization package developed by NCAR provides both color shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.

  12. Experimenter's laboratory for visualized interactive science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.

    1993-01-01

    The science activities of the 1990's will require the analysis of complex phenomena and large diverse sets of data. In order to meet these needs, we must take advantage of advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high performance graphics workstations; and interoperable data standards and translators. To meet these needs, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. A Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general purpose applications. A 3D visualization package developed by NCAR provides both color-shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.

  13. The Massachusetts Bay internal wave experiment, August 1998: data report

    USGS Publications Warehouse

    Butman, Bradford; Alexander, P. Soupy; Anderson, Steven P.; Lightsom, Frances L.; Scotti, Alberto; Beardsley, Robert C.

    2006-01-01

    This data report presents oceanographic observations made in Massachusetts Bay (fig. 1) in August 1998 as part of the Massachusetts Bay Internal Wave Experiment (MBIWE98). MBIWE98 was carried out to characterize large-amplitude internal waves in Massachusetts Bay and to investigate the possible resuspension and transport of bottom sediments caused by these waves. This data report presents a description of the field program and instrumentation, an overview of the data through summary plots and statistics, and the time-series data in NetCDF format. The objective of this report is to make the data available in digital form and to provide summary plots and statistics to facilitate browsing of the data set. The existence of large-amplitude internal waves in Massachusetts Bay was first described by Halpern (1971). In summer, when the water is stratified, packets of waves propagate westward into the bay on the flood (westward-flowing) tide at about 0.5 m/s. The internal waves are observed in packets of 5-10 waves, have periods of 5-10 minutes and wavelengths of 200-400 m, and cause downward excursions of the thermocline of as much as 30 m. The waves are generated by the interaction of the barotropic tide with Stellwagen Bank (Haury and others, 1979). Several papers present analyses and interpretations of the data collected during MBIWE98: Grosenbaugh and others (2002) report on the results of the horizontal array; Scotti and others (2005) describe a strategy for processing observations made by Acoustic Doppler Current Profilers (ADCPs) in the presence of short-wavelength internal waves; Butman and others (in press) describe the effect of these waves on sediment transport; and Scotti and others (in press) describe the energetics of the internal waves.

  14. iMeteo: a web-based weather visualization tool

    NASA Astrophysics Data System (ADS)

    Tuni San-Martín, Max; San-Martín, Daniel; Cofiño, Antonio S.

    2010-05-01

    iMeteo is a web-based weather visualization tool. Designed with an extensible J2EE architecture, it is capable of displaying information from heterogeneous data sources, such as gridded data from numerical models (in NetCDF format) or databases of local predictions. All this information is presented in a user-friendly way: the user can choose the specific tool with which to display the data (maps, graphs, information tables) and customize it for desired locations. *Modular Display System* Visualization of the data is achieved through a set of mini tools called widgets. A user can add them at will and arrange them around the screen easily with a drag-and-drop movement. They can be of various types, and each can be configured separately, forming a really powerful and configurable system. The "Map" is the most complex widget, since it can show several variables simultaneously (either gridded or point-based) through a layered display. Other useful widgets are the "Histogram", which generates a graph with the frequency characteristics of a variable, and the "Timeline", which shows the time evolution of a variable at a given location in an interactive way. *Customization and security* Following the trends in web development, the user can easily customize the way data are displayed. Because the client side is programmed with technologies like AJAX, interaction with the application is similar to that of desktop applications, with rapid response times. Registered users can also save their settings in the database, allowing access to their particular setup from any system with Internet access. There is particular emphasis on application security: the administrator can define a set of user profiles, which may have associated restrictions on access to certain data sources, geographic areas or time intervals.

  15. CryoSat Ice Processor: Known Processor Anomalies and Potential Future Product Evolutions

    NASA Astrophysics Data System (ADS)

    Mannan, R.; Webb, E.; Hall, A.; Bouffard, J.; Femenias, P.; Parrinello, T.; Brockley, D.; Baker, S.; Scagliola, M.; Urien, S.

    2016-08-01

    Launched in 2010, CryoSat was designed to measure changes in polar sea ice thickness and ice sheet elevation. To reach this goal, the CryoSat data products have to meet the highest performance standards and are subjected to a continual cycle of improvement achieved through upgrades to the Instrument Processing Facilities (IPFs). Following the switch to the Baseline-C Ice IPFs, there are already planned evolutions for the next processing baseline, based on recommendations from the scientific community, the Expert Support Laboratory (ESL), Quality Control (QC) centres and validation campaigns. Some of the proposed evolutions, to be discussed with the scientific community, include the activation of freeboard computation in SARin mode, the potential operation of SARin mode over flat-to-slope transitory land ice areas, further tuning of the land ice retracker, the switch to NetCDF format and the resolution of anomalies arising in Baseline-C. This paper describes some of the anomalies known to affect Baseline-C, in addition to potential evolutions planned and foreseen for Baseline-D.

  16. HDF-EOS 5 Validator

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A computer program partly automates the task of determining whether an HDF-EOS 5 file is valid, in that it conforms to specifications for such characteristics as attribute names, dimensionality of data products, and ranges of legal data values. ["HDF-EOS" and variants thereof are defined in "Converting EOS Data From HDF-EOS to netCDF" (GSC-15007-1), which is the first of several preceding articles in this issue of NASA Tech Briefs.] Previously, the validity of a file was determined in a tedious and error-prone process in which a person examined human-readable dumps of data-file-format information. The present software helps a user to encode the specifications for an HDF-EOS 5 file, and then inspects the file for conformity with the specifications. First, the user writes the specifications in Extensible Markup Language (XML) by use of a document type definition (DTD) that is part of the program. Next, the portion of the program (denoted the validator) that performs the inspection is executed, using as inputs the specifications in XML and the HDF-EOS 5 file to be validated. Finally, the user examines the output of the validator.
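    The general pattern, a specification in XML checked against file contents, can be sketched briefly. The snippet below uses Python's xml.etree and h5py to test for required attributes; the XML schema shown is invented for illustration and is not the program's actual DTD.

    ```python
    # Sketch of the validator's general pattern: read a specification from
    # XML, then check an HDF5 file against it with h5py. The schema below is
    # invented for illustration, not the program's actual DTD.
    import xml.etree.ElementTree as ET
    import h5py

    spec = ET.fromstring("""
    <spec>
      <attribute name="Conventions" required="true"/>
      <attribute name="title" required="true"/>
    </spec>
    """)

    with h5py.File("example_heos5.h5", "r") as f:   # placeholder file name
        for rule in spec.findall("attribute"):
            name = rule.get("name")
            print(f"{name}: {'present' if name in f.attrs else 'MISSING'}")
    ```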

  17. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    NASA Astrophysics Data System (ADS)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.
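    The skewed likelihood is the key device here: if observations tend to underestimate gross change, the observation model should put more probability mass below the true value. A minimal sketch using scipy's skew-normal distribution follows; all parameter values are invented for illustration, and the paper's actual likelihood and priors differ.

    ```python
    # Sketch: a skewed observation likelihood of the kind the abstract
    # describes, built from scipy's skew-normal. All numbers are invented.
    import numpy as np
    from scipy.stats import skewnorm

    def log_likelihood(true_change, observed, skew=-4.0, scale=0.1):
        # Negative skew puts more mass on observations below the true value,
        # encoding the tendency to underestimate gross land-use change.
        return skewnorm.logpdf(observed, a=skew, loc=true_change, scale=scale)

    obs = 0.82   # observed gross change (arbitrary units)
    for candidate in np.linspace(0.7, 1.1, 5):
        print(f"true={candidate:.2f}  loglik={log_likelihood(candidate, obs):.2f}")
    ```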

  18. Atmospheric Radiation Measurement's Data Management Facility captures metadata and uses visualization tools to assist in routine data management.

    NASA Astrophysics Data System (ADS)

    Keck, N. N.; Macduff, M.; Martin, T.

    2017-12-01

    The Atmospheric Radiation Measurement (ARM) program's Data Management Facility (DMF) plays a critical support role in processing and curating data generated by the Department of Energy's ARM Program. Data are collected in near real time from hundreds of observational instruments spread out all over the globe. Data are then ingested hourly to provide time-series data in NetCDF (Network Common Data Form) with standardized metadata. Based on automated processes and a variety of user reviews, the data may need to be reprocessed. Final data sets are then stored and accessed by users through the ARM Archive. Over the course of 20 years, a suite of data visualization tools has been developed to facilitate the operational processes that manage and maintain the more than 18,000 real-time events that move 1.3 TB of data each day through the various stages of the DMF's data system. This poster will present the resources and methodology used to capture metadata and the tools that assist in routine data management and discoverability.

  19. Simplifying the Analysis of Data from Multiple Heliophysics Instruments and Missions

    NASA Astrophysics Data System (ADS)

    Bazell, D.; Vandegriff, J. D.

    2014-12-01

    Understanding the intertwined plasma, particles and fields connecting the Sun and the Earth requires combining data from many diverse sources, but there are still many technological barriers that complicate the merging of data from different instruments and missions. We present an emerging data serving capability that provides a uniform way to access heterogeneous and distributed data. The goal of our data server is to provide a standardized data access mechanism that is identical for data of any format and layout (CDF, custom binary, FITS, netCDF, CSV and other flavors of ASCII, etc.). Data remain in their original format and location (i.e., at instrument team sites or existing data centers), and our data server delivers a dynamically reformatted view of the data. Scientists can then use tools (clients that talk to the server) that offer a single interface for browsing, analyzing or downloading many different contemporary and legacy heliophysics data sets. Our current server accesses many CDF data resources at CDAWeb, as well as multiple other instrument team sites. Our web service will be deployed on the Amazon Cloud at http://datashop.elasticbeanstalk.com/. Two basic clients will also be demonstrated: one in Java and one in IDL. Python, Perl, and Matlab clients are also planned. Complex missions such as Solar Orbiter and Solar Probe Plus will benefit greatly from tools that enable multi-instrument and multi-mission data comparison.

  20. The Aegean Sea marine security decision support system

    NASA Astrophysics Data System (ADS)

    Perivoliotis, L.; Krokos, G.; Nittis, K.; Korres, G.

    2011-05-01

    As part of the integrated ECOOP (European Coastal Sea Operational observing and Forecasting System) project, HCMR upgraded the already existing standalone Oil Spill Forecasting System for the Aegean Sea, initially developed for the Greek Operational Oceanography System (POSEIDON), into an active element of the European Decision Support System (EuroDeSS). The system is accessible through a user-friendly web interface where the case scenarios can be fed into the oil spill drift model component, while the synthetic output contains detailed information about the distribution of oil spill particles and the oil spill budget and is provided both in the text-based ECOOP common output format and as a series of sequential graphics. The main development steps that were necessary for this transition were the modification of the forcing input data module in order to allow the import of other system products, which are usually provided in standard formats such as NetCDF, and the transformation of the model's calculation routines to allow the use of current, density and diffusivity data in z instead of sigma coordinates. During the implementation of the Aegean DeSS, the system was used in operational mode in order to support the Greek marine authorities in handling a real accident that took place in the North Aegean area. Furthermore, the introduction of common input and output files by all the partners of EuroDeSS extended the system's interoperability, thus facilitating data exchanges and comparison experiments.

  1. The Aegean sea marine security decision support system

    NASA Astrophysics Data System (ADS)

    Perivoliotis, L.; Krokos, G.; Nittis, K.; Korres, G.

    2011-10-01

    As part of the integrated ECOOP (European Coastal Sea Operational observing and Forecasting System) project, HCMR upgraded the already existing standalone Oil Spill Forecasting System for the Aegean Sea, initially developed for the Greek Operational Oceanography System (POSEIDON), into an active element of the European Decision Support System (EuroDeSS). The system is accessible through a user-friendly web interface where the case scenarios can be fed into the oil spill drift model component, while the synthetic output contains detailed information about the distribution of oil spill particles and the oil spill budget and is provided both in the text-based ECOOP common output format and as a series of sequential graphics. The main development steps that were necessary for this transition were the modification of the forcing input data module in order to allow the import of other system products, which are usually provided in standard formats such as NetCDF, and the transformation of the model's calculation routines to allow the use of current, density and diffusivity data in z instead of sigma coordinates. During the implementation of the Aegean DeSS, the system was used in operational mode in order to support the Greek marine authorities in handling a real accident that took place in the North Aegean area. Furthermore, the introduction of common input and output files by all the partners of EuroDeSS extended the system's interoperability, thus facilitating data exchanges and comparison experiments.

  2. Long-Term Oceanographic Observations in Western Massachusetts Bay Offshore of Boston, Massachusetts: Data Report for 1989-2002

    USGS Publications Warehouse

    Butman, Bradford; Bothner, Michael H.; Alexander, P. Soupy; Lightsom, Frances L.; Martini, Marinna A.; Gutierrez, Benjamin T.; Strahle, William S.

    2004-01-01

    This data report presents long-term oceanographic observations made in western Massachusetts Bay at two locations: (1) 42 deg 22.6' N., 70 deg 47.0' W. (Site A, 33 m water depth) from December 1989 through December 2002 (figure 1), and (2) 42 deg 9.8' N., 70 deg 38.4' W. (Site B, 21 m water depth) from October 1997 through December 2002. Site A is approximately 1 km south of the new ocean outfall that began discharging treated sewage effluent from the Boston metropolitan area into Massachusetts Bay on September 6, 2000. These long-term oceanographic observations have been collected by the U.S. Geological Survey (USGS) in partnership with the Massachusetts Water Resources Authority (MWRA) and with logistical support from the U.S. Coast Guard (USCG - http://www.uscg.mil). This report presents time series data through December 2002, updating a similar report that presented data through December 2000 (Butman and others, 2002). In addition, the Statistics and Mean Flow sections include some new plots and tables, and the format of the report has been streamlined by combining yearly figures into single .pdfs. The long-term measurements are planned to continue at least through 2005. The long-term oceanographic observations at Sites A and B are part of a USGS study designed to understand the transport and long-term fate of sediments and associated contaminants in the Massachusetts bays. (See http://woodshole.er.usgs.gov/project-pages/bostonharbor/ and Butman and Bothner, 1997.) The long-term observations document seasonal and inter-annual changes in currents, hydrography, and suspended-matter concentration in western Massachusetts Bay, and the importance of infrequent catastrophic events, such as major storms or hurricanes, in sediment resuspension and transport. They also provide observations for testing numerical models of circulation. This data report presents a description of the field program and instrumentation, an overview of the data through summary plots and statistics, and the data in NetCDF and ASCII format for the period December 1989 through December 2002 for Site A and October 1997 through December 2002 for Site B. The objective of this report is to make the data available in digital form and to provide summary plots and statistics to facilitate browsing of the long-term data set.

  3. Psyplot: Visualizing rectangular and triangular Climate Model Data with Python

    NASA Astrophysics Data System (ADS)

    Sommer, Philipp

    2016-04-01

    The development and use of climate models often requires the visualization of geo-referenced data. Creating visualizations should be fast, attractive, flexible, easily applicable and easily reproducible. There is a wide range of software tools available for visualizing raster data, but they are often inaccessible to many users (e.g. because they are difficult to use in a script or have low flexibility). In order to facilitate easy visualization of geo-referenced data, we developed a new framework called "psyplot," which can aid earth system scientists with their daily work. It is purely written in the programming language Python and primarily built upon the Python packages matplotlib, cartopy and xray. The package can visualize data stored on the hard disk (e.g. NetCDF, GeoTIFF, or any other file format supported by the xray package), or directly from memory or Climate Data Operators (CDOs). Furthermore, data can be visualized on a rectangular grid (following or not following the CF Conventions) and on a triangular grid (following the CF or UGRID Conventions). Psyplot visualizes 2D scalar and vector fields, enabling the user to easily manage and format multiple plots at the same time, and to export the plots into all common picture formats and movies covered by the matplotlib package. The package can currently be used in an interactive Python session or in Python scripts, and will soon be extended for use with a graphical user interface (GUI). Finally, the psyplot framework enables flexible configuration, allows easy integration into other scripts that use matplotlib, and provides a flexible foundation for further development.
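
    A minimal usage sketch, assuming psyplot and its psy-maps plugin are installed; the file name, variable name, and formatoption values below are placeholders rather than a definitive recipe.

        # Draw and export a map of one 2D field from a NetCDF file.
        import psyplot.project as psy

        maps = psy.plot.mapplot("model_output.nc",   # placeholder file
                                name="t2m",          # placeholder variable
                                cmap="RdBu_r",
                                projection="robin")  # Robinson projection
        maps.export("t2m_map.png")  # save to a common picture format
        maps.close()                # release figures and the underlying dataset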

  4. Customer-oriented Data Formats and Services for Global Land Data Assimilation System (GLDAS) Products at the NASA GES DISC

    NASA Astrophysics Data System (ADS)

    Fang, H.; Kato, H.; Rodell, M.; Teng, W. L.; Vollmer, B. E.

    2008-12-01

    The Global Land Data Assimilation System (GLDAS) has been generating a series of land surface state (e.g., soil moisture and surface temperature) and flux (e.g., evaporation and sensible heat flux) products, simulated by four land surface models (CLM, Mosaic, Noah and VIC). These products are now accessible at the Hydrology Data and Information Services Center (HDISC), a component of the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). Current GLDAS data hosted at HDISC include a set of 1.0° data products, covering 1979 to the present, from the four models and a 0.25° data product, covering 2000 to the present, from the Noah model. In addition to basic anonymous ftp data downloading, users can avail themselves of several advanced data search and downloading services, such as Mirador and OPeNDAP. Mirador is a Google-based search tool that provides keyword searching and on-the-fly spatial and parameter subsetting of selected data. OPeNDAP (Open-source Project for a Network Data Access Protocol) enables remote OPeNDAP clients to access OPeNDAP-served data regardless of local storage format. Additional data services to be available in the near future from HDISC include (1) on-the-fly conversion of GLDAS to NetCDF and binary data formats; (2) temporal aggregation of GLDAS files; and (3) Giovanni, an online visualization and analysis tool that provides a simple way to visualize, analyze, and access vast amounts of data without having to download the data.
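
    As a hedged sketch of the OPeNDAP access route mentioned above, the snippet below opens a remote dataset with xarray so that only the requested subset is transferred. The URL and the variable and coordinate names are placeholders, not actual HDISC endpoints.

        import xarray as xr

        url = "https://hydro1.example.gov/opendap/GLDAS_NOAH025_3H"  # placeholder URL
        ds = xr.open_dataset(url)  # no bulk download; data are fetched lazily

        # Subset soil moisture in space, then aggregate to monthly means.
        soil = ds["SoilMoist"].sel(lat=slice(30, 45), lon=slice(-110, -90))
        monthly = soil.resample(time="1MS").mean()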

  5. SeaView: bringing EarthCube to the Oceanographer

    NASA Astrophysics Data System (ADS)

    Stocks, K. I.; Diggs, S. C.; Arko, R. A.; Kinkade, D.; Shepherd, A.

    2016-12-01

    As new instrument types are developed and new observational programs start that support a growing community of "dry" oceanographers, the ability to find, access, and visualize existing data of interest becomes increasingly critical. Yet ocean data, when available, are held in multiple data facilities, in different formats, and accessible through different pathways. This creates practical problems with integrating and working across different data sets. The SeaView project is building connections between the rich data resources in five major oceanographic data facilities - BCO-DMO, CCHDO, OBIS, OOI, and R2R* - creating a federated set of thematic data collections that are organized around common characteristics (geographic location, time, expedition, program, data type, etc.) and published online in Web Accessible Folders using standard file formats such as ODV and NetCDF. The work includes not simply reformatting data, but identifying and, where possible, addressing interoperability challenges: which common identifiers for core concepts can connect data across repositories; which terms a scientist may want to search that, if added to the data repositories, will increase discoverability; the presence of duplicate data across repositories; etc. We will present the data collections available to date, including data from the OOI Pioneer Array region, and seek scientists' input on the data types and formats they prefer, the tools they use to analyze and visualize data, and their specific recommendations for future data collections to support oceanographic science. * Biological and Chemical Oceanography Data Management Office (BCO-DMO), CLIVAR and Carbon Hydrographic Data Office (CCHDO), International Ocean Biogeographic Information System (iOBIS), Ocean Observatories Initiative (OOI), and Rolling Deck to Repository (R2R) Program.

  7. Web Based Data Access to the World Data Center for Climate

    NASA Astrophysics Data System (ADS)

    Toussaint, F.; Lautenschlager, M.

    2006-12-01

    The World Data Center for Climate (WDC-Climate, www.wdc-climate.de) is hosted by the Model & Data Group (M&D) of the Max Planck Institute for Meteorology. The M&D department is financed by the German government and uses the computers and mass storage facilities of the German Climate Computing Centre (Deutsches Klimarechenzentrum, DKRZ). The WDC-Climate provides web access to 200 terabytes of climate data; the total mass storage archive contains nearly 4 petabytes. Although the majority of the datasets concern model output data, some satellite and observational data are accessible as well. The underlying relational database is distributed over five servers. The CERA relational data model is used to integrate catalogue data and mass data. The flexibility of the model makes it possible to store and access very different types of data and metadata. The CERA metadata catalogue provides easy access to the content of the CERA database as well as to other data on the web. Visit ceramodel.wdc-climate.de for additional information on the CERA data model. The majority of users access data via the CERA metadata catalogue, which is open without registration. However, prior to retrieving data, users are required to check in and apply for a userid and password. The CERA metadata catalogue is servlet based, so it is accessible worldwide through any web browser at cera.wdc-climate.de. In addition to data and metadata access through the web catalogue, WDC-Climate offers a number of other forms of web-based data access. All metadata are available via http request as xml files in various metadata formats (ISO, DC, etc.; see wini.wdc-climate.de), which allows for easy data interchange with other catalogues. Model data can be retrieved in GRIB, ASCII, NetCDF, and binary (IEEE) format. WDC-Climate serves as the data centre for various projects. Since xml files are accessible by http, the integration of data into applications of different projects is very easy. Projects supported by WDC-Climate include CEOP, IPCC, and CARIBIC. A script tool for data download (jblob) is offered on the web page to make retrieval of large data quantities more convenient.

  8. GENESIS SciFlo: Choreographing Interoperable Web Services on the Grid using a Semantically-Enabled Dataflow Execution Environment

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.

    2007-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. SciFlo also publishes its own SOAP services for space/time query and subsetting of Earth Science datasets, and automated access to large datasets via lists of (FTP, HTTP, or DAP) URLs which point to on-line HDF or netCDF files. Typical distributed workflows obtain datasets by calling standard WMS/WCS servers or discovering and fetching data granules from ftp sites; invoke remote analysis operators available as SOAP services (interface described by a WSDL document); and merge results into binary containers (netCDF or HDF files) for further analysis using local executable operators. Naming conventions (HDFEOS and CF-1.0 for netCDF) are exploited to automatically understand and read on-line datasets. More interoperable conventions, and broader adoption of existing conventions, are vital if we are to "scale up" automated choreography of Web Services beyond toy applications. Recently, the ESIP Federation sponsored a collaborative activity in which several ESIP members developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine.
We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, the benefits of doing collaborative science analysis at the "touch of a button" once services are connected, and further collaborations that are being pursued.

  9. Web processing service for landslide hazard assessment

    NASA Astrophysics Data System (ADS)

    Sandric, I.; Ursaru, P.; Chitu, D.; Mihai, B.; Savulescu, I.

    2012-04-01

    Hazard analysis requires heavy computation and specialized software. Web processing services can offer complex solutions that can be accessed through a light client (web or desktop). This paper presents a web processing service (both WPS and Esri Geoprocessing Service) for landslide hazard assessment. The web processing service was built with the Esri ArcGIS Server solution and Python, developed using ArcPy, GDAL Python and NumPy. A complex model for landslide hazard analysis, using both predisposing and triggering factors combined into a Bayesian temporal network with uncertainty propagation, was built and published as a WPS and Geoprocessing service using ArcGIS Standard Enterprise 10.1. The model uses as predisposing factors the first and second derivatives from the DEM, the effective precipitation, runoff, lithology and land use. All these parameters can be supplied by the client from other WFS services or by uploading and processing the data on the server. The user can select the option of creating the first and second derivatives from the DEM automatically on the server or of uploading data already calculated. One of the main dynamic factors in the landslide analysis model is the leaf area index (LAI). The LAI offers the advantage of modelling not just the changes between different time periods expressed in years, but also the seasonal changes in land use throughout a year. The LAI index can be derived from various satellite images or downloaded as a product. The upload of such data (time series) is possible using the NetCDF file format, as sketched below. The model is run at a monthly time step, and for each time step all the parameter values and the a priori, conditional and posterior probabilities are obtained and stored in a log file. The validation process uses landslides that have occurred during the period up to the active time step and checks the records of the probabilities and parameter values for those time steps against the values of the active time step. Each time a landslide has been positively identified, new a priori probabilities are recorded for each parameter. A complete log for the entire model is saved and used for statistical analysis, and a NetCDF file is created that can be downloaded from the server with the log file.
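
    A minimal sketch of packaging such a LAI time series as NetCDF for upload, assuming a regular latitude-longitude grid with a monthly time axis; the dimension and variable names are illustrative, not the service's required schema.

        import numpy as np
        from netCDF4 import Dataset, date2num
        from datetime import datetime

        with Dataset("lai_series.nc", "w") as nc:
            nc.createDimension("time", None)   # unlimited monthly axis
            nc.createDimension("lat", 180)
            nc.createDimension("lon", 360)

            t = nc.createVariable("time", "f8", ("time",))
            t.units = "days since 2000-01-01 00:00:00"
            t.calendar = "standard"

            lai = nc.createVariable("lai", "f4", ("time", "lat", "lon"),
                                    zlib=True, fill_value=-9999.0)
            lai.long_name = "leaf area index"
            lai.units = "1"  # dimensionless

            dates = [datetime(2011, m, 15) for m in range(1, 13)]
            t[:] = date2num(dates, t.units, t.calendar)
            lai[:] = np.random.rand(12, 180, 360).astype("f4")  # stand-in values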

  10. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Palamuttam, R. S.; Mogrovejo, R. M.; Whitehall, K. D.; Mattmann, C. A.; Verma, R.; Waliser, D. E.; Lee, H.

    2015-12-01

    Remote sensing data and climate model output are multi-dimensional arrays of massive size locked away in heterogeneous file formats (HDF5/4, NetCDF 3/4) and metadata models (HDF-EOS, CF), making it difficult to perform multi-stage, iterative science processing since each stage requires writing and reading data to and from disk. We are developing a lightning-fast Big Data technology called SciSpark based on Apache Spark under a NASA AIST grant (PI Mattmann). Spark implements the map-reduce paradigm for parallel computing on a cluster, but emphasizes in-memory computation, "spilling" to disk only as needed, and so outperforms the disk-based Apache Hadoop by 100x in memory and by 10x on disk. SciSpark will enable scalable model evaluation by executing large-scale comparisons of A-Train satellite observations to model grids on a cluster of 10 to 1000 compute nodes. This 2nd-generation capability for NASA's Regional Climate Model Evaluation System (RCMES) will compute simple climate metrics at interactive speeds, and extend to quite sophisticated iterative algorithms such as machine-learning based clustering of temperature PDFs, and even graph-based algorithms for searching for Mesoscale Convective Complexes. We have implemented a parallel data ingest capability in which the user specifies desired variables (arrays) as several time-sorted lists of URLs (i.e. using OPeNDAP model.nc?varname, or local files). The specified variables are partitioned by time/space and then each Spark node pulls its bundle of arrays into memory to begin a computation pipeline. We also investigated the performance of several N-dimensional array libraries (scala breeze, java jblas & netlib-java, and ND4J). We are currently developing science codes using ND4J and studying memory behavior on the JVM. On the pyspark side, many of our science codes already use the numpy and SciPy ecosystems. The talk will cover: the architecture of SciSpark, the design of the scientific RDD (sRDD) data structure, our efforts to integrate climate science algorithms in Python and Scala, parallel ingest and partitioning of A-Train satellite observations from HDF files and model grids from netCDF files, first parallel runs to compute comparison statistics and PDFs, and first metrics quantifying parallel speedups and memory & disk usage.
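
    A hedged miniature of the parallel-ingest idea in PySpark: partition a time-sorted list of NetCDF files, let each worker load one array, and reduce to a global statistic. The paths and variable name are placeholders, and this is not the SciSpark sRDD implementation itself.

        import numpy as np
        from pyspark import SparkContext
        from netCDF4 import Dataset

        sc = SparkContext(appName="mini-ingest")

        paths = ["/data/model_%04d.nc" % i for i in range(100)]  # placeholder files

        def load_mean(path, varname="tas"):
            # Each worker reads its own file and returns a per-file mean.
            with Dataset(path) as nc:
                return float(np.mean(nc.variables[varname][:]))

        means = sc.parallelize(paths, numSlices=10).map(load_mean).collect()
        print("grand mean:", np.mean(means))
        sc.stop()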

  11. Cytometry standards continuum

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Spidlen, Josef; Brinkman, Ryan R.

    2008-02-01

    Introduction: The International Society for Analytical Cytology (ISAC) is developing a new combined flow and image Analytical Cytometry Standard (ACS). This standard needs to serve both the research and clinical communities. The clinical medicine and clinical research communities have a need to exchange information with hospital and other clinical information systems. Methods: 1) Prototype the standard by creating CytometryML and a RAW format for binary data. 2) Join the ISAC Data Standards Task Force. 3) Create essential project documentation. 4) Cooperate with other groups by assisting in the preparation of the DICOM Supplement 122: Specimen Module and Pathology Service-Object Pair Classes. Results: CytometryML has been created and serves as a prototype and source of experience for the following: the Analytical Cytometry Standard (ACS) 1.0, the ACS container, Minimum Information about a Flow Cytometry Experiment (MIFlowCyt), and Requirements for a Data File Standard Format to Describe Flow Cytometry and Related Analytical Cytology Data. These requirements provide a means to judge the appropriateness of design elements and to develop tests for the final ACS. The requirements include providing the information required for understanding and reproducing a cytometry experiment or clinical measurement, and for a single standard for both flow and digital microscopic cytometry. Schemas proposed by other members of the ISAC Data Standards Task Force (e.g., Gating-ML) have been independently validated and have been integrated with CytometryML. The use of netCDF as an element of the ACS container has been proposed by others, and a suggested method for its use is presented.

  12. ERDDAP - An Easier Way for Diverse Clients to Access Scientific Data From Diverse Sources

    NASA Astrophysics Data System (ADS)

    Mendelssohn, R.; Simons, R. A.

    2008-12-01

    ERDDAP is a new open-source, web-based service that aggregates data from other web services: OPeNDAP grid servers (THREDDS), OPeNDAP sequence servers (Dapper), NOS SOAP service, SOS (IOOS, OOStethys), microWFS, DiGIR (OBIS, BMDE). Regardless of the data source, ERDDAP makes all datasets available to clients via standard (and enhanced) DAP requests and makes some datasets accessible via WMS. A client's request also specifies the desired format for the results, e.g., .asc, .csv, .das, .dds, .dods, htmlTable, XHTML, .mat, netCDF, .kml, .png, or .pdf (formats more directly useful to clients). ERDDAP interprets a client request, requests the data from the data source (in the appropriate way), reformats the data source's response, and sends the result to the client. Thus ERDDAP makes data from diverse sources available to diverse clients via standardized interfaces. Clients don't have to install libraries to get data from ERDDAP because ERDDAP is RESTful and resource-oriented: a URL completely defines a data request and the URL can be used in any application that can send a URL and receive a file. This also makes it easy to use ERDDAP in mashups with other web services. ERDDAP could be extended to support other protocols. ERDDAP's hub and spoke architecture simplifies adding support for new types of data sources and new types of clients. ERDDAP includes metadata management support, catalog services, and services to make graphs and maps.
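
    Because ERDDAP is RESTful, a data request is just a URL; the sketch below builds a griddap constraint and reads the CSV reply with pandas. The server address and dataset ID are placeholders, and the constraint syntax is abbreviated.

        import pandas as pd

        base = "https://erddap.example.org/erddap"   # placeholder server
        dataset = "exampleSST"                       # placeholder dataset ID
        query = "sst[(2014-06-01T00:00:00Z)][(30):(40)][(-80):(-70)]"
        url = base + "/griddap/" + dataset + ".csv?" + query

        # ERDDAP CSV output puts column names in row 0 and units in row 1.
        df = pd.read_csv(url, skiprows=[1])
        print(df.head())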

  14. The compression–error trade-off for large gridded data sets

    DOE PAGES

    Silver, Jeremy D.; Zender, Charles S.

    2017-01-27

    The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
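
    A worked numpy illustration of the trade-off discussed above, on synthetic data: scalar linear packing uses one (scale, offset) pair for the whole array, while layer-packing keeps one pair per vertical layer, so layers with a small dynamic range retain more precision. This is a sketch of the idea, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        # 40 vertical layers whose magnitudes span several orders of magnitude.
        data = np.stack([rng.normal(10.0 ** (k / 10), 0.01, (90, 180))
                         for k in range(40)])

        def pack(x, nbits=16):
            # Quantize to unsigned integers, then round-trip back to float.
            scale = (x.max() - x.min()) / (2 ** nbits - 1)
            offset = x.min()
            packed = np.round((x - offset) / scale).astype(np.uint16)
            return packed * scale + offset

        scalar = pack(data)                                  # one pair for all layers
        layered = np.stack([pack(layer) for layer in data])  # one pair per layer

        err = lambda a: np.max(np.abs(a - data) / np.abs(data))
        print("max relative error, scalar packing:", err(scalar))
        print("max relative error, layer packing: ", err(layered))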

  15. Comparing apples and oranges: the Community Intercomparison Suite

    NASA Astrophysics Data System (ADS)

    Schutgens, Nick; Stier, Philip; Pascoe, Stephen

    2014-05-01

    Visual representation and comparison of geoscientific datasets presents a huge challenge due to the large variety of file formats and the diverse spatio-temporal sampling of data (be they observations or simulations). The Community Intercomparison Suite (CIS) attempts to greatly simplify these tasks for users by offering an intelligent but simple command line tool for visualisation and colocation of diverse datasets. In addition, CIS can subset and aggregate large datasets into smaller, more manageable datasets. Our philosophy is to remove, as much as possible, the need for the user to have specialist knowledge of the structure of a dataset. The colocation of observations with model data is as simple as: "cis col <variable>:<model datafile> <observation file>", which will resample the simulation data to the spatio-temporal sampling of the observations, contingent on a few user-defined options that specify a resampling kernel. CIS can deal with both gridded and ungridded datasets of 2, 3 or 4 spatio-temporal dimensions. It can handle different spatial coordinates (e.g. longitude or distance, altitude or pressure level). CIS supports the HDF, netCDF and ASCII file formats. The suite is written in Python with entirely publicly available open source dependencies. Plug-ins allow a high degree of user modifiability. A web-based developer hub includes a manual and simple examples. CIS is developed as open source code by a specialist IT company under the supervision of scientists from the University of Oxford, as part of the investment in the JASMIN superdatacluster facility at the Centre for Environmental Data Archival.

  16. NetCDF-CF: Supporting Earth System Science with Data Access, Analysis, and Visualization

    NASA Astrophysics Data System (ADS)

    Davis, E.; Zender, C. S.; Arctur, D. K.; O'Brien, K.; Jelenak, A.; Santek, D.; Dixon, M. J.; Whiteaker, T. L.; Yang, K.

    2017-12-01

    NetCDF-CF is a community-developed convention for storing and describing earth system science data in the netCDF binary data format. It is an OGC-recognized standard, and numerous existing FOSS (Free and Open Source Software) and commercial software tools can explore, analyze, and visualize data that are stored and described as netCDF-CF data. To better support a larger segment of the earth system science community, a number of efforts are underway to extend the netCDF-CF convention with the goal of increasing the types of data that can be represented as netCDF-CF data. This presentation will provide an overview and update of work to extend the existing netCDF-CF convention. It will detail the types of earth system science data currently supported by netCDF-CF and the types of data targeted for support by current netCDF-CF convention development efforts. It will also describe some of the tools that support the use of netCDF-CF compliant datasets, the types of data they support, and efforts to extend them to handle the new data types that netCDF-CF will support.
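
    A minimal sketch of what "stored and described as netCDF-CF" means in practice: coordinate variables plus CF attributes such as units and standard_name. The field and values are illustrative.

        import numpy as np
        from netCDF4 import Dataset

        with Dataset("cf_example.nc", "w") as nc:
            nc.Conventions = "CF-1.7"
            nc.title = "Illustrative CF-compliant surface air temperature field"

            nc.createDimension("lat", 3)
            nc.createDimension("lon", 4)

            lat = nc.createVariable("lat", "f4", ("lat",))
            lat.units = "degrees_north"
            lat.standard_name = "latitude"
            lat[:] = [30.0, 31.0, 32.0]

            lon = nc.createVariable("lon", "f4", ("lon",))
            lon.units = "degrees_east"
            lon.standard_name = "longitude"
            lon[:] = [-110.0, -109.0, -108.0, -107.0]

            tas = nc.createVariable("tas", "f4", ("lat", "lon"))
            tas.units = "K"
            tas.standard_name = "air_temperature"
            tas[:] = 288.0 + np.zeros((3, 4), dtype="f4")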

  17. IRIS Earthquake Browser with Integration to the GEON IDV for 3-D Visualization of Hypocenters.

    NASA Astrophysics Data System (ADS)

    Weertman, B. R.

    2007-12-01

    We present a new generation of web-based earthquake query tool - the IRIS Earthquake Browser (IEB). The IEB combines the DMC's large set of earthquake catalogs (provided by USGS/NEIC, ISC and the ANF) with the popular Google Maps web interface. With the IEB you can quickly and easily find earthquakes in any region of the globe. Using Google's detailed satellite images, earthquakes can be easily co-located with natural geographic features such as volcanoes as well as man-made features such as commercial mines. A set of controls allows earthquakes to be filtered by time, magnitude, and depth range as well as by catalog name, contributor name and magnitude type. Displayed events can be easily exported in NetCDF format into the GEON Integrated Data Viewer (IDV), where hypocenters may be visualized in three dimensions. Looking "under the hood", the IEB is based on AJAX technology and utilizes REST-style web services hosted at the IRIS DMC. The IEB is part of a broader effort at the DMC aimed at making our data holdings available via web services. The IEB is useful both educationally and as a research tool.

  18. SeaWiFS technical report series. Volume 19: Case studies for SeaWiFS calibration and validation, part 2

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Acker, James G. (Editor); Firestone, Elaine R. (Editor); Mcclain, Charles R.; Fraser, Robert S.; Mclean, James T.; Darzi, Michael; Firestone, James K.; Patt, Frederick S.; Schieber, Brian D.

    1994-01-01

    This document provides brief reports, or case studies, on a number of investigations and data set development activities sponsored by the Calibration and Validation Team (CVT) within the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Project. Chapter 1 is a comparison with the atmospheric correction of Coastal Zone Color Scanner (CZCS) data using two independent radiative transfer formulations. Chapter 2 is a study on lunar reflectance at the SeaWiFS wavelengths which was useful in establishing the SeaWiFS lunar gain. Chapter 3 reports the results of the first ground-based solar calibration of the SeaWiFS instrument. The experiment was repeated in the fall of 1993 after the instrument was modified to reduce stray light; the results from the second experiment will be provided in the next case studies volume. Chapter 4 is a laboratory experiment using trap detectors which may be useful tools in the calibration round-robin program. Chapter 5 is the original data format evaluation study conducted in 1992 which outlines the technical criteria used in considering three candidate formats, the hierarchical data format (HDF), the common data format (CDF), and the network CDF (netCDF). Chapter 6 summarizes the meteorological data sets accumulated during the first three years of CZCS operation which are being used for initial testing of the operational SeaWiFS algorithms and systems and would be used during a second global processing of the CZCS data set. Chapter 7 describes how near-real time surface meteorological and total ozone data required for the atmospheric correction algorithm will be retrieved and processed. Finally, Chapter 8 is a comparison of surface wind products from various operational meteorological centers and field observations. Surface winds are used in the atmospheric correction scheme to estimate glint and foam radiances.

  19. Development of web-GIS system for analysis of georeferenced geophysical data

    NASA Astrophysics Data System (ADS)

    Okladnikov, I.; Gordov, E. P.; Titov, A. G.; Bogomolov, V. Y.; Genina, E.; Martynova, Y.; Shulgina, T. M.

    2012-12-01

    Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation and forecast of climatic and ecosystem changes for various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size, which might amount to tens of terabytes for a single dataset, present-day studies in the area of climate and environmental change require special software support. A dedicated web-GIS information-computational system for analysis of georeferenced climatological and meteorological data has been created. The information-computational system consists of four basic parts: a computational kernel developed using GNU Data Language (GDL), a set of PHP controllers run within a specialized web portal, JavaScript class libraries for development of typical components of a web mapping application graphical user interface (GUI) based on AJAX technology, and an archive of geophysical datasets. The computational kernel comprises a number of dedicated modules for querying and extraction of data, mathematical and statistical data analysis, visualization, and preparation of output files in geoTIFF and netCDF formats containing processing results. The specialized web portal consists of the Apache web server, the OGC-compliant GeoServer software which is used as a base for presenting cartographical information over the Web, and a set of PHP controllers implementing the web mapping application logic and governing the computational kernel. The JavaScript libraries aimed at graphical user interface development are based on the GeoExt library, combining the ExtJS framework and OpenLayers software. The archive of geophysical data consists of a number of structured environmental datasets represented by data files in netCDF, HDF, GRIB and ESRI Shapefile formats. Available for processing by the system are: two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, the DWD Global Precipitation Climatology Centre's data, the GMAO Modern Era-Retrospective analysis for Research and Applications, meteorological observational data for the territory of the former USSR for the 20th century, results of modeling by global and regional climatological models, and others. The system is already involved in the scientific research process. In particular, the system was recently used successfully for analysis of Siberian climate changes and their impact in the region. The web-GIS information-computational system for geophysical data analysis provides specialists involved in multidisciplinary research projects with reliable and practical instruments for complex analysis of climate and ecosystem changes on global and regional scales. Using it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological and satellite monitoring datasets through a unified web interface in a common graphical web browser. This work is partially supported by the Ministry of education and science of the Russian Federation (contract #07.514.114044), projects IV.31.1.5, IV.31.2.7, RFBR grants #10-07-00547a, #11-05-01190a, and integrated project SB RAS #131.

  20. Development of a gridded meteorological dataset over Java island, Indonesia 1985-2014.

    PubMed

    Yanto; Livneh, Ben; Rajagopalan, Balaji

    2017-05-23

    We describe a gridded daily meteorology dataset consisting of precipitation, minimum and maximum temperature over Java Island, Indonesia, at 0.125°×0.125° (~14 km) resolution spanning the 30 years from 1985-2014. Importantly, this data set represents a marked improvement over existing gridded data sets for Java, with higher spatial resolution and derived exclusively from ground-based observations, unlike existing satellite or reanalysis-based products. Gap-infilling and gridding were performed via the Inverse Distance Weighting (IDW) interpolation method (radius, r, of 25 km and power of influence, α, of 3 as optimal parameters), restricted to only those stations including at least 3,650 days (~10 years) of valid data. We employed the MSWEP and CHIRPS rainfall products in the cross-validation, which shows that the gridded rainfall presented here produces the most reasonable performance. Visual inspection reveals increasing performance of the gridded precipitation from grid to watershed to island scale. The data set, stored in network common data form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology and ecology.
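
    A hedged numpy sketch of the gridding step described above: inverse distance weighting with a 25 km search radius and power 3. Distances are simplified to planar kilometres for brevity, and the station values are synthetic.

        import numpy as np

        def idw(grid_xy, stn_xy, stn_vals, radius_km=25.0, power=3.0):
            """Interpolate station values to one grid point (x, y in km)."""
            d = np.hypot(stn_xy[:, 0] - grid_xy[0], stn_xy[:, 1] - grid_xy[1])
            near = d <= radius_km
            if not near.any():
                return np.nan              # no station inside the search radius
            d = np.maximum(d[near], 1e-6)  # avoid division by zero
            w = 1.0 / d ** power
            return np.sum(w * stn_vals[near]) / np.sum(w)

        # Toy usage: three stations, one grid point.
        stations = np.array([[0.0, 0.0], [10.0, 5.0], [40.0, 0.0]])  # km
        rain = np.array([12.0, 8.0, 30.0])                           # mm/day
        print(idw((5.0, 2.0), stations, rain))  # third station lies outside 25 km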

  1. SchemaOnRead: A Package for Schema-on-Read in R

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, Michael J.

    Schema-on-read is an agile approach to data storage and retrieval that defers investments in data organization until production queries need to be run by working with data directly in native form. Schema-on-read functions have been implemented in a wide range of analytical systems, most notably Hadoop. SchemaOnRead is a CRAN package that uses R’s flexible data representations to provide transparent and convenient support for the schema-on-read paradigm in R. The schema-on-read tools within the package include a single function call that recursively reads folders with text, comma separated value, raster image, R data, HDF5, NetCDF, spreadsheet, Weka, Epi Info, Pajek network, R network, HTML, SPSS, Systat, and Stata files. The provided tools can be used as-is or easily adapted to implement customized schema-on-read tool chains in R. This paper’s contribution is that it introduces and describes SchemaOnRead, the first R package specifically focused on providing explicit schema-on-read support in R.
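
    SchemaOnRead itself is an R package; as a plainly labeled Python analogue of the same schema-on-read idea, the sketch below walks a folder tree and dispatches each file to a reader chosen by extension, deferring any schema decisions until the data are actually read.

        import os
        import pandas as pd
        import xarray as xr

        def read_text(path):
            with open(path, encoding="utf-8") as f:
                return f.read()

        READERS = {
            ".csv": pd.read_csv,
            ".txt": read_text,
            ".nc":  xr.open_dataset,
        }

        def schema_on_read(root):
            """Recursively read every recognized file under root, native form intact."""
            out = {}
            for dirpath, _, files in os.walk(root):
                for name in files:
                    reader = READERS.get(os.path.splitext(name)[1].lower())
                    if reader is not None:
                        path = os.path.join(dirpath, name)
                        out[path] = reader(path)
            return out

        datasets = schema_on_read("./raw_data")  # placeholder folder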

  2. ClimateNet: A Machine Learning dataset for Climate Science Research

    NASA Astrophysics Data System (ADS)

    Prabhat, M.; Biard, J.; Ganguly, S.; Ames, S.; Kashinath, K.; Kim, S. K.; Kahou, S.; Maharaj, T.; Beckham, C.; O'Brien, T. A.; Wehner, M. F.; Williams, D. N.; Kunkel, K.; Collins, W. D.

    2017-12-01

    Deep Learning techniques have revolutionized commercial applications in computer vision, speech recognition and control systems. The key to all of these developments was the creation of a curated, labeled dataset, ImageNet, which enabled multiple research groups around the world to develop methods, benchmark performance and compete with each other. The success of Deep Learning can be largely attributed to the broad availability of this dataset. Our empirical investigations have revealed that Deep Learning is similarly poised to benefit the task of pattern detection in climate science. Unfortunately, labeled datasets, a key prerequisite for training, are hard to find. Individual research groups are typically interested in specialized weather patterns, making it hard to unify and share datasets across groups and institutions. In this work, we propose ClimateNet: a labeled dataset that provides instances of extreme weather patterns, as well as the associated raw fields in model and observational output. We develop a schema in NetCDF to enumerate weather pattern classes/types, store bounding boxes, and pixel-masks. We are also working on a TensorFlow implementation to natively import such NetCDF datasets, and are providing a reference convolutional architecture for binary classification tasks. Our hope is that researchers in Climate Science, as well as ML/DL, will be able to use (and extend) ClimateNet to make rapid progress in the application of Deep Learning for Climate Science research.
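
    A hedged guess at what such a NetCDF labeling schema could look like: one pixel mask aligned with the raw field plus per-event bounding boxes. The dimension, variable, and class names are invented for illustration and are not the actual ClimateNet schema.

        import numpy as np
        from netCDF4 import Dataset

        with Dataset("labels_example.nc", "w") as nc:
            nc.createDimension("lat", 768)
            nc.createDimension("lon", 1152)
            nc.createDimension("event", None)
            nc.createDimension("corner", 4)  # ymin, xmin, ymax, xmax

            mask = nc.createVariable("label_mask", "i1", ("lat", "lon"), zlib=True)
            mask.flag_values = np.array([0, 1, 2], dtype="i1")
            mask.flag_meanings = "background tropical_cyclone atmospheric_river"

            boxes = nc.createVariable("bounding_box", "f4", ("event", "corner"))
            boxes.comment = "per-event boxes as ymin, xmin, ymax, xmax (grid indices)"

            mask[:] = np.zeros((768, 1152), dtype="i1")
            boxes[0, :] = [100.0, 200.0, 160.0, 280.0]  # one illustrative event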

  3. The Time Series Data Server (TSDS) for Standards-Compliant, Convenient, and Efficient Access to Time Series Data

    NASA Astrophysics Data System (ADS)

    Lindholm, D. M.; Weigel, R. S.; Wilson, A.; Ware Dewolfe, A.

    2009-12-01

    Data analysis in the physical sciences is often plagued by the difficulty of acquiring the desired data. A great deal of work has been done in the area of metadata and data discovery; however, many such systems simply provide links that lead directly to a data file. Often these files are impractically large, containing more time samples or variables than desired, and are slow to access. Once these files are downloaded, format issues further complicate using the data. Some data servers have begun to address these problems by improving data virtualization and ease of use. However, these services often don't scale to large datasets. Also, the generic nature of the data models used by these servers, while providing greater flexibility, may complicate setting up such a service for data providers and limit the semantics that would otherwise simplify use for clients, machine or human. The Time Series Data Server (TSDS) aims to address these problems within the limited, yet common, domain of time series data. With the simplifying assumption that all data products served are a function of time, the server can optimize for data access based on time subsets, a common use case. The server also supports requests for specific variables, which can be of type scalar, structure, or sequence. It also supports data types with higher-level semantics, such as "spectrum." The TSDS is implemented using Java Servlet technology and can be dropped into any servlet container and customized for a data provider's needs. The interface is based on OPeNDAP (http://opendap.org) and conforms to the Data Access Protocol (DAP) 2.0, a NASA standard (ESDS-RFC-004), which defines a simple HTTP request and response paradigm. Thus a TSDS server instance is a compliant OPeNDAP server that can be accessed by any OPeNDAP client or directly via RESTful web service requests. The TSDS reads the data that it serves into a common data model via the NetCDF Markup Language (NcML, http://www.unidata.ucar.edu/software/netcdf/ncml/), which enables dataset virtualization. An NcML file can expose a single file, a subset, or an aggregation of files as a single, logical dataset. With the appropriate NcML adapter, the TSDS can read data from its native format, eliminating the need for data providers to reformat their data and lowering the barrier for integration. Data can even be read via remote services, which is important for enabling VxOs to be truly virtual. The TSDS provides reading, writing, and filtering capabilities through a modular framework. A collection of standard modules is available, and customized modules are easy to create and integrate. This way the TSDS can read and write data in a variety of formats and apply filters to them in a manner customizable to meet the needs of both data providers and consumers. The TSDS server is currently in use serving solar irradiance data from the LASP Interactive Solar IRradiance Datacenter (LISIRD, http://lasp.colorado.edu/lisird/), and is being introduced into the space physics virtual observatory community. The TSDS software is Open Source and available at SourceForge.

  4. The Basic Radar Altimetry Toolbox for Sentinel 3 Users

    NASA Astrophysics Data System (ADS)

    Lucas, Bruno; Rosmorduc, Vinca; Niemeijer, Sander; Bronner, Emilie; Dinardo, Salvatore; Benveniste, Jérôme

    2013-04-01

    The Basic Radar Altimetry Toolbox (BRAT) is a collection of tools and tutorial documents designed to facilitate the processing of radar altimetry data. This project started in 2006 from the joint efforts of ESA (European Space Agency) and CNES (Centre National d'Etudes Spatiales). The latest version of the software, 3.1, was released in March 2012. The tools enable users to interact with the most common altimetry data formats, most commonly through the Graphical User Interface (BratGui). This GUI is a front-end for the powerful command line tools that are part of the BRAT suite. BRAT can also be used in conjunction with Matlab/IDL (via reading routines) or in C/C++/Fortran via a programming API, allowing the user to obtain the desired data while bypassing the data-formatting hassle. The BratDisplay (graphic visualizer) can be launched from BratGui, or used as a stand-alone tool to visualize netCDF files - it is distributed with another ESA toolbox (GUT) as the visualizer. The most frequent uses of BRAT are teaching remote sensing, altimetry data reading (all missions from ERS-1 to SARAL and soon Sentinel-3), quick data visualization/export, and simple computation on the data fields. BRAT can be used for importing data and having a quick look at its contents, with several different types of plotting available. One can also use it to translate the data into other formats such as netCDF, ASCII text files, KML (Google Earth) and raster images (JPEG, PNG, etc.). Several kinds of computations can be done within BratGui, involving combinations of data fields that the user can save for later reuse, or using the already embedded formulas that include the standard oceanographic altimetry formulas (MSS, SSH, MSLA, editing of spurious data, etc.). The documentation collection includes the standard user manual explaining all the ways to interact with the set of software tools, but the most important item is the Radar Altimeter Tutorial, which contains a thorough introduction to altimetry, showing its applications in different fields such as oceanography, the cryosphere, geodesy and hydrology, among others. Also included are "data use cases", with step-by-step examples, on how to use the toolbox in the different contexts. The upcoming release that is in the forge will focus on the Sentinel-3 Surface Topography Mission, which builds on the successful heritage of ERS, Envisat and CryoSat. The first of the two Sentinel-3 satellites is expected to be launched in 2014. It will carry on board a dual-frequency (Ku and C band) advanced Synthetic Aperture Radar Altimeter and will provide measurements at a resolution of ~300 m along track in SAR mode. Sentinel-3 will provide accurate measurements of sea-surface height along with accurate topography measurements over sea ice, ice sheets, rivers and lakes. The future version will provide, among other enhancements, support for reading the upcoming Sentinel-3 datasets and specific "use cases" for SAR altimetry, in order to train users and make them aware of the great potential of SAR altimetry for coastal and inland applications. The BRAT software is distributed under the GNU GPL open-source license and can be obtained, along with all the documentation (including the tutorial), on the website: http://earth.esa.int/brat

  5. Ocean data management in OMP Data Service

    NASA Astrophysics Data System (ADS)

    Fleury, Laurence; André, François; Belmahfoud, Nizar; Boichard, Jean-Luc; Brissebrat, Guillaume; Ferré, Hélène; Mière, Arnaud

    2014-05-01

    The Observatoire Midi-Pyrénées Data Service (SEDOO) is a development team dedicated to environmental data management and to setting up dissemination applications, in the framework of intensive field campaigns and long-term observation networks. SEDOO has developed some applications dealing with ocean data only, but also generic databases that can store and distribute multidisciplinary datasets. SEDOO is in charge of the in situ data management and the data portal for international and multidisciplinary programmes as large as African Monsoon Multidisciplinary Analyses (AMMA) and Mediterranean Integrated STudies at Regional And Local Scales (MISTRALS). The AMMA and MISTRALS databases are distributed, and the data portals provide access to datasets managed by other data centres (IPSL, CORIOLIS...) through interoperability protocols (OPeNDAP, xml requests...). AMMA and MISTRALS metadata (data descriptions) are standardized and comply with international standards (ISO 19115-19139; the INSPIRE European Directive; the Global Change Master Directory Thesaurus). Most of the AMMA and MISTRALS in situ ocean data sets are homogenized and inserted in a relational database, in order to enable accurate data selection and download of different data sets in a shared format. Data selection criteria are location, period, physical property name, physical property range... The data extraction procedure includes output format selection among CSV, NetCDF, NASA Ames... The AMMA database - http://database.amma-international.org/ - contains field campaign observations in the Gulf of Guinea (EGEE 2005-2007) and the tropical Atlantic Ocean (AEROSE-II 2006...), as well as long-term monitoring data (PIRATA, ARGO...). Operational analyses (MERCATOR) and satellite products (TMI, SSMI...) are managed by the IPSL data centre and can be accessed too. They have been projected onto regular latitude-longitude grids and converted into the NetCDF format. The MISTRALS data portal - http://mistrals.sedoo.fr/ - provides access to ocean datasets produced by the contributing programmes: Hydrological cycle in the Mediterranean eXperiment (HyMeX), Chemistry-Aerosol Mediterranean eXperiment (ChArMEx), Marine Mediterranean eXperiment (MERMeX)... The programmes include many field campaigns from 2011 to 2015, collecting general and specific properties. Long-term monitoring networks, like the Mediterranean Ocean Observing System on Environment (MOOSE) or the Mediterranean Eurocentre for Underwater Sciences and Technologies (MEUST-SE), contribute to the MISTRALS data portal as well. Relevant model outputs and satellite products managed by external data centres (IPSL, ENEA...) can be accessed too. SEDOO manages the SSS (Sea Surface Salinity) national observation service data: http://sss.sedoo.fr/. SSS aims at collecting, validating, archiving and distributing in situ SSS measurements derived from Voluntary Observing Ship programs. The SSS data user interface enables users to build multicriteria data requests and download the relevant datasets. SEDOO contributes to the SOLWARA project, which aims at understanding the oceanic circulation in the Coral Sea and the Solomon Sea and their role in both the climate system and oceanic chemistry. The research programme includes in situ measurements, numerical modelling and compiled analyses of past data. The website http://thredds.sedoo.fr/solwara/ enables users to access, visualize and download SOLWARA gridded data and model simulations, using THREDDS associated services (OPeNDAP, NCSS and WMS).
In order to improve the applications' user-friendliness, the SSS and SOLWARA web interfaces are JEE applications built with the GWT Framework, and share many modules.

  6. A proposed-standard format to represent and distribute tomographic models and other earth spatial data

    NASA Astrophysics Data System (ADS)

    Postpischl, L.; Morelli, A.; Danecek, P.

    2009-04-01

    Formats used to represent (and distribute) tomographic earth models differ considerably and are rarely self-consistent. In fact, each earth scientist, or research group, uses specific conventions to encode the various parameterizations used to describe, e.g., seismic wave speed or density in three dimensions, and complete information is often found only in related documents or publications (if available at all). As a consequence, use of various tomographic models from different authors requires considerable effort, is more cumbersome than it should be, and prevents widespread exchange and circulation within the community. We propose a format, based on modern web standards, able to represent different (grid-based) model parameterizations within the same simple text-based environment, easy to write, to parse, and to visualise. The aim is the creation of self-describing data structures, both human and machine readable, that are automatically recognised by general-purpose software agents and easily imported in the scientific programming environment. We think that the adoption of such a representation as a standard for the exchange and distribution of earth models can greatly ease their usage and enhance their circulation, both among fellow seismologists and among a broader non-specialist community. The proposed solution uses semantic web technologies, fully fitting the current trends in data accessibility. It is based on JSON (JavaScript Object Notation), a plain-text, human-readable, lightweight computer data interchange format, which adopts a hierarchical name-value model for representing simple data structures and associative arrays (called objects). Our implementation allows integration of large datasets with metadata (authors, affiliations, bibliographic references, units of measure, etc.) into a single resource. It is equally suited to represent other geo-referenced volumetric quantities — beyond tomographic models — as well as (structured and unstructured) computational meshes. This approach can exploit the capabilities of the web browser as a computing platform: a series of in-page quick tools for comparative analysis between models will be presented, as well as visualisation techniques for tomographic layers in Google Maps and Google Earth. We are working on tools for conversion into common scientific formats like netCDF, to allow easy visualisation in GEON-IDV or GMT.
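
    An illustration, not the authors' published schema, of the kind of self-describing JSON structure argued for above: metadata and a grid-based parameterization in one plain-text resource.

        import json

        model = {
            "metadata": {
                "authors": ["A. Example"],
                "reference": "doi:10.0000/example",  # placeholder reference
                "parameter": "vs",
                "units": "km/s",
            },
            "grid": {
                "type": "regular",
                "depth_km": [50, 100],
                "lat": [40.0, 41.0],
                "lon": [10.0, 11.0],
            },
            # values[depth][lat][lon], matching the grid axes above
            "values": [[[4.1, 4.2], [4.0, 4.3]],
                       [[4.4, 4.5], [4.4, 4.6]]],
        }

        with open("model_example.json", "w") as f:
            json.dump(model, f, indent=2)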

  7. Rescuing Seasat-A from 1980

    NASA Astrophysics Data System (ADS)

    Hausman, J.; Sanchez, A.; Armstrong, E. M.

    2014-12-01

    Seasat-A was NASA's first ocean-observing satellite mission. It launched in June 1978 and operated continuously until it suffered a power failure 106 days later. It carried an altimeter (ALT), scatterometer (SASS), SAR, microwave radiometer (SMMR), and a visible/infrared radiometer (VIRR). These instruments allowed Seasat to measure sea surface height, ocean winds, and both brightness and sea surface temperatures. The data, except for the SAR, are archived at PO.DAAC. Since these are the only oceanographic satellite data available for this early period of remote sensing, their importance has grown for use in climate studies. Even though the datasets were digitized from the original tapes, the Seasat data have since been maintained in the same flat binary format technology of 1980, when the data were first distributed. In 2013 PO.DAAC began a project to reformat the original data into a user-friendly, modern and maintainable format consistent with the netCDF data model and the Climate and Forecast (CF) and Attribute Conventions Dataset Discovery (ACDD) metadata standards. A significant benefit of using this data format is improved interoperability with tools and web services such as OPeNDAP, THREDDS, and various subsetting software, such as PO.DAAC's HiTIDE. Additionally, application of such metadata standards provides an opportunity to correctly document the data at the granule level. The first step in the conversion process involved going through the original documentation to understand the source binary data format. Documentation was found for processing levels 1 and 2 for ALT, SASS and SMMR. Software readers were then written for each of the datasets using Matlab, followed by regression tests performed on the newly output data in order to demonstrate that the readers were correctly interpreting the source data. Next, writers were created to convert the data into the updated format. The reformatted data were also regression tested and science validated to ensure that the data were not corrupted during the reformatting process. The resulting modernized Seasat datasets will be made available iteratively, by instrument and processing level, on PO.DAAC's web portal http://podaac.jpl.nasa.gov, anonymous ftp site ftp://podaac.jpl.nasa.gov/allData/seasat, and other web services.

  8. Wave and Wind Model Performance Metrics Tools

    NASA Astrophysics Data System (ADS)

    Choi, J. K.; Wang, D. W.

    2016-02-01

    Continual improvements and upgrades of Navy ocean wave and wind models are essential to the assurance of battlespace environment predictability of ocean surface wave and surf conditions in support of Naval global operations. Thus, constant verification and validation of model performance is equally essential to assure the progress of model developments and maintain confidence in the predictions. Global and regional scale model evaluations may require large areas and long periods of time. As observational data to compare against, altimeter winds and waves along the tracks of past and current operational satellites, as well as moored/drifting buoys, can be used for global and regional coverage. Using data and model runs from previous trials, such as the Dynamics of the Adriatic in Real Time (DART) experiment, we demonstrated the use of altimeter wind and wave data accumulated over several years to obtain an objective evaluation of the performance of the SWAN (Simulating Waves Nearshore) model running in the Adriatic Sea. The assessment provided a detailed picture of wind and wave model performance using maps of cell-averaged statistical variables, with spatial statistics including slope, correlation, and scatter index to summarize model performance. Such a methodology is easily generalized to other regions and to global scales. Operational technology currently used by subject matter experts evaluating the Navy Coastal Ocean Model and the Hybrid Coordinate Ocean Model can be expanded to evaluate wave and wind models using tools developed for ArcMAP, a GIS application developed by ESRI. The recent inclusion of altimeter and buoy data, passed through the Naval Oceanographic Office's (NAVOCEANO) quality control system into a common format, together with the netCDF standards applicable to all model output, makes the fusion of these data and direct model verification possible. Procedures were also developed for the accumulation of match-ups of modelled and observed parameters to form a database from which statistics are readily calculated, for the short or long term. Such a system has potential for a quick transition to operations at NAVOCEANO.
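
    The statistics named here have standard definitions, sketched below using the common convention of scatter index as RMSE normalized by the observed mean and a zero-intercept regression slope (the operational implementation may differ in detail; the function and sample values are illustrative):

      import numpy as np

      def matchup_stats(model, obs):
          """Common wind/wave verification metrics for model-observation
          match-ups (textbook definitions, not necessarily NAVOCEANO's)."""
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          bias = np.mean(model - obs)
          rmse = np.sqrt(np.mean((model - obs) ** 2))
          return {
              "bias": bias,
              "rmse": rmse,
              "scatter_index": rmse / np.mean(obs),
              "correlation": np.corrcoef(model, obs)[0, 1],
              # Regression slope of model on observations, zero intercept.
              "slope": np.sum(model * obs) / np.sum(obs ** 2),
          }

      print(matchup_stats([1.2, 2.1, 2.9], [1.0, 2.0, 3.0]))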

  9. Globally Gridded Satellite observations for climate studies

    USGS Publications Warehouse

    Knapp, K.R.; Ansari, S.; Bain, C.L.; Bourassa, M.A.; Dickinson, M.J.; Funk, Chris; Helms, C.N.; Hennon, C.C.; Holmes, C.D.; Huffman, G.J.; Kossin, J.P.; Lee, H.-T.; Loew, A.; Magnusdottir, G.

    2011-01-01

    Geostationary satellites have provided routine, high temporal resolution Earth observations since the 1970s. Despite the long period of record, use of these data in climate studies has been limited for numerous reasons, among them that no central archive of geostationary data for all international satellites exists, full temporal and spatial resolution data are voluminous, and diverse calibration and navigation formats encumber the uniform processing needed for multisatellite climate studies. The International Satellite Cloud Climatology Project (ISCCP) set the stage for overcoming these issues by archiving a subset of the full-resolution geostationary data at ~10-km resolution at 3-hourly intervals since 1983. Recent efforts at NOAA's National Climatic Data Center to provide convenient access to these data include remapping the data to a standard map projection, recalibrating the data to optimize temporal homogeneity, extending the record of observations back to 1980, and reformatting the data for broad public distribution. The Gridded Satellite (GridSat) dataset includes observations from the visible, infrared window, and infrared water vapor channels. Data are stored in Network Common Data Format (netCDF) using standards that permit a wide variety of tools and libraries to process the data quickly and easily. A novel data layering approach, together with appropriate satellite and file metadata, allows users to access GridSat data at varying levels of complexity based on their needs. The result is a climate data record already in use by the meteorological community. Examples include reanalysis of tropical cyclones, studies of global precipitation, and detection and tracking of the intertropical convergence zone.
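
    Because the data follow netCDF conventions, a granule can be opened with any standard netCDF library. A minimal sketch (the file name is a placeholder, and the variable name below is illustrative; actual names should be taken from the file's own metadata):

      from netCDF4 import Dataset

      with Dataset("GRIDSAT-example.nc") as nc:
          print(nc.variables.keys())           # discover available channels
          irwin = nc.variables["irwin_cdr"]    # hypothetical IR-window variable
          print(irwin.shape, getattr(irwin, "units", "unknown"))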

  10. An information model for managing multi-dimensional gridded data in a GIS

    NASA Astrophysics Data System (ADS)

    Xu, H.; Abdul-Kadar, F.; Gao, P.

    2016-04-01

    Earth observation agencies like NASA and NOAA produce huge volumes of historical, near real-time, and forecasting data representing terrestrial, atmospheric, and oceanic phenomena. The data drives climatological and meteorological studies, and underpins operations ranging from weather pattern prediction and forest fire monitoring to global vegetation analysis. These gridded data sets are distributed mostly as files in HDF, GRIB, or netCDF format and quantify variables like precipitation, soil moisture, or sea surface temperature, along one or more dimensions like time and depth. Although the data cube is a well-studied model for storing and analyzing multi-dimensional data, the GIS community remains in need of a solution that simplifies interactions with the data, and elegantly fits with existing database schemas and dissemination protocols. This paper presents an information model that enables Geographic Information Systems (GIS) to efficiently catalog very large heterogeneous collections of geospatially-referenced multi-dimensional rasters—towards providing unified access to the resulting multivariate hypercubes. We show how the implementation of the model encapsulates format-specific variations and provides unified access to data along any dimension. We discuss how this framework lends itself to familiar GIS concepts like image mosaics, vector field visualization, layer animation, distributed data access via web services, and scientific computing. Global data sources like MODIS from USGS and HYCOM from NOAA illustrate how one would employ this framework for cataloging, querying, and intuitively visualizing such hypercubes. ArcGIS—an established platform for processing, analyzing, and visualizing geospatial data—serves to demonstrate how this integration brings the full power of GIS to the scientific community.

  11. A Framework for the Generation and Dissemination of Drop Size Distribution (DSD) Characteristics Using Multiple Platforms

    NASA Technical Reports Server (NTRS)

    Wolf, David B.; Tokay, Ali; Petersen, Walt; Williams, Christopher; Gatlin, Patrick; Wingo, Mathew

    2010-01-01

    Proper characterization of the precipitation drop size distribution (DSD) is integral to providing realistic and accurate space- and ground-based precipitation retrievals. Current technology allows for the development of DSD products from a variety of platforms, including disdrometers, vertical profilers, and dual-polarization radars. Up to now, however, the dissemination or availability of such products has been limited to individual sites and/or field campaigns, in a variety of formats, often using inconsistent algorithms for computing the integral DSD parameters, such as the median- and mass-weighted drop diameters, total number concentration, liquid water content, rain rate, etc. We propose to develop a framework for the generation and dissemination of DSD characteristic products using a unified structure, capable of handling the varied collection of disdrometer, profiler, and dual-polarization radar data currently available and to be collected during several upcoming GPM Ground Validation field campaigns. This DSD super-structure paradigm is an adaptation of the radar super-structure developed for NASA's Radar Software Library (RSL) and RSL_in_IDL. The goal is to provide the DSD products in a well-documented format, most likely NetCDF, along with tools to ingest and analyze the products. In so doing, we can develop a robust archive of DSD products from multiple sites and platforms, which should greatly benefit the development and validation of precipitation retrieval algorithms for GPM and other precipitation missions. An outline of this proposed framework will be provided as well as a discussion of the algorithms used to calculate the DSD parameters.
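
    The integral parameters listed here have widely used textbook definitions, which the following sketch implements (the framework's own algorithms may differ in detail; the example exponential DSD and power-law fall-speed relation are illustrative):

      import numpy as np

      # D in mm, dD in mm, N(D) in m^-3 mm^-1, fall speed v(D) in m/s.
      def dsd_parameters(D, dD, N, v):
          Nt = np.sum(N * dD)                                   # m^-3
          lwc = (np.pi / 6.0) * 1e-3 * np.sum(N * D**3 * dD)    # g m^-3
          rain_rate = 6e-4 * np.pi * np.sum(N * D**3 * v * dD)  # mm h^-1
          Dm = np.sum(N * D**4 * dD) / np.sum(N * D**3 * dD)    # mass-weighted
          return Nt, lwc, rain_rate, Dm                         # mean D in mm

      D = np.linspace(0.2, 5.0, 25)
      dD = np.full_like(D, D[1] - D[0])
      N = 8000.0 * np.exp(-2.0 * D)     # exponential DSD example
      v = 3.78 * D**0.67                # common power-law fall-speed fit
      print(dsd_parameters(D, dD, N, v))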

  12. Globally Gridded Satellite (GridSat) Observations for Climate Studies

    NASA Technical Reports Server (NTRS)

    Knapp, Kenneth R.; Ansari, Steve; Bain, Caroline L.; Bourassa, Mark A.; Dickinson, Michael J.; Funk, Chris; Helms, Chip N.; Hennon, Christopher C.; Holmes, Christopher D.; Huffman, George J.

    2012-01-01

    Geostationary satellites have provided routine, high temporal resolution Earth observations since the 1970s. Despite the long period of record, use of these data in climate studies has been limited for numerous reasons, among them: there is no central archive of geostationary data for all international satellites, full temporal and spatial resolution data are voluminous, and diverse calibration and navigation formats encumber the uniform processing needed for multi-satellite climate studies. The International Satellite Cloud Climatology Project set the stage for overcoming these issues by archiving a subset of the full-resolution geostationary data at approximately 10 km resolution at 3-hourly intervals since 1983. Recent efforts at NOAA's National Climatic Data Center to provide convenient access to these data include remapping the data to a standard map projection, recalibrating the data to optimize temporal homogeneity, extending the record of observations back to 1980, and reformatting the data for broad public distribution. The Gridded Satellite (GridSat) dataset includes observations from the visible, infrared window, and infrared water vapor channels. Data are stored in the netCDF format using standards that permit a wide variety of tools and libraries to quickly and easily process the data. A novel data layering approach, together with appropriate satellite and file metadata, allows users to access GridSat data at varying levels of complexity based on their needs. The result is a climate data record already in use by the meteorological community. Examples include reanalysis of tropical cyclones, studies of global precipitation, and detection and tracking of the intertropical convergence zone.

  13. Trade Study: Storing NASA HDF5/netCDF-4 Data in the Amazon Cloud and Retrieving Data via Hyrax Server / THREDDS Data Server

    NASA Technical Reports Server (NTRS)

    Habermann, Ted; Jelenak, Aleksander; Lee, Joe; Yang, Kent; Gallagher, James; Potter, Nathan

    2017-01-01

    As part of the overall effort to understand the implications of migrating ESDIS data and services to the cloud, we are testing several common OPeNDAP and HDF use cases against three architectures for general performance and cost characteristics. The architectures include retrieving entire files, retrieving datasets using HTTP range gets, and retrieving elements of datasets (chunks) with HTTP range gets. We will describe these architectures and discuss our approach to estimating cost.
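
    Of the three access patterns, the range-get variants rely on standard HTTP partial requests. A minimal illustration (the URL and byte range are placeholders, not the study's actual test configuration):

      import requests

      # Fetch one byte range of a remote object instead of the whole file.
      url = "https://example-bucket.s3.amazonaws.com/granule.h5"
      resp = requests.get(url, headers={"Range": "bytes=0-511"})
      print(resp.status_code)    # 206 Partial Content when ranges are honored
      print(len(resp.content))   # at most 512 bytes transferred

    Retrieving an individual dataset chunk works the same way, once the chunk's byte offset and length are known from the file's internal index.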

  14. ParCAT: A Parallel Climate Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Haugen, B.; Smith, B.; Steed, C.; Ricciuto, D. M.; Thornton, P. E.; Shipman, G.

    2012-12-01

    Climate science has employed increasingly complex models and simulations to analyze the past and predict the future of our climate. The size and dimensionality of climate simulation data have been growing with the complexity of the models. This growth in data is creating a widening gap between the data being produced and the tools necessary to analyze large, high-dimensional data sets. With single-run data sets increasing into tens, hundreds, and even thousands of gigabytes, parallel computing tools are becoming a necessity for analyzing and comparing climate simulation data. The Parallel Climate Analysis Toolkit (ParCAT) provides basic tools that efficiently use parallel computing techniques to narrow the gap between data set size and analysis tools. ParCAT was created as a collaborative effort between climate scientists and computer scientists to provide efficient parallel implementations of the computing tools that are of use to climate scientists. Some of the basic functionalities included in the toolkit are the ability to compute spatio-temporal means and variances, differences between two runs, and histograms of the values in a data set. ParCAT is designed to facilitate the "heavy lifting" that is required for large, multidimensional data sets. The toolkit does not focus on performing the final visualizations and presentation of results but rather on reducing large data sets to smaller, more manageable summaries. The output from ParCAT is provided in commonly used file formats (NetCDF, CSV, ASCII) to allow for simple integration with other tools. The toolkit is currently implemented as a command line utility, but will likely also provide a C library for developers interested in tighter software integration. Elements of the toolkit are already being incorporated into projects such as UV-CDAT and CMDX. There is also an effort underway to implement portions of the CCSM Land Model Diagnostics package using ParCAT in conjunction with Python and gnuplot. ParCAT is implemented in C to provide efficient file IO. The file IO operations in the toolkit use the parallel-netcdf library; this enables the code to use the parallel IO capabilities of modern HPC systems. Analysis that currently requires an estimated 12+ hours with the traditional CCSM Land Model Diagnostics Package can now be performed in as little as 30 minutes on a single desktop workstation and in a few minutes for relatively small jobs completed on modern HPC systems such as ORNL's Jaguar.
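
    The reductions named above (means, variances, run differences, histograms) are simple to express serially; ParCAT's contribution is performing them with parallel IO at scale. A serial sketch of the same operations, with hypothetical file and variable names:

      import numpy as np
      from netCDF4 import Dataset

      # Two model runs with a (time, lat, lon) variable; names are illustrative.
      with Dataset("run_a.nc") as a, Dataset("run_b.nc") as b:
          ta = a.variables["TSA"][:]
          tb = b.variables["TSA"][:]
          print("temporal mean field:", ta.mean(axis=0).shape)
          print("spatial variance series:", ta.var(axis=(1, 2)).shape)
          diff = ta - tb                       # run-to-run difference
          hist, edges = np.histogram(diff, bins=50)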

  15. ABINIT: First-principles approach to material and nanosystem properties

    NASA Astrophysics Data System (ADS)

    Gonze, X.; Amadon, B.; Anglade, P.-M.; Beuken, J.-M.; Bottin, F.; Boulanger, P.; Bruneval, F.; Caliste, D.; Caracas, R.; Côté, M.; Deutsch, T.; Genovese, L.; Ghosez, Ph.; Giantomassi, M.; Goedecker, S.; Hamann, D. R.; Hermet, P.; Jollet, F.; Jomard, G.; Leroux, S.; Mancini, M.; Mazevet, S.; Oliveira, M. J. T.; Onida, G.; Pouillon, Y.; Rangel, T.; Rignanese, G.-M.; Sangalli, D.; Shaltaf, R.; Torrent, M.; Verstraete, M. J.; Zerah, G.; Zwanziger, J. W.

    2009-12-01

    ABINIT [http://www.abinit.org] allows one to study, from first principles, systems made of electrons and nuclei (e.g., periodic solids, molecules, nanostructures, etc.), on the basis of Density-Functional Theory (DFT) and Many-Body Perturbation Theory. Beyond the computation of the total energy, charge density, and electronic structure of such systems, ABINIT also implements many dynamical, dielectric, thermodynamical, mechanical, and electronic properties, at different levels of approximation. The present paper provides an exhaustive account of the capabilities of ABINIT. It should be helpful to scientists who are not familiar with ABINIT, as well as to regular users. First, we give a broad overview of ABINIT, including the list of its capabilities and how to access them. Then, we present in more detail the recent, advanced developments of ABINIT, with adequate references to the underlying theory, as well as the relevant input variables, tests and, if available, ABINIT tutorials.
    Program summary:
    Program title: ABINIT
    Catalogue identifier: AEEU_v1_0
    Distribution format: tar.gz
    Journal reference: Comput. Phys. Comm.
    Programming language: Fortran95, PERL scripts, Python scripts
    Computer: All systems with a Fortran95 compiler
    Operating system: All systems with a Fortran95 compiler
    Has the code been vectorized or parallelized?: Sequential, or parallel with proven speed-up up to one thousand processors.
    RAM: Ranges from a few Mbytes to several hundred Gbytes, depending on the input file.
    Classification: 7.3, 7.8
    External routines (all optional): BigDFT [1], ETSF IO [2], libxc [3], NetCDF [4], MPI [5], Wannier90 [6]
    Nature of problem: This package has the purpose of computing accurately material and nanostructure properties: electronic structure, bond lengths, bond angles, primitive cell size, cohesive energy, dielectric properties, vibrational properties, elastic properties, optical properties, magnetic properties, non-linear couplings, electronic and vibrational lifetimes, etc.
    Solution method: Software application based on Density-Functional Theory and Many-Body Perturbation Theory, pseudopotentials, with planewaves, Projector-Augmented Waves (PAW) or wavelets as basis functions.
    Running time: From less than one second for the simplest tests, to several weeks. The vast majority of the >600 provided tests run in less than 30 seconds.
    References: [1] http://inac.cea.fr/LSim/BigDFT. [2] http://etsf.eu/index.php?page=standardization. [3] http://www.tddft.org/programs/octopus/wiki/index.php/Libxc. [4] http://www.unidata.ucar.edu/software/netcdf. [5] http://en.wikipedia.org/wiki/Message_Passing_Interface. [6] http://www.wannier.org.

  16. FastQuery: A Parallel Indexing System for Scientific Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Jerry; Wu, Kesheng; Prabhat,

    2011-07-29

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies such as FastBit can significantly improve accesses to these datasets by augmenting the user data with indexes and other secondary information. However, a challenge is that the indexes assume the relational data model while scientific data generally follow the array data model. To match the two data models, we design a generic mapping mechanism and implement an efficient input and output interface for reading and writing the data and their corresponding indexes. To take advantage of the emerging many-core architectures, we also develop a parallel strategy for indexing using threading technology. This approach complements our on-going MPI-based parallelization efforts. We demonstrate the flexibility of our software by applying it to two of the most commonly used scientific data formats, HDF5 and NetCDF. We present two case studies using data from a particle accelerator model and a global climate model. We also conducted a detailed performance study using these scientific datasets. The results show that FastQuery speeds up the query time by a factor of 2.5x to 50x, and it reduces the indexing time by a factor of 16 on 24 cores.

  17. Cloud-Enabled Climate Analytics-as-a-Service using Reanalysis data: A case study.

    NASA Astrophysics Data System (ADS)

    Nadeau, D.; Duffy, D.; Schnase, J. L.; McInerney, M.; Tamkin, G.; Potter, G. L.; Thompson, J. H.

    2014-12-01

    The NASA Center for Climate Simulation (NCCS) maintains advanced data capabilities and facilities that allow researchers to access the enormous volume of data generated by weather and climate models. The NASA Climate Model Data Service (CDS) and the NCCS are merging their efforts to provide Climate Analytics-as-a-Service for the comparative study of the major reanalysis projects: ECMWF ERA-Interim, NASA/GMAO MERRA, NOAA/NCEP CFSR, NOAA/ESRL 20CR, JMA JRA25, and JRA55. These reanalyses have been repackaged into the netCDF4 file format following the CMIP5 Climate and Forecast (CF) metadata convention prior to being sequenced into the Hadoop Distributed File System (HDFS). A small set of operations that represent a common starting point in many analysis workflows was then created: min, max, sum, count, variance, and average. In this example, reanalysis data exploration was performed with the use of Hadoop MapReduce, and accessibility was achieved using the Climate Data Service (CDS) application programming interface (API) created at NCCS. This API provides a uniform treatment of large amounts of data. In this case study, we limited our exploration to two variables, temperature and precipitation, using three operations (min, max, and avg) and 30 years of reanalysis data for three regions of the world: global, polar, and subtropical.

  18. A Long-Term and Reproducible Passive Microwave Sea Ice Concentration Data Record for Climate Studies and Monitoring

    NASA Technical Reports Server (NTRS)

    Peng, G.; Meier, W. N.; Scott, D. J.; Savoie, M. H.

    2013-01-01

    A long-term, consistent, and reproducible satellite-based passive microwave sea ice concentration climate data record (CDR) is available for climate studies, monitoring, and model validation with an initial operating capability (IOC). The daily and monthly sea ice concentration data are on the National Snow and Ice Data Center (NSIDC) polar stereographic grid with nominal 25 km × 25 km grid cells in both the Southern and Northern Hemisphere polar regions from 9 July 1987 to 31 December 2007. The data files are available in the NetCDF data format at http://nsidc.org/data/g02202.html and archived by the National Climatic Data Center (NCDC) of the National Oceanic and Atmospheric Administration (NOAA) under the satellite climate data record program (http://www.ncdc.noaa.gov/cdr/operationalcdrs.html). The description and basic characteristics of the NOAA/NSIDC passive microwave sea ice concentration CDR are presented here. The CDR provides spatial and temporal variability similar to that of the heritage products, while offering the additional documentation, traceability, and reproducibility that meet current standards and guidelines for climate data records. The data set, along with detailed data processing steps and error source information, can be found at http://dx.doi.org/10.7265/N5B56GN3.

  19. Efforts to integrate CMIP metadata and standards into NOAA-GFDL's climate model workflow

    NASA Astrophysics Data System (ADS)

    Blanton, C.; Lee, M.; Mason, E. E.; Radhakrishnan, A.

    2017-12-01

    Modeling centers participating in CMIP6 run model simulations, publish requested model output (conforming to community data standards), and document models and simulations using ES-DOC. GFDL developed workflow software implementing some best practices to meet these metadata and documentation requirements. The CMIP6 Data Request defines the variables that should be archived for each experiment and specifies their spatial and temporal structure. We used the Data Request's dreqPy python library to write GFDL model configuration files as an alternative to hand-crafted tables. There was also a largely successful effort to standardize variable names within the model to reduce the additional overhead of translating "GFDL to CMOR" variables at a later stage in the pipeline. The ES-DOC ecosystem provides tools and standards to create, publish, and view various types of community-defined CIM documents, most notably model and simulation documents. Although ES-DOC will automatically create simulation documents during publishing by harvesting NetCDF global attributes, the information must be collected, stored, and placed in the NetCDF files by the workflow. We propose to develop a GUI to collect the simulation document precursors. In addition, a new MIP for CMIP6, CPMIP (a comparison of the computational performance of climate models), is documented using machine and performance CIM documents. We used ES-DOC's pyesdoc python library to automatically create these machine and performance documents. We hope that these and similar efforts will become permanent features of the GFDL workflow to facilitate future participation in CMIP-like activities.
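
    The harvesting step depends only on standard netCDF global attributes, so the workflow's job reduces to stamping them into each output file. A hedged sketch with placeholder values (the attribute names follow the commonly documented CMIP6 global-attribute convention, not necessarily GFDL's exact tooling):

      from netCDF4 import Dataset

      # Append documentation-precursor attributes to an existing file.
      with Dataset("output.nc", "a") as nc:   # hypothetical granule
          nc.setncatts({
              "experiment_id": "historical",
              "source_id": "EXAMPLE-MODEL",
              "variant_label": "r1i1p1f1",
              "further_info_url": "https://furtherinfo.es-doc.org/...",
          })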

  20. EMODNet Hydrography - Seabed Mapping - Developing a higher resolution digital bathymetry for the European seas

    NASA Astrophysics Data System (ADS)

    Schaap, Dick M. A.; Moussat, Eric

    2013-04-01

    In December 2007 the European Parliament and Council adopted the Marine Strategy Framework Directive (MSFD), which aims to achieve environmentally healthy marine waters by 2020. This Directive includes an initiative for an overarching European Marine Observation and Data Network (EMODNet). The EMODNet Hydrography - Seabed Mapping projects made good progress in developing the EMODNet Hydrography portal to provide overview of and access to available bathymetric survey datasets and to generate a harmonised digital bathymetry for Europe's sea basins. Up to the end of 2012, more than 8400 bathymetric survey datasets, managed by 14 data centres from 9 countries and originating from 118 institutes, were gathered and populated in the EMODNet Hydrography Data Discovery and Access service, adopting SeaDataNet standards. These datasets have been used as input for analysing and generating the EMODNet digital terrain model (DTM), so far for the following sea basins:
    • the Greater North Sea, including the Kattegat
    • the English Channel and Celtic Seas
    • Western and Central Mediterranean Sea and Ionian Sea
    • Bay of Biscay, Iberian coast and North-East Atlantic
    • Adriatic Sea
    • Aegean - Levantine Sea (Eastern Mediterranean)
    • Azores - Madeira EEZ
    The Hydrography Viewing service gives users wide functionality for viewing and downloading the EMODNet digital bathymetry:
    • water depth in gridded form on a DTM grid of a quarter of a minute of longitude and latitude
    • option to view QC parameters of individual DTM cells and references to source data
    • option to download DTM tiles in different formats: ESRI ASCII, XYZ, CSV, NetCDF (CF), GeoTiff and SD for Fledermaus 3D viewer software
    • option for users to create their Personal Layer and to upload multibeam survey ASCII datasets for automatic processing into personal DTMs following the EMODNet standards
    The NetCDF (CF) DTM files are fit for use in a special 3D viewer software package based on the existing open-source NASA World Wind Java SDK. It has been developed in the frame of the EU Geo-Seas project (another sibling of SeaDataNet for marine geological and geophysical data) and is freely available. The 3D viewer also supports the ingestion of WMS overlay maps. The EMODNet consortium is actively seeking cooperation with Hydrographic Offices, research institutes, authorities, and private organisations for additional data sets (single and multibeam surveys, sounding tracks, composite products) to contribute to an even better geographical coverage. These datasets will be used for upgrading and extending the EMODNet regional Digital Terrain Models (DTMs). The datasets themselves are not distributed but are described in the metadata service, giving clear information about the background survey data used for the DTM, their access restrictions, originators and distributors, and facilitating requests by users to originators. This way the portal provides originators of bathymetric data sets an attractive shop window for promoting their data sets to potential users, without losing control. The EMODNet Hydrography Consortium consists of MARIS (NL), ATLIS (NL), IFREMER (FR), SHOM (FR), IEO (ES), GSI (IE), NERC-NOCS (UK), OGS (IT), HCMR (GR), and UNEP/GRID-Arendal (NO) with associate partners CNR-ISMAR (IT), OGS-RIMA (IT), IHPT (PT), and LNEG (PT). Website: http://www.emodnet-hydrography.eu

  1. Development of Extended Content Standards for Biodiversity Data

    NASA Astrophysics Data System (ADS)

    Hugo, Wim; Schmidt, Jochen; Saarenmaa, Hannu

    2013-04-01

    Interoperability in the field of biodiversity observation has been strongly driven by the development of a number of global initiatives (GEO, GBIF, OGC, TDWG, GenBank, …) and their supporting standards (OGC-WxS, OGC-SOS, Darwin Core (DwC), NetCDF, …). To a large extent, these initiatives have focused on discoverability and standardization of syntactic and schematic interoperability. Semantic interoperability is more complex, requiring development of domain-dependent conceptual data models and extension of these models with appropriate ontologies (typically manifested as controlled vocabularies). Biodiversity content has been partly standardized, for example through Darwin Core for occurrence data and associated taxonomy, and through GenBank for genetic data, but other contexts of biodiversity observation have lagged behind - making it difficult to achieve semantic interoperability between distributed data sources. With this in mind, WG8 of GEO BON (charged with data and systems interoperability) has started a work programme to address a number of concerns, one of which is the gap in content standards required to make biodiversity data truly interoperable. The paper reports on the framework developed by WG8 for the classification of biodiversity observation data into 'families' of use cases and its supporting data schema, where gaps, if any, in the availability of content standards have been identified, and how these are to be addressed by way of an abstract data model and the development of associated content standards. It is proposed that a minimum set of standards (1) will be required to address the scope of biodiversity content, aligned with levels and dimensions of observation, and based on the 'Essential Biodiversity Variables' (2) being developed by GEO BON. The content standards are envisaged as loosely separated from the syntactic and schematic standards used for the base data exchange: typically, services would offer an existing data standard (DwC, WFS, SOS, NetCDF), with a use-case-dependent 'payload' embedded into the data stream. This enables the re-use of the abstract schema, and sometimes the implementation specification (for example XML, JSON, or NetCDF conventions), across services. An explicit aim will be to make the XML implementation specification re-usable as a DwC and a GML (SOS and WFS) extension. (1) Olga Lyashevska, Keith D. Farnsworth, How many dimensions of biodiversity do we need?, Ecological Indicators, Volume 18, July 2012, Pages 485-492, ISSN 1470-160X, 10.1016/j.ecolind.2011.12.016. (2) GEO BON: Workshop on Essential Biodiversity Variables (27-29 February 2012, Frascati, Italy). (http://www.earthobservations.org/geobon_docs_20120227.shtml)

  2. CERA - the technical basis for WDCC

    NASA Astrophysics Data System (ADS)

    Thiemann, Hannes; Lautenschlager, Michael

    2010-05-01

    The World Data Centre for Climate (WDCC) is hosted by the German Climate Computing Centre (DKRZ). It collects, stores, and disseminates data for climate research in order to serve the scientific community. Emphasis is placed on climate modelling and related data products. CERA (Climate and Environmental Retrieval and Archive) is the infrastructure hosting data and metadata from WDCC. Data originate from projects like IPCC (and IPCC-DDC), ENSEMBLES, COPS, and several others. Currently more than 400 terabytes of data are managed within CERA. Even more data are addressed through metadata. Data stored inline in CERA are currently archived in an Oracle database, which itself is transparently linked to a sophisticated HSM system. Within this HSM system a wide range of storage systems, such as different RAID and tape devices, are used. HSM for CERA is configured in such a way that a single media failure cannot cause any data loss. Correct and complete metadata are an important ingredient of sound archiving procedures, and within CERA special care is taken to ensure this. As a high-end service, DOIs can be assigned to data entities, as they make the management of intellectual property easier and more convenient. Any data format can be served from within CERA, although most of the data stored in CERA are in either GRIB or netCDF format. Out of the box, CERA provides several data reduction mechanisms like time-slicing or regional selection. More elaborate functions, like format conversion or other processing, are attached either inline or out-of-line. Fine-grained access control allows data to be distributed under diverse data policies. Currently up to 800 users are registered within CERA. More than 600,000 downloads (255 terabytes) were served from CERA in 2009. At present CERA is being restructured; more specific details of the current implementation and the future development will be given in the presentation.

  3. Development of Gridded Innovations and Observations Supplement to MERRA-2

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Da Silva, Arlindo M.; Robertson, Franklin R.

    2017-01-01

    Atmospheric reanalyses have become an important source of data for weather and climate research, owing to the continuity of the data, but especially because of the multitude of observational data included (radiosondes, commercial aircraft, retrieved data products, and radiances). However, the presence of assimilated observations can vary based on numerous factors, and so it is difficult or impossible for a researcher to say with any degree of certainty how many and what type of observations contributed to the reanalysis data they are using at any given point in time or space. For example, quality control, transmission interruptions, and station outages can occasionally affect data availability. While orbital paths can be known, drift in certain instruments and the large number of available instruments make it challenging to know which satellite is observing any region at any point in the diurnal cycle. Furthermore, there is information from the statistics generated by the data assimilation that can help in understanding the model and the quality of the reanalysis. Typically, the assimilated observations and their innovations are in observation-space data formats and have not been made easily available to reanalysis users. A test data set has been developed to make the MERRA-2 assimilated observations available for rapid and general use by simplifying the data format. The observations are binned to a grid similar to that of MERRA-2 and saved as netCDF. This data collection includes the mean and number of observations in each bin as well as their variance. The data also include the innovations from the data assimilation, the forecast departure and the analysis increment, as well as bias corrections (for satellite radiances). We refer to this proof-of-concept data set as the MERRA-2 Gridded Innovations and Observations (GIO). In this paper, we present the data format and its strengths and limitations, with some initial testing and validation of the methodology.
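
    The binning described (per-cell mean, count, and variance on a regular grid) can be sketched directly; the grid spacing and names below are illustrative, not the GIO specification:

      import numpy as np

      def bin_obs(lats, lons, vals, dlat=2.0, dlon=2.5):
          """Bin scattered observations onto a regular lat-lon grid."""
          ny, nx = int(180 / dlat), int(360 / dlon)
          iy = np.clip(((np.asarray(lats) + 90.0) // dlat).astype(int), 0, ny - 1)
          ix = np.clip(((np.asarray(lons) % 360.0) // dlon).astype(int), 0, nx - 1)
          vals = np.asarray(vals, float)
          count = np.zeros((ny, nx))
          s1 = np.zeros((ny, nx))       # running sum
          s2 = np.zeros((ny, nx))       # running sum of squares
          np.add.at(count, (iy, ix), 1)
          np.add.at(s1, (iy, ix), vals)
          np.add.at(s2, (iy, ix), vals ** 2)
          with np.errstate(invalid="ignore", divide="ignore"):
              mean = s1 / count
              var = s2 / count - mean ** 2
          return mean, count, var

      mean, count, var = bin_obs([10.0, 10.4], [100.0, 100.9], [280.0, 282.0])
      print(mean[50, 40], count[50, 40], var[50, 40])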

  4. Common Patterns with End-to-end Interoperability for Data Access

    NASA Astrophysics Data System (ADS)

    Gallagher, J.; Potter, N.; Jones, M. B.

    2010-12-01

    At first glance, using common storage formats and open standards should be enough to ensure interoperability between data servers and client applications, but that is often not the case. In the REAP (Realtime Environment for Analytical Processing; NSF #0619060) project we integrated access to data from OPeNDAP servers into the Kepler workflow system and found that, as in previous cases, we spent the bulk of our effort addressing the twin issues of data model compatibility and integration strategies. Implementing seamless data access between a remote data source and a client application (data sink) can be broken down into two kinds of issues. First, the solution must address any differences in the data models used by the data source (OPeNDAP) and the data sink (the Kepler workflow system). If these models match completely, there is little work to be done. However, that is rarely the case. To map OPeNDAP's data model to Kepler's, we used two techniques (ignoring trivial conversions): on-the-fly type mapping and out-of-band communication. Type conversion takes place both for data and metadata because Kepler requires a priori knowledge of some aspects (e.g., syntactic metadata) of the data to build a workflow. In addition, OPeNDAP's constraint expression syntax was used to send out-of-band information to restrict the data requested from the server, facilitating changes in the returned data's type. This technique provides a way for users to exert fine-grained control over the data request, a potentially useful capability, at the cost of requiring that users understand a little about the data source's processing capabilities. The second set of issues for end-to-end data access is integration strategies. OPeNDAP provides several different tools for bringing data into an application: C++, C, and Java libraries that provide functions for newly written software; the netCDF library, which enables existing applications to read from servers using an older interface; and simple file transfers. These options affect seamlessness in that they represent tradeoffs between new development (required for the first option) and cumbersome extra user actions (required by the last option). While the middle option, adding new functionality to an existing library (netCDF), is very appealing because practice has shown that it can be very effective over a wide range of clients, it is very hard to build these libraries because correctly writing a new implementation of an existing API that preserves the original's exact semantics can be a daunting task. In the example discussed here, we developed a new module for Kepler using OPeNDAP's Java API. This provided a way to leverage internal optimizations for data organization in Kepler, and we felt that outweighed the additional cost of new development and the need for users to learn how to use a new Kepler module. While common storage formats and open standards play an important role in data access, our work with the Kepler workflow system reinforces the experience that matching the data models of the data server (source) and user client (sink) and choosing the most appropriate integration strategy are critical to achieving interoperability.
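
    Server-side restriction of this kind is visible even from a generic netCDF client: slicing a variable opened over a DAP URL is translated by the client library into a constrained request, so only the selected hyperslab crosses the network. A sketch with a placeholder URL and variable name:

      from netCDF4 import Dataset

      url = "http://example.org/opendap/sst.nc"   # hypothetical DAP endpoint
      with Dataset(url) as nc:
          sst = nc.variables["sst"]
          # One time step and a small lat/lon window; only this subset
          # is requested from the server.
          box = sst[0, 10:20, 30:40]
          print(box.shape)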

  5. figure1.nc

    EPA Pesticide Factsheets

    NetCDF file of the SREF standard deviation of wind speed and direction that was used to inject variability in the FDDA input. Variable U_NDG_OLD contains the standard deviation of wind speed (m/s); variable V_NDG_OLD contains the standard deviation of wind direction (deg). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. JOURNAL OF GEOPHYSICAL RESEARCH-ATMOSPHERES. American Geophysical Union, Washington, DC, USA, 120(23): 12,259–12,280, (2015).

  6. Oceanotron, Scalable Server for Marine Observations

    NASA Astrophysics Data System (ADS)

    Loubrieu, T.; Bregent, S.; Blower, J. D.; Griffiths, G.

    2013-12-01

    Ifremer, the French marine institute, is deeply involved in data management for different ocean in-situ observation programs (ARGO, OceanSites, GOSUD, ...) and other European programs aiming at networking ocean in-situ observation data repositories (myOcean, SeaDataNet, Emodnet). To capitalize on the effort of implementing advanced data dissemination services (visualization, download with subsetting) for these programs and, more generally, for water-column observation repositories, Ifremer decided to develop the oceanotron server (2010). Given the diversity of data repository formats (RDBMS, netCDF, ODV, ...) and the temperamental nature of the standard interoperability interface profiles (OGC/WMS, OGC/WFS, OGC/SOS, OPeNDAP, ...), the server is designed to manage plugins:
    - StorageUnits, which read specific data repository formats (netCDF/OceanSites, RDBMS schema, ODV binary format).
    - FrontDesks, which receive external requests and send results for interoperable protocols (OGC/WMS, OGC/SOS, OPeNDAP).
    In between, a third type of plugin may be inserted:
    - TransformationUnits, which enable ocean-business-related transformations of the features (for example, conversion of vertical coordinates from pressure in decibars to meters below the sea surface).
    The server is released under an open-source license so that partners can develop their own plugins. Within the myOcean project, the University of Reading has plugged in a WMS implementation as an oceanotron FrontDesk. The modules are connected together by sharing the same information model for marine observations (or sampling features: vertical profiles, point series, and trajectories), dataset metadata, and queries. The shared information model is based on the OGC Observations & Measurements and Unidata Common Data Model initiatives. The model is implemented in Java (http://www.ifremer.fr/isi/oceanotron/javadoc/). This inner interoperability level makes it possible to capitalize on ocean business expertise in software development without being tied to specific data formats or protocols. Oceanotron is deployed at seven European data centres for marine in-situ observations within myOcean. While additional extensions are still being developed, to promote new collaborative initiatives, work is now being done on continuous and distributed integration (Jenkins, Maven), shared reference documentation (on Alfresco), and code and release dissemination (SourceForge, GitHub).
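
    The plugin decomposition above separates format-specific readers from protocol front ends behind one shared feature model. A language-neutral sketch of that pattern (oceanotron itself is Java; the class and method names here are invented for illustration, not the project's API):

      from abc import ABC, abstractmethod

      # Shared feature model: all plugins exchange, e.g., vertical profiles.
      class Profile:
          def __init__(self, depths, values):
              self.depths, self.values = depths, values

      class StorageUnit(ABC):            # format-specific reader
          @abstractmethod
          def read(self, query): ...     # returns a list of Profile objects

      class TransformationUnit(ABC):     # optional feature transformation
          @abstractmethod
          def apply(self, profiles): ...

      class FrontDesk(ABC):              # protocol endpoint (WMS, SOS, DAP)
          def __init__(self, storage, transforms=()):
              self.storage, self.transforms = storage, transforms

          def handle(self, query):
              profiles = self.storage.read(query)
              for t in self.transforms:
                  profiles = t.apply(profiles)
              return profiles            # serialization is protocol-specific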

  7. Development of a gridded meteorological dataset over Java island, Indonesia 1985–2014

    PubMed Central

    Yanto; Livneh, Ben; Rajagopalan, Balaji

    2017-01-01

    We describe a gridded daily meteorology dataset consisting of precipitation and minimum and maximum temperature over Java Island, Indonesia, at 0.125°×0.125° (~14 km) resolution spanning the 30 years from 1985–2014. Importantly, this data set represents a marked improvement over existing gridded data sets for Java, with higher spatial resolution and derived exclusively from ground-based observations, unlike existing satellite or reanalysis-based products. Gap-infilling and gridding were performed via the Inverse Distance Weighting (IDW) interpolation method (radius, r, of 25 km and power of influence, α, of 3 as optimal parameters), restricted to only those stations with at least 3,650 days (~10 years) of valid data. We employed the MSWEP and CHIRPS rainfall products in the cross-validation, which shows that the gridded rainfall presented here gives the most reasonable performance. Visual inspection reveals increasing performance of the gridded precipitation from grid to watershed to island scale. The data set, stored in network common data form (NetCDF), is intended to support watershed-scale and island-scale studies of short-term and long-term climate, hydrology, and ecology. PMID:28534871
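
    IDW with the stated parameters weights each station inside the 25 km search radius by the inverse cube of its distance to the grid point. A sketch (the planar distance computation and the sample station values are illustrative, not the authors' exact implementation):

      import numpy as np

      def idw(xy_obs, values, xy_target, r=25.0, alpha=3.0):
          """Inverse distance weighted estimate at xy_target (km coordinates)."""
          d = np.hypot(*(np.asarray(xy_obs, float) - np.asarray(xy_target, float)).T)
          values = np.asarray(values, float)
          mask = d <= r                      # only stations within the radius
          if not mask.any():
              return np.nan                  # no station close enough
          if np.any(d[mask] == 0):
              return values[np.argmin(d)]    # exact hit on a station
          w = 1.0 / d[mask] ** alpha
          return np.sum(w * values[mask]) / np.sum(w)

      # Three stations around a grid point; the farthest one is excluded.
      print(idw([(0, 0), (10, 5), (30, 40)], [5.0, 7.0, 20.0], (5, 5)))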

  8. Comparing apples and oranges: the Community Intercomparison Suite

    NASA Astrophysics Data System (ADS)

    Schutgens, Nick; Stier, Philip; Kershaw, Philip; Pascoe, Stephen

    2015-04-01

    Visual representation and comparison of geoscientific datasets present a huge challenge due to the large variety of file formats and the diverse spatio-temporal sampling of data (be they observations or simulations). The Community Intercomparison Suite (CIS) attempts to greatly simplify these tasks for users by offering an intelligent but simple command line tool for visualisation and colocation of diverse datasets. In addition, CIS can subset and aggregate large datasets into smaller, more manageable datasets. Our philosophy is to remove as much as possible the need for specialist knowledge by the user of the structure of a dataset. The colocation of observations with model data is as simple as: "cis col ::", which will resample the simulation data to the spatio-temporal sampling of the observations, contingent on a few user-defined options that specify a resampling kernel. As an example, we apply CIS to a case study of biomass burning aerosol from the Congo. Remote sensing observations, in-situ observations, and model data are shown in various plots, with the purpose of either comparing different datasets or integrating them into a single comprehensive picture. CIS can deal with both gridded and ungridded datasets of 2, 3, or 4 spatio-temporal dimensions. It can handle different spatial coordinates (e.g., longitude or distance, altitude or pressure level). CIS supports the HDF, netCDF, and ASCII file formats. The suite is written in Python with entirely publicly available open-source dependencies. Plug-ins allow a high degree of user modifiability. A web-based developer hub includes a manual and simple examples. CIS is developed as open-source code by a specialist IT company under the supervision of scientists from the University of Oxford and the Centre for Environmental Data Archival as part of investment in the JASMIN super-data-cluster facility.

  9. Data Access Tools And Services At The Goddard Distributed Active Archive Center (GDAAC)

    NASA Technical Reports Server (NTRS)

    Pham, Long; Eng, Eunice; Sweatman, Paul

    2003-01-01

    As one of the largest providers of Earth science data from the Earth Observing System, the GDAAC provides the latest data from the Moderate Resolution Imaging Spectroradiometer (MODIS), Atmospheric Infrared Sounder (AIRS), and Solar Radiation and Climate Experiment (SORCE) data products via the GDAAC's data pool (50 TB of disk cache). In order to make this huge volume of data more accessible to the public and science communities, the GDAAC offers multiple data access tools and services: the Open Source Project for a Network Data Access Protocol (OPeNDAP), the GrADS/DODS Server (GDS), the Live Access Server (LAS), the OpenGIS Web Map Server (WMS), and Near Archive Data Mining (NADM). The objective is to assist users in electronically retrieving a smaller, usable portion of data for further analysis. The OPeNDAP server, formerly known as the Distributed Oceanographic Data System (DODS), allows the user to retrieve data without worrying about the data format. OPeNDAP is capable of server-side subsetting of HDF, HDF-EOS, netCDF, JGOFS, ASCII, DSP, FITS, and binary data formats. The GrADS/DODS server is capable of serving the same data formats as OPeNDAP. GDS has the additional feature of server-side analysis: users can analyze the data on the server, thereby decreasing the computational load on their client's system. The LAS is a flexible server that allows users to graphically visualize data on the fly, to request different file formats, and to compare variables from distributed locations. Users of LAS have the option to use other available graphics viewers such as IDL, Matlab, or GrADS. WMS is based on OPeNDAP for serving geospatial information. WMS supports the OpenGIS protocol to provide data in GIS-friendly formats for analysis and visualization. NADM is another access point to the GDAAC's data pool. NADM gives users the capability to use a browser to upload their C, FORTRAN, or IDL algorithms, test the algorithms, and mine data in the data pool. With NADM, the GDAAC provides an environment physically close to the data source, benefiting users with data mining or data reduction algorithms by reducing large volumes of data before transmission over the network to the user.

  10. A polarimetric scattering database for non-spherical ice particles at microwave wavelengths

    NASA Astrophysics Data System (ADS)

    Lu, Yinghui; Jiang, Zhiyuan; Aydin, Kultegin; Verlinde, Johannes; Clothiaux, Eugene E.; Botta, Giovanni

    2016-10-01

    The atmospheric science community has entered a period in which electromagnetic scattering properties at microwave frequencies of realistically constructed ice particles are necessary for making progress on a number of fronts. One front includes retrieval of ice-particle properties and signatures from ground-based, airborne, and satellite-based radar and radiometer observations. Another front is evaluation of model microphysics by application of forward operators to their outputs and comparison to observations during case study periods. Yet a third front is data assimilation, where again forward operators are applied to databases of ice-particle scattering properties and the results compared to observations, with their differences leading to corrections of the model state. Over the past decade investigators have developed databases of ice-particle scattering properties at microwave frequencies and made them openly available. Motivated by and complementing these earlier efforts, a database containing polarimetric single-scattering properties of various types of ice particles at millimeter to centimeter wavelengths is presented. While the database presented here contains only single-scattering properties of ice particles in a fixed orientation, ice-particle scattering properties are computed for many different directions of the radiation incident on them. These results are useful for understanding the dependence of ice-particle scattering properties on ice-particle orientation with respect to the incident radiation. For ice particles that are small compared to the wavelength, the number of incident directions of the radiation is sufficient to compute reasonable estimates of their (randomly) orientation-averaged scattering properties. This database is complementary to earlier ones in that it contains complete (polarimetric) scattering property information for each ice particle - 44 plates, 30 columns, 405 branched planar crystals, 660 aggregates, and 640 conical graupel - and direction of incident radiation but is limited to four frequencies (X-, Ku-, Ka-, and W-bands), does not include temperature dependencies of the single-scattering properties, and does not include scattering properties averaged over randomly oriented ice particles. Rules for constructing the morphologies of ice particles from one database to the next often differ; consequently, analyses that incorporate all of the different databases will contain the most variability, while illuminating important differences between them. Publication of this database is in support of future analyses of this nature and comes with the hope that doing so helps contribute to the development of a database standard for ice-particle scattering properties, like the NetCDF (Network Common Data Form) CF (Climate and Forecast) or NetCDF CF/Radial metadata conventions.

  11. Preserving Data for Renewable Energy

    NASA Astrophysics Data System (ADS)

    Macduff, M.; Sivaraman, C.

    2017-12-01

    The EERE Atmosphere to Electrons (A2e) program established the Data Archive and Portal (DAP) to ensure the long-term preservation of and access to A2e research data. The DAP has been operated by PNNL for 2 years, with data from more than a dozen projects; 1 PB of data and hundreds of datasets are expected to be stored this year. The data are a diverse mix of model runs, observational data, and derived products. While most of the data are public, the DAP has securely stored many proprietary data sets provided by energy producers that are critical to the research goals of the A2e program. The DAP uses Amazon Web Services (AWS) and PNNL resources to provide long-term archival of and access to the data with appropriate access controls. As a key element of the DAP, metadata are collected for each dataset to assist with data discovery and the usefulness of the data. Further, the DAP has begun a process of standardizing observation data into NetCDF, which allows users to focus on the data instead of parsing the many formats. Creating a central repository that is in tune with the unique needs of the A2e research community is helping active tasks today as well as making many future research efforts possible. In this presentation, we provide an overview of the DAP's capabilities and benefits to the renewable energy community.

  12. A Geospatial Database that Supports Derivation of Climatological Features of Severe Weather

    NASA Astrophysics Data System (ADS)

    Phillips, M.; Ansari, S.; Del Greco, S.

    2007-12-01

    The Severe Weather Data Inventory (SWDI) at NOAA's National Climatic Data Center (NCDC) provides user access to archives of several datasets critical to the detection and evaluation of severe weather. These datasets include archives of:
    · NEXRAD Level-III point features describing general storm structure, hail, mesocyclone and tornado signatures
    · National Weather Service Storm Events Database
    · National Weather Service Local Storm Reports collected from storm spotters
    · National Weather Service Warnings
    · Lightning strikes from Vaisala's National Lightning Detection Network (NLDN)
    SWDI archives all of these datasets in a spatial database that allows for convenient searching and subsetting. These data are accessible via the NCDC web site, Web Feature Services (WFS), or automated web services. The results of interactive web page queries may be saved in a variety of formats, including plain text, XML, Google Earth's KMZ, standards-based NetCDF, and Shapefile. NCDC's Storm Risk Assessment Project (SRAP) uses data from the SWDI database to derive gridded climatology products that show the spatial distributions of the frequency of various events. SRAP can also relate SWDI events to other spatial data such as roads, population, watersheds, and other geographic, sociological, or economic data to derive products that are useful in municipal planning, emergency management, the insurance industry, and other areas where there is a need to quantify and qualify how severe weather patterns affect people and property.

  13. Data visualization in interactive maps and time series

    NASA Astrophysics Data System (ADS)

    Maigne, Vanessa; Evano, Pascal; Brockmann, Patrick; Peylin, Philippe; Ciais, Philippe

    2014-05-01

    State-of-the-art data visualization has little in common with the plots and maps we used a few years ago. Many open-source tools are now available to provide access to scientific data and to implement accessible, interactive, and flexible web applications. Here we present a web site, opened in November 2013, to create custom global and regional maps and time series from research models and datasets. For maps, we explore and access data sources from a THREDDS Data Server (TDS) with the OGC WMS protocol (using the ncWMS implementation), then create interactive maps with the OpenLayers JavaScript library and extra information layers from a GeoServer. Maps become dynamic, zoomable, synchronously connected to each other, and exportable to Google Earth. For time series, we extract data from a TDS with the NetCDF Subset Service (NCSS), then display interactive graphs with a custom library based on the Data Driven Documents JavaScript library (D3.js). This time series application provides dynamic functionality such as interpolation, interactive zoom on different axes, display of point values, and export to different formats. These tools were implemented for the Global Carbon Atlas (http://www.globalcarbonatlas.org): a web portal to explore, visualize, and interpret global and regional carbon fluxes from various model simulations arising from both human activities and natural processes, a work led by the Global Carbon Project.
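
    The map layers are fetched through standard OGC WMS GetMap requests against the TDS/ncWMS endpoint. An illustrative request built in Python (the server URL and layer name are placeholders; the query parameters are standard WMS 1.3.0):

      from urllib.parse import urlencode

      params = {
          "service": "WMS", "version": "1.3.0", "request": "GetMap",
          "layers": "co2_flux",          # hypothetical layer name
          "styles": "", "crs": "CRS:84",
          "bbox": "-180,-90,180,90",     # lon/lat order for CRS:84
          "width": 1024, "height": 512,
          "format": "image/png", "transparent": "true",
      }
      print("http://example.org/thredds/wms/atlas.nc?" + urlencode(params))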

  14. Visualization of ocean forecast in BYTHOS

    NASA Astrophysics Data System (ADS)

    Zhuk, E.; Zodiatis, G.; Nikolaidis, A.; Stylianou, S.; Karaolia, A.

    2016-08-01

    The Cyprus Oceanography Center has been constantly searching for new ideas for developing and implementing innovative methods and new developments concerning the use of information systems in oceanography, to suit both the Center's monitoring and forecasting products. Within this scope, two major online data management and visualization systems have been developed and utilized: CYCOFOS and BYTHOS. The Cyprus Coastal Ocean Forecasting and Observing System (CYCOFOS) provides a variety of operational predictions, such as ultra-high, high, and medium resolution ocean forecasts in the Levantine Basin, offshore and coastal sea state forecasts in the Mediterranean and Black Sea, tide forecasting in the Mediterranean, ocean remote sensing in the Eastern Mediterranean, and coastal and offshore monitoring. As a rich internet application, BYTHOS enables scientists to search, visualize, and download oceanographic data online and in real time. The recent improvement of the BYTHOS system is its extension with access to and visualization of CYCOFOS data, overlaying forecast fields and observing data. The CYCOFOS data are stored on an OPeNDAP server in netCDF format. To search, process, and visualize them, PHP and Python scripts were developed. Data visualization is achieved through MapServer. The BYTHOS forecast access interface allows users to search for the necessary forecast field by type, parameter, region, level, and time. It also provides the ability to overlay different forecast and observing data, which can be used for complex analyses of sea basin conditions.

  15. SGP and TWP (Manus) Ice Cloud Vertical Velocities

    DOE Data Explorer

    Kalesse, Heike

    2013-06-27

    Daily NetCDF files of ice-cloud dynamics observed at the ARM sites at SGP (Jan 1997-Dec 2010) and Manus (Jul 1999-Dec 2010). The files include variables at different time resolutions (10 s, 20 min, 1 hr). Profiles of radar reflectivity factor (dbz), Doppler velocity (vel), retrieved vertical air motion (V_air), and reflectivity-weighted particle terminal fall velocity (V_ter) are given at 10 s, 20 min, and 1 hr resolution. Retrieved V_air and V_ter follow radar notation, so positive values indicate downward motion. Lower-level clouds are removed; however, a multi-layer flag is included.

  16. Figure5

    EPA Pesticide Factsheets

    This is an R script that allows the reproduction of Figure 5. The script includes links to the large NetCDF files that the figure accesses for O3, CO, wind speed, radiation, and PBL height. It pulls the time series for each variable at a number of cities (specified by latitude-longitude). This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research-Atmospheres, American Geophysical Union, Washington, DC, USA, 120(23): 12,259-12,280, (2015).
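
    The extraction step described above (pulling a time series at specified city latitude-longitude points) can be sketched as follows, here in Python rather than R; the file name, variable, and coordinate names are hypothetical stand-ins for those in the actual script.

        # Minimal sketch: extract a time series at the grid cell nearest to a city.
        # File, variable, and coordinate names are hypothetical placeholders.
        import numpy as np
        from netCDF4 import Dataset

        cities = {"Atlanta": (33.75, -84.39), "Denver": (39.74, -104.99)}

        with Dataset("model_output.nc") as ds:        # hypothetical file
            lats = ds.variables["lat"][:]             # 1-D coordinate vectors assumed
            lons = ds.variables["lon"][:]
            o3 = ds.variables["O3"]                   # dims assumed (time, lat, lon)
            for name, (clat, clon) in cities.items():
                i = int(np.abs(lats - clat).argmin()) # nearest-neighbour indices
                j = int(np.abs(lons - clon).argmin())
                series = o3[:, i, j]
                print(name, series.shape, float(series.mean()))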

  17. SACR ADVance 3-D Cartesian Cloud Cover (SACR-ADV-3D3C) product

    DOE Data Explorer

    Meng Wang, Tami Toto, Eugene Clothiaux, Katia Lamer, Mariko Oue

    2017-03-08

    SACR-ADV-3D3C remaps the outputs of SACRCORR for cross-wind range-height indicator (CW-RHI) scans onto a Cartesian grid and reports a reflectivity CFAD and a best-estimate domain-averaged cloud fraction. The final output is a single NetCDF file containing all the aforementioned corrected radar moments remapped onto a 3-D Cartesian grid, the SACR reflectivity CFAD, a profile of best-estimate cloud fraction, a profile of maximum observable x-domain size (xmax), a profile of time-to-horizontal-distance estimates, and a profile of minimum observable reflectivity (dBZmin).

  18. ESMPy and OpenClimateGIS: Python Interfaces for High Performance Grid Remapping and Geospatial Dataset Manipulation

    NASA Astrophysics Data System (ADS)

    O'Kuinghttons, Ryan; Koziol, Benjamin; Oehmke, Robert; DeLuca, Cecelia; Theurich, Gerhard; Li, Peggy; Jacob, Joseph

    2016-04-01

    The Earth System Modeling Framework (ESMF) Python interface (ESMPy) supports analysis and visualization in Earth system modeling codes by providing access to a variety of tools for data manipulation. ESMPy started as a Python interface to the ESMF grid remapping package, which provides mature, robust, high-performance, and scalable grid remapping between 2D and 3D logically rectangular and unstructured grids and sets of unconnected data points. ESMPy now also interfaces with OpenClimateGIS (OCGIS), a package that performs subsetting, reformatting, and computational operations on climate datasets. ESMPy exposes a subset of the ESMF grid remapping utilities, including bilinear, finite-element patch recovery, first-order conservative, and nearest-neighbor remapping methods. There are also options to ignore unmapped destination points, mask points on source and destination grids, and provide grid structure in the polar regions. Grid remapping on the sphere takes place in 3D Cartesian space, so the pole problem is not an issue as it can be with other grid remapping software. Remapping can be done between any combination of 2D and 3D logically rectangular and unstructured grids with overlapping domains. Grid pairs where one side of the regridding is represented by an appropriate set of unconnected data points, as is commonly found with observational data streams, are also supported. There is a developing interoperability layer between ESMPy and OCGIS. OCGIS is a pure-Python, open-source package designed for geospatial manipulation, subsetting, and computation on climate datasets stored in local NetCDF files or accessible remotely via the OPeNDAP protocol. Interfacing with OCGIS has brought GIS-like functionality to ESMPy (e.g., subsetting, coordinate transformations) as well as additional file output formats (e.g., CSV, ESRI Shapefile). ESMPy is distinguished by its strong emphasis on open source, community governance, and distributed development. The user base has grown quickly, and the package is integrating with several other software tools and frameworks, including the Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT), Iris, PyFerret, cf-python, and the Community Surface Dynamics Modeling System (CSDMS). ESMPy minimum requirements are Python 2.6, NumPy 1.6.1, and an ESMF installation. Optional dependencies include NetCDF and the OCGIS-related dependencies GDAL, Shapely, and Fiona. ESMPy is regression-tested nightly and supported on Darwin, Linux, and Cray systems with the GNU compiler suite and MPI communications. OCGIS is supported on Linux and also undergoes nightly regression testing. Both packages are installable from Anaconda channels. Upcoming development plans for ESMPy involve a higher-order conservative grid remapping method. Future OCGIS development will focus on mesh and location-stream interoperability and streamlined access to ESMPy's MPI implementation.
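
    A minimal bilinear regridding sketch in the spirit of the ESMPy utilities described above; the SCRIP-format grid files and the toy source data are assumptions for illustration, not part of any particular workflow.

        # Minimal sketch: bilinear regridding between two grids with ESMPy.
        # The SCRIP-format grid files are hypothetical placeholders.
        import ESMF

        src_grid = ESMF.Grid(filename="src_grid.nc", filetype=ESMF.FileFormat.SCRIP)
        dst_grid = ESMF.Grid(filename="dst_grid.nc", filetype=ESMF.FileFormat.SCRIP)

        src_field = ESMF.Field(src_grid, name="src")
        dst_field = ESMF.Field(dst_grid, name="dst")
        src_field.data[...] = 1.0                      # toy source data

        regrid = ESMF.Regrid(src_field, dst_field,
                             regrid_method=ESMF.RegridMethod.BILINEAR,
                             unmapped_action=ESMF.UnmappedAction.IGNORE)
        dst_field = regrid(src_field, dst_field)       # apply the remapping weights
        print(dst_field.data.mean())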

  19. The AMMA database

    NASA Astrophysics Data System (ADS)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based, and ocean measurements, intensive use of satellite data, and diverse modelling studies. The AMMA database therefore aims at storing a large amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped onto a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations; these outputs are processed in the same way as the satellite products. Before accessing the data, every user has to sign the AMMA data and publication policy. This charter only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Collaboration between data producers and users, and mention of the AMMA project in any publication, are also required. The AMMA database and the associated online tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris, and OMP, Toulouse). Users can access data from both data centres through a single web portal. This website is composed of different modules: - Registration: forms to register and to read and sign the data use charter when a user visits for the first time; - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria such as location, time, and parameters; the request can concern local, satellite, and model data; - Documentation: a catalogue of all the available data and their metadata. These tools have been developed using standard and free languages and software: - a Linux system with an Apache web server and a Tomcat application server; - J2EE tools: the JSF and Struts frameworks, Hibernate; - relational database management systems: PostgreSQL and MySQL; - an OpenLDAP directory. In order to facilitate access to the data by African scientists, the complete system has been mirrored at the AGRHYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through either of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.

  20. User-Friendly Data Servers for Climate Studies at the Asia-Pacific Data-Research Center (APDRC)

    NASA Astrophysics Data System (ADS)

    Yuan, G.; Shen, Y.; Zhang, Y.; Merrill, R.; Waseda, T.; Mitsudera, H.; Hacker, P.

    2002-12-01

    The APDRC was recently established within the International Pacific Research Center (IPRC) at the University of Hawaii. The APDRC mission is to increase understanding of climate variability in the Asia-Pacific region by developing the computational, data-management, and networking infrastructure necessary to make data resources readily accessible and usable by researchers, and by undertaking data-intensive research activities that will both advance knowledge and lead to improvements in data preparation and data products. A focus of recent activity is the implementation of user-friendly data servers. The APDRC is currently running a Live Access Server (LAS), developed at NOAA/PMEL, to provide access to and visualization of gridded climate products via the web. The LAS also allows users to download selected data subsets in various formats (such as binary, netCDF, and ASCII). Most of the datasets served by the LAS are also served through our OPeNDAP server (formerly DODS), which allows users to access the data directly with their desktop client tools (e.g., GrADS, Matlab, and Ferret). In addition, the APDRC is running an OPeNDAP Catalog/Aggregation Server (CAS), developed by Unidata at UCAR, to serve climate data and products such as model output and satellite-derived products. These products are often large (> 2 GB) and are therefore stored as multiple files (split by time or by parameter). The CAS remedies the inconvenience of multiple files and allows access to the whole dataset (or any subset that cuts across the multiple files) via a single request command from any DODS-enabled client software. Once the aggregation of files is configured at the server, the process of aggregation is transparent to the user, who only needs to know a single URL for the entire dataset, even though it is, in fact, stored as multiple files. CAS even allows aggregation of files on different systems and at different locations. Currently, the APDRC is serving NCEP, ECMWF, SODA, WOCE-Satellite, TMI, GPI, and GSSTF products through the CAS. The APDRC is also running an EPIC server developed by PMEL/NOAA. EPIC is a web-based data search and display system suited for in situ (station, as opposed to gridded) data. Locating and selecting individual station data from large collections (millions of profiles or time series) of in situ data is a major challenge, and serving in situ data on the Internet faces two problems: the irregularity of data formats and the large quantity of data files. To solve the first problem, we have converted the in situ data into netCDF format. The second problem was solved by using the EPIC server, which allows users to easily subset the files using a friendly graphical interface. Furthermore, we enhanced the capabilities of EPIC and configured OPeNDAP into EPIC to serve the numerous in situ data files and to export them to users through two different options: 1) an OPeNDAP pointer file of user-selected data files; and 2) a data package that includes meta-information (e.g., location, time, cruise number), a local pointer file, and the data files that the user selected. Option 1 is for those who do not want to download the selected data but want to use their own application software (such as GrADS, Matlab, and Ferret) for access and analysis; option 2 is for users who want to store the data on their own system (e.g., on laptops before going on a cruise) for subsequent analysis. Currently, WOCE CTD and bottle data, the WOCE current meter data, and some Argo float data are being served on the EPIC server.
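
    The single-URL aggregation behaviour described above can be exercised from any OPeNDAP-aware client; a sketch with the netCDF4 library follows, where the aggregation URL and variable name are hypothetical placeholders.

        # Minimal sketch: subset an aggregated dataset through a single OPeNDAP URL,
        # even though the server stores it as many files. URL/variable hypothetical.
        from netCDF4 import Dataset

        AGG = "http://apdrc.example.edu/dods/ncep_reanalysis_agg"   # hypothetical

        with Dataset(AGG) as ds:
            sst = ds.variables["sst"]            # dims assumed (time, lat, lon)
            # One request spanning what may be several underlying files on disk;
            # the aggregation server stitches the pieces together transparently.
            chunk = sst[1000:1010, :, :]
            print(chunk.shape)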

  1. Implementing the HDF-EOS5 software library for data products in the UNAVCO InSAR archive

    NASA Astrophysics Data System (ADS)

    Baker, Scott; Meertens, Charles; Crosby, Christopher

    2017-04-01

    UNAVCO is a non-profit, university-governed consortium that operates the U.S. National Science Foundation (NSF) Geodesy Advancing Geosciences and EarthScope (GAGE) facility and provides operational support to the Western North America InSAR Consortium (WInSAR). The Seamless Synthetic Aperture Radar Archive (SSARA) is a seamless distributed access system for SAR data and higher-level data products. Under the NASA-funded SSARA project, a user-contributed InSAR archive for interferograms, time series, and other derived data products was developed at UNAVCO. The InSAR archive development has led to the adoption of the HDF-EOS5 data model, file format, and library. The HDF-EOS software library was designed to support NASA Earth Observation System (EOS) science data products and provides data structures for radar-geometry (Swath) and geocoded (Grid) data based on the HDF5 data model and file format provided by The HDF Group. HDF-EOS5 inherits the benefits of HDF5 (open-source software support, internal compression, portability, support for structured data, self-describing file metadata, enhanced performance, and XML support) and provides a way to standardize InSAR data products. Instrument- and datatype-independent services, such as subsetting, can be applied to files across a wide variety of data products through the same library interface. The library allows integration with GIS software packages such as ArcGIS and GDAL, conversion to other data formats like NetCDF and GeoTIFF, and is extensible with new data structures to support future requirements. UNAVCO maintains a GitHub repository that provides example software for creating data products from popular InSAR processing packages like GMT5SAR and ISCE, as well as examples for reading and converting the data products into other formats. Digital Object Identifiers (DOIs) have been incorporated into the InSAR archive, allowing users to assign a permanent location to their processed results and to easily reference the final data products. A metadata attribute is added to the HDF-EOS5 file when a DOI is minted for a data product. These data products are searchable through the SSARA federated query, providing access to processed data for both expert and non-expert InSAR users. The archive facilitates timely distribution of processed data, which is of particular importance for geohazards and event response.
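
    Because HDF-EOS5 files are valid HDF5, generic tools can read them; the sketch below uses h5py, with the file name, grid name, and field name as hypothetical placeholders (the /HDFEOS/GRIDS/.../Data Fields/ layout shown is the usual HDF-EOS5 convention, assumed here rather than taken from the archive's documentation).

        # Minimal sketch: read a geocoded (Grid) field from an HDF-EOS5 product
        # with plain h5py. File, grid, and field names are hypothetical.
        import h5py

        with h5py.File("insar_product.h5", "r") as f:   # hypothetical file name
            # Conventional HDF-EOS5 layout: /HDFEOS/GRIDS/<grid>/Data Fields/<field>
            dset = f["/HDFEOS/GRIDS/displacement/Data Fields/unwrapped_phase"]
            data = dset[...]
            print(data.shape, dict(dset.attrs))          # array plus field metadata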

  2. The Arctic Observing Network (AON)Cooperative Arctic Data and Information Service (CADIS)

    NASA Astrophysics Data System (ADS)

    Moore, J.; Fetterer, F.; Middleton, D.; Ramamurthy, M.; Barry, R.

    2007-12-01

    The Arctic Observing Network (AON) is intended to be a federation of 34 land, atmosphere, and ocean observation sites, some already operating and some newly funded by the U.S. National Science Foundation. This International Polar Year (IPY) initiative will acquire a major portion of the data coming from the interagency Study of Environmental Arctic Change (SEARCH). AON will succeed in supporting the science envisioned by its planners only if it functions as a system and not as a collection of independent observation programs. Development and implementation of a comprehensive data management strategy will be key to the success of this effort. AON planners envision an ideal data management system that includes a portal through which scientists can submit metadata and datasets at a single location and search the complete archive to find all data relevant to a location or process; all data have browse imagery and complete documentation; time series or fields can be plotted online; and all data are in a relational database so that multiple datasets and sources can be queried and retrieved. The Cooperative Arctic Data and Information Service (CADIS) will provide near-real-time data delivery, a long-term repository for data, a portal for data discovery, and tools to manipulate data by building on existing tools like the Unidata Integrated Data Viewer (IDV). Our approach to the data integration challenge is to start by asking investigators to provide metadata via a general-purpose user interface. An entry tool assists PIs in writing metadata and submitting data. Data can be submitted to the archive in NetCDF with Climate and Forecast (CF) conventions or, where necessary, in one of several other standard formats. CADIS is a joint effort of the University Corporation for Atmospheric Research (UCAR), the National Snow and Ice Data Center (NSIDC), and the National Center for Atmospheric Research (NCAR). In the first year, we are concentrating on establishing metadata protocols that are compatible with international standards and on demonstrating data submission, search, and visualization tools with a subset of AON data. These capabilities will be expanded in years 2 and 3. By working with AON investigators and by adopting evolving conventions for in situ data formats as they mature, we hope to bring CADIS to the full level of data integration imagined by AON planners. The CADIS development will be described in terms of challenges, implementation strategies, and progress to date. The developers are making a conscious effort to integrate this system and its data holdings with the complementary efforts in the SEARCH and IPY programs. The interdisciplinary content of the data, the variations in format and documentation, and the geographic coverage across the Arctic Basin all affect the form and effectiveness of the CADIS system architecture. Solutions to the complexity of implementing a comprehensive data management strategy implied by this diversity will be a focus of the presentation.
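
    As a sketch of the kind of CF-convention NetCDF submission mentioned above, the following writes a small time-series file with the netCDF4 library; all variable names and attribute values are illustrative placeholders, not CADIS requirements.

        # Minimal sketch: write a small CF-style NetCDF file of the kind a PI
        # might submit. All names and attribute values are illustrative.
        import numpy as np
        from netCDF4 import Dataset

        with Dataset("aon_station_temps.nc", "w") as ds:
            ds.Conventions = "CF-1.6"
            ds.title = "Example AON station air temperature"   # illustrative metadata
            ds.createDimension("time", None)

            t = ds.createVariable("time", "f8", ("time",))
            t.units = "hours since 2007-01-01 00:00:00"
            t.standard_name = "time"

            temp = ds.createVariable("air_temperature", "f4", ("time",))
            temp.units = "K"
            temp.standard_name = "air_temperature"

            t[:] = np.arange(24)
            temp[:] = 250.0 + 3.0 * np.random.rand(24)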

  3. The Ocean Observatories Initiative: Data Access and Visualization via the Graphical User Interface

    NASA Astrophysics Data System (ADS)

    Garzio, L. M.; Belabbassi, L.; Knuth, F.; Smith, M. J.; Crowley, M. F.; Vardaro, M.; Kerfoot, J.

    2016-02-01

    The Ocean Observatories Initiative (OOI), funded by the National Science Foundation, is a broad-scale, multidisciplinary effort to transform oceanographic research by providing users with real-time access to long-term datasets from a variety of deployed physical, chemical, biological, and geological sensors. The global array component of the OOI includes four high latitude sites: Irminger Sea off Greenland, Station Papa in the Gulf of Alaska, Argentine Basin off the coast of Argentina, and Southern Ocean near coordinates 55°S and 90°W. Each site is composed of fixed moorings, hybrid profiler moorings and mobile assets, with a total of approximately 110 instruments at each site. Near real-time (telemetered) and recovered data from these instruments can be visualized and downloaded via the OOI Graphical User Interface. In this Interface, the user can visualize scientific parameters via six different plotting functions with options to specify time ranges and apply various QA/QC tests. Data streams from all instruments can also be downloaded in different formats (CSV, JSON, and NetCDF) for further data processing, visualization, and comparison to supplementary datasets. In addition, users can view alerts and alarms in the system, access relevant metadata and deployment information for specific instruments, and find infrastructure specifics for each array including location, sampling strategies, deployment schedules, and technical drawings. These datasets from the OOI provide an unprecedented opportunity to transform oceanographic research and education, and will be readily accessible to the general public via the OOI's Graphical User Interface.

  4. A Lightweight Remote Parallel Visualization Platform for Interactive Massive Time-varying Climate Data Analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhang, T.; Huang, Q.; Liu, Q.

    2014-12-01

    Today's climate datasets are characterized by large volume, a high degree of spatiotemporal complexity, and rapid evolution over time. Because visualizing large, distributed climate datasets is computationally intensive, traditional desktop-based visualization applications cannot handle the load. Recently, scientists have developed remote visualization techniques to address this computational issue. Remote visualization techniques usually leverage server-side parallel computing capabilities to perform visualization tasks and deliver the results to clients over the network. In this research, we aim to build a remote parallel visualization platform for visualizing and analyzing massive climate data. Our visualization platform is built on ParaView, one of the most popular open-source remote visualization and analysis applications. To further enhance the scalability and stability of the platform, we have employed cloud computing techniques to support its deployment. In this platform, all climate datasets are regular gridded data stored in NetCDF format. Three types of data access are supported: remote datasets provided by OPeNDAP servers, datasets hosted on the web visualization server, and local datasets. Regardless of the access method, all visualization tasks are completed on the server side to reduce the workload on clients. As a proof of concept, we have implemented a set of scientific visualization methods to show the feasibility of the platform. Preliminary results indicate that the framework can address the computational limitations of desktop-based visualization applications.
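
    The thin-client-to-remote-server pattern described above can be scripted with ParaView's Python interface; a sketch follows, where the server address and file path are hypothetical placeholders and a pvserver instance is assumed to be running remotely.

        # Minimal sketch: connect a thin client to a remote ParaView server and
        # render a dataset there. Host, port, and file path are hypothetical.
        from paraview.simple import Connect, OpenDataFile, Show, Render, SaveScreenshot

        Connect("viz.example.org", 11111)        # pvserver must be listening remotely
        reader = OpenDataFile("/data/climate/temperature.nc")  # read on the server
        Show(reader)                              # geometry is built server-side
        Render()                                  # image is delivered to the client
        SaveScreenshot("temperature.png")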

  5. The Climate Data Analytic Services (CDAS) Framework.

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; Duffy, D.

    2016-12-01

    Faced with unprecedented growth in climate data volume and demand, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute data processing workflows combining common analysis operations in a high-performance environment close to the massive data stores at NASA. The data are accessed in standard formats (NetCDF, HDF, etc.) in a POSIX file system and processed using vetted climate data analysis tools (ESMF, CDAT, NCO, etc.). A dynamic caching architecture enables interactive response times. CDAS utilizes Apache Spark for parallelization and a custom array framework for processing huge datasets within limited memory spaces. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using direct web service calls, a Python script, a Unix-like shell client, or a JavaScript-based web application. Client packages in Python, Scala, or JavaScript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale datasets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends and variability, and compare multiple reanalysis datasets.
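
    Since CDAS is accessed through a WPS API, a generic OGC WPS Execute request in key-value form gives the flavour of a direct web service call; the endpoint, process identifier, and data inputs below are hypothetical placeholders, not the documented CDAS interface.

        # Minimal sketch: a generic OGC WPS Execute request in key-value-pair form.
        # Endpoint, process identifier, and datainputs are hypothetical placeholders.
        import requests

        endpoint = "https://cdas.example.nasa.gov/wps"   # hypothetical endpoint
        params = {
            "service": "WPS",
            "version": "1.0.0",
            "request": "Execute",
            "identifier": "average",                      # hypothetical operation
            "datainputs": "variable=tas;domain=global;period=1980-2010",  # hypothetical
        }
        resp = requests.get(endpoint, params=params, timeout=60)
        resp.raise_for_status()
        print(resp.text[:500])    # WPS responses are XML status/result documents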

  6. Near Real-Time Collection, Processing, and Publication of Beach Morphology and Oceanographic LIDAR Data

    NASA Astrophysics Data System (ADS)

    Dyer, T.; Brodie, K. L.; Spore, N.

    2016-02-01

    Modern LIDAR systems, while capable of providing highly accurate and dense datasets, introduce significant challenges in data processing and end-user accessibility. At the United States Army Corps of Engineers Field Research Facility in Duck, North Carolina, we have developed a stationary LIDAR tower for continuous monitoring of the ocean, beach, and foredune, as well as an automated workflow capable of providing scientific data products from the LIDAR scanner in near real time through an online data portal. The LIDAR performs hourly scans, each taking approximately 50 minutes to complete and producing datasets on the order of 1 GB. Processing of the LIDAR data includes coordinate transformations, data rectification and coregistration, filtering to remove noise and unwanted objects, gridding, and time-series analysis to generate products for end users. Examples of these products include water levels and significant wave heights, virtual wave-gauge time series and FFTs, wave runup, foreshore elevations and slopes, and bare-earth DEMs. Immediately after processing, data products are combined with ISO-compliant metadata and stored in the NetCDF-4 file format, making them easily discoverable through a web portal that provides an interactive map for exploring datasets both spatially and temporally. End users can download datasets over user-defined time intervals, which can be used, for example, as forcing or validation parameters in numerical models. Funded by the USACE Coastal Ocean Data Systems Program.

  7. Polar2Grid 2.0: Reprojecting Satellite Data Made Easy

    NASA Astrophysics Data System (ADS)

    Hoese, D.; Strabala, K.

    2015-12-01

    Polar-orbiting multi-band meteorological sensors such as those on the Suomi National Polar-orbiting Partnership (SNPP) satellite pose substantial challenges for taking imagery the last mile to forecast offices, scientific analysis environments, and the general public. To do this quickly and easily, the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin has created an open-source, modular application system, Polar2Grid. This bundled solution automates tools for converting satellite products such as those from VIIRS and MODIS into a variety of output formats, including GeoTIFFs, AWIPS-compatible NetCDF files, and TIFF images compatible with the NinJo forecasting workstation. In addition to traditional visible and infrared imagery, Polar2Grid includes three perceptual enhancements for the VIIRS Day-Night Band (DNB) and provides the capability to create sharpened true-color, sharpened false-color, and user-defined RGB images. Polar2Grid performs conversions and projections in seconds on large swaths of data. Polar2Grid is currently providing VIIRS imagery over the continental United States, Alaska, and Hawaii, from various direct-broadcast antennas to operational forecasters at NOAA National Weather Service (NWS) offices in their AWIPS terminals, within minutes of an overpass of the Suomi NPP satellite. Three years after Polar2Grid development started, the Polar2Grid team is now releasing version 2.0 of the software, supporting more sensors, generating more products, and providing all of its features in an easy-to-use command-line interface.

  8. A Shallow Layer Approach for Geo-flow emplacement

    NASA Astrophysics Data System (ADS)

    Costa, A.; Folch, A.; Mecedonio, G.

    2009-04-01

    Geophysical flows such as lahars or lava flows severely threaten the communities located on or near volcano flanks. The risks and damage caused by the propagation of this kind of flow require a quantitative description of the phenomenon and reliable tools for forecasting emplacement. Computational models are a valuable tool for planning risk-mitigation countermeasures, such as human intervention to force flow diversion or artificial barriers, and allow for significant economic and social benefits. A FORTRAN 90 code based on a Shallow Layer Approach for Geo-flows (SLAG), describing the transport and emplacement of diluted lahars, water, and lava, was developed in both serial and parallel versions. Three rheological models, describing (i) viscous, (ii) turbulent, and (iii) dilatant flows, respectively, were implemented in order to describe the transport of lavas, water, and diluted lahars. The code was made user-friendly by creating interfaces that allow the user to easily define the problem and to extract and interpolate the topography of the simulation domain. Moreover, SLAG outputs can be written in GRD format (e.g., Surfer) or NetCDF format, or visualized directly in Google Earth. In SLAG the governing equations are treated using a Godunov splitting method following the George (2008) algorithm, based on a Riemann solver for the shallow water equations that decomposes an augmented state variable (the depth, momentum, momentum flux, and bathymetry) into four propagating discontinuities or waves. For our application, the algorithm was generalized to solve the energy equation. To validate the code for simulating real geophysical flows, we performed simulations of the lava flow event of 3-4 January 1992 at Etna, the July 2001 Etna lava flows, and the January 2002 Nyiragongo lava flows, as well as a few test cases simulating the transport of diluted lahars. Ref: George, D.L. (2008), Augmented Riemann Solvers for the Shallow Water Equations over Variable Topography with Steady States and Inundation, J. Comput. Phys., 227(6), 3089-3113, doi:10.1016/j.jcp.2007.10.027.
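
    For reference, a standard conservative form of the one-dimensional shallow water equations treated by this class of Riemann solvers (a textbook form, not quoted from the paper; h is flow depth, u velocity, g gravity, b(x) bathymetry) is:

        \begin{align}
          \partial_t h + \partial_x (h u) &= 0, \\
          \partial_t (h u) + \partial_x\!\left( h u^2 + \tfrac{1}{2} g h^2 \right) &= -g h \,\partial_x b .
        \end{align}

    In George (2008) the state is augmented to (h, hu, hu^2 + gh^2/2, b), matching the depth, momentum, momentum flux, and bathymetry components mentioned above.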

  9. Web Services as Building Blocks for an Open Coastal Observing System

    NASA Astrophysics Data System (ADS)

    Breitbach, G.; Krasemann, H.

    2012-04-01

    Coastal observing systems need to integrate different observing methods, such as remote sensing, in-situ measurements, and models, into a synoptic view of the state of the observed region. This integration can be based solely on web services combining data and metadata. Such an approach is pursued for COSYNA (Coastal Observing System for Northern and Arctic Seas). Data from satellite and radar remote sensing and measurements from buoys, stations, and FerryBoxes form the observational part of COSYNA. These data are assimilated into models to create pre-operational forecasts. For discovering data, an OGC Web Feature Service (WFS) is used by the COSYNA data portal. This Web Feature Service holds the metadata necessary not only for finding data, but also the URLs of the web services used to view and download the data. To make data from different sources comparable, a common vocabulary is needed; for COSYNA, standard names from the CF conventions are stored in the metadata whenever possible. For the metadata, an INSPIRE- and ISO 19115-compatible data format is used, and the WFS is fed from the metadata system using database views. The actual data are stored in two different formats: NetCDF files for gridded data and an RDBMS for time-series-like data. The web services are mostly standards-based, the standards being mainly those of the OGC. Maps are created from NetCDF files with the help of the ncWMS tool, whereas a self-developed Java servlet is used for maps of moving measurement platforms; in that case, data download is offered via OGC SOS. For NetCDF files, OPeNDAP is used for data download, and the OGC CSW is used for accessing extended metadata. The concept of data management in COSYNA, which is independent of the particular services used in COSYNA, will be presented. This concept is parameter- and data-centric and may be useful for other observing systems.
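
    The map layers served by ncWMS can be fetched with a standard OGC WMS 1.3.0 GetMap request; a sketch follows, in which the host and layer name are hypothetical placeholders (only the request parameters follow the WMS standard).

        # Minimal sketch: fetch a rendered map image from an ncWMS endpoint with a
        # standard WMS 1.3.0 GetMap request. Host and layer name are hypothetical.
        import requests

        wms = "https://cosyna.example.org/ncWMS/wms"     # hypothetical endpoint
        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": "sst/analysed_sst",                # hypothetical layer
            "STYLES": "",
            "CRS": "EPSG:4326",
            "BBOX": "53,6,56,10",                        # lat/lon order for EPSG:4326
            "WIDTH": "512",
            "HEIGHT": "512",
            "FORMAT": "image/png",
        }
        resp = requests.get(wms, params=params, timeout=60)
        resp.raise_for_status()
        open("sst_map.png", "wb").write(resp.content)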

  10. SCIAMACHY: The new Level 0-1 Processor

    NASA Astrophysics Data System (ADS)

    Lichtenberg, Günter; Slijkhuis, Sander; Aberle, Bernd; Sherbakov, Denis; Meringer, Markus; Noel, Stefan; Bramstedt, Klaus; Liebing, Patricia; Bovensmann, Heinrich; Snel, Ralph; Krijger, Mathijs; van Hees, Richard; van der Meer, Pieter; Lerot, Christophe; Dehn, Angelika; Fehr, Thorsten

    2016-04-01

    SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY) is a scanning nadir and limb spectrometer covering the wavelength range from 212 nm to 2386 nm in 8 channels. It is a joint project of Germany, the Netherlands and Belgium and was launched in February 2002 on the ENVISAT platform. After the platform failure in April 2012, SCIAMACHY is now in the postprocessing phase F. SCIAMACHYs originally specified in-orbit lifetime was double the planned lifetime. SCIAMACHY was designed to measure column densities and vertical profiles of trace gas species in the mesosphere, in the stratosphere and in the troposphere (Bovensmann et al., 1999). It can detect O3 , H2CO, SO2 , BrO, OClO, NO2 , H2 O, CO, CO2 , CH4 , N2 O , O2 , (O2)2 and can provide information about aerosols and clouds. The operational processing of SCIAMACHY is split into Level 0-1 processing (essentially providing calibrated radiances) and Level 1-2 processing providing geophysical products. The operational Level 0-1 processor has been completely re-coded and embedded in a newly developed framework that speeds up processing considerably. Currently Version 9 of the Level 0-1 processor is implemented. It will include - An updated degradation correction - Several improvements in the SWIR spectral range like a better dark correction, an improved dead & bad pixel characterisation and an improved spectral calibration - Improvements to the polarisation correction algorithm - Improvements to the geolocation by a better pointing characterisation Additionally a new format for the Level 1b and Level 1c will be implemented. The version 9 products will be available in netCDF version 4 that is aligned with the formats of the GOME-1 and Sentinel missions. We will present the first results of the new Level 0-1 processing in this paper.

  11. Discovering New Global Climate Patterns: Curating a 21-Year High Temporal (Hourly) and Spatial (40km) Resolution Reanalysis Dataset

    NASA Astrophysics Data System (ADS)

    Hou, C. Y.; Dattore, R.; Peng, G. S.

    2014-12-01

    The National Center for Atmospheric Research's Global Climate Four-Dimensional Data Assimilation (CFDDA) Hourly 40 km Reanalysis dataset is a dynamically downscaled dataset with high temporal and spatial resolution. The dataset contains three-dimensional hourly analyses in netCDF format of the global atmospheric state from 1985 to 2005 on a 40 km horizontal grid (0.4° grid increment) with 28 vertical levels, providing good representation of local forcing and the diurnal variation of processes in the planetary boundary layer. This project aimed to make the dataset publicly available, accessible, and usable in order to provide a unique resource for enabling and promoting studies of new climate characteristics. When the curation project started, five years had passed since the data files were generated. Although the Principal Investigator (PI) had produced a user document at the end of the project in 2009, the document had not been maintained; furthermore, the PI had moved to a new institution, and the remaining team members had been reassigned to other projects. These factors made data curation especially challenging in the areas of verifying data quality, harvesting metadata descriptions, and documenting provenance information. The project's curation process consequently found that: a data curator's skill and knowledge helped inform decisions, such as those on file format, structure, and workflow documentation, that had a significant, positive impact on the ease of the dataset's management and long-term preservation; the use of data curation tools, such as the Data Curation Profiles Toolkit's guidelines, revealed important information for promoting the data's usability and enhancing preservation planning; and involving data curators during each stage of the data curation life cycle, instead of only at the end, could improve the efficiency of the curation process. Overall, the project showed that proper resources invested in the curation process give datasets the best chance to fulfill their potential to support the discovery of new climate patterns.

  12. Distributed data discovery, access and visualization services to Improve Data Interoperability across different data holdings

    NASA Astrophysics Data System (ADS)

    Palanisamy, G.; Krassovski, M.; Devarakonda, R.; Santhana Vannan, S.

    2012-12-01

    The current climate debate is highlighting the importance of free, open, and authoritative sources of high-quality climate data that are available for peer review and for collaborative purposes. It is increasingly important to allow various organizations around the world to share climate data in an open manner and to enable them to perform dynamic processing of climate data. This advanced access to data can be enabled via web-based services, using common, community-agreed standards, without organizations having to change the internal structures used to describe their data. The modern scientific community has become diverse and increasingly complex in nature. To meet the demands of such a diverse user community, the modern data supplier has to provide data and related information through searchable, data- and process-oriented tools. This can be accomplished by setting up an online, web-based system with a relational database as a back end. The following common features of web-based data access/search systems will be outlined in the presentation: - flexible data discovery; - data in commonly used formats (e.g., CSV, NetCDF); - metadata prepared in standard formats (FGDC, ISO 19115, EML, DIF, etc.); - data subsetting capabilities, with the ability to narrow down to individual data elements; - standards-based data access protocols and mechanisms (SOAP, REST, OPeNDAP, OGC, etc.); - integration of services across different data systems (discovery to access, visualization, and subsetting). This presentation will also include specific examples of the integration of various data systems developed by Oak Ridge National Laboratory's Climate Change Science Institute, and of their ability to communicate with each other to enable better data interoperability and data integration. References: [1] Devarakonda, Ranjeet, and Harold Shanafield. "Drupal: Collaborative framework for science research." Collaboration Technologies and Systems (CTS), 2011 International Conference on. IEEE, 2011. [2] Devarakonda, R., Shrestha, B., Palanisamy, G., Hook, L. A., Killeffer, T. S., Boden, T. A., ... & Lazer, K. (2014). The new online metadata editor for generating structured metadata. Oak Ridge National Laboratory (ORNL).

  13. Standardised online data access and publishing for Earth Systems and Climate data in Australia

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Druken, K. A.; Trenham, C.; Wang, J.; Wyborn, L. A.; Smillie, J.; Allen, C.; Porter, D.

    2015-12-01

    The National Computational Infrastructure (NCI) hosts Australia's largest repository (10+ PB) of research data collections, spanning fields from climate, coasts, oceans, and geophysics through to astronomy, bioinformatics, and the social sciences. Spatial scales range from global to local ultra-high resolution, requiring storage volumes from MB to PB. The data have been organised to be highly connected to both the NCI HPC and cloud resources (e.g., interactive visualisation and analysis environments). Researchers can log in to utilise the high-performance infrastructure for these data collections, or access the data via standards-based web services. Our aim is to provide a trusted platform that supports interdisciplinary research across all the collections, as well as services for the use of the data within individual communities. We thus cater to a wide range of researcher needs whilst maintaining a consistent approach to data management and publishing. All research data collections hosted at NCI are governed by a data management plan prior to being published through a variety of platforms and web services such as OPeNDAP, HTTP, and WMS. The data management plan ensures the use of standard formats (when available) that comply with relevant data conventions (e.g., the CF conventions) and metadata standards (e.g., ISO 19115). Digital Object Identifiers (DOIs) can be minted at NCI and assigned to datasets and collections. Large-scale data growth and use across a variety of research fields have led to a rise in, and acceptance of, open spatial data formats such as NetCDF4/HDF5, prompting a need to extend the associated data conventions to fields such as geophysics and satellite Earth observation. The fusion of DOI-minted data that is discoverable and accessible via metadata and web services creates a complete picture of data hosting, discovery, use, and citation. This enables standardised and reproducible data analysis.

  14. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Form and format for... And/or Amino Acid Sequences § 1.824 Form and format for nucleotide and/or amino acid sequence... Code for Information Interchange (ASCII) text. No other formats shall be allowed. (3) The computer...

  15. Adding Data Management Services to Parallel File Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, Scott

    2015-03-04

    The objective of this project, called DAMASC for "Data Management in Scientific Computing", is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs in which computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system, and reducing data movement is largely left to the scientist. Over the past decades, the high-end computing community has adopted middleware with multiple layers of abstraction and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in (unstructured) file systems can only optimize to the extent that file system interfaces permit, and the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native, rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format, while retaining the inherent performance of file system data storage, via declarative queries and updates over views of the underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file-based ecosystem; (3) common optimizations, e.g., indexing and caching, are readily supported across several file formats, avoiding duplication of effort; and (4) performance improves significantly, as data processing is integrated more tightly with data storage. Our key contributions are: SciHadoop, which explores changes to MapReduce's assumptions by taking advantage of the semantics of structured data while preserving MapReduce's failure and resource management; DataMods, which extends common abstractions of parallel file systems so that they become programmable, can be extended to natively support a variety of data models, and can be hooked into emerging distributed runtimes such as Stanford's Legion; and Miso, which combines Hadoop and relational data warehousing to minimize time to insight, taking into account the overhead of ingesting data into the data warehouse.

  16. Best Practices for Preparing Interoperable Geospatial Data

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Santhana Vannan, S.; Cook, R. B.; Wilson, B. E.; Beaty, T. W.

    2010-12-01

    Geospatial data is critically important for a wide range of research and applications: carbon cycle and ecosystem studies, climate change, land use and urban planning, environmental protection, and more. Geospatial data is created by different organizations using different methods - remote sensing observations, field surveys, model simulations, etc. - and stored in various formats. Geospatial data is therefore diverse and heterogeneous, which creates a huge barrier to sharing and using it, especially when targeting a broad user community. Many efforts have been made to address different aspects of using geospatial data by improving its interoperability. For example, the specification for Open Geospatial Consortium (OGC) catalog services defines a standard way to discover geospatial information, while OGC Web Coverage Services (WCS) and OPeNDAP define interoperable protocols for geospatial data access. But the reality is that standard mechanisms for data discovery and access are not enough: the geospatial data content itself has to be organized in standard, easily understandable, and readily usable formats. The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) archives data and information relevant to biogeochemical dynamics, ecological data, and environmental processes. The Modeling and Synthesis Thematic Data Center (MAST-DC) prepares and distributes both input and output data of carbon cycle models and provides data support for synthesis and terrestrial model intercomparison at multiple scales. Both of these NASA-funded data centers compile and distribute a large amount of diverse geospatial data and have broad user communities, including GIS users, Earth science researchers, and ecosystem modeling teams. The ORNL DAAC and MAST-DC address the geospatial data interoperability issue by standardizing data content and feeding it into a well-designed Spatial Data Infrastructure (SDI) that provides interoperable mechanisms to advertise, visualize, and distribute the standardized geospatial data. In this presentation, we summarize the lessons learned and the best practices for geospatial data standardization. The presentation will describe how diverse historical data archived at the ORNL DAAC were converted into standard, non-proprietary formats; what tools were used to make the conversion; how spatial and temporal information is properly captured in a consistent manner; how to name a data file or a variable to make it both human-friendly and semantically interoperable; how the NetCDF file format and CF conventions can promote data usage in the ecosystem modeling community; how standardized geospatial data can be fed into OGC Web Services to support on-demand data visualization and access; and how metadata should be collected and organized so that they can be discovered through standard catalog services.

  17. 17 CFR 145.7 - Requests for Commission records and copies thereof.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... form or format (including electronic formats) of the response. The Commission will accommodate... not specify the form or format of the response, the Commission will respond in the form or format in... response will be given to requests for records that are directed to the Commission other than in the manner...

  18. Massive Star Formation Viewed through Extragalactic-Tinted Glasses

    NASA Astrophysics Data System (ADS)

    Willis, Sarah; Marengo, M.; Smith, H. A.; Allen, L.

    2014-01-01

    Massive Galactic star forming regions are the local analogs to the luminous star forming regions that dominate the emission from star forming galaxies. Their proximity to us enables the characterization of the full range of stellar masses that form in these more massive environments, improving our understanding of star formation tracers used in extragalactic studies. We have surveyed a sample of massive star forming regions with a range of morphologies and luminosities to probe the star formation activity in a variety of environments. We have used Spitzer IRAC and deep ground based J, H, Ks observations to characterize the Young Stellar Object (YSO) content of 6 massive star forming regions. These YSOs provide insight into the rate and efficiency of star formation within these regions, and enable comparison with nearby, low mass star forming regions as well as extreme cases of Galactic star formation including ‘mini-starburst’ regions. In addition, we have conducted an in-depth analysis of NGC 6334 to investigate how the star formation activity varies within an individual star forming region, using Herschel data in the far-infrared to probe the earliest stages of the ongoing star formation activity.

  19. Interoperable Access to Near Real Time Ocean Observations with the Observing System Monitoring Center

    NASA Astrophysics Data System (ADS)

    O'Brien, K.; Hankin, S.; Mendelssohn, R.; Simons, R.; Smith, B.; Kern, K. J.

    2013-12-01

    The Observing System Monitoring Center (OSMC), a project funded by the National Oceanic and Atmospheric Administration's Climate Observations Division (COD), exists to join the discrete 'networks' of in situ ocean observing platforms - ships, surface floats, profiling floats, tide gauges, etc. - into a single, integrated system. The OSMC is addressing this goal through capabilities in three areas focusing on the needs of specific user groups: 1) it provides real-time monitoring of the integrated observing system assets to assist management in optimizing the cost-effectiveness of the system for the assessment of climate variables; 2) it makes the stream of real-time data coming from the observing system available to scientific end users in an easy-to-use form; and 3) in the future, it will unify the delayed-mode data from platform-focused data assembly centers into a standards-based distributed system that is readily accessible to interested users from the science and education communities. In this presentation, we will focus on the efforts of the OSMC to provide interoperable access to the near-real-time data stream that is available via the Global Telecommunications System (GTS). This is a very rich data source, and it includes data from nearly all of the oceanographic platforms that are actively observing. We will discuss how the data are being served using a number of widely used web services (including OPeNDAP and SOS) and downloadable file formats (KML, CSV, XLS, netCDF), so that they can be accessed in web browsers and popular desktop analysis tools. We will also discuss our use of the Environmental Research Division's Data Access Program (ERDDAP), available from NOAA/NMFS, which has allowed us to achieve our goals of serving the near-real-time data. From an interoperability perspective, it is important to note that access to this stream of data is not just for humans, but also for machine-to-machine requests. We will also delve into how we configured access to the near-real-time ocean observations in accordance with the Climate and Forecast (CF) metadata conventions, which describe the various 'feature types' associated with particular in situ observation types, or discrete sampling geometries (DSG). Wrapping up, we will discuss some of the ways this data source is already being used.
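
    ERDDAP's tabledap responses follow a uniform URL grammar (dataset ID, requested variables, then constraints); the sketch below shows a machine-to-machine CSV request of that shape, with the server and dataset ID as hypothetical placeholders.

        # Minimal sketch: an ERDDAP tabledap request returning CSV. Server and
        # dataset ID are hypothetical; the URL grammar is ERDDAP's standard:
        #   /tabledap/<datasetID>.<format>?<vars>&<constraint>&<constraint>...
        import urllib.parse
        import urllib.request

        server = "https://osmc.example.noaa.gov/erddap/tabledap"   # hypothetical
        dataset = "obs_surface"                                    # hypothetical ID
        variables = "time,latitude,longitude,sea_surface_temperature"
        constraints = ["time>=2013-07-01", "time<2013-07-02"]
        # ERDDAP expects the query itself to be percent-encoded:
        query = "&".join([variables] + constraints)
        url = f"{server}/{dataset}.csv?{urllib.parse.quote(query, safe='&=,')}"
        with urllib.request.urlopen(url) as resp:
            print(resp.read(400).decode("utf-8"))   # header row plus first records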

  20. Sea ice in the Baltic Sea - revisiting BASIS ice, a historical data set covering the period 1960/1961-1978/1979

    NASA Astrophysics Data System (ADS)

    Löptien, U.; Dietze, H.

    2014-12-01

    The Baltic Sea is a seasonally ice-covered, marginal sea in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards, and all ice information is encoded in five digits, which makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert it to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Form (NetCDF). Our post-processed data set will help to assess numerical ice models and will provide easy-to-access, unique historical reference material for sea ice in the Baltic Sea. In addition, we provide statistics showcasing the data quality. The website http://www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science, PANGAEA (doi:10.1594/PANGAEA.832353).

  1. A new CM SAF Solar Surface Radiation Climate Data Set derived from Meteosat Satellite Observations

    NASA Astrophysics Data System (ADS)

    Trentmann, J.; Mueller, R. W.; Pfeifroth, U.; Träger-Chatterjee, C.; Cremer, R.

    2014-12-01

    The incoming surface solar radiation has been defined as an essential climate variable by GCOS. It is mandatory to monitor this part of the Earth's energy balance and thus gain insight into the state and variability of the climate system. In addition, data sets of surface solar radiation have received increased attention in recent years as an important source of information for the planning of solar energy applications. The EUMETSAT Satellite Application Facility on Climate Monitoring (CM SAF) derives surface solar radiation from geostationary and polar-orbiting satellite instruments. While CM SAF focuses on the generation of high-quality long-term climate data records, data are also provided operationally with a short latency of 8 weeks. Here we present SARAH (Solar Surface Radiation Dataset - Heliosat), the new CM SAF surface solar radiation data set based on Meteosat satellite observations. SARAH provides instantaneous, daily-averaged, and monthly-averaged data of the effective cloud albedo (CAL), the direct normal irradiance (DNI), and the solar irradiance (SIS) from 1983 to 2013 for the full view of the Meteosat satellite (i.e., Europe, Africa, parts of South America, and the Atlantic Ocean). The data sets are generated with a high spatial resolution of 0.05°, allowing for detailed regional studies, and are available in netCDF format at no cost and without restrictions at www.cmsaf.eu. We provide an overview of the data sets, including a validation against reference measurements from the BSRN and GEBA surface station networks.

  2. Development of hi-resolution regional climate scenarios in Japan by statistical downscaling

    NASA Astrophysics Data System (ADS)

    Dairaku, K.

    2016-12-01

    Climate information and services for Impacts, Adaptation, and Vulnerability (IAV) assessments are of great concern. To meet the needs of stakeholders such as local governments, a Japanese national project, the Social Implementation Program on Climate Change Adaptation Technology (SI-CAT), was launched in December 2015. It develops reliable technologies for near-term climate change prediction. Multi-model ensemble regional climate scenarios with 1 km horizontal grid spacing over Japan are developed using CMIP5 GCMs and a statistical downscaling method to support the various municipal adaptation measures appropriate for possible regional climate changes. A statistical downscaling method, Bias Correction Spatial Disaggregation (BCSD), is employed to develop regional climate scenarios based on five CMIP5 RCP8.5 GCMs (MIROC5, MRI-CGCM3, GFDL-CM3, CSIRO-Mk3-6-0, HadGEM2-ES) for the historical climate period (1970-2005) and the near-future climate period (2020-2055). Downscaled variables are monthly/daily precipitation and temperature. The file format is NetCDF4 (conforming to CF 1.6, with HDF5 compression). The developed regional climate scenarios will be expanded to meet stakeholder needs, and interface applications to access and download the data are under development. A statistical downscaling method does not necessarily represent well locally forced nonlinear phenomena and extreme events such as heavy rain and heavy snow. To complement the statistical method, a dynamical downscaling approach is also applied to specific regions where stakeholder needs exist. The added value of the statistical/dynamical downscaling methods compared with the parent GCMs is investigated.
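
    The bias-correction half of BCSD is essentially empirical quantile mapping between a model distribution and an observational one; a toy numpy sketch under that interpretation (illustrative only, not the project's actual code, and omitting the spatial disaggregation step) is shown below.

        # Toy sketch of the bias-correction step of BCSD: empirical quantile
        # mapping from a model distribution onto an observed one. Illustrative only.
        import numpy as np

        def quantile_map(model_hist, obs_hist, model_future):
            """Map each future model value through the historical model CDF onto
            the observed distribution (linear interpolation between quantiles)."""
            q = np.linspace(0.0, 1.0, 101)
            model_q = np.quantile(model_hist, q)       # model historical quantiles
            obs_q = np.quantile(obs_hist, q)           # observed quantiles
            # Find each value's cumulative probability in the model climate, then
            # read off the observed value at the same probability.
            probs = np.interp(model_future, model_q, q)
            return np.interp(probs, q, obs_q)

        rng = np.random.default_rng(0)
        obs = rng.gamma(2.0, 2.0, 10_000)              # pseudo-observations
        model = rng.gamma(2.0, 2.5, 10_000)            # biased pseudo-model
        corrected = quantile_map(model, obs, model[:5])
        print(corrected)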

  3. Method for loading explosive laterally from a borehole

    DOEpatents

    Ricketts, Thomas E.

    1981-01-01

    There is provided a method for forming an in situ oil shale retort in a subterranean formation containing oil shale. At least one void is excavated in the formation, leaving zones of unfragmented formation adjacent the void. An array of main blastholes is formed in the zone of unfragmented formation and at least one explosive charge which is shaped for forming a high velocity gas jet is placed into a main blasthole with the axis of the gas jet extending transverse to the blasthole. The shaped charge is detonated for forming an auxiliary blasthole in the unfragmented formation adjacent a side wall of the main blasthole. The auxiliary blasthole extends laterally away from the main blasthole. Explosive is placed into the main blasthole and into the auxiliary blasthole and is detonated for explosively expanding formation towards the free face for forming a fragmented permeable mass of formation particles in the in situ oil shale retort.

  4. New Developments in the SCIAMACHY L2 Ground Processor

    NASA Astrophysics Data System (ADS)

    Gretschany, Sergei; Lichtenberg, Günter; Meringer, Markus; Theys, Nicolas; Lerot, Christophe; Liebing, Patricia; Noel, Stefan; Dehn, Angelika; Fehr, Thorsten

    2016-04-01

    SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric ChartographY) aboard ESA's environmental satellite ENVISAT observed the Earth's atmosphere in limb, nadir, and solar/lunar occultation geometries covering the UV-visible to NIR spectral range. It is a joint project of Germany, the Netherlands and Belgium and was launched in February 2002. SCIAMACHY doubled its originally planned in-orbit lifetime of five years before communication with ENVISAT was severed in April 2012 and the mission entered its post-operational phase. In order to preserve the best quality of the outstanding data recorded by SCIAMACHY, the data processors are still being updated. This presentation will highlight three new developments that are currently being incorporated into the forthcoming Version 7 of ESA's operational Level 2 processor: 1. Tropospheric BrO, a new retrieval based on the scientific algorithm of Theys et al. (2011). This algorithm was originally developed for the GOME-2 sensor and later adapted for SCIAMACHY. The main principle of the new algorithm is to take the BrO total columns (already an operational product) and split them into stratospheric (VCDstrat) and tropospheric (VCDtrop) fractions. BrO VCDstrat is determined from a climatological approach driven by SCIAMACHY O3 and NO2 observations. VCDtrop is then determined simply as a difference: VCDtrop = VCDtotal - VCDstrat. 2. Improved cloud flagging using limb measurements (Liebing, 2015). Limb cloud flags are already part of the SCIAMACHY L2 product. They are currently calculated with the scientific algorithm developed by Eichmann et al. (2015). Clouds are categorized into four types: water, ice, polar stratospheric, and noctilucent clouds. High atmospheric aerosol loadings, however, often lead to spurious cloud flags when aerosols are misidentified as clouds. The new algorithm will better discriminate between aerosols and clouds and will also have a higher sensitivity to thin clouds. 3. A new, future-proof file format for the Level 2 product based on NetCDF. Although the final concept for the new format is still under discussion within the SCIAMACHY Quality Working Group, the main features of the new format have already been clarified. The data format should be aligned and harmonized with other missions (esp. the Sentinels and GOME-1). Splitting the L2 products into profile and column products is also under consideration. Additionally, reading routines for the new formats will be developed and provided. References: K.-U. Eichmann et al., Global cloud top height retrieval using SCIAMACHY limb spectra: model studies and first results, Atmos. Meas. Tech. Discuss., 8, 8295-8352, 2015. P. Liebing, New Limb Cloud Detection Algorithm Theoretical Basis Document, 2015. N. Theys et al., Global observations of tropospheric BrO columns using GOME-2 satellite data, Atmos. Chem. Phys., 11, 1791-1811, 2011.
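    The residual step of the tropospheric BrO retrieval reduces to a simple array difference once the total and stratospheric columns are available. A minimal sketch with placeholder values:

        # VCDtrop = VCDtotal - VCDstrat; values are placeholders, not data.
        import numpy as np

        vcd_total = np.array([6.1e13, 5.8e13, 7.2e13])  # molec cm-2, hypothetical
        vcd_strat = np.array([4.9e13, 5.0e13, 5.1e13])  # from the climatology
        vcd_trop = vcd_total - vcd_strat
        print(vcd_trop)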

  5. Figure12

    EPA Pesticide Factsheets

    NCL script: cmaq_ensemble_isam_4panels_subdomain.ncl
    NetCDF input file for the NCL script, containing ensemble means and standard deviation of ISAM SO4 and O3 contributions from IPM: test.nc
    Plot (ps): maps_isam_mean_std_lasthour_ipm_so4_o3_east.ps
    Plot (pdf): maps_isam_mean_std_lasthour_ipm_so4_o3_east.pdf
    Plot (ncgm): maps_isam_mean_std_lasthour_ipm_so4_o3_east.ncgm
    This dataset is associated with the following publication: Gilliam, R., C. Hogrefe, J. Godowitch, S. Napelenok, R. Mathur, and S.T. Rao. Impact of inherent meteorology uncertainty on air quality model predictions. Journal of Geophysical Research-Atmospheres, 120(23): 12,259-12,280 (2015).

  6. Latest developments for the IAGOS database: Interoperability and metadata

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Schultz, Martin; van Velthoven, Peter; Broetz, Bjoern; Rauthe-Schöch, Armin; Brissebrat, Guillaume

    2014-05-01

    In-service Aircraft for a Global Observing System (IAGOS, http://www.iagos.org) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. Users can access the data through the following web sites: http://www.iagos.fr or http://www.pole-ether.fr, as the IAGOS database is part of the French atmospheric chemistry data centre ETHER (CNES and CNRS). The database is in continuous development and improvement. In the framework of the IGAS project (IAGOS for GMES/COPERNICUS Atmospheric Service), major achievements will be reached, such as metadata and format standardisation in order to interoperate with international portals and other databases, QA/QC procedures and traceability, CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data integration within the central database, and real-time data transmission. IGAS work package 2 aims at providing the IAGOS data to users in a standardized format, including the necessary metadata and information on data processing, data quality, and uncertainties. We are currently redefining and standardizing the IAGOS metadata for interoperable use within GMES/Copernicus. The metadata are compliant with the ISO 19115, INSPIRE, and NetCDF-CF conventions. IAGOS data will be provided to users in NetCDF or NASA Ames format. We are also implementing interoperability between all the involved IAGOS data services, including the central IAGOS database, the former MOZAIC and CARIBIC databases, the DLR aircraft research database, and the Jülich WCS web application JOIN (Jülich OWS Interface), which combines model outputs with in situ data for intercomparison. The optimal data transfer protocol is being investigated to ensure interoperability. To facilitate satellite and model validation, tools will be made available for co-location and comparison with IAGOS. We will enhance the JOIN application in order to properly display aircraft data as vertical profiles and along individual flight tracks, and to allow for graphical comparison to model results that are accessible through interoperable web services, such as the daily products from the GMES/Copernicus atmospheric service.
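    As a hedged sketch of the kind of standardisation described here (not the actual IGAS format), the snippet below writes a small aircraft time series to NetCDF with CF-style attributes using the netCDF4 Python library; all names and attribute values are illustrative.

        from netCDF4 import Dataset
        import numpy as np

        with Dataset("iagos_flight_example.nc", "w") as nc:  # hypothetical file
            nc.Conventions = "CF-1.6"
            nc.title = "Example in situ ozone time series"
            nc.createDimension("time", 4)

            t = nc.createVariable("time", "f8", ("time",))
            t.standard_name = "time"
            t.units = "seconds since 2014-01-01 00:00:00"
            t[:] = np.arange(4) * 4.0

            o3 = nc.createVariable("ozone", "f4", ("time",))
            o3.standard_name = "mole_fraction_of_ozone_in_air"
            o3.units = "1e-9"                 # i.e., parts per billion
            o3[:] = [42.0, 43.5, 41.8, 44.2]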

  7. Integrating Test-Form Formatting into Automated Test Assembly

    ERIC Educational Resources Information Center

    Diao, Qi; van der Linden, Wim J.

    2013-01-01

    Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
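    A toy sketch of the underlying idea, selecting items from a bank by mixed integer programming (using the PuLP library; the data and constraints are invented for illustration, and the formatting constraints discussed in the article would enter as additional linear constraints):

        import pulp

        info = {"item1": 0.8, "item2": 0.5, "item3": 0.9, "item4": 0.4}
        prob = pulp.LpProblem("test_assembly", pulp.LpMaximize)
        x = pulp.LpVariable.dicts("select", info, cat="Binary")

        prob += pulp.lpSum(info[i] * x[i] for i in info)   # maximise information
        prob += pulp.lpSum(x[i] for i in info) == 2        # fixed test length
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print([i for i in info if x[i].value() == 1])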

  8. KEY ISSUES REVIEW: Insights from simulations of star formation

    NASA Astrophysics Data System (ADS)

    Larson, Richard B.

    2007-03-01

    Although the basic physics of star formation is classical, numerical simulations have yielded essential insights into how stars form. They show that star formation is a highly nonuniform runaway process characterized by the emergence of nearly singular peaks in density, followed by the accretional growth of embryo stars that form at these density peaks. Circumstellar discs often form from the gas being accreted by the forming stars, and accretion from these discs may be episodic, driven by gravitational instabilities or by protostellar interactions. Star-forming clouds typically develop filamentary structures, which may, along with the thermal physics, play an important role in the origin of stellar masses because of the sensitivity of filament fragmentation to temperature variations. Simulations of the formation of star clusters show that the most massive stars form by continuing accretion in the dense cluster cores, and this again is a runaway process that couples star formation and cluster formation. Star-forming clouds also tend to develop hierarchical structures, and smaller groups of forming objects tend to merge into progressively larger ones, a generic feature of self-gravitating systems that is common to star formation and galaxy formation. Because of the large range of scales and the complex dynamics involved, analytic models cannot adequately describe many aspects of star formation, and detailed numerical simulations are needed to advance our understanding of the subject. 'The purpose of computing is insight, not numbers.' Richard W Hamming, in Numerical Methods for Scientists and Engineers (1962) 'There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.' William Shakespeare, in Hamlet, Prince of Denmark (1604)

  9. 45 CFR 164.524 - Access of individuals to protected health information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... individual with access to the protected health information in the form or format requested by the individual, if it is readily producible in such form or format; or, if not, in a readable hard copy form or such other form or format as agreed to by the covered entity and the individual. (ii) The covered entity may...

  10. Pristionchus pacificus daf-16 is essential for dauer formation but dispensable for mouth form dimorphism.

    PubMed

    Ogawa, Akira; Bento, Gilberto; Bartelmes, Gabi; Dieterich, Christoph; Sommer, Ralf J

    2011-04-01

    The nematode Pristionchus pacificus shows two forms of phenotypic plasticity: dauer formation and dimorphism of mouth form morphologies. It can therefore serve as a model for studying the evolutionary mechanisms that underlie phenotypic plasticity. Formation of dauer larvae is observed in many other species and constitutes one of the most crucial survival strategies in nematodes, whereas the mouth form dimorphism is an evolutionary novelty observed only in P. pacificus and related nematodes. We have previously shown that the same environmental cues and steroid signaling control both dauer formation and mouth form dimorphism. Here, we examine by mutational analysis and whole-genome sequencing the function of P. pacificus (Ppa) daf-16, which encodes a forkhead transcription factor; in C. elegans, daf-16 is the target of insulin signaling and plays important roles in dauer formation. We found that mutations in Ppa-daf-16 cause strong dauer formation-defective phenotypes, suggesting that Ppa-daf-16 represents one of the evolutionarily conserved regulators of dauer formation. Upon strong dauer induction with lophenol, Ppa-daf-16 individuals formed arrested larvae that partially resemble wild-type dauer larvae, indicating that Ppa-daf-16 is also required for dauer morphogenesis. By contrast, regulation of mouth form dimorphism was unaffected by Ppa-daf-16 mutations and mutant animals responded normally to environmental cues. Our results suggest that mechanisms for dauer formation and mouth form regulation overlap partially, but not completely, and one of two key transcriptional regulators of the dauer regulatory network was either independently co-opted for, or subsequently lost by, the mouth form regulatory network.

  11. SAR Altimetry Processing on Demand Service for Cryosat-2 and Sentinel-3 at ESA G-Pod

    NASA Astrophysics Data System (ADS)

    Dinardo, Salvatore; Benveniste, Jérôme; Ambrózio, Américo; Restano, Marco

    2016-07-01

    The G-POD SARvatore service for the exploitation of CryoSat-2 data was designed and developed by the Altimetry Team at ESA-ESRIN EOP-SER (Earth Observation - Exploitation, Research and Development). The service, coined SARvatore (SAR Versatile Altimetric Toolkit for Ocean Research & Exploitation), is a web platform that allows any scientist to process CryoSat-2 SAR/SARin data on-line, on-demand, and with a user-selectable configuration, from L1a (FBR) data products up to SAR/SARin Level-2 geophysical data products. The processor takes advantage of the G-POD (Grid Processing On Demand) distributed computing platform (350 CPUs in ~70 working nodes) to deliver output data products in a timely manner and to interface with the ESA-ESRIN FBR data archive (155,000 SAR passes and 41,000 SARin passes). The output data products are generated in standard NetCDF format (using the CF convention) and are therefore compatible with the multi-mission Radar Altimetry Toolbox (BRAT) and other NetCDF tools. Using the G-POD graphical interface, it is straightforward to select a geographical area of interest within the time frame covered by the CryoSat-2 SAR/SARin FBR data products in the service catalogue. The processor prototype is versatile, allowing users to customize and adapt the processing to their specific requirements by setting a list of configurable options. After task submission, users can follow the status of the processing in real time, which can be lengthy due to the intense number-crunching inherent in SAR processing. From the web interface, users can choose to generate experimental SAR data products such as stack data and RIP (Range Integrated Power) waveforms. The processing service, initially developed to support the awarded development contracts by comparing their deliverables against ESA's prototype, is now made available to the worldwide SAR altimetry community for research and development experiments, for on-site demonstrations in training courses and workshops, for cross-comparison with third-party products (e.g., CLS/CNES CPP or ESA SAR COP data products), for the preparation of the Sentinel-3 Surface Topography Mission, for producing data and graphics for publications, etc. Initially, the processing was designed and optimized uniquely for open-ocean studies. It was based on the SAMOSA model developed for the Sentinel-3 ground segment using CryoSat data (Cotton et al., 2008; Ray et al., 2014). However, since June 2015, a new retracker (SAMOSA+) is offered within the service as a dedicated retracker for the coastal zone, inland water, and sea ice/ice sheets. In view of the Sentinel-3 launch, a new flavor of the service will be initiated, exclusively dedicated to the processing of Sentinel-3 mission data products. The scope of this new service will be to maximize the exploitation of the upcoming Sentinel-3 Surface Topography Mission's data over all surfaces. The service is open and free of charge (supported by the ESA SEOM Programme Element) for worldwide scientific applications and is available at https://gpod.eo.esa.int/services/CRYOSAT_SAR/

  12. Method for forming an in situ oil shale retort with horizontal free faces

    DOEpatents

    Ricketts, Thomas E.; Fernandes, Robert J.

    1983-01-01

    A method for forming a fragmented permeable mass of formation particles in an in situ oil shale retort is provided. A horizontally extending void is excavated in unfragmented formation containing oil shale and a zone of unfragmented formation is left adjacent the void. An array of explosive charges is formed in the zone of unfragmented formation. The array of explosive charges comprises rows of central explosive charges surrounded by a band of outer explosive charges which are adjacent side boundaries of the retort being formed. The powder factor of each outer explosive charge is made about equal to the powder factor of each central explosive charge. The explosive charges are detonated for explosively expanding the zone of unfragmented formation toward the void for forming the fragmented permeable mass of formation particles having a reasonably uniformly distributed void fraction in the in situ oil shale retort.

  13. A Hybrid Interview Model for Medical School Interviews: Combining Traditional and Multisampling Formats.

    PubMed

    Bibler Zaidi, Nikki L; Santen, Sally A; Purkiss, Joel A; Teener, Carol A; Gay, Steven E

    2016-11-01

    Most medical schools have either retained a traditional admissions interview or fully adopted an innovative, multisampling format (e.g., the multiple mini-interview) despite there being advantages and disadvantages associated with each format. The University of Michigan Medical School (UMMS) sought to maximize the strengths associated with both interview formats after recognizing that combining the two approaches had the potential to capture additional, unique information about an applicant. In September 2014, the UMMS implemented a hybrid interview model with six 6-minute short-form interviews (highly structured, scenario-based encounters) and two 30-minute semistructured long-form interviews. Five core skills were assessed across both interview formats. Overall, applicants and admissions committee members reported favorable reactions to the hybrid model, supporting its continued use. The generalizability coefficients for the six-station short-form and the two-interview long-form formats were estimated to be 0.470 and 0.176, respectively. Different skills were more reliably assessed by different interview formats. Scores from each format seemed to operate independently, as evidenced by moderate to low correlations (r = 0.100-0.403) for the same skills measured across different interview formats; however, after correcting for attenuation, these correlations were much higher. This hybrid model will be revised and optimized to capture the skills most reliably assessed by each format. Future analysis will examine validity by determining whether short-form and long-form interview scores accurately measure the skills intended to be assessed. Additionally, data collected from both formats will be used to establish baselines for entering students' competencies.
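    The attenuation correction mentioned above divides an observed correlation by the square root of the product of the two scores' reliabilities. A minimal sketch with illustrative numbers (not the study's data):

        import math

        def disattenuate(r_observed, rel_x, rel_y):
            # Classical correction for attenuation due to measurement error
            return r_observed / math.sqrt(rel_x * rel_y)

        print(round(disattenuate(0.30, 0.80, 0.70), 3))  # -> 0.401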

  14. Ramses-GPU: Second order MUSCL-Hancock finite volume fluid solver

    NASA Astrophysics Data System (ADS)

    Kestener, Pierre

    2017-10-01

    RamsesGPU is a reimplementation of RAMSES (ascl:1011.007) that drops the adaptive mesh refinement (AMR) features to optimize 3D uniform-grid algorithms for modern graphics processing units (GPUs), providing an efficient software package for astrophysics applications that do not need AMR features but do require a very large number of integration time steps. RamsesGPU provides a very efficient C++/CUDA/MPI software implementation of a second-order MUSCL-Hancock finite volume fluid solver for compressible hydrodynamics, as well as a magnetohydrodynamics solver based on the constrained transport technique. Other useful modules include static gravity, dissipative terms (viscosity, resistivity), and a forcing source term for turbulence studies. Special care was taken to enhance parallel input/output performance by using state-of-the-art libraries such as HDF5 and Parallel-NetCDF.
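    For illustration, here is a hedged sketch of one second-order MUSCL-Hancock step for 1-D linear advection (u_t + a u_x = 0): slope-limited reconstruction followed by a half-time-step predictor and an upwind flux. RamsesGPU itself solves compressible (magneto)hydrodynamics in C++/CUDA; this Python sketch shows only the numerical skeleton of the scheme.

        import numpy as np

        def minmod(a, b):
            # Slope limiter: smallest-magnitude slope, zero at extrema
            return np.where(a * b > 0.0,
                            np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def muscl_hancock_step(u, a, dt, dx):
            # Limited slopes from one-sided differences (periodic domain)
            slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
            # Half-step-evolved value at the right face of each cell
            u_face = u + 0.5 * slope * (1.0 - a * dt / dx)
            flux = a * u_face                      # upwind flux, assumes a > 0
            return u - dt / dx * (flux - np.roll(flux, 1))

        nx = 200
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-200.0 * (x - 0.3) ** 2)        # Gaussian pulse
        a, dx = 1.0, x[1] - x[0]
        dt = 0.8 * dx / a                          # CFL number 0.8
        for _ in range(100):
            u = muscl_hancock_step(u, a, dt, dx)
        print(float(u.max()))                      # pulse advected to x ~ 0.7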

  15. Efficiently Serving HDF5 Products via OPeNDAP

    NASA Technical Reports Server (NTRS)

    Yang, Kent

    2017-01-01

    Hyrax OPeNDAP services are widely used by Earth science data centers in NASA, NOAA, and other organizations to serve end users. In this talk, we will present some key features added to the HDF5 Hyrax OPeNDAP handler that can help data centers better serve HDF5/netCDF-4 data products. Among these new features, we will focus on the following: (1) DAP4 support; (2) memory-cache and disk-cache support that can reduce service access time; (3) an enhancement that allows swath-like HDF5 products to be visualized by CF client tools. We will also discuss in depth the role of the HDF5 handler in the recent study of the Hyrax service in a cloud environment.
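    On the client side, such a service lets the netCDF4 Python library open an OPeNDAP URL directly and fetch only the requested slice. A minimal sketch (the URL and the variable name are placeholders, not a real endpoint):

        from netCDF4 import Dataset

        url = "https://example.org/opendap/hyrax/sample_granule.h5"  # placeholder
        with Dataset(url) as nc:          # requires netCDF4 built with DAP support
            temp = nc.variables["Temperature"]   # variable name assumed
            first_scan = temp[0, :, :]           # only this subset is transferred
            print(first_scan.shape)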

  16. ICOADS: A Foundational Database with a new Release

    NASA Astrophysics Data System (ADS)

    Angel, W.; Freeman, E.; Woodruff, S. D.; Worley, S. J.; Brohan, P.; Dumenil-Gates, L.; Kent, E. C.; Smith, S. R.

    2016-02-01

    The International Comprehensive Ocean-Atmosphere Data Set (ICOADS) offers surface marine data spanning the past three centuries and is the world's largest collection of marine surface in situ observations, with approximately 300 million unique records from 1662 to the present in a common International Maritime Meteorological Archive (IMMA) format. Simple gridded monthly summary products (including netCDF) are computed for 2° latitude x 2° longitude boxes back to 1800 and 1° x 1° boxes since 1960. ICOADS observations made available in the IMMA format are taken primarily from ships (merchant, ocean research, fishing, navy, etc.) and moored and drifting buoys. Each report contains individual observations of meteorological and oceanographic variables, such as sea surface and air temperatures, winds, pressure, humidity, wet bulb, dew point, ocean waves, and cloudiness. A monthly summary for an area box includes ten statistics (e.g., mean, median, standard deviation) for 22 observed and computed variables (e.g., sea surface and air temperature, wind, pressure, humidity, cloudiness). ICOADS is the most complete and heterogeneous collection of surface marine data in existence. A major new historical update, Release 3.0 (R3.0), now in production (with availability anticipated in mid-2016), will contain a variety of important updates, including unique IDs (UIDs), new IMMA attachments, the ICOADS Value-Added Database (IVAD), and numerous new or improved historical and contemporary data sources. UIDs are assigned to each individual marine report, which greatly facilitates interaction between users and data developers and affords record traceability. A new Near-Surface Oceanographic (Nocn) attachment has been developed to include oceanographic profile elements, such as sea surface salinity, sea surface temperature, and their associated measurement depths. Additionally, IVAD provides a feedback mechanism by which data adjustments can be stored within each IMMA report. R3.0 includes near-surface ocean profile measurements from sources such as the World Ocean Database (WOD) and the Shipboard Automated Meteorological and Oceanographic System (SAMOS), as well as many others. An in-depth look at the improvements and the data inputs planned for R3.0 will be discussed.
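    As a hedged sketch of the kind of box averaging behind such monthly summaries (with synthetic data, not ICOADS records), point observations can be binned into 2° boxes and summary statistics computed per box:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(1)
        obs = pd.DataFrame({
            "lat": rng.uniform(-60, 60, 1000),
            "lon": rng.uniform(-180, 180, 1000),
            "sst": rng.normal(18.0, 5.0, 1000),   # synthetic SST in deg C
        })
        obs["lat_box"] = (obs["lat"] // 2) * 2    # 2-degree box edges
        obs["lon_box"] = (obs["lon"] // 2) * 2
        summary = obs.groupby(["lat_box", "lon_box"])["sst"].agg(
            ["mean", "median", "std", "count"])
        print(summary.head())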

  17. Open Data, Jupyter Notebooks and Geospatial Data Standards Combined - Opening up large volumes of marine and climate data to other communities

    NASA Astrophysics Data System (ADS)

    Clements, O.; Siemen, S.; Wagemann, J.

    2017-12-01

    The EU-funded EarthServer-2 project aims to offer on-demand access to large volumes of environmental data (Earth observation, marine, climate, and planetary data) via the Web Coverage Service (WCS) interface standard defined by the Open Geospatial Consortium. Providing access to data via OGC web services (e.g., WCS and WMS) has the potential to open up services to a wider audience, especially to users outside the respective communities. WCS 2.0 in particular, with its processing extension, the Web Coverage Processing Service (WCPS), is highly beneficial for making large volumes accessible to non-expert communities. Users do not have to deal with custom community data formats, such as GRIB for the meteorological community, but can directly access the data in a format they are more familiar with, such as NetCDF, JSON, or CSV. Data requests can furthermore be integrated directly into custom processing routines, and users are no longer required to download gigabytes of data. WCS supports trim (reduction of data extent) and slice (reduction of data dimension) operations on multi-dimensional data, giving users very flexible on-demand access to the data. WCPS allows the user to craft queries to run on the data using a text-based query language similar to SQL. These queries can be very powerful, e.g., condensing a three-dimensional data cube into its two-dimensional mean; however, the more complex the query, the more processing-intensive it is. As part of the EarthServer-2 project, we developed a library that helps users generate complex WCPS queries with Python, a programming language they are more familiar with. The interactive presentation aims to give practical examples of how users can benefit from two specific WCS services from the marine and climate communities. Use cases from the two communities will show different approaches to taking advantage of a Web Coverage (Processing) Service. The entire content is available as Jupyter Notebooks, which prove to be a highly beneficial tool for generating reproducible workflows for environmental data analysis.
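    A hedged sketch of what issuing a WCPS query over HTTP can look like: the query condenses a hypothetical 3-D coverage into a time-averaged result encoded as CSV. The endpoint and coverage name are placeholders; real servers differ in their URLs and axis names.

        import requests

        endpoint = "https://example.org/rasdaman/ows"        # placeholder endpoint
        query = ('for c in (example_temperature_3d) '
                 'return encode(avg(c[ansi("2000-01-01":"2000-12-31")]), "csv")')
        resp = requests.get(endpoint, params={
            "service": "WCS", "version": "2.0.1",
            "request": "ProcessCoverages", "query": query,
        })
        print(resp.status_code, resp.text[:200])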

  18. ESA BRAT (Broadview Radar Altimetry Toolbox) and GUT (GOCE User Toolbox) toolboxes

    NASA Astrophysics Data System (ADS)

    Benveniste, J.; Ambrozio, A.; Restano, M.

    2016-12-01

    The Broadview Radar Altimetry Toolbox (BRAT) is a collection of tools designed to facilitate the processing of radar altimetry data from previous and current altimetry missions, including the upcoming Sentinel-3A L1 and L2 products. A tutorial is included, providing plenty of use cases. BRAT's future release (4.0.0) is planned for September 2016. Based on community feedback, the front end has been further improved and simplified, whereas the capability to use BRAT in conjunction with MATLAB/IDL or C/C++/Python/Fortran, allowing users to obtain the desired data while bypassing the data-formatting hassle, remains unchanged. Several kinds of computations can be done within BRAT involving combinations of data fields, which can be saved for future use, either by using embedded formulas, including those from oceanographic altimetry, or by implementing ad hoc Python modules created by users to meet their needs. BRAT can also be used to quickly visualise data or to translate data into other formats, e.g., from NetCDF to raster images. The GOCE User Toolbox (GUT) is a compilation of tools for the use and analysis of GOCE gravity field models. It facilitates using, viewing, and post-processing GOCE L2 data and allows gravity field data, in conjunction and consistently with any other auxiliary data set, to be pre-processed by beginners in gravity field processing, for oceanographic and hydrologic as well as solid-earth applications at both regional and global scales. Hence, GUT facilitates the extensive use of data acquired during the GRACE and GOCE missions. In the current 3.0 version, GUT has been outfitted with a graphical user interface allowing users to visually program data-processing workflows. Further enhancements facilitating the use of gradients, anisotropic diffusive filtering, and the computation of Bouguer and isostatic gravity anomalies have been introduced. Packaged with GUT is also GUT's VCM (Variance-Covariance Matrix) tool for analysing GOCE's variance-covariance matrices. The BRAT and GUT toolboxes can be freely downloaded, along with ancillary material, at https://earth.esa.int/brat and https://earth.esa.int/gut.

  19. The BRAT and GUT Couple: Broadview Radar Altimetry and GOCE User Toolboxes

    NASA Astrophysics Data System (ADS)

    Benveniste, J.; Restano, M.; Ambrózio, A.

    2017-12-01

    The Broadview Radar Altimetry Toolbox (BRAT) is a collection of tools designed to facilitate the processing of radar altimetry data from previous and current altimetry missions, including Sentinel-3A L1 and L2 products. A tutorial is included, providing plenty of use cases. BRAT's next release (4.2.0) is planned for October 2017. Based on community feedback, the front end has been further improved and simplified, whereas the capability to use BRAT in conjunction with MATLAB/IDL or C/C++/Python/Fortran, allowing users to obtain the desired data while bypassing the data-formatting hassle, remains unchanged. Several kinds of computations can be done within BRAT involving combinations of data fields, which can be saved for future use, either by using embedded formulas, including those from oceanographic altimetry, or by implementing ad hoc Python modules created by users to meet their needs. BRAT can also be used to quickly visualise data or to translate data into other formats, e.g., from NetCDF to raster images. The GOCE User Toolbox (GUT) is a compilation of tools for the use and analysis of GOCE gravity field models. It facilitates using, viewing, and post-processing GOCE L2 data and allows gravity field data, in conjunction and consistently with any other auxiliary data set, to be pre-processed by beginners in gravity field processing, for oceanographic and hydrologic as well as solid-earth applications at both regional and global scales. Hence, GUT facilitates the extensive use of data acquired during the GRACE and GOCE missions. In the current 3.1 version, GUT has been outfitted with a graphical user interface allowing users to visually program data-processing workflows. Further enhancements facilitating the use of gradients, anisotropic diffusive filtering, and the computation of Bouguer and isostatic gravity anomalies have been introduced. Packaged with GUT is also GUT's Variance-Covariance Matrix (VCM) tool. The BRAT and GUT toolboxes can be freely downloaded, along with ancillary material, at https://earth.esa.int/brat and https://earth.esa.int/gut.

  20. "One-Stop Shopping" for Ocean Remote-Sensing and Model Data

    NASA Technical Reports Server (NTRS)

    Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook

    2006-01-01

    OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in situ, to configure and run an ocean model with observation data assimilated on a remote computer, and to visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as a MySQL database, a Java web server (Apache/Tomcat), the Live Access Server (LAS), interactive graphics with Java applets on the client side and MATLAB/GMT on the server side, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served as pre-generated plots or in their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D ocean model outputs generated by ROMS (Regional Ocean Modeling System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots of horizontal slices, depth profiles, or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, binary, etc. The interactive visualization is provided by the graphics software Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an ocean model with data assimilation on a remote computer. Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables, and submit the model to run on a back-end parallel computer. When the run is complete, the output is added to the LAS server for access and visualization.

  1. ArcGIS Framework for Scientific Data Analysis and Serving

    NASA Astrophysics Data System (ADS)

    Xu, H.; Ju, W.; Zhang, J.

    2015-12-01

    ArcGIS is a platform for managing, visualizing, analyzing, and serving geospatial data. Scientific data, as part of geospatial data, feature multiple dimensions (X, Y, time, and depth) and large volume. The multidimensional mosaic dataset (MDMD), a newly enhanced data model in ArcGIS, models multidimensional gridded data (e.g., raster or image) as a hypercube and enables ArcGIS to handle large-volume and near-real-time scientific data. Built on top of the geodatabase, the MDMD stores the dimension values and the variables (2D arrays) in a geodatabase table, which allows accessing a slice or slices of the hypercube through a simple query and supports animating changes along the time or vertical dimension using ArcGIS desktop or web clients. Through raster types, the MDMD can manage not only netCDF, GRIB, and HDF formats but also many other formats and satellite data. It is scalable and can handle large data volumes. The parallel geoprocessing engine makes data ingestion fast and easy. A raster function, the definition of a raster-processing algorithm, is a very important component of the ArcGIS platform for on-demand raster processing and analysis. Scientific data analytics is achieved through the MDMD and raster function templates, which perform on-demand scientific computation on variables ingested in the MDMD, for example, aggregating monthly averages from daily data, computing the total rainfall of a year, calculating a heat index for forecast data, and identifying fishing habitat zones. Additionally, the MDMD with its associated raster function templates can be served through ArcGIS Server as image services, which provide a framework for on-demand server-side computation and analysis, and the published services can be accessed by multiple clients such as ArcMap, ArcGIS Online, JavaScript, REST, WCS, and WMS. This presentation will focus on the MDMD model and raster-processing templates. In addition, MODIS land cover, the NDFD weather service, and the HYCOM ocean model will be used to illustrate how the ArcGIS platform and the MDMD model can facilitate scientific data visualization and analytics, and how the analysis results can be shared with a wider audience through ArcGIS Online and Portal.

  2. Star Formation in Irregular Galaxies.

    ERIC Educational Resources Information Center

    Hunter, Deidre; Wolff, Sidney

    1985-01-01

    Examines mechanisms of how stars are formed in irregular galaxies. Formation in giant irregular galaxies, formation in dwarf irregular galaxies, and comparisons with larger star-forming regions found in spiral galaxies are considered separately. (JN)

  3. 21 CFR 20.33 - Form or format of response.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Form or format of response. 20.33 Section 20.33 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PUBLIC INFORMATION General Policy § 20.33 Form or format of response. (a) The Food and Drug Administration shall make...

  4. The formation of direct collapse black holes under the influence of streaming velocities

    NASA Astrophysics Data System (ADS)

    Schauer, Anna T. P.; Regan, John; Glover, Simon C. O.; Klessen, Ralf S.

    2017-11-01

    We study the influence of a high baryonic streaming velocity on the formation of direct collapse black holes (DCBHs) with the help of cosmological simulations carried out using the moving mesh code arepo. We show that a streaming velocity that is as large as three times the root-mean-squared value is effective at suppressing the formation of H2-cooled minihaloes, while still allowing larger atomic cooling haloes (ACHs) to form. We find that enough H2 forms in the centre of these ACHs to effectively cool the gas, demonstrating that a high streaming velocity by itself cannot produce the conditions required for DCBH formation. However, we argue that high streaming velocity regions do provide an ideal environment for the formation of DCBHs in close pairs of ACHs (the `synchronized halo' model). Due to the absence of star formation in minihaloes, the gas remains chemically pristine until the ACHs form. If two such haloes form with only a small separation in time and space, then the one forming stars earlier can provide enough ultraviolet radiation to suppress H2 cooling in the other, allowing it to collapse to form a DCBH. Baryonic streaming may therefore play a crucial role in the formation of the seeds of the highest redshift quasars.

  5. Review of access, licenses and understandability of open datasets used in hydrology research

    NASA Astrophysics Data System (ADS)

    Falkenroth, Esa; Arheimer, Berit; Lagerbäck Adolphi, Emma

    2015-04-01

    The amount of open data available for hydrology research is continually growing. In the EU-funded project SWITCH-ON (Sharing Water-related Information to Tackle Changes in the Hydrosphere - for Operational Needs), we are addressing water concerns by exploring and exploiting the untapped potential of these new open data. This work is enabled by many ongoing efforts to facilitate the use of open data. For instance, a number of portals (such as the GEOSS Portal and the INSPIRE community geoportal) provide the means to search for such open data sets and open spatial data services. In general, however, the systematic use of available open data is still fairly uncommon in hydrology research. Factors that limit the (re)usability of a data set include (1) accessibility, (2) understandability, and (3) licences. If you cannot access the data set, you cannot use it for research. If you cannot understand the data set, you cannot use it for research. Finally, if you are not permitted to use the data, you cannot use it for research. Early in the project, we sent out a questionnaire to our research partners (SMHI, Universita di Bologna, University of Bristol, Technische Universiteit Delft, and Technische Universitaet Wien) to find out what data sets they were planning to use in their experiments. The result was a comprehensive list of useful open data sets. Later, this list was extended with additional information on data sets for planned commercial water-information products and services. With the list of 50 common data sets as a starting point, we reviewed issues related to access, understandability, and licence conditions. Regarding access, a majority of the data sets were available through direct internet download via a well-known transfer protocol such as FTP or HTTP. However, several data sets were found to be inaccessible due to server downtime, incorrect links, or problems with the host database management system. One possible explanation is that many data sets were assembled by research projects that are no longer funded; hence, their server infrastructure is less well maintained than that of large-scale operational services. Regarding understandability, the issues encountered were mainly due to incomplete documentation or metadata and problems with decoding binary formats. Ideally, open data sets should be represented in well-known formats and accompanied by sufficient documentation so that the data set can be understood; furthermore, machine-readable formats would be preferable. The development efforts on WaterML, NetCDF, and other standards should improve the understandability of data sets over time, but in this review only a few data sets were provided in these well-known formats. Instead, the majority of data sets were stored in various text-based or binary formats, or even document-oriented formats such as PDF. For some binary formats, we could not find information on what software was necessary to decipher the files. Other domains, such as meteorology, have long-standing traditions of operational data exchange formats, whereas hydrology research is still quite fragmented and data exchange is usually done on a case-by-case basis. With the increased sharing of open data, there is a good chance the situation will improve for data sets used in hydrology research. Finally, regarding licences, a high number of data sets did not have a clear statement of the terms of use and limitations on access. In most cases the provider could be contacted regarding licensing issues.

  6. Pelagic habitat visualization: the need for a third (and fourth) dimension: HabitatSpace

    USGS Publications Warehouse

    Beegle-Krause, C; Vance, Tiffany; Reusser, Debbie; Stuebe, David; Howlett, Eoin

    2009-01-01

    Habitat in open water is not simply a 2-D to 2.5-D surface such as the ocean bottom or the air-water interface. Rather, pelagic habitat is a 3-D volume of water that can change over time, leading us to the term habitat space. Visualization and analysis in 2-D is well supported by GIS tools, but a new tool was needed for visualization and analysis in four dimensions. Observational data (cruise profiles (xo, yo, z, to)), numerical circulation model fields (x, y, z, t), and trajectories (larval fish, 4-D lines) need to be merged together in a meaningful way for visualization and analysis. As a first step toward this new framework, Unidata's Integrated Data Viewer (IDV) has been used to create a set of tools for habitat analysis in 4-D. IDV was designed for 3-D+time geospatial data in the meteorological community. The NetCDF Java libraries allow the tool to read many file formats, including remotely located data (e.g., data available via OPeNDAP). With this project, IDV has been adapted for use in delineating habitat space for multiple fish species in the ocean. The ability to define and visualize boundaries of a water mass that meets specific biologically relevant criteria (e.g., volume, connectedness, and inter-annual variability), based on model results and observational data, will allow managers to investigate the survival of individual year classes of commercially important fisheries. Better understanding of the survival of these year classes will lead to improved forecasting of fisheries recruitment.

  7. The IAGOS information system

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Schultz, Martin; Brötz, Björn; Rauthe-Schöch, Armin; Thouret, Valérie

    2015-04-01

    IAGOS (In-service Aircraft for a Global Observing System) aims at the provision of long-term, frequent, regular, accurate, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. Data access is handled by an open access policy based on the submission of research requests, which are reviewed by the PIs. The IAGOS database (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data centre Ether (CNES and CNRS). In the framework of the IGAS project (IAGOS for Copernicus Atmospheric Service), interoperability with international portals and other databases is being implemented in order to improve IAGOS data discovery. The IGAS data network is composed of three data centres: the IAGOS database in Toulouse, including IAGOS-core data and, since January 2015, IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the MACC data centre in Jülich (http://join.iek.fz-juelich.de). The MACC (Monitoring Atmospheric Composition and Climate) project is a prominent user of the IGAS data network. In June 2015 a new version of the IAGOS database will be released, providing improved services such as downloads in NetCDF or NASA Ames formats, graphical tools (maps, scatter plots, etc.), standardized metadata (ISO 19115), and better user management. The link with the MACC data centre, through JOIN (Jülich OWS Interface), will make it possible to combine model outputs with IAGOS data for intercomparison. The interoperability within the IGAS data network, implemented through numerous web services, will improve the functionality of the web interfaces of each data centre.

  8. New Solutions for Enabling Discovery of User-Centric Virtual Data Products in NASA's Common Metadata Repository

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Gilman, J.; Baynes, K.; Shum, D.

    2015-12-01

    This talk introduces a new NASA Earth Observing System Data and Information System (EOSDIS) capability to automatically generate and maintain derived, virtual product information, allowing DAACs and data providers to create tailored and more discoverable variations of their products. After this talk the audience will be aware of the new EOSDIS Virtual Product capability, applications of it, and how to take advantage of it. Much of the data made available in EOSDIS are organized for generation and archival rather than for discovery and use. The EOSDIS Common Metadata Repository (CMR) is launching a new capability providing automated generation and maintenance of user-oriented virtual product information. DAACs can easily surface variations on established data products tailored to specific use cases and users, leveraging DAAC-exposed services such as custom ordering or access services like OPeNDAP for on-demand product generation and distribution. Virtual data products enjoy support for spatial and temporal information, keyword discovery, and association with imagery, and are fully discoverable by tools such as NASA Earthdata Search, Worldview, and Reverb. Virtual product generation has applicability across many use cases: - Describing derived products such as Surface Kinetic Temperature information (AST_08) from source products (ASTER L1A) - Providing streamlined access to data products (e.g., AIRS) containing many (>800) data variables covering an enormous variety of physical measurements - Attaching additional EOSDIS offerings such as visual metadata, external services, and documentation metadata - Publishing alternate formats for a product (e.g., netCDF for HDF products), with the actual conversion happening on request - Publishing granules to be modified by on-the-fly services, like GES DISC's Data Quality Screening Service - Publishing "bundled" products where granules from one product correspond to granules from one or more other related products

  9. EARLINET: potential operationality of a research network

    NASA Astrophysics Data System (ADS)

    Sicard, M.; D'Amico, G.; Comerón, A.; Mona, L.; Alados-Arboledas, L.; Amodeo, A.; Baars, H.; Belegante, L.; Binietoglou, I.; Bravo-Aranda, J. A.; Fernández, A. J.; Fréville, P.; García-Vizcaíno, D.; Giunta, A.; Granados-Muñoz, M. J.; Guerrero-Rascado, J. L.; Hadjimitsis, D.; Haefele, A.; Hervo, M.; Iarlori, M.; Kokkalis, P.; Lange, D.; Mamouri, R. E.; Mattis, I.; Molero, F.; Montoux, N.; Muñoz, A.; Muñoz Porcar, C.; Navas-Guzmán, F.; Nicolae, D.; Nisantzi, A.; Papagiannopoulos, N.; Papayannis, A.; Pereira, S.; Preißler, J.; Pujadas, M.; Rizi, V.; Rocadenbosch, F.; Sellegri, K.; Simeonov, V.; Tsaknakis, G.; Wagner, F.; Pappalardo, G.

    2015-07-01

    In the framework of the ACTRIS summer 2012 measurement campaign (8 June-17 July 2012), EARLINET organized and performed a controlled feasibility exercise to demonstrate its potential to perform operational, coordinated measurements and deliver products in near-real time. Eleven lidar stations participated in the exercise, which started on 9 July 2012 at 06:00 UT and ended 72 h later on 12 July at 06:00 UT. For the first time, the Single-Calculus Chain (SCC), the common calculus chain developed within EARLINET for the automatic evaluation of lidar data from raw signals up to the final products, was used. All stations sent measurements of 1 h duration to the SCC server in real time in a predefined NetCDF file format. The pre-processing of the data was performed in real time by the SCC, while the optical processing was performed in near-real time after the exercise ended. 98 and 84 % of the files sent to the SCC were successfully pre-processed and processed, respectively. These percentages are quite high taking into account that no cloud screening was performed on the lidar data. The paper shows time series of continuous and homogeneously obtained products retrieved at different levels of the SCC: range-square-corrected signals (pre-processing) and daytime backscatter and nighttime extinction coefficient profiles (optical processing), as well as combined plots of all direct and derived optical products. The derived products include backscatter- and extinction-related Ångström exponents, lidar ratios, and color ratios. The combined plots prove extremely valuable for aerosol classification. The efforts made to define the measurement protocol and to properly configure the SCC pave the way for applying this protocol to specific applications such as the monitoring of special events, atmospheric modelling, climate research, and calibration/validation activities of spaceborne observations.
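    One of the derived products named above, the backscatter-related Ångström exponent, follows directly from backscatter coefficients at two wavelengths. A minimal sketch with synthetic profile values:

        import numpy as np

        wl1, wl2 = 355.0, 532.0                       # wavelengths in nm
        beta1 = np.array([2.4e-6, 1.8e-6, 1.1e-6])    # backscatter at 355 nm
        beta2 = np.array([1.5e-6, 1.1e-6, 0.8e-6])    # backscatter at 532 nm
        angstrom = -np.log(beta1 / beta2) / np.log(wl1 / wl2)
        print(angstrom)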

  10. Promoting discovery and access to real time observations produced by regional coastal ocean observing systems

    NASA Astrophysics Data System (ADS)

    Anderson, D. M.; Snowden, D. P.; Bochenek, R.; Bickel, A.

    2015-12-01

    In U.S. coastal waters, a network of eleven regional coastal ocean observing systems supports real-time coastal and ocean observing. The platforms supported and variables acquired are diverse, ranging from current-sensing high-frequency (HF) radar to autonomous gliders. The system incorporates data produced by other networks and experimental systems, further increasing the breadth of the collection. Strategies promoted by the U.S. Integrated Ocean Observing System (IOOS) ensure these data are not lost at sea. Every data set deserves a description: ISO- and FGDC-compliant metadata enable catalog interoperability and record sharing. Extensive use of netCDF with the Climate and Forecast (CF) convention (identifying both metadata and a structured format) has proven a powerful strategy for promoting discovery, interoperability, and re-use of the data. To integrate specialized data, which are often obscure, quality control protocols are being developed to homogenize the QC and make these data easier to integrate. Data assembly centers have been established to integrate some specialized streams, including gliders, animal telemetry, and HF radar. Subsets of data that are ingested into the National Data Buoy Center are also routed to the Global Telecommunication System (GTS) of the World Meteorological Organization to ensure wide international distribution. From the GTS, data are assimilated into nowcast and forecast models, fed to other observing systems, and used to support observation-based decision making such as forecasts, warnings, and alerts. For a few years, apps were a popular way to deliver these real-time data streams to phones and tablets; responsive and adaptive web sites are an emerging, flexible strategy for providing access to the regional coastal ocean observations.

  11. Sea ice in the Baltic Sea - revisiting BASIS ice, a historical data set covering the period 1960/1961-1978/1979

    NASA Astrophysics Data System (ADS)

    Löptien, U.; Dietze, H.

    2014-06-01

    The Baltic Sea is a seasonally ice-covered, marginal sea situated in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards, and all ice information is encoded by five digits, which makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert the codes to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Form (NetCDF). Our post-processed data set will help to assess numerical ice models and provides easy-to-access, unique historical reference material for sea ice in the Baltic Sea. In addition, we provide statistics showcasing the data quality. The website www.baltic-ocean.org hosts the post-processed data and the conversion code. The data are also archived at the Data Publisher for Earth & Environmental Science PANGAEA (doi:10.1594/PANGAEA.832353).

  12. Data Container Study for Handling array-based data using Hive, Spark, MongoDB, SciDB and Rasdaman

    NASA Astrophysics Data System (ADS)

    Xu, M.; Hu, F.; Yang, J.; Yu, M.; Yang, C. P.

    2017-12-01

    Geoscience communities have come up with various big data storage solutions, such as Rasdaman and Hive, to address the grand challenges of massive Earth observation data management and processing. To examine the readiness of current solutions for supporting big Earth observation data, we propose to investigate and compare five popular data container solutions: Rasdaman, Hive, Spark, SciDB, and MongoDB. Using different types of spatial and non-spatial queries, datasets stored in common scientific data formats (e.g., NetCDF and HDF), and two applications (dust storm simulation data mining and MERRA data analytics), we systematically compare and evaluate the features and performance of these data containers in terms of data discovery and access. The computing resources (e.g., CPU, memory, hard drive, network) consumed while performing various queries and operations are monitored and recorded for the performance evaluation. The initial results show that (1) the popular data container clusters are able to handle large volumes of data, but their performance varies in different situations, and there is a trade-off between data preprocessing, disk usage, query time, and resource consumption; (2) ClimateSpark, MongoDB, and SciDB perform best among all the containers across all query tests, and Hive performs worst; (3) the studied data containers can be applied to other array-based datasets, such as high-resolution remote sensing data and model simulation data; and (4) Rasdaman's clustering configuration is more complex than the others'. A comprehensive report will detail the experimental results and compare the containers' pros and cons regarding system performance, ease of use, accessibility, scalability, compatibility, and flexibility.

  13. National Climate Assessment - Land Data Assimilation System (NCA-LDAS) Data and Services at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Rui, Hualan; Vollmer, Bruce; Teng, Bill; Jasinski, Michael; Mocko, David; Loeser, Carlee; Kempler, Steven

    2016-01-01

    The National Climate Assessment - Land Data Assimilation System (NCA-LDAS) is an integrated terrestrial water analysis and one of NASA's contributions to the NCA of the United States. The NCA-LDAS has undergone extensive development, including multi-variate assimilation of remotely sensed water states and anomalies as well as evaluation and verification studies, led by the Goddard Space Flight Center's Hydrological Sciences Laboratory (HSL). The resulting NCA-LDAS data have recently been released to the general public and include output from the Noah land-surface model (LSM) version 3.3 (Noah-3.3) and the Catchment LSM version Fortuna-2.5 (CLSM-F2.5). Standard LSM output variables, including soil moisture and temperature, surface fluxes, snow cover depth, groundwater, and runoff, are provided, as well as streamflow from a river routing system. The NCA-LDAS data are archived at and distributed by the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). The data can be accessed via HTTP, OPeNDAP, Mirador search and download, and NASA Earthdata Search. To further facilitate access and use, the NCA-LDAS data are integrated into NASA Giovanni, for quick visualization and analysis, and into the Data Rods system, for retrieval of time series over long periods. The temporal and spatial resolutions of the NCA-LDAS data are, respectively, daily averages and 0.125 x 0.125 degrees, covering North America (25N-53N; 125W-67W) and the period January 1979 to December 2015. The data files are in self-describing, machine-independent, CF-compliant netCDF-4 format.

  14. ClimateSpark: An In-memory Distributed Computing Framework for Big Climate Data Analytics

    NASA Astrophysics Data System (ADS)

    Hu, F.; Yang, C. P.; Duffy, D.; Schnase, J. L.; Li, Z.

    2016-12-01

    Massive volumes of array-based climate data are being generated from global surveillance systems and model simulations. These data are widely used to analyze environmental problems, such as climate change, natural hazards, and public health. However, extracting the underlying information from these big climate datasets is challenging due to both data- and computing-intensive issues in data processing and analysis. To tackle these challenges, this paper proposes ClimateSpark, an in-memory distributed computing framework to support big climate data processing. In ClimateSpark, a spatiotemporal index is developed to enable Apache Spark to treat array-based climate data (e.g., NetCDF4, HDF4) as native formats, stored in the Hadoop Distributed File System (HDFS) without any preprocessing. Based on the index, spatiotemporal query services are provided to retrieve data according to a defined geospatial and temporal bounding box. The queried subsets are read out and evenly partitioned across the computing nodes, where they are stored in memory as climateRDDs for processing. By leveraging Spark SQL and User Defined Functions (UDFs), climate data analysis operations can be expressed in intuitive SQL. ClimateSpark is evaluated with two use cases on the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. One use case is to conduct spatiotemporal queries and visualize the subset results as an animation; the other is to compare different climate model outputs using a Taylor-diagram service. Experimental results show that ClimateSpark can significantly accelerate data query and processing, and enables complex analysis services to be served in a SQL-style fashion.
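
    ClimateSpark's own API is not reproduced here, but the SQL-plus-UDF analysis style it describes can be sketched with stock PySpark on synthetic records; the table, column names, and unit-conversion UDF below are illustrative assumptions.

    ```python
    # Generic PySpark sketch of the SQL-with-UDF query style described
    # above; the records and names are synthetic, not ClimateSpark's API.
    from pyspark.sql import SparkSession
    from pyspark.sql.types import DoubleType

    spark = SparkSession.builder.appName("climate-sql-sketch").getOrCreate()

    # Synthetic (time, lat, lon, temperature-in-Kelvin) records.
    rows = [("2000-01-01", 30.0, -100.0, 285.2),
            ("2000-01-01", 30.5, -100.0, 284.9)]
    df = spark.createDataFrame(rows, ["time", "lat", "lon", "t_kelvin"])
    df.createOrReplaceTempView("climate")

    # Register a UDF, then run a spatiotemporal bounding-box query in SQL.
    spark.udf.register("k_to_c", lambda k: k - 273.15, DoubleType())
    spark.sql("""
        SELECT time, lat, lon, k_to_c(t_kelvin) AS t_celsius
        FROM climate
        WHERE lat BETWEEN 25 AND 53 AND lon BETWEEN -125 AND -67
    """).show()
    ```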

  15. The Earth Data Analytic Services (EDAS) Framework

    NASA Astrophysics Data System (ADS)

    Maxwell, T. P.; Duffy, D.

    2017-12-01

    Faced with unprecedented growth in earth data volume and demand, NASA has developed the Earth Data Analytic Services (EDAS) framework, a high-performance big data analytics framework built on Apache Spark. This framework enables scientists to execute data processing workflows combining common analysis operations close to the massive data stores at NASA. The data are accessed in standard formats (NetCDF, HDF, etc.) in a POSIX file system and processed using vetted earth data analysis tools (ESMF, CDAT, NCO, etc.). EDAS utilizes a dynamic caching architecture, a custom distributed array framework, and a streaming parallel in-memory workflow to efficiently process huge datasets within limited memory spaces at interactive response times. EDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using direct web service calls, a Python script, a Unix-like shell client, or a JavaScript-based web application. New analytic operations can be developed in Python, Java, or Scala (with support for other languages planned). Client packages in Python, Java/Scala, or JavaScript contain everything needed to build and submit EDAS requests. The EDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale datasets, where the data reside, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service enables decision makers to compare multiple reanalysis datasets and investigate trends, variability, and anomalies in earth system dynamics around the globe.
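
    As a rough illustration of calling such a service over plain HTTP, the sketch below issues a standard WPS GetCapabilities request with the Python requests library; the endpoint is a placeholder, and the actual EDAS operation vocabulary is defined by its own WPS API, not shown here.

    ```python
    # Hedged sketch of a direct web service call to a WPS endpoint;
    # the host below is hypothetical.
    import requests

    ENDPOINT = "https://example.nasa.gov/wps"  # hypothetical EDAS host

    # Standard WPS discovery request; an Execute request would name an
    # analytic operation and its inputs in the same key-value style.
    params = {"service": "WPS", "request": "GetCapabilities", "version": "1.0.0"}
    resp = requests.get(ENDPOINT, params=params, timeout=60)
    resp.raise_for_status()
    print(resp.text[:500])  # XML describing the offered operations
    ```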

  16. The new IAGOS Database Portal

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Fontaine, Alain

    2016-04-01

    IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims at providing long-term, regular, and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data. The IAGOS Database Portal (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). The new IAGOS Database Portal was released in December 2015. The main improvement is the implementation of interoperability with international portals and other databases in order to improve IAGOS data discovery. In the framework of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been set up. It is composed of three data centers: the IAGOS database in Toulouse; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the CAMS data center in Jülich (http://join.iek.fz-juelich.de). The CAMS (Copernicus Atmospheric Monitoring Service) project is a prominent user of the IGAS data network. The new portal provides improved and new services, such as download in NetCDF or NASA Ames formats, plotting tools (maps, time series, vertical profiles, etc.), and user management. Added-value products are available on the portal: back trajectories, origin of air masses, co-location with satellite data, etc. The link with the CAMS data center, through JOIN (Jülich OWS Interface), allows model outputs to be combined with IAGOS data for inter-comparison. Finally, IAGOS metadata have been standardized (ISO 19115) and now provide complete information about data traceability and quality.

  17. Interpretation of medical imaging data with a mobile application: a mobile digital imaging processing environment.

    PubMed

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J; Ullmann, Jeremy F P; Janke, Andrew L

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display multi-level images in three dimensions in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer realizing user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services, using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository accessible from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a virtualization tool in the neuroinformatics field to speed interpretation services.

  18. Interpretation of Medical Imaging Data with a Mobile Application: A Mobile Digital Imaging Processing Environment

    PubMed Central

    Lin, Meng Kuan; Nicolini, Oliver; Waxenegger, Harald; Galloway, Graham J.; Ullmann, Jeremy F. P.; Janke, Andrew L.

    2013-01-01

    Digital Imaging Processing (DIP) requires data extraction and output from a visualization tool to be consistent. Data handling and transmission between the server and a user is a systematic process in service interpretation. The use of integrated medical services for management and viewing of imaging data, in combination with a mobile visualization tool, can be greatly facilitated by data analysis and interpretation. This paper presents an integrated mobile application and DIP service, called M-DIP. The objectives of the system are to (1) automate the tiling, conversion, and pre-tiling of brain images from Medical Imaging NetCDF (MINC) and Neuroimaging Informatics Technology Initiative (NIFTI) to RAW formats; (2) speed up querying of imaging measurements; and (3) display multi-level images in three dimensions in real-world coordinates. In addition, M-DIP provides the ability to work on a mobile or tablet device without any software installation, using web-based protocols. M-DIP implements a three-level architecture with a relational middle-layer database, a stand-alone DIP server, and a mobile application logic layer realizing user interpretation for direct querying and communication. This imaging software can display biological imaging data at multiple zoom levels and increase its quality to meet users' expectations. Interpretation of bioimaging data is facilitated by an interface analogous to online mapping services, using real-world coordinate browsing. This allows mobile devices to display multiple datasets simultaneously from a remote site. M-DIP can be used as a measurement repository accessible from any network environment, such as a portable mobile or tablet device. In addition, this system, in combination with mobile applications, establishes a virtualization tool in the neuroinformatics field to speed interpretation services. PMID:23847587

  19. Data Standardization for Carbon Cycle Modeling: Lessons Learned

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Liu, S.; Cook, R. B.; Post, W. M.; Huntzinger, D. N.; Schwalm, C.; Schaefer, K. M.; Jacobson, A. R.; Michalak, A. M.

    2012-12-01

    Terrestrial biogeochemistry modeling is a crucial component of carbon cycle research and provides unique capabilities for understanding terrestrial ecosystems. The Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP) aims to identify key differences in model formulation that drive observed differences in model predictions of biospheric carbon exchange. To do so, the MsTMIP framework provides standardized prescribed environmental driver data and a standard model protocol to facilitate comparison of modeling results from nearly 30 teams. Model performance is then evaluated against a variety of carbon-cycle-related observations (remote sensing, atmospheric, and flux-tower-based observations) using quantitative performance measures and metrics in an integrated evaluation framework. As part of this effort, we have harmonized highly diverse and heterogeneous environmental driver data, model outputs, and observational benchmark datasets to facilitate use and analysis by the MsTMIP team. In this presentation, we will describe the lessons learned from this data-intensive carbon cycle research. The data harmonization activity itself can be made more efficient with proper tools, version control, workflow management, and collaboration across the whole team. The adoption of on-demand and interoperable protocols (e.g., OPeNDAP and Open Geospatial Consortium services) makes data visualization and distribution more flexible: users can customize and download data for a specific spatial extent, temporal period, and resolution. The effort to properly organize data in an open and standard format (e.g., Climate and Forecast (CF) compliant netCDF) allows the data to be analyzed more efficiently by a dispersed set of researchers and maximizes the longevity and utilization of the data. The lessons learned from this specific experience can benefit efforts by the broader community to leverage diverse data resources more efficiently in scientific research.
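
    As a concrete illustration of the standardization step, the sketch below writes a small CF-style NetCDF file with netCDF4-python; the dimensions, variable names, and attribute values are illustrative only, not the MsTMIP protocol itself.

    ```python
    # Minimal sketch of writing a CF-style NetCDF file; names and values
    # are illustrative, not the MsTMIP driver-data specification.
    import numpy as np
    from netCDF4 import Dataset

    with Dataset("driver_example.nc", "w", format="NETCDF4") as ds:
        ds.Conventions = "CF-1.6"
        ds.title = "Example standardized driver layer"

        ds.createDimension("lat", 180)
        ds.createDimension("lon", 360)

        lat = ds.createVariable("lat", "f4", ("lat",))
        lat.units = "degrees_north"
        lat[:] = np.linspace(-89.5, 89.5, 180)

        lon = ds.createVariable("lon", "f4", ("lon",))
        lon.units = "degrees_east"
        lon[:] = np.linspace(-179.5, 179.5, 360)

        # A data variable with CF standard_name and units attributes.
        tas = ds.createVariable("tas", "f4", ("lat", "lon"), zlib=True)
        tas.standard_name = "air_temperature"
        tas.units = "K"
        tas[:] = 288.0
    ```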

  20. Application of polar orbiter products in weather forecasting using open source tools and open standards

    NASA Astrophysics Data System (ADS)

    Plieger, Maarten; de Vreede, Ernst

    2015-04-01

    EUMETSAT disseminates data from a number of polar satellites. At KNMI these data are not fully used for operational weather forecasting, mainly because of the irregular coverage and a lack of tools for handling these different types of data and products. For weather forecasting there is much interest in the application of products from these polar orbiters. One of the key aspects is the high resolution of these products, which can complement the information provided by numerical weather forecasts. Other advantages over geostationary satellites are the better coverage at higher latitudes and the lack of parallax. Products like the VIIRS day-night band offer many possibilities for this application. This presentation describes a project that aims to make a number of polar satellite products available to the forecasting operation. The goal of the project is to enable easy and timely access to polar orbiter products and to enable combined presentations of satellite imagery with model data. The system will be able to generate RGB composites ("false colour images") for operational use. The system will be built using open source components and open standards. Pytroll components are used for data handling, reprojection, and derived product generation. For interactive presentation of imagery, the browser-based ADAGUC WMS viewer component is used. Image generation is done by ADAGUC server components, which provide OGC WMS services. Polar satellite products are stored as true color RGBA data in the NetCDF file format; the satellite swaths are stored as regular grids with their own custom geographical projection. The ADAGUC WMS system is able to reproject, render, and combine these data interactively in a web browser. Results and lessons learned will be presented at the conference.
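
    The storage scheme described, true-color RGBA composites kept as regular NetCDF grids, can be sketched as follows with netCDF4-python; the grid size and variable layout are assumptions, and the real system derives the composite with Pytroll and attaches proper projection metadata.

    ```python
    # Sketch of storing an RGBA composite as regular NetCDF grids;
    # sizes and variable names are illustrative assumptions.
    import numpy as np
    from netCDF4 import Dataset

    ny, nx = 512, 512
    rgba = np.random.randint(0, 256, size=(4, ny, nx), dtype=np.uint8)  # stand-in composite

    with Dataset("viirs_composite.nc", "w") as ds:
        ds.createDimension("y", ny)
        ds.createDimension("x", nx)
        for i, band in enumerate(("red", "green", "blue", "alpha")):
            var = ds.createVariable(band, "u1", ("y", "x"), zlib=True)
            var[:] = rgba[i]
        # A custom projection would normally be attached as a CF grid_mapping.
        ds.comment = "Illustrative RGBA swath grid; projection metadata omitted."
    ```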

  1. Operable Data Management for Ocean Observing Systems

    NASA Astrophysics Data System (ADS)

    Chavez, F. P.; Graybeal, J. B.; Godin, M. A.

    2004-12-01

    As oceanographic observing systems become more numerous and complex, data management solutions must follow. Most existing oceanographic data management systems fall into one of three categories: they have been developed as dedicated solutions, with limited application to other observing systems; they expect that data will be pre-processed into well-defined formats, such as netCDF; or they are conceived as robust, generic data management solutions, with complexity (high) and maturity and adoption rates (low) to match. Each approach has strengths and weaknesses; no approach yet fully addresses, nor takes advantage of, the sophistication of ocean observing systems as they are now conceived. In this presentation we describe critical data management requirements for advanced ocean observing systems, of the type envisioned by ORION and IOOS. By defining common requirements (functional, qualitative, and programmatic) for all such ocean observing systems, the performance and nature of the general data management solution can be characterized. Issues such as scalability, maintaining metadata relationships, data access security, visualization, and operational flexibility suggest baseline architectural characteristics, which may in turn lead to reusable components and approaches. Interoperability with other data management systems, with standards-based solutions in metadata specification and data transport protocols, and with the data management infrastructure envisioned by IOOS and ORION, can also be used to define necessary capabilities. Finally, some requirements for the software infrastructure of ocean observing systems can be inferred. Early operational results and lessons learned from the development and operation of MBARI ocean observing systems are used to illustrate key requirements, choices, and challenges. Reference systems include the Monterey Ocean Observing System (MOOS), its component software systems (Software Infrastructure and Applications for MOOS, and the Shore Side Data System), and the Autonomous Ocean Sampling Network (AOSN).

  2. Influence of Nitrogen Source on NDMA Formation during Chlorination of Diuron

    PubMed Central

    Chen, Wei-Hsiang; Young, Thomas M.

    2009-01-01

    N-Nitrosodimethylamine (NDMA) is formed during chlorination of water containing the herbicide diuron (N′-(3,4-dichlorophenyl)-N,N-dimethylurea), but formation is greatly enhanced in the presence of ammonia (chloramination). Groundwater impacted by agricultural runoff may contain diuron and relatively high total nitrogen concentrations; this study examines the impact of the nitrogen form (ammonium, nitrite, or nitrate) on NDMA formation during chlorination of such waters. NDMA formation during chlorination of diuron increased in the order nitrite < nitrate < ammonium for a given chlorine, nitrogen, and diuron dose. Formation of dichloramine seemed to fully explain enhanced NDMA formation in the presence of ammonium. Nitrate unexpectedly enhanced nitrosation of diuron derivatives to form NDMA compared to the cases of no added nitrogen or nitrite addition. Nitrite addition is less effective because it consumes more chlorine and produces intermediates that react rapidly with diuron and its aromatic byproducts. Differences between surface water and groundwater in nitrogen forms, concentrations, and disinfection approaches suggest that strategies to reduce NDMA formation should vary with the drinking water source. PMID:19457535

  3. Influence of nitrogen source on NDMA formation during chlorination of diuron.

    PubMed

    Chen, Wei-Hsiang; Young, Thomas M

    2009-07-01

    N-Nitrosodimethylamine (NDMA) is formed during chlorination of water containing the herbicide diuron (N'-(3,4-dichlorophenyl)-N,N-dimethylurea), but formation is greatly enhanced in the presence of ammonia (chloramination). Groundwater impacted by agricultural runoff may contain diuron and relatively high total nitrogen concentrations; this study examines the impact of the nitrogen form (ammonium, nitrite, or nitrate) on NDMA formation during chlorination of such waters. NDMA formation during chlorination of diuron increased in the order nitrite < nitrate < ammonium for a given chlorine, nitrogen, and diuron dose.

  4. High performance geospatial and climate data visualization using GeoJS

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Beezley, J. D.

    2015-12-01

    GeoJS (https://github.com/OpenGeoscience/geojs) is an open-source library developed to support interactive scientific and geospatial visualization of climate and earth science datasets in a web environment. GeoJS has a convenient application programming interface (API) that enables users to harness the fast performance of the WebGL and Canvas 2D APIs, together with sophisticated Scalable Vector Graphics (SVG) features, in a consistent and convenient manner. We started the project in response to the need for an open-source JavaScript library that can combine traditional geographic information systems (GIS) and scientific visualization on the web. Many libraries, some of which are open source, support mapping or other GIS capabilities, but lack the features required to visualize scientific and other geospatial datasets. For instance, such libraries are not capable of rendering climate plots from NetCDF files, and some are limited with regard to geoinformatics (infovis in a geospatial environment). While libraries such as d3.js are extremely powerful for these kinds of plots, in order to integrate them into other GIS libraries, the construction of geoinformatics visualizations must be completed manually and separately, or the code must somehow be mixed in an unintuitive way. We developed GeoJS with the following motivations:
    • To create an open-source geovisualization and GIS library that combines scientific visualization with GIS and informatics
    • To develop an extensible library that can combine data from multiple sources and render them using multiple backends
    • To build a library that works well with existing scientific visualization tools such as VTK
    We have successfully deployed GeoJS-based applications for multiple domains across various projects. The ClimatePipes project, funded by the Department of Energy, for example, used GeoJS to visualize NetCDF datasets from climate data archives. Other projects built visualizations using GeoJS for interactively exploring data and analysis regarding 1) the human trafficking domain, 2) New York City taxi drop-offs and pick-ups, and 3) the Ebola outbreak. GeoJS supports advanced visualization features such as picking and selecting, as well as clustering. It also supports 2D contour plots, vector plots, heat maps, and geospatial graphs.

  5. Improving Metadata Compliance for Earth Science Data Records

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Chang, O.; Foster, D.

    2014-12-01

    One of the recurring challenges of creating earth science data records is to ensure a consistent level of metadata compliance at the granule level, where important details of contents, provenance, producer, and data references are necessary for a sufficient level of understanding. These details are important not just for individual data consumers but also for autonomous software systems. Two of the most popular metadata standards at the granule level are the Climate and Forecast (CF) Metadata Conventions and the Attribute Conventions for Dataset Discovery (ACDD). Many data producers have implemented one or both of these models, including the Group for High Resolution Sea Surface Temperature (GHRSST) for their global SST products and the Ocean Biology Processing Group for NASA ocean color and SST products. While both the CF and ACDD models allow various levels of metadata richness, the actual "required" attributes are quite small in number. Metadata at the granule level become much more useful when recommended or optional attributes are implemented that document spatial and temporal ranges, lineage and provenance, sources, keywords, references, etc. In this presentation we report on a new open source tool to check the compliance of netCDF and HDF5 granules against the CF and ACDD metadata models. The tool, written in Python, was originally implemented to support metadata compliance for netCDF records as part of NOAA's Integrated Ocean Observing System. It outputs standardized scoring for metadata compliance for both CF and ACDD, produces an objective summary weight, and can be run against remote records via OPeNDAP calls. Originally a command-line tool, we have extended it to provide a user-friendly web interface. Reports on metadata testing are grouped in hierarchies that make it easier to track flaws and inconsistencies in the record. We have also extended it to support explicit metadata structures and semantic syntax for the GHRSST project that can be easily adapted to other satellite missions as well. Overall, we hope this tool will provide the community with a useful mechanism to improve metadata quality and consistency at the granule level by providing objective scoring and assessment, as well as encourage data producers to improve metadata quality and quantity.
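
    For illustration only (this is not the checker itself), the fragment below scores a granule against a handful of ACDD-recommended global attributes, mimicking the kind of objective scoring the tool reports; the attribute subset and file name are assumptions.

    ```python
    # Illustrative ACDD-style attribute check; not the actual compliance
    # checker, just the idea of objective per-granule scoring.
    from netCDF4 import Dataset

    RECOMMENDED = ["title", "summary", "keywords", "time_coverage_start",
                   "time_coverage_end", "geospatial_lat_min", "geospatial_lat_max"]

    def acdd_subset_score(path):
        """Return (fraction present, missing names) for a few ACDD attributes."""
        with Dataset(path) as ds:
            present = set(ds.ncattrs()) & set(RECOMMENDED)
        missing = sorted(set(RECOMMENDED) - present)
        return len(present) / len(RECOMMENDED), missing

    score, missing = acdd_subset_score("granule.nc")  # hypothetical granule
    print("ACDD (subset) score: {:.0%}; missing: {}".format(score, missing))
    ```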

  6. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of... these format requirements: (1) Computer Compatibility: IBM PC/XT/AT or Apple Macintosh; (2) Operating...

  7. The Integrated Ocean Observing System Data Assembly Center

    NASA Astrophysics Data System (ADS)

    Bouchard, R. H.; Henderson, D.; Burnett, W.; Hervey, R. V.; Crout, R.

    2008-05-01

    The Integrated Ocean Observing System (IOOS) is the U.S. contribution to the Global Ocean Observing System and the Global Earth Observing System of Systems (GEOSS). As the Integrated Ocean Observing System Data Assembly Center (IOOS DAC), the National Oceanic and Atmospheric Administration's (NOAA) National Data Buoy Center (NDBC) collects data from ocean observing systems and performs quality control on the data. Once the IOOS DAC performs the quality control, it distributes the data in real time: (1) in World Meteorological Organization alphanumeric data formats via the Global Telecommunications System (GTS), which provides instant availability to national and international users; (2) in text files via its website (http://www.ndbc.noaa.gov), which provide easy access and use; and (3) in netCDF format via its OPeNDAP/DODS server (http://dods.ndbc.noaa.gov), which provides higher-resolution data than is available in the WMO alphanumeric or text file formats. The IOOS DAC routinely checks and distributes data from about 200 NDBC stations that include meteorological and oceanographic observations from buoys and coastal stations, water-level estimations from tsunameters (DART), and climate monitoring from buoys (Tropical Atmosphere Ocean array (TAO)). The IOOS DAC operates continuously, 24 hours per day, 7 days per week. In addition to data from NDBC's platforms, the IOOS DAC applies its scientific expertise and data management and communications capabilities to facilitate partnerships for the exchange and application of data and to coordinate and leverage regional assets and resources from about 350 IOOS Partner stations. Through its quality control process, the IOOS DAC provides feedback to its partners on the quality of their observations, which can lead to improved observation quality. The NDBC-IOOS data partnerships span the Western Hemisphere, with data collection from the Beaufort Sea to the Peru Current and from the International Date Line to the central Atlantic Ocean, and include some 70 government organizations, non-government organizations, industry, and academia. Data exchange is facilitated by the IOOS DAC's capability to ingest some sensors' native formats and its own eXtensible Markup Language (XML). The IOOS DAC handles a variety of observations, among them atmospheric winds, pressure, and temperature; rainfall; directional waves; solar radiation; tides and water levels; water-quality parameters such as dissolved oxygen, turbidity, pH, and chlorophyll; and surface and subsurface currents, temperature, and salinity (conductivity), from a diverse collection of observing platforms: moored and drifting buoys, coastal stations, oil and gas platforms, and HF radar stations. The IOOS DAC's efforts made more than seven million in situ observations available in real time to the global community during 2007.
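
    As a hedged example of consuming the DAC's OPeNDAP service, the sketch below opens a served dataset with xarray; the dataset path under dods.ndbc.noaa.gov and the variable name are placeholders for whatever the server actually exposes.

    ```python
    # Hedged sketch of reading a station record over OPeNDAP; the path
    # and variable name are hypothetical.
    import xarray as xr

    URL = ("http://dods.ndbc.noaa.gov/thredds/dodsC/"
           "data/stdmet/EXAMPLE/example.nc")  # hypothetical dataset path

    ds = xr.open_dataset(URL)
    print(list(ds.data_vars))            # inspect what the server exposes
    print(ds["wind_spd"].mean().values)  # hypothetical variable name
    ```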

  8. Structural and functional features of formate hydrogen lyase, an enzyme of mixed-acid fermentation from Escherichia coli.

    PubMed

    Bagramyan, K; Trchounian, A

    2003-11-01

    Formate hydrogen lyase from Escherichia coli is a membrane-bound complex that oxidizes formic acid to carbon dioxide and molecular hydrogen. Under anaerobic growth conditions and fermentation of sugars (glucose), it exists in two forms. One form is constituted by formate dehydrogenase H and hydrogenase 3; the other comprises the same formate dehydrogenase and hydrogenase 4. The presence of small protein subunits acting as electron carriers is also probable. Other proteins may also be involved in formation of the enzyme complex, which requires the presence of metal (nickel-cobalt). Its formation also depends on the external pH and the presence of formate. Activity of both forms requires F(0)F(1)-ATPase; this explains the dependence of the complex's functioning on proton-motive force. It is also possible that the formate hydrogen lyase complex exhibits its own proton-translocating function.

  9. Understanding the operational parameters affecting NDMA formation at Advanced Water Treatment Plants.

    PubMed

    Farré, Maria José; Döderer, Katrin; Hearn, Laurence; Poussade, Yvan; Keller, Jurg; Gernjak, Wolfgang

    2011-01-30

    N-nitrosodimethylamine (NDMA) can be formed when secondary effluents are disinfected by chloramines. By means of bench-scale experiments, this paper investigates operational parameters that can help Advanced Water Treatment Plants (AWTPs) reduce the formation of NDMA during the production of high-quality recycled water. The formation of NDMA was monitored during a contact time of 24 h using dimethylamine as the NDMA model precursor and secondary effluent from wastewater treatment plants. The three chloramine disinfection strategies tested were pre-formed and in-line formed monochloramine, and pre-formed dichloramine. Although the latter is not employed on purpose in full-scale applications, it has been suggested as the main contributing chemical generating NDMA during chloramination. After 24 h, the NDMA formation decreased in both matrices tested in the order: pre-formed dichloramine > in-line formed monochloramine ≫ pre-formed monochloramine. The most important parameter to consider for the inhibition of NDMA formation was the length of contact time between disinfectant and wastewater. Formation of NDMA was initially inhibited for up to 6 h, with concentrations consistently <10 ng/L during these early stages of disinfection, regardless of the disinfection strategy. The reduction of the contact time was implemented at Bundamba AWTP (Queensland, Australia), where NDMA concentrations were reduced by a factor of 20 by optimizing the disinfection strategy. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Rapamycin-induced oligomer formation system of FRB-FKBP fusion proteins.

    PubMed

    Inobe, Tomonao; Nukina, Nobuyuki

    2016-07-01

    Most proteins form larger protein complexes and perform multiple functions in the cell. Thus, artificial regulation of protein complex formation can control the cellular functions that involve protein complexes. Although several artificial dimerization systems have already been used for numerous applications in biomedical research, cellular protein complexes form not only simple dimers but also larger oligomers. In this study, we showed that fusion proteins comprising the induced heterodimer-forming proteins FRB and FKBP formed various oligomers upon addition of rapamycin. By adjusting the configuration of the fusion proteins, we succeeded in generating an inducible tetramer formation system. Proteins of interest also formed tetramers when fused to the inducible tetramer formation system, which demonstrates its utility in a broad range of biological applications. Copyright © 2015 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  11. 8 CFR 299.4 - Reproduction of Public Use Forms by public and private entities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... font style as the body of the form. (2) The final form must match the design, format, and dimensions of... official form. The wording and punctuation of all data elements and identifying information must match exactly. No data elements may be added or deleted. The sequence and format for each item on the form must...

  12. 8 CFR 299.4 - Reproduction of Public Use Forms by public and private entities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... font style as the body of the form. (2) The final form must match the design, format, and dimensions of... official form. The wording and punctuation of all data elements and identifying information must match exactly. No data elements may be added or deleted. The sequence and format for each item on the form must...

  13. 8 CFR 299.4 - Reproduction of Public Use Forms by public and private entities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... font style as the body of the form. (2) The final form must match the design, format, and dimensions of... official form. The wording and punctuation of all data elements and identifying information must match exactly. No data elements may be added or deleted. The sequence and format for each item on the form must...

  14. 8 CFR 299.4 - Reproduction of Public Use Forms by public and private entities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... font style as the body of the form. (2) The final form must match the design, format, and dimensions of... official form. The wording and punctuation of all data elements and identifying information must match exactly. No data elements may be added or deleted. The sequence and format for each item on the form must...

  15. New Developments in the SCIAMACHY Level 2 Ground Processor Towards Version 7

    NASA Astrophysics Data System (ADS)

    Meringer, Markus; Noël, Stefan; Lichtenberg, Günter; Lerot, Christophe; Theys, Nicolas; Fehr, Thorsten; Dehn, Angelika; Liebing, Patricia; Gretschany, Sergei

    2016-07-01

    SCIAMACHY (SCanning Imaging Absorption spectroMeter for Atmospheric ChartographY) aboard ESA's environmental satellite ENVISAT observed the Earth's atmosphere in limb, nadir, and solar/lunar occultation geometries, covering the UV-visible to NIR spectral range. It is a joint project of Germany, the Netherlands, and Belgium and was launched in February 2002. SCIAMACHY doubled its originally planned in-orbit lifetime of five years before communication with ENVISAT was severed in April 2012 and the mission entered its post-operational phase. In order to preserve the best quality of the outstanding data recorded by SCIAMACHY, the data processors are still being updated. This presentation will highlight three new developments that are currently being incorporated into the forthcoming version 7 of ESA's operational level 2 processor: 1. Tropospheric BrO, a new retrieval based on the scientific algorithm of Theys et al. (2011). This algorithm had originally been developed for the GOME-2 sensor and was later adapted for SCIAMACHY. The main principle of the new algorithm is to split the BrO total columns, which are already an operational product, into stratospheric (VCD_strat) and tropospheric (VCD_trop) fractions. VCD_strat is determined from a climatological approach driven by SCIAMACHY O3 and NO2 observations. Tropospheric vertical column densities are then determined as the difference VCD_trop = VCD_total - VCD_strat. 2. Improved cloud flagging using limb measurements (Liebing, 2015). Limb cloud flags are already part of the SCIAMACHY L2 product. They are currently calculated employing the scientific algorithm developed by Eichmann et al. (2015). Clouds are categorized into four types: water, ice, polar stratospheric, and noctilucent clouds. High atmospheric aerosol loadings, however, often lead to spurious cloud flags when aerosols are misidentified as clouds. The new algorithm will better discriminate between aerosols and clouds. It will also have a higher sensitivity to thin clouds. 3. A new, future-proof file format for the level 2 product based on NetCDF. The data format will be aligned and harmonized with other missions, particularly GOME and the Sentinels. The final concept for the new format is still under discussion within the SCIAMACHY Quality Working Group. References: K.-U. Eichmann et al.: Global cloud top height retrieval using SCIAMACHY limb spectra: model studies and first results, Atmos. Meas. Tech. Discuss., 8, 8295-8352, 2015. P. Liebing: New Limb Cloud Detection Algorithm Theoretical Basis Document, 2016. N. Theys et al.: Global observations of tropospheric BrO columns using GOME-2 satellite data, Atmos. Chem. Phys., 11, 1791-1811, 2011.

  16. Providing Data Access for Interdisciplinary Research

    NASA Astrophysics Data System (ADS)

    Hooper, R. P.; Couch, A.

    2012-12-01

    Developing an interdisciplinary understanding of human and environmental interactions with water requires access to a variety of data kinds collected by various organizations. The CUAHSI Hydrologic Information System (HIS) is a standards-based, services-oriented architecture designed for time-series data, an important type of data in water studies. Through the efforts of HIS, a standard transmission language, WaterML2, has been adopted by the Open Geospatial Consortium and is under consideration by the World Meteorological Organization as an international standard. Web services have also been developed to retrieve data and metadata. HIS is complemented by a metadata catalog, hosted by the San Diego Supercomputing Center, which indexes more than 20 million time series provided by over 90 different services. This catalog is supported through a hierarchically organized controlled vocabulary that is open for community input and mediation. Data publishers include federal agencies, universities, state agencies, and non-profit organizations such as watershed associations. Accessing data from such a broad spectrum of sources through a uniform service standard promises to truly transform the way in which hydrologic research is done. CUAHSI HIS is a large-scale prototype at this time, but a proposal is under consideration by the National Science Foundation to operationalize HIS through a data facility, tentatively called the CUAHSI Water Data Center. Establishing HIS is an important step toward enabling research into human-environment interactions with water, but it is only one step. Other data structures will need to be made accessible and interoperable to support this research. Some data, such as two-dimensional GIS coverages, already have widely used standards for transmission and sharing. The US Federal government has long operated a clearinghouse for federal geographic data that is now being augmented with other services such as ArcGIS Online. Other data, such as gridded data, have standard storage formats (e.g., netCDF), but their native form is not convenient for water research. Some progress has been made to "transpose" these data sets from gridded data to a grid of virtual gages with time series; such a format is more convenient for research of limited spatial extent through time. Advances in relational database structure now make it possible to serve very large data sets, such as radar-based precipitation grids, through HIS. Expanding the use of a standards-based, services-oriented architecture will enable interdisciplinary research to proceed far more rapidly by putting data onto scientists' computers with a fraction of the effort previously required.
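
    The grid-to-virtual-gage transposition mentioned above can be sketched with xarray: pick the grid cell nearest a site of interest and keep all time steps as a single series. The file, variable, and coordinate names below are hypothetical.

    ```python
    # Sketch of turning a gridded product into one "virtual gage" time
    # series; names are hypothetical.
    import xarray as xr

    ds = xr.open_dataset("precip_radar_grid.nc")  # hypothetical gridded product

    # One virtual gage: the grid cell nearest the site, all time steps.
    gage = ds["precip"].sel(lat=35.96, lon=-79.05, method="nearest")
    gage.to_dataframe().to_csv("virtual_gage.csv")
    ```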

  17. The effectiveness of formative assessment with understanding by design (UbD) stages in forming habits of mind in prospective teachers

    NASA Astrophysics Data System (ADS)

    Gloria, R. Y.; Sudarmin, S.; Wiyanto; Indriyanti, D. R.

    2018-03-01

    Habits of mind are intelligent thinking dispositions that every individual needs to have, and effort is needed to form them as expected. A behavior can be formed by continuous practice; therefore, students' habits of mind can also be formed and trained. One effort that can encourage the formation of habits of mind is a formative assessment strategy with UbD (Understanding by Design) stages, and a study is needed to prove it. This study aims to determine the contribution of formative assessment to the habits of mind of prospective teachers. The method used is quantitative with a quasi-experimental design. To determine the effectiveness of formative assessment with UbD stages on the formation of habits of mind, a correlation test and regression analysis were conducted on a formative assessment questionnaire consisting of three components (feedback, peer assessment, and self assessment) and on habits of mind. The results show that of the three components of formative assessment, only the feedback component does not correlate with students' habits of mind (r = 0.323), while the peer assessment (r = 0.732) and self assessment (r = 0.625) components both do. In the regression test, the components of the formative assessment together contributed 57.1% to habits of mind. It can be concluded that formative assessment with UbD stages is effective and contributes to forming students' habits of mind; the components that contributed most are peer assessment and self assessment, and the greatest contribution goes to the thinking-interdependently category.

  18. A Systematic Survey of Star Formation with the ORION MIDEX Mission

    NASA Astrophysics Data System (ADS)

    Scowen, P.; Morse, J.; Beasley, M.; Hester, J.; Windhorst, R.; Desch, S.; Jansen, R.; Calzetti, D.; Padgett, D.; Hartigan, P.; Oey, S.; Bally, J.; Gallagher, J.; O'Connell, R.; Kennicutt, R.; Lauer, T.

    2004-05-01

    The ORION MIDEX mission is a 1.2m UV-visual observatory orbiting at L2 that will conduct the first-ever high spatial resolution survey of a statistically significant sample of visible star-forming environments in the Solar neighborhood in emission lines and continuum. This survey will be used to characterize the star and planet forming environments within 2.5 kpc of the Sun, infer global properties and star formation history in these regions, understand how the environment influences the process of star and planet formation, and develop a classification scheme for star forming regions incorporating the earlier results. Based on these findings we will then conduct a similar high spatial resolution survey of large portions of the Magellanic Clouds, applying the classification scheme from local star forming environments to analogous regions in nearby galaxies, extending the classification scheme to regions that do not have nearby analogs but are common in external galaxies. The results from the local survey will allow us to infer characteristics of low mass star forming environments in the Magellanic Clouds, study the spatial distribution of star forming environments and analyze stellar population photometry to trace star formation history. Finally we will image a representative sample of external galaxies using the same filters used to characterize nearby star formation regions. We will map the distribution of star forming region type as a function of galactic environment for galaxies out to 5 Mpc to infer the distribution and history of low-mass star formation over galactic scales, characterize the stellar content and star formation history of galaxies, and relate these results to the current star forming environments in these galaxies. Ultimately we intend to use these diagnostics to extrapolate to star formation environments in the higher redshift Universe. We will also present an update on the technology development, project planning and operations for the proposed mission.

  19. Space-based Observations of Star Formation using ORION: THE MIDEX

    NASA Astrophysics Data System (ADS)

    Scowen, P.; Morse, J.; Beasley, M.; Hester, J.; Windhorst, R.; Jansen, R.; Lauer, T.; Danielson, E.; Sepulveda, C.; Olarte, G.; ORION MIDEX Science Team

    2003-12-01

    The ORION MIDEX mission is a 1.2m UV-visual observatory orbiting at L2 that will conduct the first-ever high spatial resolution survey of a statistically significant sample of visible star-forming environments in the Solar neighborhood in emission lines and continuum. This survey will be used to characterize the star and planet forming environments within 2.5 kpc of the Sun, infer global properties and star formation history in these regions, understand how the environment influences the process of star and planet formation, and develop a classification scheme for star forming regions incorporating the earlier results. Based on these findings we will then conduct a similar high spatial resolution survey of large portions of the Magellanic Clouds, applying the classification scheme from local star forming environments to analogous regions in nearby galaxies, extending the classification scheme to regions that do not have nearby analogs but are common in external galaxies. The results from the local survey will allow us to infer characteristics of low mass star forming environments in the Magellanic Clouds, study the spatial distribution of star forming environments and analyze stellar population photometry to trace star formation history. Finally we will image a representative sample of external galaxies using the same filters used to characterize nearby star formation regions. We will map the distribution of star forming region type as a function of galactic environment for galaxies out to 5 Mpc to infer the distribution and history of low-mass star formation over galactic scales, characterize the stellar content and star formation history of galaxies, and relate these results to the current star forming environments in these galaxies. Ultimately we intend to use these diagnostics to extrapolate to star formation environments in the higher redshift Universe. We will also present details on technology development, project planning and operations for the proposed mission.

  20. ORION: Hierarchical Space-based Observations of Star Formation, From Near to Far

    NASA Astrophysics Data System (ADS)

    Scowen, P. A.; Morse, J. A.; Beasley, M.; Veach, T.; ORION Science Team

    2005-12-01

    The ORION MIDEX mission is a 1.2m UV-visual observatory orbiting at L2 that will conduct the first-ever high spatial resolution survey of a statistically significant sample of visible star-forming environments in the Solar neighborhood in emission lines and continuum. This survey will be used to characterize the star and planet forming environments within 2.5 kpc of the Sun, infer global properties and star formation history in these regions, understand how the environment influences the process of star and planet formation, and develop a classification scheme for star forming regions incorporating the earlier results. Based on these findings we will then conduct a similar high spatial resolution survey of large portions of the Magellanic Clouds, applying the classification scheme from local star forming environments to analogous regions in nearby galaxies, extending the classification scheme to regions that do not have nearby analogs but are common in external galaxies. The results from the local survey will allow us to infer characteristics of low mass star forming environments in the Magellanic Clouds, study the spatial distribution of star forming environments and analyze stellar population photometry to trace star formation history. Finally we will image a representative sample of external galaxies using the same filters used to characterize nearby star formation regions. We will map the distribution of star forming region type as a function of galactic environment for galaxies out to 5 Mpc to infer the distribution and history of low-mass star formation over galactic scales, characterize the stellar content and star formation history of galaxies, and relate these results to the current star forming environments in these galaxies. Ultimately we intend to use these diagnostics to extrapolate to star formation environments in the higher redshift Universe. We will also present details on technology development, project planning and operations for the proposed mission.

  1. A Systematic Survey of Star Formation with the ORION MIDEX Mission

    NASA Astrophysics Data System (ADS)

    Scowen, P.; Morse, J.; Beasley, M.; Hester, J.; Windhorst, R.; Desch, S.; Jansen, R.; Calzetti, D.; Padgett, D.; Hartigan, P.; Oey, S.; Bally, J.; Gallagher, J.; O'Connell, R.; Kennicutt, R.; Lauer, T.; McCaughrean, M.

    2004-12-01

    The ORION MIDEX mission is a 1.2m UV-visual observatory orbiting at L2 that will conduct the first-ever high spatial resolution survey of a statistically significant sample of visible star-forming environments in the Solar neighborhood in emission lines and continuum. This survey will be used to characterize the star and planet forming environments within 2.5 kpc of the Sun, infer global properties and star formation history in these regions, understand how the environment influences the process of star and planet formation, and develop a classification scheme for star forming regions incorporating the earlier results. Based on these findings we will then conduct a similar high spatial resolution survey of large portions of the Magellanic Clouds, applying the classification scheme from local star forming environments to analogous regions in nearby galaxies, extending the classification scheme to regions that do not have nearby analogs but are common in external galaxies. The results from the local survey will allow us to infer characteristics of low mass star forming environments in the Magellanic Clouds, study the spatial distribution of star forming environments and analyze stellar population photometry to trace star formation history. Finally we will image a representative sample of external galaxies using the same filters used to characterize nearby star formation regions. We will map the distribution of star forming region type as a function of galactic environment for galaxies out to 5 Mpc to infer the distribution and history of low-mass star formation over galactic scales, characterize the stellar content and star formation history of galaxies, and relate these results to the current star forming environments in these galaxies. Ultimately we intend to use these diagnostics to extrapolate to star formation environments in the higher redshift Universe. We will also present an update on the technology development, project planning and operations for the proposed mission.

  2. Changing knowledge perspective in a changing world: The Adriatic multidisciplinary TDS approach

    NASA Astrophysics Data System (ADS)

    Bergamasco, Andrea; Carniel, Sandro; Nativi, Stefano; Signell, Richard P.; Benetazzo, Alvise; Falcieri, Francesco M.; Bonaldo, Davide; Minuzzo, Tiziano; Sclavo, Mauro

    2013-04-01

    The use and exploitation of the marine environment has increased markedly in recent years, calling for better description, monitoring, and understanding of its behavior. However, marine scientists and managers often spend too much time accessing and reformatting data instead of focusing on discovering new knowledge from the processes observed and the data acquired. There is therefore a need to make our approach to data mining more efficient, especially in a world where rapid climate change demands rapid choices. In this context, it is mandatory to explore ways of making large amounts of distributed data usable in an efficient and easy way, an effort that requires standardized data protocols, web services, and standards-based tools. Following the US-IOOS approach, which has been adopted in many oceanographic and meteorological sectors, we present a CNR experience toward setting up a national Italian IOOS framework (at the moment confined to the Adriatic Sea environment), using the THREDDS (THematic Real-time Environmental Distributed Data Services) Data Server (TDS). The TDS is middleware designed to fill the gap between data providers and data users; it provides services that allow data users to find the data sets pertaining to their scientific needs and to access, visualize, and use them easily, without downloading files to a local workspace. To achieve this, data providers must make their data available in a standard form that the TDS understands, with sufficient metadata so that the data can be read and searched in a standard way. The TDS core is the NetCDF-Java library, which implements the Common Data Model (CDM) developed by Unidata (http://www.unidata.ucar.edu) and allows access to "array-based" scientific data. Climate and Forecast (CF) compliant NetCDF files can be read directly with no modification, while non-compliant files can be modified to meet appropriate metadata requirements. Once standardized in the CDM, the TDS makes datasets available through a series of web services such as OPeNDAP or the Open Geospatial Consortium Web Coverage Service (WCS), allowing data users to easily obtain small subsets from large datasets and to quickly visualize their content using tools such as GODIVA2 or the Integrated Data Viewer (IDV). In addition, an ISO metadata service is available through the TDS that can be harvested by catalogue broker services (e.g., GI-cat) to enable distributed search across federated data servers. Examples of TDS datasets of oceanographic variables (currents, waves, sediments, etc.) will be described and discussed; some can be accessed directly at the Venice site http://tds.ve.ismar.cnr.it:8080/thredds/catalog.html (Bergamasco et al., 2012), also within the framework of the RITMARE Project. References: Bergamasco A., Benetazzo A., Carniel S., Falcieri F., Minuzzo T., Signell R.P. and M. Sclavo, 2012. From interoperability to knowledge discovery using large model datasets in the marine environment: the THREDDS Data Server example. Advances in Oceanography and Limnology, 3(1), 41-50. DOI:10.1080/19475721.2012.669637
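
    Dataset discovery against a TDS can be done by parsing the catalog XML directly. The sketch below, using the Python requests library and the standard library's XML parser, lists dataset names from the Venice catalog URL given above; the server's actual catalog layout is not assumed beyond the standard THREDDS schema.

    ```python
    # Minimal THREDDS catalog crawl: list <dataset> entries from a TDS.
    import requests
    import xml.etree.ElementTree as ET

    CATALOG = "http://tds.ve.ismar.cnr.it:8080/thredds/catalog.xml"
    THREDDS_NS = "http://www.unidata.ucar.edu/namespaces/thredds/InvCatalog/v1.0"

    xml_bytes = requests.get(CATALOG, timeout=60).content
    root = ET.fromstring(xml_bytes)

    # Walk every dataset element and print its name and (optional) urlPath.
    for ds in root.iter("{%s}dataset" % THREDDS_NS):
        print(ds.get("name"), "->", ds.get("urlPath"))
    ```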

  3. Fluorine-containing composition for forming anti-reflection film on resist surface and pattern formation method

    DOEpatents

    Nishi, Mineo; Makishima, Hideo

    1996-01-01

    A composition for forming anti-reflection film on resist surface which comprises an aqueous solution of a water soluble fluorine compound, and a pattern formation method which comprises the steps of coating a photoresist composition on a substrate; coating the above-mentioned composition for forming anti-reflection film; exposing the coated film to form a specific pattern; and developing the photoresist, are provided. Since the composition for forming anti-reflection film can be coated on the photoresist in the form of an aqueous solution, not only the anti-reflection film can be formed easily, but also, the film can be removed easily by rinsing with water or alkali development. Therefore, by the pattern formation method according to the present invention, it is possible to form a pattern easily with a high dimensional accuracy.

  4. How Does Dense Molecular Gas Contribute to Star Formation in the Starburst Galaxy NGC 2146?

    NASA Astrophysics Data System (ADS)

    Wofford, Alia

    2017-01-01

    The starburst galaxy NGC 2146 is believed to have formed approximately 800 Myr ago, when two galaxies collided with each other, possibly triggering a burst of star formation. NGC 2146 is classified as a starburst galaxy for the high rate of star formation occurring in its molecular clouds. These clouds serve as nurseries for star formation. Hydrogen cyanide (HCN) and carbon monoxide (CO) are molecules found in molecular gas clouds. HCN is a tracer of high-density star-forming gas, whereas CO is a tracer of low-density star-forming gas. In this project, we observe these two molecules and their proximity to where stars are forming in the galaxy to determine whether star formation occurs in the same areas as the high- and low-density molecular gas in the starburst galaxy NGC 2146.

  5. Open data used in water sciences - Review of access, licenses and understandability

    NASA Astrophysics Data System (ADS)

    Falkenroth, Esa; Lagerbäck Adolphi, Emma; Arheimer, Berit

    2016-04-01

    The amount of open data available for hydrology research is continually growing. In the EU-funded project SWITCH-ON (Sharing Water-related Information to Tackle Changes in the Hydrosphere - for Operational Needs: www.water-switch-on.eu), we are addressing water concerns by exploring and exploiting the untapped potential of these new open data. This work is enabled by many ongoing efforts to facilitate the use of open data. For instance, a number of portals provide the means to search for open data sets and open spatial data services (such as the GEOSS Portal, the INSPIRE community geoportal, or various Climate Services and public portals). However, in general, many research groups in water sciences still hesitate to use open data. We therefore examined some limiting factors. Factors that limit the usability of a dataset include (1) accessibility, (2) understandability, and (3) licenses. In the SWITCH-ON project we have developed a search tool for finding and accessing data with relevance to water science in Europe, as the existing ones do not specifically address data needs in water sciences. The tool is filled with some 9,000 metadata records, each linked to water-related keywords. The keywords are based on those developed within the CUAHSI community in the USA, but extended with non-hydrosphere topics and additional subclasses, and only keywords that actually have data are shown. Access to data sets: 78% of the data is directly accessible, while the rest is either available after registration and request, or through a web client for visualization but without direct download. However, several data sets were found to be inaccessible due to server downtime, incorrect links, or problems with the host database management system. One possible explanation is that many datasets were assembled by research projects that are no longer funded; hence, their server infrastructure is less well maintained than large-scale operational services. Understandability of the data sets: 13 major formats were found, but the major issues encountered were due to incomplete documentation or metadata and problems with decoding binary formats. Ideally, open data sets should be represented in well-known formats and accompanied by sufficient documentation for the data set to be understood. The development efforts on WaterML and NetCDF and other standards could improve the understandability of data sets over time, but in this review only a few data sets were provided in these formats. Instead, the majority of datasets were stored in various text-based or binary formats, or even document-oriented formats such as PDF. Other disciplines, such as meteorology, have long-standing traditions of operational data exchange formats, whereas hydrology research is still quite fragmented and data exchange is usually done on a case-by-case basis. With the increased sharing of open data, there is a good chance the situation will improve for data sets used in water sciences. License issues: only 3% of the data is completely free to use, while 57% can be used for non-commercial purposes or research. A high number of datasets did not have a clear statement on terms of use and limitations of access. In most cases the provider could be contacted regarding licensing issues.

  6. Integrated Ocean Profile Data Delivery for Operations and Climate Research

    NASA Astrophysics Data System (ADS)

    Sun, C. L.; Soreide, N. N.

    2006-12-01

    An end-to-end data and information system for delivering integrated real-time and historical datasets is presented in this paper. The purposes of this paper are (1) to illustrate the procedures for quality controlling and loading ocean profile data into the U.S. National Oceanographic Data Center (NODC) ocean database and (2) to facilitate the development and provision of a wide variety of useful data, analyses, and information products for operations and climate research. The NODC currently focuses on acquiring, processing, and distributing ocean profile data collected by two operational global ocean observing systems: the Argo Profiling Network and the Global Temperature-Salinity Profile Program (GTSPP). The two data streams contain upper-ocean temperature and salinity data mainly from profiling floats and expendable bathythermographs (XBTs), but also from conductivity-temperature-depth sensors (CTDs) and bottles. Argo has used resources from some 23 countries to make unprecedented in-situ observations of the global ocean. All Argo data are publicly available in near real-time via the Global Telecommunications System (GTS) and, with a few months' delay, in scientifically quality-controlled form. The NODC operates the Global Argo Data Repository for long-term archiving of Argo data and serves the data in the NODC version of the Argo netCDF format and in tab-delimited spreadsheet text format through the NODC Web site at http://www.nodc.noaa.gov/argo/. The GTSPP is a cooperative international program. It maintains a global ocean temperature-salinity resource with data that are both up to date and of the highest possible quality. Both real-time data transmitted over the GTS and delayed-mode data received from contributing countries are acquired and quality controlled by the Marine Environmental Data Service, Canada, and are eventually incorporated into a continuously managed database maintained by the NODC. Information and data are made publicly available at http://www.nodc.noaa.gov/GTSPP/. Web-based tools allow users to query and subset the data by parameter, location, time, and other attributes such as instrument type and quality flags. Desktop applications capable of exploring real-time data streams and integrating them with archives across the Internet are available for users with high-bandwidth Internet connections. Alternatively, users without high-speed network access can order CD/DVD-ROMs from the NODC containing the integrated dataset and then use software over a potentially low-bandwidth connection to periodically update the CD/DVD-ROM-based archive with new data.
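
    The query-and-subset functionality described above is also easy to reproduce client-side once the data are in netCDF. A minimal sketch with the xarray package (the file, coordinate, and variable names here are hypothetical; the actual Argo/GTSPP products define their own conventions):

      import xarray as xr

      ds = xr.open_dataset("gtspp_profiles.nc")  # hypothetical local copy

      # Subset by time window, then by a geographic box, as the Web tools do
      subset = ds.sel(time=slice("2006-01-01", "2006-06-30"))
      subset = subset.where(
          (subset.latitude > -10) & (subset.latitude < 10) &
          (subset.longitude > 140) & (subset.longitude < 180),
          drop=True,
      )
      subset.to_netcdf("equatorial_pacific_2006H1.nc")  # save the slice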

  7. [Formation of microbial biofilms in causative agents of acute and chronic pyelonephritis].

    PubMed

    Lagun, L V; Atanasova, Iu V; Tapal'skiĭ, D V

    2013-01-01

    We studied the intensity of formation of microbial biofilms by Pseudomonas aeruginosa, Escherichia coli, Klebsiella pneumoniae, and Staphylococcus aureus strains isolated during various forms of pyelonephritis. 150 clinical isolates of microorganisms isolated from the urine of patients with acute and chronic pyelonephritis were included in the study. The intensity of film formation was determined by staining the formed biofilms with crystal violet, followed by extraction of the dye and measurement of its concentration in the washout solution. Among the causative agents of pyelonephritis, P. aeruginosa isolates had the greatest film-forming ability; the intensity of biofilm formation of these isolates was 2-3 times higher than that of the staphylococcus and enterobacteria strains. Strains isolated from patients with chronic pyelonephritis significantly surpassed strains from patients with acute pyelonephritis in their ability to form biofilms. A higher ability to form microbial biofilms was noted for causative agents of pyelonephritis progressing against a background of urolithiasis. The ability to form biofilms is determined both by the species of the causative agent and by the character of the infectious process in which the microorganism participates. Intensive formation of biofilms by E. coli, P. aeruginosa, K. pneumoniae, and S. aureus clinical isolates may be an important factor in urinary tract infections becoming chronic.

  8. Sulfur barrier for use with in situ processes for treating formations

    DOEpatents

    Vinegar, Harold J [Bellaire, TX; Christensen, Del Scot [Friendswood, TX

    2009-12-15

    Methods for forming a barrier around at least a portion of a treatment area in a subsurface formation are described herein. Sulfur may be introduced into one or more wellbores located inside a perimeter of a treatment area in the formation having a permeability of at least 0.1 darcy. At least some of the sulfur is allowed to move towards portions of the formation cooler than the melting point of sulfur to solidify the sulfur in the formation to form the barrier.

  9. ANTIBIOFILM EFFECTS of Citrus limonum and Zingiber officinale Oils on BIOFILM FORMATION of Klebsiella ornithinolytica, Klebsiella oxytoca and Klebsiella terrigena SPECIES.

    PubMed

    Avcioglu, Nermin Hande; Sahal, Gulcan; Bilkay, Isil Seyis

    2016-01-01

    Microbial cells growing in biofilms play a major role in the spread of antimicrobial resistance. In this study we determined the biofilm formation of Klebsiella strains belonging to 3 different Klebsiella species (K. ornithinolytica, K. oxytoca and K. terrigena), the effect of co-occurrence on the amount of biofilm formed, and the anti-biofilm effects of Citrus limonum and Zingiber officinale essential oils on biofilm formation by the strongest biofilm-forming K. ornithinolytica, K. oxytoca and K. terrigena strains. 57% of K. ornithinolytica strains and 50% of K. oxytoca strains were found to be Strong Biofilm Forming (SBF), whereas no SBF strain was found among K. terrigena. In addition, urine and sperm were the most frequent clinical materials for isolation of strong biofilm-forming K. ornithinolytica and K. oxytoca, respectively (63%; 100%). Secondly, all K. ornithinolytica strains isolated from the surgical intensive care unit and all K. oxytoca strains isolated from urology service units were found to be SBF. Apart from this, although the amounts of biofilm formed by co-occurrence of K. ornithinolytica - K. oxytoca and of K. oxytoca - K. terrigena were greater than the amounts formed by each species separately, the amount formed by co-occurrence of K. ornithinolytica - K. terrigena was lower than that of K. ornithinolytica alone but higher than that of K. terrigena alone. The anti-biofilm effects of Citrus limonum and Zingiber officinale essential oils could be used against biofilm-associated Klebsiella infections.

  10. ANTIBIOFILM EFFECTS of Citrus limonum and Zingiber officinale Oils on BIOFILM FORMATION of Klebsiella ornithinolytica, Klebsiella oxytoca and Klebsiella terrigena SPECIES

    PubMed Central

    Avcioglu, Nermin Hande; Sahal, Gulcan; Bilkay, Isil Seyis

    2016-01-01

    Background: Microbial cells growing in biofilms play a major role in the spread of antimicrobial resistance. In this study, the biofilm formation of Klebsiella strains belonging to 3 different Klebsiella species (K. ornithinolytica, K. oxytoca and K. terrigena) and the effect of co-occurrence on the amount of biofilm formed were determined. Materials and Methods: The anti-biofilm effects of Citrus limonum and Zingiber officinale essential oils on biofilm formation by the strongest biofilm-forming K. ornithinolytica, K. oxytoca and K. terrigena strains were investigated. Results: 57% of K. ornithinolytica strains and 50% of K. oxytoca strains were found to be Strong Biofilm Forming (SBF), whereas no SBF strain was found among K. terrigena. In addition, urine and sperm were the most frequent clinical materials for isolation of strong biofilm-forming K. ornithinolytica and K. oxytoca, respectively (63%; 100%). Secondly, all K. ornithinolytica strains isolated from the surgical intensive care unit and all K. oxytoca strains isolated from urology service units were found to be SBF. Apart from this, although the amounts of biofilm formed by co-occurrence of K. ornithinolytica - K. oxytoca and of K. oxytoca - K. terrigena were greater than the amounts formed by each species separately, the amount formed by co-occurrence of K. ornithinolytica - K. terrigena was lower than that of K. ornithinolytica alone but higher than that of K. terrigena alone. Conclusion: The anti-biofilm effects of Citrus limonum and Zingiber officinale essential oils could be used against biofilm-associated Klebsiella infections. PMID:28480361

  11. Device for temporarily closing duct-formers in well completion apparatus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zandmer, H.M.; Zandmer, S.M.

    A duct-forming device is disclosed for use in a well completion apparatus of the kind wherein a bore hole casing is positioned in a bore hole and duct-forming devices of alkali- and acid-resistant metal, such as steel, are secured at spaced levels to the casing in alignment with holes machined in the casing wall. In accordance with the invention, a closure device is arranged within the duct-forming device which permits flow of predetermined amounts of liquid, such as acid, from the interior of the casing through the duct-forming device and into the producing formation, while gradually being moved by the liquid into a position in which such fluid flow is prevented. After the fluid flow has been stopped by the closure device, and when the formation pressure exceeds the pressure within the duct-forming device and the casing, fluid from the formation then forces the closure device toward and into the casing space to permit thereafter free flow of formation fluid into the duct-forming device and the casing, or of pressurized treatment liquid from the casing into the formation. The inventive arrangement permits, inter alia, the establishment of a sufficient and substantially uniform feeding rate of treatment liquid, such as acid, from the casing into the producing formation through all the duct-formers in preparation for subsequent acidification or other treatments, such as sand fracking.

  12. Method for explosive expansion toward horizontal free faces for forming an in situ oil shale retort

    DOEpatents

    Ricketts, Thomas E.

    1980-01-01

    Formation is excavated from within a retort site in formation containing oil shale for forming a plurality of vertically spaced apart voids extending horizontally across different levels of the retort site, leaving a separate zone of unfragmented formation between each pair of adjacent voids. Explosive is placed in each zone, and such explosive is detonated in a single round for forming an in situ retort containing a fragmented permeable mass of formation particles containing oil shale. The same amount of formation is explosively expanded upwardly and downwardly toward each void. A horizontal void excavated at a production level has a smaller horizontal cross-sectional area than a void excavated at a lower level of the retort site immediately above the production level void. Explosive in a first group of vertical blast holes is detonated for explosively expanding formation downwardly toward the lower void, and explosive in a second group of vertical blast holes is detonated in the same round for explosively expanding formation upwardly toward the lower void and downwardly toward the production level void for forming a generally T-shaped bottom of the fragmented mass.

  13. 12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...

  14. 12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...

  15. 12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...

  16. 12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...

  17. 17 CFR 145.7 - Requests for Commission records and copies thereof.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... (including electronic formats) of the response. The Commission will accommodate requesters as to form or... or format of the response, the Commission will respond in the form or format in which the document is... requests/oral requests. (1) The Commission cannot assure that a timely or satisfactory response will be...

  18. 17 CFR 145.7 - Requests for Commission records and copies thereof.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... (including electronic formats) of the response. The Commission will accommodate requesters as to form or... or format of the response, the Commission will respond in the form or format in which the document is... requests/oral requests. (1) The Commission cannot assure that a timely or satisfactory response will be...

  19. 17 CFR 145.7 - Requests for Commission records and copies thereof.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... (including electronic formats) of the response. The Commission will accommodate requesters as to form or... or format of the response, the Commission will respond in the form or format in which the document is... requests/oral requests. (1) The Commission cannot assure that a timely or satisfactory response will be...

  20. Effect of Si Content on Oxide Formation on Surface of Molten Fe-Cr-C Alloy Bath During Oxygen Top Blowing

    NASA Astrophysics Data System (ADS)

    Mihara, Ryosuke; Gao, Xu; Kim, Sun-joong; Ueda, Shigeru; Shibata, Hiroyuki; Seok, Min Oh; Kitamura, Shin-ya

    2018-02-01

    Using a direct observation experimental method, the oxide formation behavior on the surface of Fe-Cr-5 mass pct C-Si alloy baths during decarburization by a top-blown Ar-O2 mixture was studied. The effects of the initial Si and Cr content of the alloy, temperature, and oxygen feed ratio on oxide formation were investigated. The results showed that, for alloys without Si, oxide particles, unstable oxide films, and stable oxide films formed sequentially. The presence of Si in the alloy changed the formation behavior of the stable oxide film and increased the critical C content at which a stable oxide film started to form. Increasing the temperature, decreasing the initial Cr content, and increasing the ratio of the diluting gas decreased the critical C content at which a stable oxide film started to form. In addition, the P_CO and a_(Cr2O3) values at which oxides started to form were estimated using Hilty's equation and the equilibrium relation, in order to understand the formation conditions and the role of each parameter in oxide formation.

  1. On The History and Future of Cosmic Planet Formation

    NASA Astrophysics Data System (ADS)

    Behroozi, Peter

    2016-03-01

    We combine constraints on galaxy formation histories with planet formation models, yielding the Earth-like and giant planet formation histories of the Milky Way and the Universe as a whole. In the Hubble Volume (10^13 Mpc^3), we expect there to be ~10^20 Earth-like and ~10^20 giant planets; our own galaxy is expected to host ~10^9 and ~10^10 Earth-like and giant planets, respectively. Proposed metallicity thresholds for planet formation do not significantly affect these numbers. However, the metallicity dependence for giant planets results in later typical formation times and larger host galaxies than for Earth-like planets. The Solar System formed at the median age for existing giant planets in the Milky Way, and consistent with past estimates, formed after 80% of Earth-like planets. However, if existing gas within virialised dark matter haloes continues to collapse and form stars and planets, the Universe will form over 10 times more planets than currently exist. We show that this would imply at least a 92% chance that we are not the only civilisation the Universe will ever have, independent of arguments involving the Drake Equation.

  2. Effect of angular momentum alignment and strong magnetic fields on the formation of protostellar discs

    NASA Astrophysics Data System (ADS)

    Gray, William J.; McKee, Christopher F.; Klein, Richard I.

    2018-01-01

    Star-forming molecular clouds are observed to be both highly magnetized and turbulent. Consequently, the formation of protostellar discs is largely dependent on the complex interaction between gravity, magnetic fields, and turbulence. Studies of non-turbulent protostellar disc formation with realistic magnetic fields have shown that these fields are efficient in removing angular momentum from the forming discs, preventing their formation. However, once turbulence is included, discs can form in even highly magnetized clouds, although the precise mechanism remains uncertain. Here, we present several high-resolution simulations of turbulent, realistically magnetized, high-mass molecular clouds with both aligned and random turbulence to study the role that turbulence, misalignment, and magnetic fields have on the formation of protostellar discs. We find that when the turbulence is artificially aligned so that the angular momentum is parallel to the initial uniform field, no rotationally supported discs are formed, regardless of the initial turbulent energy. We conclude that turbulence and the associated misalignment between the angular momentum and the magnetic field are crucial in the formation of protostellar discs in the presence of realistic magnetic fields.

  3. N-terminal Domains Elicit Formation of Functional Pmel17 Amyloid Fibrils*

    PubMed Central

    Watt, Brenda; van Niel, Guillaume; Fowler, Douglas M.; Hurbain, Ilse; Luk, Kelvin C.; Stayrook, Steven E.; Lemmon, Mark A.; Raposo, Graça; Shorter, James; Kelly, Jeffery W.; Marks, Michael S.

    2009-01-01

    Pmel17 is a transmembrane protein that mediates the early steps in the formation of melanosomes, the subcellular organelles of melanocytes in which melanin pigments are synthesized and stored. In melanosome precursor organelles, proteolytic fragments of Pmel17 form insoluble, amyloid-like fibrils upon which melanins are deposited during melanosome maturation. The mechanism(s) by which Pmel17 becomes competent to form amyloid are not fully understood. To better understand how amyloid formation is regulated, we have defined the domains within Pmel17 that promote fibril formation in vitro. Using purified recombinant fragments of Pmel17, we show that two regions, an N-terminal domain of unknown structure and a downstream domain with homology to a polycystic kidney disease-1 repeat, efficiently form amyloid in vitro. Analyses of fibrils formed in melanocytes confirm that the polycystic kidney disease-1 domain forms at least part of the physiological amyloid core. Interestingly, this same domain is also required for the intracellular trafficking of Pmel17 to multivesicular compartments within which fibrils begin to form. Although a domain of imperfect repeats (RPT) is required for fibril formation in vivo and is a component of fibrils in melanosomes, RPT is not necessary for fibril formation in vitro and in isolation is unable to adopt an amyloid fold in a physiologically relevant time frame. These data define the structural core of Pmel17 amyloid, imply that the RPT domain plays a regulatory role in timing amyloid conversion, and suggest that fibril formation might be physically linked with multivesicular body sorting. PMID:19840945

  4. Scanning electron microscope study of Apollo 15 green glass

    NASA Technical Reports Server (NTRS)

    Mckay, D. S.; Clanton, U. S.; Ladle, G.

    1973-01-01

    Apollo 15 green glass droplets and related forms show a variety of low velocity impact features which occurred at the time of formation of the droplets. Composite forms, which consist of a crystallized core on which mounds of glass adhere, indicate a sequence of core formation and crystallization, followed by impact of molten droplets. The complicated and time dependent texture and morphology of the green glass forms are best explained by formation in a volcanic lava fountain rather than by meteorite impact.

  5. Gas seal for an in situ oil shale retort and method of forming thermal barrier

    DOEpatents

    Burton, III, Robert S.

    1982-01-01

    A gas seal is provided in an access drift excavated in a subterranean formation containing oil shale. The access drift is adjacent an in situ oil shale retort and is in gas communication with the fragmented permeable mass of formation particles containing oil shale formed in the in situ oil shale retort. The mass of formation particles extends into the access drift, forming a rubble pile of formation particles having a face approximately at the angle of repose of fragmented formation. The gas seal includes a temperature barrier which includes a layer of heat insulating material disposed on the face of the rubble pile of formation particles and additionally includes a gas barrier. The gas barrier is a gas-tight bulkhead installed across the access drift at a location in the access drift spaced apart from the temperature barrier.

  6. Using Browser Notebooks to Analyse Big Atmospheric Data-sets in the Cloud

    NASA Astrophysics Data System (ADS)

    Robinson, N.; Tomlinson, J.; Arribas, A.; Prudden, R.

    2016-12-01

    We present an account of our experience building an ecosystem for the analysis of big atmospheric datasets. Using modern technologies we have developed a prototype platform which is scalable and capable of analysing very large atmospheric datasets. We tested different big-data ecosystems, such as Hadoop MapReduce, Spark and Dask, to find the one best suited to the analysis of multidimensional binary data such as NetCDF. We make extensive use of infrastructure-as-code and containerisation to provide a platform which is reusable and which can scale to accommodate changes in demand. We make this platform readily accessible through browser-based notebooks. As a result, analysts with minimal technology experience can, in tens of lines of Python, build interactive data-visualisation web pages which analyse very large amounts of data using cutting-edge big-data technology.
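
    To give a sense of what "tens of lines of Python" looks like in such a notebook, here is a minimal sketch (the file pattern and variable name are hypothetical) in which xarray delegates a chunked, out-of-core computation over large NetCDF files to Dask, one of the ecosystems the authors tested:

      import xarray as xr

      # Lazily open many large NetCDF files; Dask splits each variable into
      # chunks, so nothing is read until a computation is requested.
      ds = xr.open_mfdataset("era_*.nc", chunks={"time": 100})

      mean_t2m = ds["t2m"].mean(dim="time")  # builds a task graph only
      result = mean_t2m.compute()            # triggers the parallel computation
      result.plot()                          # quick in-notebook visualisation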

  7. Formate-assisted pyrolysis

    DOEpatents

    DeSisto, William Joseph; Wheeler, Marshall Clayton; van Heiningen, Adriaan R. P.

    2015-03-17

    The present invention provides, among other things, methods for creating significantly deoxygenated bio-oils from biomass, including the steps of providing a feedstock, associating the feedstock with an alkali formate to form a treated feedstock, dewatering the treated feedstock, heating the dewatered treated feedstock to form a vapor product, and condensing the vapor product to form a pyrolysis oil, wherein the pyrolysis oil contains less than 30% oxygen by weight.

  8. Effect of reaction time on the formation of disinfection byproducts

    USGS Publications Warehouse

    Rathbun, R.E.

    1997-01-01

    The effect of reaction time on the trihalomethane and nonpurgeable total organic-halide formation potentials was determined by chlorinating water samples from the Mississippi, Missouri, and Ohio Rivers. Samples were collected for three seasons at 12 locations on the Mississippi from Minneapolis, Minnesota, to New Orleans, Louisiana, and on the Missouri and Ohio 1.6 kilometers above their confluences with the Mississippi. Both types of compounds formed rapidly during the initial stages of the reaction-time period, with formation rates decreasing with time. The ratio of the nonpurgeable total organic-halide and trihalomethane concentrations decreased with time, with the nonpurgeable total organic-halide compounds forming faster during the first stages of the time period and the trihalomethane compounds forming faster during the latter stages of the time period. Variation with distance along the Mississippi River of the formation rates approximately paralleled the variation of the dissolved organic carbon concentration, indicating that the rates of formation, as well as the concentrations of the compounds formed, depended on the dissolved organic carbon concentration.

  9. A model for the biological precipitation of Precambrian iron-formation

    NASA Technical Reports Server (NTRS)

    Laberge, G. L.

    1986-01-01

    A biological model for the precipitation of Precambrian iron-formations is presented. Assuming an oxygen-deficient atmosphere and water column that allowed sufficient Fe solubility, it is proposed that local oxidizing environments, produced biologically, led to the precipitation of iron-formations. It is further suggested that spheroidal structures about 30 micrometers in diameter, which are widespread in low-grade cherty iron-formations, are relict forms of the organic-walled microfossil Eosphaera tylerii. The presence of these structures suggests that the organism may have had a siliceous test, which provided sufficient rigidity for accumulation and preservation. The model involves precipitation of ferric hydrates by oxidation of iron in the photic zone by a variety of photosynthetic organisms. Silica may have formed in the frustules of silica-secreting organisms, including Eosphaera tylerii. Iron-formations thus formed from a sediment rain of biologically produced ferric hydrates, silica, and other organic material. Siderite and hematite formed diagenetically on basin floors, and subsequent metamorphism produced magnetite and iron silicates.

  10. Measurements and Analyses of Science Interest in a School District.

    ERIC Educational Resources Information Center

    Bauman, Daniel Joseph

    Reported is a study to investigate the relationship of science interest with three instrument formats: scaled rating, block, and card sort. The scaled rating format was the same form as that used in Likert instruments. The block format was the graphic form used in the typical semantic differential where a single stem or concept is followed by…

  11. Methods for forming wellbores in heated formations

    DOEpatents

    Guimerans, Rosalvina Ramona; Mansure, Arthur James

    2012-09-25

    A method for forming a wellbore in a heated formation includes flowing liquid cooling fluid to a bottom hole assembly in a wellbore in a heated formation. At least a portion of the liquid cooling fluid is vaporized at or near a region to be cooled. Vaporizing the liquid cooling fluid absorbs heat from the region to be cooled.

  12. Laboratory formation of non-cementing, methane hydrate-bearing sands

    USGS Publications Warehouse

    Waite, William F.; Bratton, Peter M.; Mason, David H.

    2011-01-01

    Naturally occurring hydrate-bearing sands often behave as though methane hydrate is acting as a load-bearing member of the sediment. Mimicking this behavior in laboratory samples with methane hydrate likely requires forming hydrate from methane dissolved in water. To hasten this formation process, we initially form hydrate in a free-gas-limited system, then form additional hydrate by circulating methane-supersaturated water through the sample. Though the dissolved-phase formation process can theoretically be enhanced by increasing the pore pressure and flow rate and lowering the sample temperature, a more fundamental concern is preventing clogs resulting from inadvertent methane bubble formation in the circulation lines. Clog prevention requires careful temperature control throughout the circulation loop.

  13. Stimulation of Changes, Collective Commitment and The Patterns of Group Formation in Community Development in South Sulawesi

    NASA Astrophysics Data System (ADS)

    Saleh, Syafiuddin

    2018-05-01

    This study examines the pattern of group formation in relation to the stimulation of change through the empowerment of poor farmers and fishermen. The pattern of group formation is the basis for sustainable development. The research method used is descriptive qualitative, with relevant approaches such as case study and triangulation. The results showed that (1) the stimulation of changes made through development or community-empowerment programs in the areas studied, among both poor farm households and poor fishermen households, received a positive response from farmers and fishermen for some programs. However, collective commitment to the groups remains relatively weak, since the groups formed in each program were not created through good planning and concepts. (2) There are two patterns of group formation: groups that form naturally and groups initiated by outsiders. Naturally formed groups are more institutionalized and have characteristics such as intense and relatively routine interaction, strong mutual trust, and a shared mechanism for common purposes. Such groups can form the basis for sustainable development in improving the welfare of the poor.

  14. Efficient formation of heterokaryotic sclerotia in the filamentous fungus Aspergillus oryzae.

    PubMed

    Wada, Ryuta; Jin, Feng Jie; Koyama, Yasuji; Maruyama, Jun-ichi; Kitamoto, Katsuhiko

    2014-01-01

    Heterokaryon formation by hyphal fusion occurs during a sexual/parasexual cycle in filamentous fungi, and it is therefore biotechnologically important for crossbreeding. In the industrial filamentous fungus Aspergillus oryzae, a parasexual cycle has been reported, and it was recently suggested that sexual reproduction should be possible. However, as A. oryzae enters into hyphal fusion with a much lower frequency than Neurospora crassa, the process of heterokaryon formation has not been extensively characterized in A. oryzae. Here, we developed a detection system for heterokaryon formation by expressing red or green fluorescent proteins in nuclei and conferring uridine/uracil or adenine auxotrophy on MAT1-1 and MAT1-2 strains of A. oryzae. The heterokaryon formation of A. oryzae was investigated in paired culture using the genetically modified strains. No sclerotial formation was observed in the hyphal contact regions of two strains with the same auxotrophy, whereas numerous sclerotia were formed between strains with different auxotrophies. In most of the formed sclerotia, the uridine/uracil and adenine auxotrophies were complemented and both red and green fluorescence were detected, indicating that heterokaryotic fusants were formed by hyphal fusion before or during sclerotial formation. Moreover, overexpressing the sclR gene, which encodes a transcription factor promoting sclerotial formation, increased the number of heterokaryotic sclerotia formed between the two auxotrophic strains. Notably, these effects on the sclerotial formation of heterokaryotic fusants were observed independently of the mating-type pairing combinations. Taken together, these findings demonstrate that pairing of different auxotrophs and sclR overexpression promote the formation of heterokaryotic sclerotia in A. oryzae.

  15. Measuring star formation rates in blue galaxies

    NASA Technical Reports Server (NTRS)

    Gallagher, John S., III; Hunter, Deidre A.

    1987-01-01

    The problems associated with measurements of star formation rates in galaxies are briefly reviewed, and specific models are presented for determinations of current star formation rates from H alpha and Far Infrared (FIR) luminosities. The models are applied to a sample of optically blue irregular galaxies, and the results are discussed in terms of star forming histories. It appears likely that typical irregular galaxies are forming stars at nearly constant rates, although a few examples of systems with enhanced star forming activity are found among HII regions and luminous irregular galaxies.

  16. Disrupting the cortical actin cytoskeleton points to two distinct mechanisms of yeast [PSI+] prion formation

    PubMed Central

    Speldewinde, Shaun H.; Tuite, Mick F.

    2017-01-01

    Mammalian and fungal prions arise de novo; however, the mechanism is poorly understood in molecular terms. One strong possibility is that oxidative damage to the non-prion form of a protein may be an important trigger influencing the formation of its heritable prion conformation. We have examined the oxidative stress-induced formation of the yeast [PSI+] prion, which is the altered conformation of the Sup35 translation termination factor. We used tandem affinity purification (TAP) and mass spectrometry to identify the proteins which associate with Sup35 in a tsa1 tsa2 antioxidant mutant to address the mechanism by which Sup35 forms the [PSI+] prion during oxidative stress conditions. This analysis identified several components of the cortical actin cytoskeleton including the Abp1 actin nucleation promoting factor, and we show that deletion of the ABP1 gene abrogates oxidant-induced [PSI+] prion formation. The frequency of spontaneous [PSI+] prion formation can be increased by overexpression of Sup35 since the excess Sup35 increases the probability of forming prion seeds. In contrast to oxidant-induced [PSI+] prion formation, overexpression-induced [PSI+] prion formation was only modestly affected in an abp1 mutant. Furthermore, treating yeast cells with latrunculin A to disrupt the formation of actin cables and patches abrogated oxidant-induced, but not overexpression-induced [PSI+] prion formation, suggesting a mechanistic difference in prion formation. [PIN+], the prion form of Rnq1, localizes to the IPOD (insoluble protein deposit) and is thought to influence the aggregation of other proteins. We show Sup35 becomes oxidized and aggregates during oxidative stress conditions, but does not co-localize with Rnq1 in an abp1 mutant which may account for the reduced frequency of [PSI+] prion formation. PMID:28369054

  17. Search of massive star formation with COMICS

    NASA Astrophysics Data System (ADS)

    Okamoto, Yoshiko K.

    2004-04-01

    Mid-infrared observations are useful for studies of massive star formation. COMICS in particular offers powerful tools: imaging surveys of the circumstellar structures of forming massive stars, such as massive disks and cavity structures; mass estimates from spectroscopy of fine-structure lines; and high-dispersion spectroscopy to survey gas motion around newly formed stars. COMICS will open the next generation of infrared studies of massive star formation.

  18. The formation of the solar system

    NASA Astrophysics Data System (ADS)

    Pfalzner, S.; Davies, M. B.; Gounelle, M.; Johansen, A.; Münker, C.; Lacerda, P.; Portegies Zwart, S.; Testi, L.; Trieloff, M.; Veras, D.

    2015-06-01

    The solar system started to form about 4.56 Gyr ago and despite the long intervening time span, there still exist several clues about its formation. The three major sources for this information are meteorites, the present solar system structure and the planet-forming systems around young stars. In this introduction we give an overview of the current understanding of the solar system formation from all these different research fields. This includes the question of the lifetime of the solar protoplanetary disc, the different stages of planet formation, their duration, and their relative importance. We consider whether meteorite evidence and observations of protoplanetary discs point in the same direction. This will tell us whether our solar system had a typical formation history or an exceptional one. There are also many indications that the solar system formed as part of a star cluster. Here we examine the types of cluster the Sun could have formed in, especially whether its stellar density was at any stage high enough to influence the properties of today’s solar system. The likelihood of identifying siblings of the Sun is discussed. Finally, the possible dynamical evolution of the solar system since its formation and its future are considered.

  19. On the history and future of cosmic planet formation

    NASA Astrophysics Data System (ADS)

    Behroozi, Peter; Peeples, Molly S.

    2015-12-01

    We combine constraints on galaxy formation histories with planet formation models, yielding the Earth-like and giant planet formation histories of the Milky Way and the Universe as a whole. In the Hubble volume (10^13 Mpc^3), we expect there to be ~10^20 Earth-like and ~10^20 giant planets; our own galaxy is expected to host ~10^9 and ~10^10 Earth-like and giant planets, respectively. Proposed metallicity thresholds for planet formation do not significantly affect these numbers. However, the metallicity dependence for giant planets results in later typical formation times and larger host galaxies than for Earth-like planets. The Solar system formed at the median age for existing giant planets in the Milky Way, and consistent with past estimates, formed after 80 per cent of Earth-like planets. However, if existing gas within virialized dark matter haloes continues to collapse and form stars and planets, the Universe will form over 10 times more planets than currently exist. We show that this would imply at least a 92 per cent chance that we are not the only civilization the Universe will ever have, independent of arguments involving the Drake equation.

  20. Modelling of Criegee Intermediates using the 3-D global model, STOCHEM-CRI and investigating their global impacts on Secondary Organic Aerosol formation

    NASA Astrophysics Data System (ADS)

    Khan, M. Anwar H.; Cooke, Michael; Utembe, Steve; Archibald, Alexander; Derwent, Richard; Jenkin, Mike; Lyons, Kyle; Kent, Adam; Percival, Carl; Shallcross, Dudley E.

    2016-04-01

    Gas-phase reactions of ozone with unsaturated compounds form stabilized Criegee intermediates (sCIs), which play an important role in controlling the budgets of many tropospheric species including OH, organic acids, and secondary organic aerosols (SOA). Recently, sCIs have been proposed to play a significant role in atmospheric sulfate and nitrate chemistry by forming sulfuric acid (a promoter of aerosol formation) and the nitrate radical (a powerful oxidizing agent). sCIs can also undergo association reactions with water, alcohols, and carboxylic acids to form hydroperoxides, and with aldehydes and ketones to form secondary ozonides. The products of these reactions are low-volatility compounds which can contribute to the formation of SOA. The importance of plant-emitted alkenes (isoprene, monoterpenes, sesquiterpenes) in the production of SOA through sCI formation has already been investigated in laboratory studies. However, SOA formation from these reactions is absent from current global models. We have therefore incorporated the formation of SOA into STOCHEM-CRI, a 3-D global chemistry transport model, and we discuss the role of CI chemistry in controlling atmospheric composition and climate, and the influence of water vapor.

  1. Early Campanian coastal progradational systems and their coal-forming environments, Wyoming to New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marley, W.E.; Flores, R.M.; Ethridge, F.G.

    1985-05-01

    Ammonite zones (Baculites obtusus-Scaphites hippocrepis) in the marine facies associated with the Mesaverde Formation in the Bighorn basin, Wyoming, Star Point Sandstone and Blackhawk Formation in the Wasatch Plateau, Utah, and the Point Lookout Sandstone, Menefee Formation, and Crevasse Canyon Formation in the Gallup coalfield, New Mexico, indicate that these formations were deposited during early Campanian time (80-84 Ma). The coal-forming environments of these early Campanian formations were located landward of wave-reworked coastal sand complexes of progradational systems along the western margin of the Cretaceous seaway from Wyoming to New Mexico. The Mesaverde coals accumulated in swamps of the lower delta plain and coeval interdeltaic strandplain environments. The Star Point-Blackhawk coals accumulated in swamps of the lower delta plains of laterally shifting, prograding deltas and associated barrier ridge plains. The Point Lookout, Menefee, and Crevasse Canyon coals formed in swamps of the lower delta plain and infilled lagoons behind barrier islands. Although the common coal-forming environments of these progradational systems are back barrier and delta plain, the former setting was the more conducive for accumulation of thick, laterally extensive coals. Economic coal deposits formed in swamps built on abandoned back-barrier platforms that were free of detrital influx and marine influence. Delta-plain coals tend to be lenticular and laterally discontinuous and thus uneconomic. The early Campanian coal-forming coastal-plain environments are analogous to modern peat-forming environments along the coast of Belize, Central America. Deltaic sediments deposited along the Belize coast by short-headed streams are reworked by waves into coastal barrier systems.

  2. GNU Data Language (GDL) - a free and open-source implementation of IDL

    NASA Astrophysics Data System (ADS)

    Arabas, Sylwester; Schellens, Marc; Coulais, Alain; Gales, Joel; Messmer, Peter

    2010-05-01

    GNU Data Language (GDL) is developed with the aim of providing an open-source drop-in replacement for ITTVIS's Interactive Data Language (IDL). It is free software developed by an international team of volunteers led by Marc Schellens - the project's founder (a list of contributors is available on the project's website). The development is hosted on SourceForge, where GDL continuously ranks in the 99th percentile of most active projects. GDL with its library routines is designed as a tool for numerical data analysis and visualisation. Like its proprietary counterparts (IDL and PV-WAVE), GDL is used particularly in geosciences and astronomy. GDL is dynamically typed, vectorized, and has object-oriented programming capabilities. The library routines handle numerical calculations, data visualisation, signal/image processing, interaction with the host OS, and data input/output. GDL supports several data formats, such as netCDF, HDF4, HDF5, GRIB, PNG, TIFF, and DICOM. Graphical output is handled by X11, PostScript, SVG or z-buffer terminals, the last one allowing output to be saved in a variety of raster graphics formats. GDL is an incremental compiler with integrated debugging facilities. It is written in C++ using the ANTLR language-recognition framework. Most of the library routines are implemented as interfaces to open-source packages such as the GNU Scientific Library, PLplot, FFTW, ImageMagick, and others. GDL features a Python bridge (Python code can be called from GDL; GDL can be compiled as a Python module). Extensions to GDL can be written in C++, GDL, and Python. A number of open-source libraries written in IDL, such as the NASA Astronomy Library, MPFIT, CMSVLIB and TeXtoIDL, are fully or partially functional under GDL. Packaged versions of GDL are available for several Linux distributions and Mac OS X. The source code compiles on some other UNIX systems, including BSD and OpenSolaris. The presentation will cover the current status of the project, the key accomplishments, and the weaknesses - areas where contributions and users' feedback are welcome! While still in the beta stage of development, GDL has proved to be a useful tool for classroom work on data analysis. Its use for teaching meteorological-data processing at the University of Warsaw will serve as an example.

  3. Incorporating Brokers within Collaboration Environments

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; de Torcy, A.

    2013-12-01

    A collaboration environment, such as the integrated Rule Oriented Data System (iRODS - http://irods.diceresearch.org), provides interoperability mechanisms for accessing storage systems, authentication systems, messaging systems, information catalogs, networks, and policy engines from a wide variety of clients. The interoperability mechanisms function as brokers, translating actions requested by clients to the protocol required by a specific technology. The iRODS data grid is used to enable collaborative research within hydrology, seismology, earth science, climate, oceanography, plant biology, astronomy, physics, and genomics disciplines. Although each domain has unique resources, data formats, semantics, and protocols, the iRODS system provides a generic framework that is capable of managing collaborative research initiatives that span multiple disciplines. Each interoperability mechanism (broker) is linked to a name space that enables unified access across the heterogeneous systems. The collaboration environment provides not only support for brokers, but also support for virtualization of name spaces for users, files, collections, storage systems, metadata, and policies. The broker enables access to data or information in a remote system using the appropriate protocol, while the collaboration environment provides a uniform naming convention for accessing and manipulating each object. Within the NSF DataNet Federation Consortium project (http://www.datafed.org), three basic types of interoperability mechanisms have been identified and applied: 1) drivers for managing manipulation at the remote resource (such as data subsetting), 2) micro-services that execute the protocol required by the remote resource, and 3) policies for controlling the execution. For example, drivers have been written for manipulating NetCDF and HDF formatted files within THREDDS servers. Micro-services have been written that manage interactions with the CUAHSI data repository, the DataONE information catalog, and the GeoBrain broker. Policies have been written that manage transfer of messages between an iRODS message queue and the Advanced Message Queuing Protocol. Examples of these brokering mechanisms will be presented. The DFC collaboration environment serves as the intermediary between community resources and compute grids, enabling reproducible data-driven research. It is possible to create an analysis workflow that retrieves data subsets from a remote server, assemble the required input files, automate the execution of the workflow, automatically track the provenance of the workflow, and share the input files, workflow, and output files. A collaborator can re-execute a shared workflow, compare results, change input files, and re-execute an analysis.
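
    From the client side, the value of the uniform name space is that any object in the grid is addressed by one logical path, whatever storage system holds it. A minimal sketch using the python-irodsclient package (host, zone, credentials, and the logical path are all hypothetical; the brokering itself, as described above, happens server-side through drivers, micro-services, and policies):

      from irods.session import iRODSSession

      with iRODSSession(host="irods.example.org", port=1247, user="alice",
                        password="secret", zone="dfcZone") as session:
          obj = session.data_objects.get("/dfcZone/home/alice/model_output.nc")
          print(obj.size, obj.checksum)  # catalog metadata, not file contents
          with obj.open("r") as f:
              header = f.read(4)         # b'CDF\x01' for classic NetCDF files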

  4. Collaborative Visualization and Analysis of Multi-dimensional, Time-dependent and Distributed Data in the Geosciences Using the Unidata Integrated Data Viewer

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Murray, D.; McWhirter, J.

    2004-12-01

    Over the last five years, Unidata has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences, including atmospheric, ocean, and, most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive, allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. Unidata provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and control the visualization. A Jython-based formulation facility allows computations on disparate data sets using simple formulas. Although the IDV is an advanced tool for research, its flexible architecture has also been exploited for educational purposes in the Virtual Geophysical Exploration Environment (VGEE). The VGEE added physical concept models to the IDV, together with curricula for atmospheric science education aimed at levels from high school to graduate school.
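
    The OPeNDAP access pattern that the IDV relies on can be sketched in a few lines of Python (the URL and variable name here are hypothetical, and netCDF4-python must be built with DAP support): a remote dataset is opened like a local file, and only the requested slices cross the network:

      from netCDF4 import Dataset

      url = "https://thredds.example.org/thredds/dodsC/grids/analysis.nc"
      ds = Dataset(url)                   # no download; lazy remote access
      print(ds.variables.keys())          # discover what the server offers
      temperature = ds.variables["temperature"]
      surface = temperature[0, 0, :, :]   # only this hyperslab is transferred
      ds.close()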

  5. Using R for analysing spatio-temporal datasets: a satellite-based precipitation case study

    NASA Astrophysics Data System (ADS)

    Zambrano-Bigiarini, Mauricio

    2017-04-01

    Increasing computer power and the availability of remote-sensing data measuring different environmental variables have led to unprecedented opportunities for Earth sciences in recent decades. However, dealing with hundreds or thousands of files, usually in different vector and raster formats and measured at different temporal frequencies, imposes high computational challenges on taking full advantage of all the available data. R is a language and environment for statistical computing and graphics which includes several functions for data manipulation, calculation and graphical display that are particularly well suited to Earth sciences. In this work I describe how R was used to exhaustively evaluate seven state-of-the-art satellite-based rainfall estimate (SRE) products (TMPA 3B42v7, CHIRPSv2, CMORPH, PERSIANN-CDR, PERSIANN-CCS-adj, MSWEPv1.1 and PGFv3) over the complex topography and diverse climatic gradients of Chile. First, built-in functions were used to automatically download the satellite images in different raster formats and spatial resolutions and to clip them to the Chilean spatial extent where necessary. Second, the raster package was used to read, plot, and conduct exploratory data analysis on selected files of each SRE product, in order to detect unexpected problems (rotated spatial domains, order of variables in NetCDF files, etc.). Third, raster was used along with the hydroTSM package to aggregate SRE files into different temporal scales (daily, monthly, seasonal, annual). Finally, the hydroTSM and hydroGOF packages were used to carry out a point-to-pixel comparison between precipitation time series measured at 366 stations and the corresponding grid cell of each SRE. The modified Kling-Gupta index of model performance was used to identify possible sources of systematic errors in each SRE, while five categorical indices (PC, POD, FAR, ETS, fBIAS) were used to assess the ability of each SRE to correctly identify different precipitation intensities. In the end, R proved to be an efficient environment for dealing with thousands of raster, vector and time-series files with different spatial and temporal resolutions and spatial reference systems. In addition, the use of well-documented R scripts made the code readable and re-usable, facilitating the reproducible research that is essential to build trust among stakeholders and the scientific community.
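
    For reference, the modified Kling-Gupta efficiency used here (implemented in R by the hydroGOF package) decomposes model error into correlation, bias, and variability terms. A transcription of the metric into Python, for illustration only:

      import numpy as np

      def kge_modified(sim, obs):
          """Modified Kling-Gupta efficiency (Kling et al., 2012).
          Uses the coefficient of variation in the variability term so that
          bias and variability errors are decoupled; 1 is a perfect match."""
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          r = np.corrcoef(sim, obs)[0, 1]                              # correlation
          beta = sim.mean() / obs.mean()                               # bias ratio
          gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())  # CV ratio
          return 1.0 - np.sqrt((r - 1)**2 + (beta - 1)**2 + (gamma - 1)**2)

      # e.g. one satellite pixel's series against the co-located rain gauge
      print(kge_modified([1.1, 2.3, 0.4, 5.0], [1.0, 2.0, 0.5, 4.2]))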

  6. Enhanced equivalence class formation by the delay and relational functions of meaningful stimuli.

    PubMed

    Arntzen, Erik; Nartey, Richard K; Fields, Lanny

    2015-05-01

    Undergraduates in six groups of 10 attempted to form three 3-node 5-member equivalence classes (A → B → C → D → E) under the simultaneous protocol. In five of six groups, all stimuli were abstract shapes; in the PIC group, C stimuli were pictures with the remainder being abstract shapes. Before class formation, participants in the Identity-S and Identity-D groups were given preliminary training to form identity conditional discriminations with the C stimuli using simultaneous and 6 s delayed matching-to-sample procedures, respectively. In the Arbitrary-S and Arbitrary-D groups, before class formation, arbitrary conditional discriminations were formed between C and X stimuli using simultaneous and 6 s delayed matching-to-sample procedures, respectively. With no preliminary training, classes in the PIC and ABS groups were formed by 80% and 0% of participants, respectively. After preliminary training, class formation (yield) increased with delay, regardless of relational type. For each of the two delays, yield was slightly greater after forming arbitrary- instead of identity-relations. Yield was greatest, however, when a class contained a meaningful stimulus (PIC). During failed class formation, probes produced experimenter-defined relations, participant-defined relations, and unsystematic responding; delay, but not the relation type in preliminary training influenced relational and indeterminate responding. These results suggest how meaningful stimuli enhance equivalence class formation. © Society for the Experimental Analysis of Behavior.

  7. Aquifer composition and the tendency toward scale-deposit formation during reverse osmosis desalination - Examples from saline ground water in New Mexico, USA

    USGS Publications Warehouse

    Huff, G.F.

    2006-01-01

    Desalination is expected to make a substantial contribution to water supply in the United States by 2020. Currently, reverse osmosis is one of the most cost-effective and widely used desalination technologies. The tendency to form scale deposits during reverse osmosis is an important factor in determining the suitability of input waters for use in desalination. The tendency toward scale formation of samples of saline ground water from selected geologic units in New Mexico was assessed using simulated evaporation. All saline water samples showed a strong tendency to form CaCO3 scale deposits. Saline ground water samples from the Yeso Formation and the San Andres Limestone showed relatively stronger tendencies to form CaSO4·2H2O scale deposits and relatively weaker tendencies to form SiO2(a) scale deposits than saline ground water samples from the Rio Grande alluvium. Tendencies toward scale formation in saline ground water samples from the Dockum Group were highly variable. The tendencies toward scale formation of saline waters from the Yeso Formation, San Andres Limestone, and Rio Grande alluvium appear to correlate with the mineralogical composition of the geologic units, suggesting that scale-forming tendencies are governed by aquifer composition and water-rock interaction. © 2006 Elsevier B.V. All rights reserved.
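
    The "tendency toward scale formation" reported above is conventionally quantified with a mineral saturation index. A minimal sketch of the calculation (the activity values are made up; a real assessment derives activities from full geochemical speciation, e.g. with a code such as PHREEQC):

      import math

      def saturation_index(iap, ksp):
          """SI = log10(IAP / Ksp); SI > 0 means the water is supersaturated
          with respect to the mineral, which then tends to deposit as scale."""
          return math.log10(iap / ksp)

      # Illustrative gypsum (CaSO4·2H2O) check with hypothetical activities
      a_ca, a_so4, a_h2o = 2.0e-2, 1.5e-2, 0.98
      ksp_gypsum = 10**-4.58               # approximate 25 degC literature value
      si = saturation_index(a_ca * a_so4 * a_h2o**2, ksp_gypsum)
      print(f"SI(gypsum) = {si:.2f}")      # positive -> scale-forming tendency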

  8. 78 FR 18457 - Definition of Form I-94 To Include Electronic Format

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-27

    ... parole stamp in a foreign passport to the list of documents designated as evidence of alien registration... will create a Form I-94 in an electronic format based on passenger, passport and visa information DHS... inspection. The officer stamps the Form I-94 and the alien's passport, detaches the bottom portion of the...

  9. Recovery and regeneration of spent MHD seed material by the formate process

    DOEpatents

    Sheth, A.C.; Holt, J.K.; Rasnake, D.G.; Solomon, R.L.; Wilson, G.L.; Herrigel, H.R.

    1991-10-15

    The specification discloses a spent seed recovery and regeneration process for an MHD power plant employing an alkali metal salt seed material such as potassium salt wherein the spent potassium seed in the form of potassium sulfate is collected from the flue gas and reacted with calcium hydroxide and carbon monoxide in an aqueous solution to cause the formation of calcium sulfate and potassium formate. The pH of the solution is adjusted to suppress formation of formic acid and to promote precipitation of any dissolved calcium salts. The solution containing potassium formate is then employed to provide the potassium salt in the form of potassium formate or, optionally, by heating the potassium formate under oxidizing conditions to convert the potassium formate to potassium carbonate. 5 figures.

  10. Recovery and regeneration of spent MHD seed material by the formate process

    DOEpatents

    Sheth, Atul C.; Holt, Jeffrey K.; Rasnake, Darryll G.; Solomon, Robert L.; Wilson, Gregory L.; Herrigel, Howard R.

    1991-01-01

    The specification discloses a spent seed recovery and regeneration process for an MHD power plant employing an alkali metal salt seed material such as potassium salt wherein the spent potassium seed in the form of potassium sulfate is collected from the flue gas and reacted with calcium hydroxide and carbon monoxide in an aqueous solution to cause the formation of calcium sulfate and potassium formate. The pH of the solution is adjusted to suppress formation of formic acid and to promote precipitation of any dissolved calcium salts. The solution containing potassium formate is then employed to provide the potassium salt in the form of potassium formate or, optionally, by heating the potassium formate under oxidizing conditions to convert the potassium formate to potassium carbonate.

  11. Observational Planet Formation

    NASA Astrophysics Data System (ADS)

    Dong, Ruobing; Zhu, Zhaohuan; Fung, Jeffrey

    2017-06-01

    Planets form in gaseous protoplanetary disks surrounding newborn stars. As such, the most direct way to learn from observations how they form is to watch them forming in disks. In the past this was very difficult due to a lack of observational capabilities, so planet formation was largely a subject of pure theoretical astrophysics. Now, thanks to a fleet of new instruments with unprecedented resolving power that have recently come online, we have just started to unveil features in resolved images of protoplanetary disks, such as gaps and spiral arms, that are most likely associated with embedded (unseen) planets. By comparing observations with theoretical models of planet-disk interactions, the masses and orbits of these still-forming planets may be constrained. Such planets may help us to directly test various planet formation models. This marks the onset of a new field: observational planet formation. I will introduce the current status of this field.

  12. Time-resolved measurement of single pulse femtosecond laser-induced periodic surface structure formation induced by a pre-fabricated surface groove.

    PubMed

    Kafka, K R P; Austin, D R; Li, H; Yi, A Y; Cheng, J; Chowdhury, E A

    2015-07-27

    A time-resolved diffraction microscopy technique has been used to observe the formation of laser-induced periodic surface structures (LIPSS) from the interaction of a single femtosecond laser pulse (pump) with a nano-scale groove mechanically formed on a single-crystal Cu substrate. The interaction dynamics (0-1200 ps) were captured by diffracting a time-delayed, frequency-doubled pulse (probe) from the nascent LIPSS formation induced by the pump, using an infinity-conjugate microscopy setup. The LIPSS ripples are observed to form asynchronously, with the first one forming after 50 ps and the others forming sequentially outward from the groove edge at larger time delays. A 1-D analytical model of electron heating, including both the laser pulse and surface plasmon polariton excitation at the groove edge, predicts the ripple period and melt spot diameter, and qualitatively explains the asynchronous time evolution of LIPSS formation.

  13. Datacube Interoperability, Encoding Independence, and Analytics

    NASA Astrophysics Data System (ADS)

    Baumann, Peter; Hirschorn, Eric; Maso, Joan

    2017-04-01

    Datacubes are commonly accepted as an enabling paradigm which provides a handy abstraction for accessing and analyzing the zillions of image files delivered by the manifold satellite instruments and climate simulations, among others. Additionally, the datacube is the classic model for statistical and OLAP applications, so a further information category can be integrated. From a standards perspective, spatio-temporal datacubes naturally fall under the concept of coverages, which encompasses regular and irregular grids, point clouds, and general meshes - or, more abstractly, digital representations of spatio-temporally varying phenomena. ISO 19123, which is identical to OGC Abstract Topic 6, gives a high-level abstract definition which is complemented by the OGC Coverage Implementation Schema (CIS), an interoperable yet format-independent concretization of the abstract model. Currently, ISO is working on adopting OGC CIS as ISO 19123-2; the existing ISO 19123 standard is under revision by one of the authors of this abstract and will become ISO 19123-1. The roadmap agreed by ISO further foresees adoption of the OGC Web Coverage Service (WCS) as an ISO standard so that a complete data and service model will exist. In 2016, INSPIRE adopted WCS as its Coverage Download Service, including the datacube analytics language Web Coverage Processing Service (WCPS). The rasdaman technology (www.rasdaman.org) is both the OGC and the INSPIRE Reference Implementation. In the global EarthServer initiative, rasdaman database sizes exceed 250 TB today, heading for the Petabyte frontier well within 2017. Technically, CIS defines a compact, efficient model for representing multi-dimensional datacubes in several ways. The classical coverage cube defines a domain set (where are the values?), a range set (what are these values?), and a range type (what do the values mean?), as well as a "bag" for arbitrary metadata. With CIS 1.1, coordinate/value pair sequences have been added, as well as tiled representations. Further, CIS 1.1 offers a unified model for any kind of regular and irregular grid, also allowing sensor models as per SensorML. Encodings include ASCII formats like GML, JSON, and RDF as well as binary formats like GeoTIFF, NetCDF, JPEG2000, and GRIB2; further, a container concept allows mixed representations within one coverage file utilizing zip or other convenient package formats. Through the tight integration with Sensor Web Enablement (SWE), a lossless "transport" from the sensor world into the coverage world is ensured. The corresponding service model of WCS supports datacube operations ranging from simple data extraction to complex ad-hoc analytics with WCPS. Notably, W3C has set out on a coverage model as well; it has been designed relatively independently of the abovementioned standards, but there is informal agreement to link it into the CIS universe (which allows for different, yet interchangeable, representations). Particularly interesting in the W3C proposal is the detailed semantic modeling of metadata; as CIS 1.1 supports RDF, a tight coupling seems feasible.
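
    The coverage constituents named above (domain set, range set, range type) map naturally onto labelled-array libraries, which helps make the abstract model concrete. The following Python/xarray sketch is purely illustrative and is not part of any OGC standard; all names and values are invented.

      import numpy as np
      import pandas as pd
      import xarray as xr

      # Domain set: where the values sit (a regular time/lat/lon grid here)
      time = pd.date_range("2017-01-01", periods=4, freq="D")
      lat = np.linspace(-90.0, 90.0, 19)
      lon = np.linspace(-180.0, 180.0, 37)

      # Range set: the values themselves
      values = np.random.rand(time.size, lat.size, lon.size).astype("float32")

      # Range type: what the values mean (units and a standard name)
      cube = xr.DataArray(
          values,
          coords={"time": time, "lat": lat, "lon": lon},
          dims=("time", "lat", "lon"),
          name="air_temperature",
          attrs={"units": "K", "standard_name": "air_temperature"},
      )

      # A simple analytics request in the spirit of a WCPS aggregation
      # query: the spatial mean per time step
      print(cube.mean(dim=("lat", "lon")).values)

    Encoding independence then amounts to serializing the same triple into GML, JSON, GeoTIFF, NetCDF, or another format without changing its meaning.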

  14. Enabling Interoperability and Servicing Multiple User Segments Through Web Services, Standards, and Data Tools

    NASA Astrophysics Data System (ADS)

    Palanisamy, Giriprakash; Wilson, Bruce E.; Cook, Robert B.; Lenhardt, Chris W.; Santhana Vannan, Suresh; Pan, Jerry; McMurry, Ben F.; Devarakonda, Ranjeet

    2010-12-01

    The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) is one of the science-oriented data centers in EOSDIS, aligned primarily with terrestrial ecology. The ORNL DAAC archives and serves data from NASA-funded field campaigns (such as BOREAS, FIFE, and LBA), regional and global data sets relevant to biogeochemical cycles, land validation studies for remote sensing, and source code for some terrestrial ecology models. Users of the ORNL DAAC include field ecologists, remote sensing scientists, modelers at various scales, synthesis science groups, a range of educational users (particularly baccalaureate and graduate instruction), and decision support analysts. It is clear that the wide range of users served by the ORNL DAAC have differing needs and differing capabilities for accessing and using data. It is also not possible for the ORNL DAAC, or the other data centers in EOSDIS, to develop all of the tools and interfaces needed to support even most of the potential uses of the data directly. As is typical of information technology supporting a research enterprise, user needs will continue to evolve rapidly over time, and users themselves cannot predict future needs, as those needs depend on the results of current investigation. The ORNL DAAC is addressing these needs through targeted implementation of web services and tools which can be consumed by other applications, so that a modeler can retrieve data in netCDF format with the Climate and Forecast (CF) convention while a field ecologist can retrieve subsets of that same data in a comma-separated value format suitable for use in Excel or R. Tools such as our MODIS Subsetting capability, the Spatial Data Access Tool (SDAT; based on OGC web services), and OPeNDAP-compliant servers such as THREDDS particularly enable such diverse means of access. We also seek interoperability of metadata, recognizing that terrestrial ecology is a field with a very large number of relevant data repositories. ORNL DAAC metadata is published to several metadata repositories using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to increase the chances that users can find data holdings relevant to their particular scientific problem. ORNL also seeks to leverage technology across these various data projects and encourage standardization of processes and technical architecture. This standardization is behind current efforts involving the use of Drupal and Fedora Commons. This poster describes the current and planned approaches that the ORNL DAAC is taking to enable cost-effective interoperability among data centers, both across the NASA EOSDIS data centers and across the international spectrum of terrestrial-ecology-related data centers. The poster will highlight the standards that we are currently using across data formats, metadata formats, and data protocols. References: [1] Devarakonda, R., et al. Mercury: reusable metadata management, data discovery and access system. Earth Science Informatics (2010), 3(1): 87-94. [2] Devarakonda, R., et al. Data sharing and retrieval using OAI-PMH. Earth Science Informatics (2011), 4(1): 1-5.
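
    The two delivery paths mentioned above (CF-convention netCDF for modelers, comma-separated values for field ecologists) can be produced from the same subset, as in the Python sketch below. The OPeNDAP URL and variable name are placeholders, not actual ORNL DAAC endpoints.

      import xarray as xr

      # Hypothetical OPeNDAP endpoint; values are fetched lazily on access
      url = "https://example.org/thredds/dodsC/daac/sample_product.nc"
      ds = xr.open_dataset(url)

      # Spatial subset of an assumed variable (slice order must match the
      # direction of the latitude coordinate)
      subset = ds["gpp"].sel(lat=slice(36.0, 35.0), lon=slice(-85.0, -84.0))

      # Modeler path: a CF-convention netCDF file
      subset.to_netcdf("gpp_subset.nc")

      # Field-ecologist path: the same values as CSV, usable in Excel or R
      subset.to_dataframe().to_csv("gpp_subset.csv")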

  15. Analysis of Extreme Star Formation Environments in the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Nayak, Omnarayani

    2018-01-01

    My thesis is on three extreme star-forming environments in the Large Magellanic Cloud: 30 Doradus, N159, and N79. These three regions are at different evolutionary stages of forming stars. N79 is at a very young stage, just starting its star formation activity. N159 is currently actively forming several massive YSOs. And 30 Doradus has already passed its peak star formation; several protostars are no longer shrouded by gas and dust and are starting to become visible at optical wavelengths. I analyze the CO molecular gas clouds with ALMA in 30 Doradus, N159, and N79, and identify all massive YSOs within the ALMA footprint of all three regions. My thesis relates the star formation activity in 30 Doradus, N159, and N79 to the high-density gas in which these protostars form. I find that not all massive young stellar objects are associated with CO gas, that higher-mass clumps tend to form higher-mass stars, and that lower-mass clumps tend not to be gravitationally bound even though the larger clouds are. I use ancillary SOFIA data and Magellan FIRE data to place constraints on the outflow rates from the massive protostars, constrain the temperature of the gas, determine the spectral types of the young stellar objects, and estimate the extinction. Looking at the interplay between dense molecular gas and the newly forming stars in a stellar nursery will shed light on how these stars formed: filamentary collision, monolithic collapse, or competitive accretion. The Large Magellanic Cloud has been the subject of star formation studies for decades due to its proximity to the Milky Way (50 kpc), a nearly face-on orientation, and a low metallicity (0.5 solar) similar to that of galaxies at the peak of star formation in the universe (z ~ 2). Thus, my thesis probes the chemical and physical conditions necessary for massive star formation in an environment more typical of the peak of star formation in the universe.

  16. Formation of a new crystalline form of anhydrous β-maltose by ethanol-mediated crystal transformation.

    PubMed

    Verhoeven, Nicolas; Neoh, Tze Loon; Ohashi, Tetsuya; Furuta, Takeshi; Kurozumi, Sayaka; Yoshii, Hidefumi

    2012-04-01

    β-Maltose monohydrate was transformed into an anhydrous form by an ethanol-mediated method at several temperatures with agitation. A new stable anhydrous form of β-maltose (Mβ(s)) was obtained, as substantiated by X-ray diffraction patterns. Mβ(s) obtained by this method presented a fine porous structure, resulting in a greater specific surface area than those of β-maltose monohydrate and the anhydrous β-maltose obtained by vacuum drying (Mβ(h)). The crystal transformation presumably consisted of two steps: a dehydration reaction from the hydrous to the amorphous form and crystal formation from the amorphous form to the new anhydrous form. The kinetics of these reactions were determined by thermal analysis using Jander's equation and Arrhenius plots. The overall activation energies of the dehydration reaction and the formation of anhydrous maltose were evaluated to be 100 and 90 kJ/mol, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. [Mechanisms of immune deposit formation in glomerulonephritis].

    PubMed

    Bussolati, B; Camussi, G

    1996-03-01

    Recent experimental studies allowed the identification of several mechanisms of immune deposit formation, which are able to reproduce the morphological and clinical pattern of human glomerulonephritis. Moreover, it was shown that most of the lesions considered, in the past, as due to circulating immune complexes (IC), are instead caused by the "in situ" formation of IC. As a result of these studies, the following schematic classification was proposed: 1) immune deposits formed by glomerular localization of IC primarily formed in the circulation; 2) immune deposits formed "in situ" by reaction of circulating antibodies with fixed structural antigens; 3) immune deposits formed "in situ" by antibodies reactive with movable structural antigens; 4) immune deposits formed "in situ" by antibodies reactive with sequestered antigens leaking out of tissues; 5) IC formed "in situ" by antibodies reactive with exogenous or non-glomerular endogenous antigens planted in the glomeruli; 6) ANCA-associated glomerular disease.

  18. Not all stars form in clusters - measuring the kinematics of OB associations with Gaia

    NASA Astrophysics Data System (ADS)

    Ward, Jacob L.; Kruijssen, J. M. Diederik

    2018-04-01

    It is often stated that star clusters are the fundamental units of star formation and that most (if not all) stars form in dense stellar clusters. In this monolithic formation scenario, low-density OB associations are formed from the expansion of gravitationally bound clusters following gas expulsion due to stellar feedback. N-body simulations of this process show that OB associations formed this way retain signs of expansion and elevated radial anisotropy over tens of Myr. However, recent theoretical and observational studies suggest that star formation is a hierarchical process, following the fractal nature of natal molecular clouds and allowing the formation of large-scale associations in situ. We distinguish between these two scenarios by characterizing the kinematics of OB associations using the Tycho-Gaia Astrometric Solution catalogue. To this end, we quantify four key kinematic diagnostics: the number ratio of stars with positive radial velocities to those with negative radial velocities, the median radial velocity, the median radial velocity normalized by the tangential velocity, and the radial anisotropy parameter. Each quantity presents a useful diagnostic of whether the association was more compact in the past. We compare these diagnostics to models representing random motion and the expanding products of monolithic cluster formation. None of these diagnostics show evidence of expansion, either from a single cluster or multiple clusters, and the observed kinematics are better represented by a random velocity distribution. This result favours the hierarchical star formation model in which a minority of stars forms in bound clusters and large-scale, hierarchically structured associations are formed in situ.

  19. Oxidation and formation of deposit precursors in hydrocarbon fuels

    NASA Technical Reports Server (NTRS)

    Mayo, F. R.; Lan, B.; Cotts, D. B.; Buttrill, S. E., Jr.; St. John, G. A.

    1983-01-01

    The oxidation of two jet turbine fuels and some pure hydrocarbons was studied at 130 °C with and without the presence of small amounts of N-methyl pyrrole (NMP) or indene. The tendency to form solid-deposit precursors was studied by measuring soluble gum formation as well as dimer and trimer formation using field ionization mass spectrometry. Pure n-dodecane oxidized fastest and gave the smallest amount of precursors. An unstable fuel oil oxidized much more slowly but formed large amounts of precursors. Stable Jet A fuel oxidized slowest and gave few precursors. Indene either retarded or accelerated the oxidation of n-dodecane, depending on its concentration, but always caused more gum formation. The NMP greatly retarded n-dodecane oxidation but accelerated Jet A oxidation and greatly increased the latter's gum formation. In general, the additive reacted faster and formed most of the gum. Results are interpreted in terms of classical cooxidation theory. The effect of oxygen pressure on gum formation is also reported.

  20. Method for attenuating seismic shock from detonating explosive in an in situ oil shale retort

    DOEpatents

    Studebaker, Irving G.; Hefelfinger, Richard

    1980-01-01

    In situ oil shale retorts are formed in a formation containing oil shale by excavating at least one void in each retort site. Explosive is placed in a remaining portion of unfragmented formation within each retort site adjacent such a void, and the explosive is detonated in a single round, explosively expanding formation within the retort site toward the void and forming a fragmented permeable mass of formation particles containing oil shale in each retort. This produces a large explosion which generates seismic shock waves traveling outward from the blast site through the underground formation. Sensitive equipment that could be damaged by seismic shock traveling straight to it through unfragmented formation is shielded from such an explosion by placing the equipment in the shadow of a fragmented mass in an in situ retort formed prior to the explosion. The fragmented mass attenuates the velocity and magnitude of seismic shock waves traveling toward such sensitive equipment before the shock wave reaches the vicinity of the equipment.

  1. Effect of pores formation process and oxygen plasma treatment to hydroxyapatite formation on bioactive PEEK prepared by incorporation of precursor of apatite.

    PubMed

    Yabutsuka, Takeshi; Fukushima, Keito; Hiruta, Tomoko; Takai, Shigeomi; Yao, Takeshi

    2017-12-01

    When bioinert substrates with fine-sized pores are immersed in a simulated body fluid (SBF) and the pH value or the temperature is increased, fine particles of calcium phosphate, which the authors denote as 'precursor of apatite' (PrA), are formed in the pores. By this method, hydroxyapatite-forming ability can be imparted to various kinds of bioinert materials. In this study, the authors investigated methods of fabricating bioactive PEEK using the above-mentioned process. First, fine-sized pores were formed on the surface of the PEEK substrate by H2SO4 treatment. Next, to impart a hydrophilic property to the PEEK, the surfaces were treated with O2 plasma. Finally, PrA were formed in the pores by the above-mentioned process, denoted 'Alkaline SBF' treatment, and the bioactive PEEK was obtained. On immersion in SBF under physiological conditions, hydroxyapatite formation was induced over the whole surface of the substrate within 1 day. The formation of PrA directly contributed to the hydroxyapatite-forming ability. By applying the O2 plasma treatment, hydroxyapatite formation proceeded uniformly over the whole surface of the substrate. The H2SO4 treatment contributed to a considerable enhancement of the adhesive strength of the hydroxyapatite layer formed in SBF because of the increase in the surface area of the substrate. As a comparative study, a sandblasting method was applied as the pore-formation process instead of the H2SO4 treatment. Although hydroxyapatite formation also occurred in this case, the adhesion of the formed hydroxyapatite layer to the substrate was not sufficient even when the O2 plasma treatment was conducted. This result indicates that the fine-sized pores should be formed uniformly over the whole surface of the substrate to achieve high adhesive strength of the hydroxyapatite layer. Therefore, the H2SO4 treatment before the O2 plasma and 'Alkaline SBF' treatments is considered an important factor in achieving high adhesive strength of the hydroxyapatite layer on the PEEK substrate. This material is expected to be a candidate for next-generation implant materials with high bioactivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Complex organic molecules and star formation

    NASA Astrophysics Data System (ADS)

    Bacmann, A.; Faure, A.

    2014-12-01

    Star-forming regions are characterised by the presence of a wealth of chemical species. For the past two to three decades, ever more complex organic species have been detected in the hot cores of protostars. The evolution of these molecules in the course of the star-forming process is still uncertain, but it is likely that they are partially incorporated into protoplanetary disks and then into planetesimals and the small bodies of planetary systems. The complex organic molecules seen in star-forming regions are particularly interesting since they probably make up building blocks for prebiotic chemistry. Recently we showed that these species are also present in the cold gas of prestellar cores, which represent the very first stages of star formation. These detections challenge the models accepted until now to account for the presence of complex organic molecules in star-forming regions. In this article, we briefly review our current understanding of complex organic molecule formation in the early stages of star formation, in hot and cold cores alike, and present new results on the formation of their likely precursor radicals.

  3. The Influence of Prior Modes of Growth, Temperature, Medium, and Substrate Surface on Biofilm Formation by Antibiotic-Resistant Campylobacter jejuni.

    PubMed

    Teh, Amy Huei Teen; Lee, Sui Mae; Dykes, Gary A

    2016-12-01

    Campylobacter jejuni is one of the most common causes of bacterial gastrointestinal food-borne infection worldwide. It has been suggested that biofilm formation may play a role in the survival of these bacteria in the environment. In this study, the influence of prior modes of growth (planktonic or sessile), temperature (37 and 42 °C), and nutrient conditions (nutrient broth and Mueller-Hinton broth) on biofilm formation by eight C. jejuni strains with different antibiotic resistance profiles was examined. The ability of these strains to form biofilm on different abiotic surfaces (stainless steel, glass, and polystyrene), as well as factors potentially associated with biofilm formation (bacterial surface hydrophobicity, auto-aggregation, and initial attachment), was also determined. The results showed that cells grown as sessile culture generally have a greater ability to form biofilm (P < 0.05) than their planktonic counterparts. Biofilm formation was also greater (P < 0.05) in lower-nutrient media, while growth at different temperatures affected biofilm formation in a strain-dependent manner. The strains were able to attach and form biofilms on different abiotic surfaces, but none of them demonstrated strong, complex, or structured biofilm formation. There were no clear trends between bacterial surface hydrophobicity, auto-aggregation, attachment, and biofilm formation by the strains. These findings suggest that environmental factors do affect biofilm formation by C. jejuni, and that these bacteria are more likely to persist in the environment in the form of mixed-species rather than monospecies biofilms.

  4. Effects of a meaningful, a discriminative, and a meaningless stimulus on equivalence class formation.

    PubMed

    Fields, Lanny; Arntzen, Erik; Nartey, Richard K; Eilifsen, Christoffer

    2012-03-01

    Thirty college students attempted to form three 3-node 5-member equivalence classes under the simultaneous protocol. After concurrent training of AB, BC, CD, and DE relations, all probes used to assess the emergence of symmetrical, transitive, and equivalence relations were presented for two test blocks. When the A-E stimuli were all abstract shapes, none of 10 participants formed classes. When the A, B, D, and E stimuli were abstract shapes and the C stimuli were meaningful pictures, 8 of 10 participants formed classes. This high yield may reflect the expansion of existing classes that consist of the associates of the meaningful stimuli, rather than the formation of the ABCDE classes, per se. When the A-E stimuli were abstract shapes and the C stimuli became S(D)s prior to class formation, 5 out of 10 participants formed classes. Thus, the discriminative functions served by the meaningful stimuli can account for some of the enhancement of class formation produced by the inclusion of a meaningful stimulus as a class member. A sorting task, which provided a secondary measure of class formation, indicated the formation of all three classes when the emergent relations probes indicated the same outcome. In contrast, the sorting test indicated "partial" class formation when the emergent relations test indicated no class formation. Finally, the effects of nodal distance on the relatedness of stimuli in the equivalence classes were not influenced by the functions served by the C stimuli in the equivalence classes.

  5. Effects of a Meaningful, a Discriminative, and a Meaningless Stimulus on Equivalence Class Formation

    PubMed Central

    Fields, Lanny; Arntzen, Erik; Nartey, Richard K; Eilifsen, Christoffer

    2012-01-01

    Thirty college students attempted to form three 3-node 5-member equivalence classes under the simultaneous protocol. After concurrent training of AB, BC, CD, and DE relations, all probes used to assess the emergence of symmetrical, transitive, and equivalence relations were presented for two test blocks. When the A–E stimuli were all abstract shapes, none of 10 participants formed classes. When the A, B, D, and E stimuli were abstract shapes and the C stimuli were meaningful pictures, 8 of 10 participants formed classes. This high yield may reflect the expansion of existing classes that consist of the associates of the meaningful stimuli, rather than the formation of the ABCDE classes, per se. When the A–E stimuli were abstract shapes and the C stimuli became SDs prior to class formation, 5 out of 10 participants formed classes. Thus, the discriminative functions served by the meaningful stimuli can account for some of the enhancement of class formation produced by the inclusion of a meaningful stimulus as a class member. A sorting task, which provided a secondary measure of class formation, indicated the formation of all three classes when the emergent relations probes indicated the same outcome. In contrast, the sorting test indicated “partial” class formation when the emergent relations test indicated no class formation. Finally, the effects of nodal distance on the relatedness of stimuli in the equivalence classes were not influenced by the functions served by the C stimuli in the equivalence classes. PMID:22389524

  6. The International Satellite Cloud Climatology Project H-Series climate data record product

    NASA Astrophysics Data System (ADS)

    Young, Alisa H.; Knapp, Kenneth R.; Inamdar, Anand; Hankins, William; Rossow, William B.

    2018-03-01

    This paper describes the new global long-term International Satellite Cloud Climatology Project (ISCCP) H-series climate data record (CDR). The H-series data contain a suite of level 2 and 3 products for monitoring the distribution and variation of cloud and surface properties to better understand the effects of clouds on climate, the radiation budget, and the global hydrologic cycle. This product is currently available for public use and is derived from both geostationary and polar-orbiting satellite imaging radiometers with common visible and infrared (IR) channels. The H-series data currently span July 1983 to December 2009 with plans for continued production to extend the record to the present with regular updates. The H-series data are the longest combined geostationary and polar orbiter satellite-based CDR of cloud properties. Access to the data is provided in network common data form (netCDF) and archived by NOAA's National Centers for Environmental Information (NCEI) under the satellite Climate Data Record Program (https://doi.org/10.7289/V5QZ281S). The basic characteristics, history, and evolution of the dataset are presented herein with particular emphasis on and discussion of product changes between the H-series and the widely used predecessor D-series product which also spans from July 1983 through December 2009. Key refinements included in the ISCCP H-series CDR are based on improved quality control measures, modified ancillary inputs, higher spatial resolution input and output products, calibration refinements, and updated documentation and metadata to bring the H-series product into compliance with existing standards for climate data records.
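
    Because the product is distributed as netCDF, a record like this one can be opened with any netCDF-aware tool and inspected through its self-describing metadata. The short Python sketch below assumes a locally downloaded H-series granule; the file name and the variable name are illustrative, not the official NCEI naming.

      import xarray as xr

      # Placeholder name for a downloaded ISCCP H-series granule
      ds = xr.open_dataset("isccp_hseries_granule.nc")

      # List the available cloud and surface property variables
      print(ds.data_vars)

      # Pick one variable (assumed name) and read its units and global mean
      cloud_amount = ds["cldamt"]
      print(cloud_amount.attrs.get("units"))
      print(float(cloud_amount.mean()))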

  7. The catalytic effect of L- and D-histidine on alanine and lysine peptide formation.

    PubMed

    Fitz, Daniel; Jakschitz, Thomas; Rode, Bernd M

    2008-12-01

    A starting phase of chemical evolution on the ancient Earth around 4 billion years ago was the formation of amino acids and their combination into peptides and proteins. The salt-induced peptide formation (SIPF) reaction has been shown to be suitable for this condensation reaction under moderate and plausible primitive Earth conditions, forming short peptides from amino acids in aqueous solution containing sodium chloride and Cu(II) ions. In this paper we report results on the formation of dialanine and dilysine from their monomers in this reaction. The catalytic influence of L- and D-histidine dramatically increases dialanine yields when starting from lower alanine concentrations, and dilysine formation is also markedly boosted by these catalysts. Attention is paid to measurable preferences for one enantiomeric form of alanine and lysine in the SIPF reaction. Alanine, especially, shows stereospecific behaviour, mostly in favour of the L-form.

  8. Silicide formation process of Pt added Ni at low temperature: Control of NiSi2 formation

    NASA Astrophysics Data System (ADS)

    Ikarashi, Nobuyuki; Masuzaki, Koji

    2011-03-01

    Transmission electron microscopy (TEM) and ab initio calculations revealed that the Ni-Si reaction around 300 °C is significantly changed by adding Pt to Ni. TEM analysis clarified that NiSi2 was formed in a reaction between a Ni thin film (~1 nm) and a Si substrate, while NiSi was formed when Pt was added to the Ni film. We also found that the Ni-adamantane structure, which acts as a precursor for NiSi2 formation around the reaction temperature, was formed in the former reaction but was significantly suppressed in the latter. Theoretical calculations indicated that Pt addition increased the stress at the Ni-adamantane structure/Si-substrate interface. This increase in interface stress should raise the interface energy and suppress Ni-adamantane structure formation, leading to the suppression of NiSi2 formation.

  9. Control of cell fate by the formation of an architecturally complex bacterial community.

    PubMed

    Vlamakis, Hera; Aguilar, Claudio; Losick, Richard; Kolter, Roberto

    2008-04-01

    Bacteria form architecturally complex communities known as biofilms in which cells are held together by an extracellular matrix. Biofilms harbor multiple cell types, and it has been proposed that within biofilms individual cells follow different developmental pathways, resulting in heterogeneous populations. Here we demonstrate cellular differentiation within biofilms of the spore-forming bacterium Bacillus subtilis, and present evidence that formation of the biofilm governs differentiation. We show that motile, matrix-producing, and sporulating cells localize to distinct regions within the biofilm, and that the localization and percentage of each cell type is dynamic throughout development of the community. Importantly, mutants that do not produce extracellular matrix form unstructured biofilms that are deficient in sporulation. We propose that sporulation is a culminating feature of biofilm formation, and that spore formation is coupled to the formation of an architecturally complex community of cells.

  10. Control of cell fate by the formation of an architecturally complex bacterial community

    PubMed Central

    Vlamakis, Hera; Aguilar, Claudio; Losick, Richard; Kolter, Roberto

    2008-01-01

    Bacteria form architecturally complex communities known as biofilms in which cells are held together by an extracellular matrix. Biofilms harbor multiple cell types, and it has been proposed that within biofilms individual cells follow different developmental pathways, resulting in heterogeneous populations. Here we demonstrate cellular differentiation within biofilms of the spore-forming bacterium Bacillus subtilis, and present evidence that formation of the biofilm governs differentiation. We show that motile, matrix-producing, and sporulating cells localize to distinct regions within the biofilm, and that the localization and percentage of each cell type is dynamic throughout development of the community. Importantly, mutants that do not produce extracellular matrix form unstructured biofilms that are deficient in sporulation. We propose that sporulation is a culminating feature of biofilm formation, and that spore formation is coupled to the formation of an architecturally complex community of cells. PMID:18381896

  11. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...

  12. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...

  13. 37 CFR 1.824 - Form and format for nucleotide and/or amino acid sequence submissions in computer readable form.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... which the data were recorded on the computer readable form, the operating system used, a reference...) Operating System Compatibility: MS-DOS, MS-Windows, Unix or Macintosh; (3) Line Terminator: ASCII Carriage... in a self-extracting format that will decompress on one of the systems described in paragraph (b) of...

  14. 12 CFR Appendix H to Part 222 - Appendix H-Model Forms for Risk-Based Pricing and Credit Score Disclosure Exception Notices

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... appendix are models; their use is optional. 3. A person may change the forms by rearranging the format or... when rearranging the format of the model forms. a. Acceptable changes include, for example: i. Corrections or updates to telephone numbers, mailing addresses, or Web site addresses that may change over...

  15. 12 CFR 792.13 - Can I get the records in different forms or formats?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED... computer disk, if the record is readily reproducible by us in that form or format, but we will not provide...

  16. 12 CFR 792.13 - Can I get the records in different forms or formats?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED... computer disk, if the record is readily reproducible by us in that form or format, but we will not provide...

  17. 12 CFR 792.13 - Can I get the records in different forms or formats?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED... computer disk, if the record is readily reproducible by us in that form or format, but we will not provide...

  18. 12 CFR 792.13 - Can I get the records in different forms or formats?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED... computer disk, if the record is readily reproducible by us in that form or format, but we will not provide...

  19. 12 CFR 792.13 - Can I get the records in different forms or formats?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FREEDOM OF INFORMATION ACT AND PRIVACY ACT, AND BY SUBPOENA; SECURITY PROCEDURES FOR CLASSIFIED... computer disk, if the record is readily reproducible by us in that form or format, but we will not provide...

  20. High temperature methods for forming oxidizer fuel

    DOEpatents

    Bravo, Jose Luis [Houston, TX

    2011-01-11

    A method of treating a formation fluid includes providing formation fluid from a subsurface in situ heat treatment process. The formation fluid is separated to produce a liquid stream and a first gas stream. The first gas stream includes carbon dioxide, hydrogen sulfide, hydrocarbons, hydrogen or mixtures thereof. Molecular oxygen is separated from air to form a molecular oxygen stream comprising molecular oxygen. The first gas stream is combined with the molecular oxygen stream to form a combined stream comprising molecular oxygen and the first gas stream. The combined stream is provided to one or more downhole burners.

  1. Accretion of Planetesimals and the Formation of Rocky Planets

    NASA Astrophysics Data System (ADS)

    Chambers, John E.; O'Brien, David P.; Davis, Andrew M.

    2010-02-01

    Here we describe the formation of rocky planets and asteroids in the context of the planetesimal hypothesis. Small dust grains in protoplanetary disks readily stick together forming mm-to-cm-sized aggregates, many of which experience brief heating episodes causing melting. Growth to km-sized planetesimals might proceed via continued pairwise sticking, turbulent concentration, or gravitational instability of a thin particle layer. Gravitational interactions between planetesimals lead to rapid runaway and oligarchic growth forming lunar-to-Mars-sized protoplanets in 10^5 to 10^6 years. Giant impacts between protoplanets form Earth-mass planets in 10^7 to 10^8 years, and occasionally lead to the formation of large satellites. Protoplanets may migrate far from their formation locations due to tidal interactions with the surrounding disk. Radioactive decay and impact heating cause melting and differentiation of planetesimals and protoplanets, forming iron-rich cores and silicate mantles, and leading to some loss of volatiles. Dynamical perturbations from giant planets eject most planetesimals and protoplanets from regions near orbital resonances, leading to asteroid-belt formation. Some of this scattered material will collide with growing terrestrial planets, altering their composition as a result. Numerical simulations and radioisotope dating indicate that the terrestrial planets of the Solar System were essentially fully formed in 100-200 million years.

  2. An extremely young massive clump forming by gravitational collapse in a primordial galaxy.

    PubMed

    Zanella, A; Daddi, E; Le Floc'h, E; Bournaud, F; Gobat, R; Valentino, F; Strazzullo, V; Cibinel, A; Onodera, M; Perret, V; Renaud, F; Vignali, C

    2015-05-07

    When cosmic star formation history reaches a peak (at about redshift z ≈ 2), galaxies vigorously fed by cosmic reservoirs are dominated by gas and contain massive star-forming clumps, which are thought to form by violent gravitational instabilities in highly turbulent gas-rich disks. However, a clump formation event has not yet been observed, and it is debated whether clumps can survive energetic feedback from young stars, and afterwards migrate inwards to form galaxy bulges. Here we report the spatially resolved spectroscopy of a bright off-nuclear emission line region in a galaxy at z = 1.987. Although this region dominates star formation in the galaxy disk, its stellar continuum remains undetected in deep imaging, revealing an extremely young (less than ten million years old) massive clump, forming through the gravitational collapse of more than one billion solar masses of gas. Gas consumption in this young clump is more than tenfold faster than in the host galaxy, displaying high star-formation efficiency during this phase, in agreement with our hydrodynamic simulations. The frequency of older clumps with similar masses, coupled with our initial estimate of their formation rate (about 2.5 per billion years), supports long lifetimes (about 500 million years), favouring models in which clumps survive feedback and grow the bulges of present-day galaxies.

  3. Screening of biofilm formation by beneficial vaginal lactobacilli and influence of culture media components.

    PubMed

    Terraf, M C Leccese; Juárez Tomás, M S; Nader-Macías, M E F; Silva, C

    2012-12-01

    To assess the ability of vaginal lactobacilli to form biofilm under different culture conditions and to determine the relationship between their growth and the capability of biofilm formation by selected strains. Fifteen Lactobacillus strains from the human vagina were tested for biofilm formation by crystal violet staining. Only Lactobacillus rhamnosus Centro de Referencia para Lactobacilos Culture Collection (CRL) 1332, Lact. reuteri CRL 1324 and Lact. delbrueckii CRL 1510 were able to grow and form biofilm in culture media without Tween 80. However, Lact. gasseri CRL 1263 (a non-biofilm-forming strain) did not grow in these media. Scanning electron microscopy showed that Lact. rhamnosus CRL 1332 and Lact. reuteri CRL 1324 formed a highly structured biofilm, but only Lact. reuteri CRL 1324 showed a large amount of extracellular material in medium without Tween. Biofilm formation was significantly influenced by the strain, culture medium, inoculum concentration, microbial growth, and chemical nature of the support used for the assay. The results allow the selection of biofilm-forming vaginal Lactobacillus strains as well as of the conditions and factors that affect this phenomenon. © 2012 The Society for Applied Microbiology.

  4. Meteorological conditions during the formation of ice on aircraft

    NASA Technical Reports Server (NTRS)

    Samuels, L T

    1932-01-01

    These are the results of a number of records recently secured from autographic meteorological instruments mounted on airplanes at times when ice formed. Ice is found to collect on an airplane only when the airplane is in some form of visible moisture, such as cloud, fog, mist, rain, etc., and the air temperature is within certain critical limits. Described here are the characteristics of clear ice and rime ice and the specific types of hazards they present to airplanes and lighter-than-air vehicles. The weather records are classified according to the two general types of formation (clear ice and rime), together with the respective temperatures, relative humidities, clouds, and elevations above ground at which formations occurred. This classification includes 108 cases where rime formed, 43 cases in which clear ice formed, and 4 cases where both rime and clear ice formed during the same flight. It is evident from these figures that rime predominated by a ratio of 2.5 to 1, while in only a few cases did both types of ice formation occur during the same flight.

  5. The Cording Phenotype of Mycobacterium tuberculosis Induces the Formation of Extracellular Traps in Human Macrophages.

    PubMed

    Kalsum, Sadaf; Braian, Clara; Koeken, Valerie A C M; Raffetseder, Johanna; Lindroth, Margaretha; van Crevel, Reinout; Lerm, Maria

    2017-01-01

    The causative agent of tuberculosis, Mycobacterium tuberculosis, shares several characteristics with organisms that produce biofilms during infections. One of these is the ability to form tight bundles also known as cords. However, little is known of the physiological relevance of the cording phenotype. In this study, we investigated whether cord-forming M. tuberculosis induces the formation of macrophage extracellular traps (METs) in human monocyte-derived macrophages. Macrophages have previously been shown to produce extracellular traps in response to various stimuli. We optimized bacterial culturing conditions that favored the formation of the cording phenotype, as verified by scanning electron microscopy. Microscopy analysis of METs formation during experimental infection of macrophages with M. tuberculosis revealed that cord-forming M. tuberculosis induced significantly more METs than the non-cording phenotype. Deletion of early secreted antigenic target-6, an important virulence factor of M. tuberculosis, abrogated the ability of the bacteria to induce METs. The release of extracellular DNA from host cells during infection may represent a defense mechanism against pathogens that are difficult to internalize, including cord-forming M. tuberculosis.

  6. Unfolding the laws of star formation: the density distribution of molecular clouds.

    PubMed

    Kainulainen, Jouni; Federrath, Christoph; Henning, Thomas

    2014-04-11

    The formation of stars shapes the structure and evolution of entire galaxies. The rate and efficiency of this process are affected substantially by the density structure of the individual molecular clouds in which stars form. The most fundamental measure of this structure is the probability density function of volume densities (ρ-PDF), which determines the star formation rates predicted with analytical models. This function has remained unconstrained by observations. We have developed an approach to quantify ρ-PDFs and establish their relation to star formation. The ρ-PDFs indicate a density threshold for star formation and allow us to quantify the star formation efficiency above it. The ρ-PDFs provide new constraints for star formation theories and correctly predict several key properties of the star-forming interstellar medium.

  7. Star formation across cosmic time and its influence on galactic dynamics

    NASA Astrophysics Data System (ADS)

    Freundlich, Jonathan

    2015-12-01

    Observations show that ten billion years ago, galaxies formed their stars at rates up to twenty times higher than now. As stars are formed from cold molecular gas, a high star formation rate implies a significant gas supply, and galaxies near the peak epoch of star formation are indeed much more gas-rich than nearby galaxies. Is the decline of the star formation rate mostly driven by the diminishing cold gas reservoir, or are the star formation processes also qualitatively different earlier in the history of the Universe? Ten billion years ago, young galaxies were clumpy and prone to violent gravitational instabilities, which may have contributed to their high star formation rates. Stars indeed form within giant, gravitationally bound molecular clouds. But the earliest phases of star formation are still poorly understood. Some scenarios suggest the importance of interstellar filamentary structures as a first step towards core and star formation. How would their filamentary geometry affect pre-stellar cores? Feedback mechanisms related to stellar evolution also play an important role in regulating star formation, for example through powerful stellar winds and supernova explosions, which expel some of the gas and can even disturb the dark matter distribution in which each galaxy is assumed to be embedded. This PhD work focuses on three perspectives: (i) star formation near the peak epoch of star formation as seen from observations at sub-galactic scales; (ii) the formation of pre-stellar cores within the filamentary structures of the interstellar medium; and (iii) the effect of feedback processes resulting from star formation and evolution on the dark matter distribution.

  8. 12 CFR Appendix H to Part 1022 - Appendix H-Model Forms for Risk-Based Pricing and Credit Score Disclosure Exception Notices

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... addresses that may change over time. ii. The addition of graphics or icons, such as the person's corporate... change the forms by rearranging the format or by making technical modifications to the language of the... required to conduct consumer testing when rearranging the format of the model forms. a. Acceptable changes...

  9. Chondrule-forming Shock Fronts in the Solar Nebula: A Possible Unified Scenario for Planet and Chondrite Formation

    NASA Astrophysics Data System (ADS)

    Boss, A. P.; Durisen, R. H.

    2005-03-01

    Chondrules are millimeter-sized spherules found throughout primitive chondritic meteorites. Flash heating by a shock front is the leading explanation of their formation. However, identifying a mechanism for creating shock fronts inside the solar nebula has been difficult. For a gaseous disk to be capable of forming Jupiter, it must have been marginally gravitationally unstable at and beyond Jupiter's orbit. We show that this instability can drive inward spiral shock fronts with shock speeds of up to ~10 km s^-1 at asteroidal orbits, sufficient to account for chondrule formation. The mixing and transport of solids in such a disk, combined with the planet-forming tendencies of gravitational instabilities, results in a unified scenario linking chondrite production with gas giant planet formation.

  10. Sedimentary architecture and depositional environment of Kudat Formation, Sabah, Malaysia

    NASA Astrophysics Data System (ADS)

    Ghaheri, Samira; Suhaili, Mohd; Sapari, Nasiman; Momeni, Mohammadsadegh

    2017-12-01

    The Kudat Formation originated from a deep-marine environment. Three lithofacies associations of a deep-marine turbidity channel were discovered in three members of the Kudat Formation in the Kudat Peninsula, Sabah, Malaysia. Turbidite and deep-marine architectural elements were described based on detailed sedimentological studies. Four architectural elements were identified on the basis of each facies association and its lithological properties and character: an inner external levee, formed by turbidity flows spilling out of the confinement of the channel belt; lobe sheets, formed during downslope debris flows associated with the levee; channel fill, in which sediments were deposited from high- to low-density currents with different sediment concentrations; and an overbank terrace, formed by rapid suspension sedimentation. The depositional environment of the Kudat Formation is a shelf to deep-marine fan.

  11. The Star Formation Rate Density of the Universe at z = 0.24 and 0.4 from Halpha

    NASA Astrophysics Data System (ADS)

    Pascual, S.

    2005-01-01

    Knowledge of both the global star formation history of the universe and the nature of individual star-forming galaxies at different look-back times is essential to our understanding of galaxy formation and evolution. Deep redshift surveys suggest star-formation activity increases by an order of magnitude from z = 0 to ~1. As a direct test of whether substantial evolution in star-formation activity has occurred, we need to measure the star formation rate (SFR) density and the properties of the corresponding star-forming galaxy populations at different redshifts, using similar techniques. The main goal of this work is to extend the Universidad Complutense de Madrid (UCM) survey of emission-line galaxies to higher redshifts. (continues)

  12. Method for laser drilling subterranean earth formations

    DOEpatents

    Shuck, Lowell Z.

    1976-08-31

    Laser drilling of subterranean earth formations is efficiently accomplished by directing a collimated laser beam into a bore hole in registry with the earth formation and transversely directing the laser beam into the formation with a suitable reflector. In accordance with the present invention, the bore hole is highly pressurized with a gas so that, as the laser beam penetrates the earth formation, the high-pressure gas forces the fluids resulting from the drilling operation into fissures and pores surrounding the laser-drilled bore, so as to inhibit deleterious occlusion of the laser beam. Also, the laser beam may be dynamically programmed with some time-dependent waveform, e.g., pulsed, to thermally shock the earth formation for forming or enlarging fluid-receiving fissures in the bore.

  13. Planetesimal formation during protoplanetary disk buildup

    NASA Astrophysics Data System (ADS)

    Drążkowska, J.; Dullemond, C. P.

    2018-06-01

    Context. Models of dust coagulation and subsequent planetesimal formation are usually computed on the backdrop of an already fully formed protoplanetary disk model. At the same time, observational studies suggest that planetesimal formation should start early, possibly even before the protoplanetary disk is fully formed. Aims: In this paper we investigate under which conditions planetesimals already form during the disk buildup stage, in which gas and dust fall onto the disk from its parent molecular cloud. Methods: We couple our earlier planetesimal formation model at the water snow line to a simple model of disk formation and evolution. Results: We find that under most conditions planetesimals only form after the buildup stage, when the disk becomes less massive and less hot. However, there are parameters for which planetesimals already form during the disk buildup. This occurs when the viscosity driving the disk evolution is intermediate (αv ≈ 10^-3-10^-2) while the turbulent mixing of the dust is reduced compared to that (αt ≲ 10^-4), and with the assumption that the water vapor is vertically well-mixed with the gas. Such an αt ≪ αv scenario could be expected for layered accretion, where the gas flow is mostly driven by the active surface layers, while the midplane layers, where most of the dust resides, are quiescent. Conclusions: In the standard picture where protoplanetary disk accretion is driven by global turbulence, we find that no planetesimals form during the disk buildup stage. Planetesimal formation during the buildup stage is only possible in scenarios in which pebbles reside in a quiescent midplane while the gas and water vapor are diffused at a higher rate.

  14. In vitro characterization of biofilms formed by Kingella kingae.

    PubMed

    Kaplan, J B; Sampathkumar, V; Bendaoud, M; Giannakakis, A K; Lally, E T; Balashova, N V

    2017-08-01

    The Gram-negative bacterium Kingella kingae is part of the normal oropharyngeal mucosal flora of children <4 years old. K. kingae can enter the submucosa and cause infections of the skeletal system in children, including septic arthritis and osteomyelitis. The organism is also associated with infective endocarditis in children and adults. Although biofilm formation has been coupled with pharyngeal colonization, osteoarticular infections, and infective endocarditis, no studies have investigated biofilm formation in K. kingae. In this study we measured biofilm formation by 79 K. kingae clinical isolates using a 96-well microtiter plate crystal violet binding assay. We found that 37 of 79 strains (47%) formed biofilms. All strains that formed biofilms produced corroding colonies on agar. Biofilm formation was inhibited by proteinase K and DNase I. DNase I also caused the detachment of pre-formed K. kingae biofilm colonies. A mutant strain carrying a deletion of the pilus gene cluster pilA1pilA2fimB did not produce corroding colonies on agar, autoaggregate in broth, or form biofilms. Biofilm-forming strains had higher levels of pilA1 expression. The extracellular components of biofilms contained 490 μg cm-2 of protein, 0.68 μg cm-2 of DNA, and 0.4 μg cm-2 of total carbohydrates. We concluded that biofilm formation is common among K. kingae clinical isolates, and that biofilm formation is dependent on the production of proteinaceous pili and extracellular DNA. Biofilm development may have relevance to the colonization, transmission, and pathogenesis of this bacterium. Extracellular DNA production by K. kingae may facilitate horizontal gene transfer within the oral microbial community. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Star Formation Activity Beyond the Outer Arm. I. WISE -selected Candidate Star-forming Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izumi, Natsuko; Yasui, Chikako; Saito, Masao

    The outer Galaxy beyond the Outer Arm provides a good opportunity to study star formation in an environment significantly different from that in the solar neighborhood. However, star-forming regions in the outer Galaxy have never been comprehensively studied or cataloged because of the difficulties in detecting them at such large distances. We studied 33 known young star-forming regions associated with 13 molecular clouds at R_G ≥ 13.5 kpc in the outer Galaxy with data from the Wide-field Infrared Survey Explorer (WISE) mid-infrared all-sky survey. From their color distribution, we developed a simple identification criterion for star-forming regions in the outer Galaxy based on WISE colors. We applied the criterion to all the WISE sources in the molecular clouds in the outer Galaxy at R_G ≥ 13.5 kpc detected with the Five College Radio Astronomy Observatory (FCRAO) 12CO survey of the outer Galaxy, whose survey region is 102.°49 ≤ l ≤ 141.°54, −3.°03 ≤ b ≤ 5.°41, and successfully identified 711 new candidate star-forming regions in 240 molecular clouds. The large number of samples enables us to perform a statistical study of star formation properties in the outer Galaxy for the first time. This study is crucial for investigating fundamental star formation properties, including the star formation rate, star formation efficiency, and initial mass function, in a primordial environment such as the early phase of Galaxy formation.
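
    As an illustration of this kind of colour-based selection, the sketch below applies simple WISE colour cuts in Python; the numerical thresholds and magnitudes are placeholders for illustration, not the criterion derived in the paper.

        import numpy as np

        # W1 (3.4 um), W2 (4.6 um), W3 (12 um) magnitudes for a few sources (made up)
        w1 = np.array([9.8, 12.1, 11.0, 10.5])
        w2 = np.array([9.1, 12.0, 10.2, 10.4])
        w3 = np.array([5.3, 11.2, 6.0, 9.8])

        # placeholder red-colour cuts; embedded star-forming regions are red in W2-W3
        candidate = (w2 - w3 > 2.0) & (w1 - w2 > 0.3)
        print("candidate indices:", np.flatnonzero(candidate))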

  16. 17 CFR 232.201 - Temporary hardship exemption.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... filing, under cover of Form TH (§§ 239.65, 249.447, 269.10 and 274.404 of this chapter), in paper format... paper format exhibit and a Form TH (§§ 239.65, 249.447, 259.604, 269.10, and 274.404 of this chapter...

  17. The role of Proteus mirabilis cell wall features in biofilm formation.

    PubMed

    Czerwonka, Grzegorz; Guzy, Anna; Kałuża, Klaudia; Grosicka, Michalina; Dańczuk, Magdalena; Lechowicz, Łukasz; Gmiter, Dawid; Kowalczyk, Paweł; Kaca, Wiesław

    2016-11-01

    Biofilms formed by Proteus mirabilis strains are a serious medical problem, especially in the case of urinary tract infections. Early stages of biofilm formation, such as reversible and irreversible adhesion, are essential for bacteria to form biofilm and avoid eradication by antibiotic therapy. Adhesion to solid surfaces is a complex process in which numerous factors play a role; hydrophobic and electrostatic interactions with the solid surface appear to be substantial. The cell surface hydrophobicity and electrokinetic potential of bacterial cells depend on their surface composition and structure, in which lipopolysaccharide (LPS) is the prevailing component in Gram-negative bacteria. Our studies focused on clinical and laboratory P. mirabilis strains, the laboratory strains having defined LPS structures. Adherence and biofilm formation tests revealed significant differences between strains in the early, adhesion-dominated stages of biofilm formation. The amount of biofilm formed was quantified by the absorbance of bound crystal violet. More biofilm was formed by strains with more negative zeta potential values; in contrast, high cell surface hydrophobicity correlated with low biofilm amounts.

  18. Release and Formation of Oxidation-Related Aldehydes during Wine Oxidation.

    PubMed

    Bueno, Mónica; Carrascón, Vanesa; Ferreira, Vicente

    2016-01-27

    Twenty-four Spanish wines were subjected to five consecutive cycles of air saturation at 25 °C. Free and bound forms of carbonyls were measured in the initial samples and after each saturation. Nonoxidized commercial wines contain substantial, sensorially relevant amounts of oxidation-related carbonyls in the form of odorless bound species. Models relating total aldehyde contents to wine chemical composition suggest that fermentation can be a major origin of Strecker aldehydes: methional, phenylacetaldehyde, isobutyraldehyde, 2-methylbutanal, and isovaleraldehyde. Bound forms are further cleaved, releasing free aldehydes during the first steps of wine oxidation, as a consequence of equilibrium shifts caused by the depletion of SO2. At low levels of free SO2, de novo formation and aldehyde degradation are both observed. The relative importance of these phenomena depends on both the aldehyde and the wine. Models relating aldehyde formation rates to wine chemical composition suggest that amino acids are in most cases the most important precursors for de novo formation.

  19. A Pro-active Real-time Forecasting and Decision Support System for Daily Management of Marine Works

    NASA Astrophysics Data System (ADS)

    Bollen, Mark; Leyssen, Gert; Smets, Steven; De Wachter, Tom

    2016-04-01

    Marine works involving turbidity-generating activities (e.g., dredging, dredge spoil placement) can generate environmental stress in and around a project area in the form of sediment plumes causing light reduction and sedimentation. If these works are situated near sensitive habitats such as sea-grass beds or coral reefs, or near sensitive human activities such as aquaculture farms or water intakes, or if contaminants are present in the water or soil, environmental scrutiny is advised. Environmental regulations can impose limitations on these activities in the form of turbidity thresholds, spill budgets or contaminant levels. Breaching environmental regulations can result in increased monitoring, adaptation of the works planning and production rates, and ultimately in a (temporary) stop of activities, all of which entail time and cost impacts for a contractor and/or client. Sediment plume behaviour is governed by the dredging process, soil properties and ambient conditions (currents, water depth) and can be modelled. Usually this is done during the preparatory EIA phase of a project, to estimate environmental impact based on climatic scenarios. An operational forecasting tool was developed to adapt marine work schedules to real-time circumstances and thus avoid exceedance of critical threshold levels at sensitive areas. The forecasting system is based on a Python workflow manager with a MySQL database and a Django front-end web tool for user interaction and visualisation of the model results. The core consists of a numerical hydrodynamic model with a sediment transport module (Mike21 from DHI). This model is driven by space- and time-varying wind fields and wave boundary conditions, and by turbidity inputs (suspended sediment source terms) based on marine works production rates and soil properties. The resulting threshold analysis allows the operator to identify potential impact at the sensitive areas and instigate an adaptation of the marine work schedule if needed. In order to use this toolbox in real-time situations and facilitate forecasting of impacts of planned dredge works, the following operational online functionalities are implemented:
    • Automated fetch and preparation of the input data, including 7-day forecast wind and wave fields, real-time measurements, and user-defined turbidity inputs based on scheduled marine works.
    • Automated forecasts and user-configurable scenarios running at the same time in parallel.
    • Export and conversion of the model results, time series and maps, into a standardized format (NetCDF).
    • Automatic analysis and processing of model results, including the calculation of indicator turbidity values and the exceedance analysis of threshold levels at the different sensitive areas; data assimilation with the real-time on-site turbidity measurements is implemented in this threshold analysis.
    • Pre-programmed generation of animated sediment plumes, specific charts and PDF reports to allow rapid interpretation of the model results by the operators, facilitating decision making in operational planning.
    The performed marine works, resulting from the marine work schedule proposed by the forecasting system, are evaluated by a threshold analysis on the validated turbidity measurements at the sensitive sites. This feedback loop allows the system to be checked in order to evaluate forecast and model uncertainties.
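
    The exceedance analysis described above reduces, per sensitive site, to a comparison of a modelled time series against a threshold. A minimal Python sketch follows; the file name, variable name (ssc), grid indices and threshold value are all assumptions for illustration, not part of the system described.

        import numpy as np
        from netCDF4 import Dataset, num2date

        THRESHOLD = 35.0  # mg/l above ambient; illustrative value, not a regulatory limit

        with Dataset("forecast_run.nc") as nc:       # hypothetical model export
            ssc = nc.variables["ssc"][:]             # suspended sediment conc., (time, y, x)
            t_var = nc.variables["time"]
            times = num2date(t_var[:], t_var.units)

        iy, ix = 120, 85                             # grid indices of a sensitive site
        series = ssc[:, iy, ix]

        exceed = series > THRESHOLD
        print(f"{exceed.sum()} of {len(series)} forecast steps exceed the threshold")
        for t in np.flatnonzero(exceed):
            print(" ", times[t].strftime("%Y-%m-%d %H:%M"), f"SSC = {series[t]:.1f} mg/l")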

  20. Globular cluster formation with multiple stellar populations: self-enrichment in fractal massive molecular clouds

    NASA Astrophysics Data System (ADS)

    Bekki, Kenji

    2017-08-01

    Internal chemical abundance spreads are one of the fundamental properties of globular clusters (GCs) in the Galaxy. In order to understand the origin of such abundance spreads, we numerically investigate GC formation from massive molecular clouds (MCs) with fractal structures using our new hydrodynamical simulations with star formation and feedback effects of core-collapse supernovae (SNe) and asymptotic giant branch (AGB) stars. We particularly investigate star formation from gas chemically contaminated by SNe and AGB stars ('self-enrichment') in forming GCs within MCs with different initial conditions and environments. The principal results are as follows. GCs with multiple generations of stars can be formed from merging of hierarchical star cluster complexes that are developed from high-density regions of fractal MCs. Feedback effects of SNe and AGB stars can control the formation efficiencies of stars formed from the original gas of MCs and from gas ejected from AGB stars. The simulated GCs have strong radial gradients of helium abundances within the central 3 pc. The original MC masses need to be as large as 10^7 M⊙ for a canonical initial stellar mass function (IMF) so that the final masses of stars formed from AGB ejecta can be ~10^5 M⊙. Since star formation from AGB ejecta is rather prolonged (~10^8 yr), their formation can be strongly suppressed by SNe of the stars themselves. This result implies that the so-called mass budget problem is much more severe than ever thought in the self-enrichment scenario of GC formation and thus that the IMF for the second generation of stars should be 'top-light'.

  1. WAVE2- and microtubule-dependent formation of long protrusions and invasion of cancer cells cultured on three-dimensional extracellular matrices.

    PubMed

    Kikuchi, Keiji; Takahashi, Kazuhide

    2008-11-01

    Invadopodia, small protrusions formed at ventral membranes of several types of invasive cancer cells upon contact with the extracellular matrix (ECM), are implicated in cell invasion; however, the relationship between invadopodia formation and cell invasion through the ECM is still unknown. To correlate the formation of membrane protrusions and cell invasion, a three-dimensional (3-D) gel culture system with native collagen type-I matrix overlaid with a thin basement membrane equivalent (Matrigel) was made. Human breast cancer cell line MDA-MB-231 formed long protrusions in addition to small protrusions reminiscent of invadopodia and migrated into the collagen layer. Comparative analyses with other cancer cell lines indicate that cellular ability to form long protrusions, but not small protrusions or invadopodia, correlates with cellular invasiveness in the 3-D culture. Some of the long protrusions in MDA-MB-231 cells appeared to extend from the adherence membrane, implying that they are derived from small protrusions. The formation of long protrusions and invasion, as well as the formation of invadopodia, required WAVE2 in MDA-MB-231 cells. Accumulation of tubulin was observed in long protrusions but not in invadopodia. Correspondingly, a microtubule-stabilizing agent, paclitaxel, suppressed the formation of long protrusions and invasion, but not the formation of invadopodia, in MDA-MB-231 cells. These results suggest that long protrusions formed in a WAVE2- and microtubule-dependent manner may identify the cells at the later stage of invasion, possibly after the formation of invadopodia in the 3-D cultures.

  2. The origin of discrete multiple stellar populations in globular clusters

    NASA Astrophysics Data System (ADS)

    Bekki, K.; Jeřábková, T.; Kroupa, P.

    2017-10-01

    Recent observations have revealed that at least several old globular clusters (GCs) in the Galaxy have discrete distributions of stars along the Mg-Al anticorrelation. In order to discuss this recent observation, we construct a new one-zone GC formation model in which the maximum stellar mass (mmax) in the initial mass function of stars in a forming GC depends on the star formation rate, as deduced from independent observations. We investigate the star formation histories of forming GCs. The principal results are as follows. About 30 Myr after the formation of the first generation (1G) of stars within a particular GC, new stars can be formed from ejecta from asymptotic giant branch (AGB) stars of 1G. However, the formation of this second generation (2G) of stars can last only ~10-20 Myr, because the most massive SNe of 2G expel all of the remaining gas. The third generation (3G) of stars is then formed from AGB ejecta ≈30 Myr after the truncation of 2G star formation. This cycle of star formation followed by its truncation by SNe can continue until all AGB ejecta are removed from the GC by some physical process. Thus, it is inevitable that GCs have discrete multiple stellar populations in the [Mg/Fe]-[Al/Fe] diagram. Our model predicts that low-mass GCs are unlikely to have discrete multiple stellar populations, and young massive clusters may not have massive OB stars owing to low mmax (<20-30 M⊙) during the secondary star formation.

  3. Mesoarchean black shale-iron sedimentary sequences in the Cleaverville Formation, Pilbara, Australia: preliminary drilling results from DXCL2

    NASA Astrophysics Data System (ADS)

    Kiyokawa, S.; Ito, T.; Ikehara, M.; Yamaguchi, K. E.; Onoue, T.; Horie, K.; Sakamoto, R.; Teraji, S.; Aihara, Y.

    2012-12-01

    The 3.2-3.1 Ga Dixon Island and Cleaverville Formations are a well-preserved hydrothermal oceanic sequence formed in an oceanic island arc setting (Kiyokawa et al., 2002, 2006, 2012). The Dixon Island (3195±15 Ma) and Cleaverville (3108±13 Ma) Formations form volcano-sedimentary sequences with hydrothermal chert, black shale and banded iron formation (BIF) toward the top. The lithology was clarified by scientific drilling: DXCL1 in 2007 and DXCL2 in 2011. Four holes were drilled at coastal sites: DX (100 m) in the Dixon Island Formation and, from stratigraphic bottom to top, CL2 (40 m), CL1 (60 m) and CL3 (200 m) in the Cleaverville Formation. These sequences form coarsening- and thickening-upward black shale-BIF successions. The Dixon Island Formation consists of komatiite-rhyolite sequences cut by many hydrothermal veins and overlain by very finely laminated cherty rocks. The Cleaverville Formation contains black shale, fragment-bearing pyroclastic beds, white chert, greenish shale and BIF. In particular, the CL3 core, which was drilled through the iron formation, shows siderite-chert beds overlying black shale and preceding a laminated magnetite bed. The magnetite bed is very thinly laminated, with siderite laminae, and is covered by black shale beds again. A new U-Pb SHRIMP age for pyroclastic material in the black shale is 3109 Ma. Sedimentation rates of 2-8 cm per 1000 years are estimated for these sequences. Our preliminary results show that the siderite and chert layers formed before the magnetite iron sedimentation. The lower and upper sequences of organic-carbon-rich black shale have similar organic carbon contents and δ13C values (around -30‰). We therefore infer that this Archean iron formation, in particular the Cleaverville iron formation, was strongly controlled by hydrothermal input, with iron sedimentation beginning under anoxic oceanic conditions.

  4. Fluid outlet at the bottom of an in situ oil shale retort

    DOEpatents

    Hutchins, Ned M.

    1984-01-01

    Formation is excavated from within the boundaries of a retort site in formation containing oil shale for forming at least one retort level void extending horizontally across the retort site, leaving at least one remaining zone of unfragmented formation within the retort site. A production level drift is excavated below the retort level void, leaving a lower zone of unfragmented formation between the retort level void and the production level drift. A plurality of raises are formed between the production level drift and the retort level void for providing product withdrawal passages distributed generally uniformly across the horizontal cross section of the retort level void. The product withdrawal passages are backfilled with a permeable mass of particles. Explosive placed within the remaining zone of unfragmented formation above the retort level void is detonated for explosively expanding formation within the retort site toward at least the retort level void for forming a fragmented permeable mass of formation particles containing oil shale within the boundaries of the retort site. During retorting operations products of retorting are conducted from the fragmented mass in the retort through the product withdrawal passages to the production level void. The products are withdrawn from the production level void.

  5. The rate and latency of star formation in dense, massive clumps in the Milky Way

    NASA Astrophysics Data System (ADS)

    Heyer, M.; Gutermuth, R.; Urquhart, J. S.; Csengeri, T.; Wienen, M.; Leurini, S.; Menten, K.; Wyrowski, F.

    2016-04-01

    Context. Newborn stars form within the localized, high density regions of molecular clouds. The sequence and rate at which stars form in dense clumps and the dependence on local and global environments are key factors in developing descriptions of stellar production in galaxies. Aims: We seek to observationally constrain the rate and latency of star formation in dense massive clumps that are distributed throughout the Galaxy and to compare these results to proposed prescriptions for stellar production. Methods: A sample of 24 μm-based Class I protostars is linked to dust clumps that are embedded within molecular clouds selected from the APEX Telescope Large Area Survey of the Galaxy. We determine the fraction of star-forming clumps, f∗, which imposes a constraint on the latency of star formation in units of a clump's lifetime. Protostellar masses are estimated from models of circumstellar environments of young stellar objects from which star formation rates are derived. Physical properties of the clumps are calculated from 870 μm dust continuum emission and NH3 line emission. Results: Linear correlations are identified between the star formation rate surface density, ΣSFR, and the quantities ΣH2/τff and ΣH2/τcross, suggesting that star formation is regulated at the local scales of molecular clouds. The measured fraction of star-forming clumps is 23%. Accounting for star formation within clumps that are excluded from our sample due to 24 μm saturation, this fraction can be as high as 31%, which is similar to previous results. Dense, massive clumps form primarily low mass (<1-2 M⊙) stars with emergent 24 μm fluxes below our sensitivity limit or are incapable of forming any stars for the initial 70% of their lifetimes. The low fraction of star-forming clumps in the Galactic center relative to those located in the disk of the Milky Way is verified. Full Tables 2-4 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/588/A29
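
    The free-fall time τff that normalizes ΣSFR in the correlations above follows directly from a clump's mean density, τff = sqrt(3π/(32Gρ)). A short Python sketch with illustrative clump values (not taken from the paper):

        import numpy as np

        G = 6.674e-8        # cgs, cm^3 g^-1 s^-2
        M_SUN = 1.989e33    # g
        PC = 3.086e18       # cm
        YR = 3.156e7        # s

        def free_fall_time(mass_msun, radius_pc):
            """tau_ff = sqrt(3*pi / (32*G*rho)) for a uniform sphere, in Myr."""
            rho = mass_msun * M_SUN / (4.0 / 3.0 * np.pi * (radius_pc * PC) ** 3)
            return np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / (1e6 * YR)

        # illustrative dense clump: 10^3 M_sun within 0.5 pc
        print(f"tau_ff = {free_fall_time(1e3, 0.5):.2f} Myr")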

  6. RTopo-2: A global high-resolution dataset of ice sheet topography, ice shelf cavity geometry and ocean bathymetry

    NASA Astrophysics Data System (ADS)

    Timmermann, Ralph; Schaffer, Janin

    2016-04-01

    The RTopo-1 data set of Antarctic ice sheet/shelf geometry and global ocean bathymetry has proven useful for modelling studies of ice-ocean interaction in the southern hemisphere and beyond. Following the spirit of this data set, we introduce a new product (RTopo-2) that contains consistent maps of global ocean bathymetry, upper and lower ice surface topographies for Greenland and Antarctica, and global surface height on a spherical grid, now with a resolution of 30 arc seconds. We used the General Bathymetric Chart of the Oceans (GEBCO_2014) as the backbone and added the International Bathymetric Chart of the Arctic Ocean version 3 (IBCAOv3) and the International Bathymetric Chart of the Southern Ocean (IBCSO) version 1. To achieve a good representation of the fjord and shelf bathymetry around the Greenland continent, we corrected data from earlier gridded products in the areas of Petermann Glacier, Hagen Bræ and Helheim Glacier, assuming that sub-ice and fjord bathymetries roughly follow plausible Last Glacial Maximum ice flow patterns. For the continental shelf off northeast Greenland and the floating ice tongue of Nioghalvfjerdsfjorden Glacier at about 79°N, we incorporated a high-resolution digital bathymetry model including all available multibeam survey data for the region. Radar data for ice surface and ice base topographies of the floating ice tongues of Nioghalvfjerdsfjorden Glacier and Zachariæ Isstrøm have been obtained from the data centers of the Technical University of Denmark (DTU), Operation IceBridge (NASA/NSF) and the Alfred Wegener Institute (AWI). For the Antarctic ice sheet/ice shelves, RTopo-2 largely relies on the Bedmap-2 product but applies corrections for the geometry of the Getz, Abbot and Fimbul ice shelf cavities. The data set is available in full and in regional subsets in NetCDF format from the PANGAEA database.
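
    Since RTopo-2 is distributed as NetCDF, regional subsets can be extracted with standard tools. A minimal Python sketch follows; the file and variable names are assumptions and should be checked against the actual PANGAEA holdings.

        import numpy as np
        from netCDF4 import Dataset

        # file/variable names are assumptions; verify against the PANGAEA data set
        with Dataset("RTopo-2_30sec_bedrock_topography.nc") as nc:
            lon = nc.variables["lon"][:]
            lat = nc.variables["lat"][:]
            # subset the Nioghalvfjerdsfjorden region (~79N, NE Greenland)
            ii = np.where((lon >= -25.0) & (lon <= -15.0))[0]
            jj = np.where((lat >= 78.5) & (lat <= 80.0))[0]
            bed = nc.variables["bedrock_topography"][jj.min():jj.max() + 1,
                                                     ii.min():ii.max() + 1]

        print(bed.shape, float(bed.min()), float(bed.max()))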

  7. Development of an Integrated Hydrologic Modeling System for Rainfall-Runoff Simulation

    NASA Astrophysics Data System (ADS)

    Lu, B.; Piasecki, M.

    2008-12-01

    This paper presents the development of an integrated hydrological model which involves functionalities of digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis. The proposed system is intended to work as a back end to the CUAHSI HIS cyberinfrastructure developments. As a first step in developing this system, a physics-based distributed hydrologic model, PIHM (Penn State Integrated Hydrologic Model), is wrapped into the OpenMI (Open Modeling Interface and Environment) environment so as to seamlessly interact with OpenMI-compliant meteorological models. The graphical user interface is being developed from the open GIS application MapWindow, which permits functionality expansion through the addition of plug-ins. Modules set up through the GUI workboard include those for retrieving meteorological data from existing databases or meteorological prediction models, obtaining geospatial data from the output of digital watershed processing, and importing initial and boundary conditions. They are connected to the OpenMI-compliant PIHM to simulate rainfall-runoff processes, and a further module automatically displays output after the simulation. Online databases are accessed through the WaterOneFlow web services, and the retrieved data are either stored in an observation database (OD) following the schema of the Observation Data Model (ODM), in the case of time-series support, or in a grid-based storage facility, which may be a format like netCDF or a grid-based database schema. Specific development steps include the creation of a bridge to overcome the interoperability issue between PIHM and the ODM, as well as the embedding of TauDEM (Terrain Analysis Using Digital Elevation Models) into the model. This module is responsible for deriving the watershed and stream network from digital elevation models. Visualizing and editing geospatial data is achieved through MapWinGIS, an ActiveX control developed by the MapWindow team. After application to a real watershed, the performance of the model can be tested by the post-event analysis module.
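
    As a sketch of the grid-based storage option mentioned above, the following Python fragment writes a retrieved forcing time series into a small netCDF file; the dimension and variable names are illustrative and are not the ODM or PIHM schema.

        import numpy as np
        from netCDF4 import Dataset

        hours = np.arange(48)                      # two days of hourly forcing
        precip = np.random.gamma(0.3, 2.0, 48)     # stand-in for web-service data

        with Dataset("forcing.nc", "w") as nc:
            nc.createDimension("time", None)       # unlimited, so later runs can append
            t = nc.createVariable("time", "f8", ("time",))
            t.units = "hours since 2008-01-01 00:00:00"
            p = nc.createVariable("precip", "f4", ("time",))
            p.units = "mm h-1"
            t[:] = hours
            p[:] = precip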

  8. EARLINET: potential operationality of a research network

    NASA Astrophysics Data System (ADS)

    Sicard, M.; D'Amico, G.; Comerón, A.; Mona, L.; Alados-Arboledas, L.; Amodeo, A.; Baars, H.; Baldasano, J. M.; Belegante, L.; Binietoglou, I.; Bravo-Aranda, J. A.; Fernández, A. J.; Fréville, P.; García-Vizcaíno, D.; Giunta, A.; Granados-Muñoz, M. J.; Guerrero-Rascado, J. L.; Hadjimitsis, D.; Haefele, A.; Hervo, M.; Iarlori, M.; Kokkalis, P.; Lange, D.; Mamouri, R. E.; Mattis, I.; Molero, F.; Montoux, N.; Muñoz, A.; Muñoz Porcar, C.; Navas-Guzmán, F.; Nicolae, D.; Nisantzi, A.; Papagiannopoulos, N.; Papayannis, A.; Pereira, S.; Preißler, J.; Pujadas, M.; Rizi, V.; Rocadenbosch, F.; Sellegri, K.; Simeonov, V.; Tsaknakis, G.; Wagner, F.; Pappalardo, G.

    2015-11-01

    In the framework of the ACTRIS (Aerosols, Clouds, and Trace Gases Research Infrastructure Network) summer 2012 measurement campaign (8 June-17 July 2012), EARLINET organized and performed a controlled feasibility exercise to demonstrate its potential to perform operational, coordinated measurements and deliver products in near-real time. Eleven lidar stations participated in the exercise, which started on 9 July 2012 at 06:00 UT and ended 72 h later on 12 July at 06:00 UT. For the first time, the single calculus chain (SCC) - the common calculus chain developed within EARLINET for the automatic evaluation of lidar data from raw signals up to the final products - was used. All stations sent measurements of 1 h duration in real time to the SCC server in a predefined NetCDF file format. The pre-processing of the data was performed in real time by the SCC, while the optical processing was performed in near-real time after the exercise ended. 98 and 79 % of the files sent to the SCC were successfully pre-processed and processed, respectively. These percentages are quite large considering that no cloud screening was performed on the lidar data. The paper draws present and future SCC users' attention not only to the most critical parameters of the SCC product configuration and their possible optimal values, but also to the limitations inherent in the raw data. The continuous use of SCC direct and derived products in heterogeneous conditions demonstrates two potential applications of the EARLINET infrastructure: the monitoring of a Saharan dust intrusion event and the evaluation of two dust transport models. The efforts made to define the measurement protocol and to properly configure the SCC pave the way for applying this protocol to specific applications such as the monitoring of special events, atmospheric modeling, climate research and calibration/validation activities of spaceborne observations.
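
    A station submission in a predefined NetCDF format can be imitated in a few lines of Python. The sketch below is a rough approximation: the dimensions, variable and attribute names are assumptions, and the real SCC input convention defines many more mandatory fields.

        import numpy as np
        from netCDF4 import Dataset

        signal = np.random.rand(60, 4000)       # 60 profiles x 4000 range bins (stand-in)

        with Dataset("station_20120709_0600.nc", "w") as nc:
            nc.createDimension("time", signal.shape[0])
            nc.createDimension("range", signal.shape[1])
            var = nc.createVariable("raw_signal", "f8", ("time", "range"))
            var[:] = signal
            nc.station_id = "gra"               # illustrative station code, not an SCC field
            nc.measurement_start = "2012-07-09T06:00:00Z"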

  9. Pathfinder Sea Surface Temperature Climate Data Record

    NASA Astrophysics Data System (ADS)

    Baker-Yeboah, S.; Saha, K.; Zhang, D.; Casey, K. S.

    2016-02-01

    Global sea surface temperature (SST) fields are important in understanding ocean and climate variability. The NOAA National Centers for Environmental Information (NCEI) develops and maintains a high resolution, long-term, climate data record (CDR) of global satellite SST. These SST values are generated at approximately 4 km resolution using Advanced Very High Resolution Radiometer (AVHRR) instruments aboard NOAA polar-orbiting satellites going back to 1981. The Pathfinder SST algorithm is based on the Non-Linear SST algorithm using the modernized NASA SeaWiFS Data Analysis System (SeaDAS). Coefficients for this SST product were generated using regression analyses with co-located in situ and satellite measurements. Previous versions of Pathfinder included level 3 collated (L3C) products. Pathfinder Version 5.3 includes level 2 pre-processed (L2P), level 3 uncollated (L3U), and L3C products. Notably, the data were processed in the cloud using Amazon Web Services and are made available through all of the modern web visualization and subset services provided by the THREDDS Data Server, the Live Access Server, and the OPeNDAP Hyrax Server. In this version of Pathfinder SST, anomalous hot-spots at land-water boundaries are better identified, and the dataset includes updated land masks and sea-ice data over the Antarctic ice shelves. All quality levels of SST values are generated, giving the user greater flexibility and the option to apply their own cloud-masking procedures. Additional improvements include consistent cloud tree tests for NOAA-07 and NOAA-19 with respect to the other sensors, improved SSTs in sun glint areas, and netCDF file format improvements to ensure consistency with the latest Group for High Resolution SST (GHRSST) requirements. This quality-controlled satellite SST field is a reference environmental data record utilized as a primary resource of SST for numerous regional and global marine efforts.
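
    Because all quality levels are retained, users apply their own mask at read time. A minimal Python sketch follows, assuming GHRSST-style variable names (sea_surface_temperature, quality_level) and a user-chosen cut-off; the granule name is a placeholder.

        import numpy as np
        from netCDF4 import Dataset

        with Dataset("pathfinder_v53_l3c_example.nc") as nc:      # hypothetical granule
            sst = nc.variables["sea_surface_temperature"][0]      # (lat, lon), in kelvin
            ql = nc.variables["quality_level"][0]

        sst_best = np.ma.masked_where(ql < 4, sst)                # keep quality 4-5 only
        print(f"kept {sst_best.count()} of {sst.size} pixels")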

  10. OOSTethys - Open Source Software for the Global Earth Observing Systems of Systems

    NASA Astrophysics Data System (ADS)

    Bridger, E.; Bermudez, L. E.; Maskey, M.; Rueda, C.; Babin, B. L.; Blair, R.

    2009-12-01

    An open source software project is much more than just picking the right license, hosting modular code and providing effective documentation. Success in advancing in an open, collaborative way requires that the process match the expected code functionality to developers' personal expertise and organizational needs, as well as having an enthusiastic and responsive core lead group. We will present the lessons learned from OOSTethys, a community of software developers and marine scientists who develop open source tools, in multiple languages, to integrate ocean observing systems into an Integrated Ocean Observing System (IOOS). OOSTethys' goal is to dramatically reduce the time it takes to install, adopt and update standards-compliant web services. OOSTethys has developed servers, clients and a registry. Open source Perl, Python, Java and ASP toolkits and reference implementations are helping the marine community publish near-real-time observation data in interoperable standard formats. In some cases, publishing an Open Geospatial Consortium (OGC) Sensor Observation Service (SOS) from NetCDF files, a database, or even CSV text files can take only minutes, depending on the skills of the developer. OOSTethys is also developing an OGC-standard registry, Catalog Service for the Web (CSW). This open source CSW registry was implemented to easily register and discover SOSs using ISO 19139 service metadata. A web interface layer over the CSW registry simplifies the registration process by harvesting metadata describing the observations and sensors from the "GetCapabilities" response of the SOS. OPENIOOS is the web client, developed in Perl, to visualize the sensors in the SOS services. While the number of OOSTethys software developers is small, currently about 10 around the world, the number of OOSTethys toolkit implementers is larger and growing, and the ease of use has played a large role in spreading the use of interoperable, standards-compliant web services widely in the marine community.
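
    An SOS interaction of the kind OPENIOOS performs is plain HTTP. The Python sketch below issues a GetCapabilities request; the endpoint URL is a placeholder, and real services differ in their offerings and parameters.

        from urllib.request import urlopen
        from urllib.parse import urlencode

        endpoint = "http://example.org/sos"      # placeholder; not a real OOSTethys server
        params = {
            "service": "SOS",
            "request": "GetCapabilities",
        }
        with urlopen(endpoint + "?" + urlencode(params)) as resp:
            xml = resp.read().decode("utf-8")

        # the capabilities document lists offerings/sensors that can then be
        # queried with GetObservation requests
        print(xml[:200])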

  11. C-GLORSv5: an improved multipurpose global ocean eddy-permitting physical reanalysis

    NASA Astrophysics Data System (ADS)

    Storto, Andrea; Masina, Simona

    2016-11-01

    Global ocean reanalyses combine in situ and satellite ocean observations with a general circulation ocean model to estimate the time-evolving state of the ocean, and they represent a valuable tool for a variety of applications, ranging from climate monitoring and process studies to downstream applications, initialization of long-range forecasts and regional studies. The purpose of this paper is to document the recent upgrade of C-GLORS (version 5), the latest ocean reanalysis produced at the Centro Euro-Mediterraneo per i Cambiamenti Climatici (CMCC) that covers the meteorological satellite era (1980-present) and it is being updated in delayed time mode. The reanalysis is run at eddy-permitting resolution (1/4° horizontal resolution and 50 vertical levels) and consists of a three-dimensional variational data assimilation system, a surface nudging and a bias correction scheme. With respect to the previous version (v4), C-GLORSv5 contains a number of improvements. In particular, background- and observation-error covariances have been retuned, allowing a flow-dependent inflation in the globally averaged background-error variance. An additional constraint on the Arctic sea-ice thickness was introduced, leading to a realistic ice volume evolution. Finally, the bias correction scheme and the initialization strategy were retuned. Results document that the new reanalysis outperforms the previous version in many aspects, especially in representing the variability of global heat content and associated steric sea level in the last decade, the top 80 m ocean temperature biases and root mean square errors, and the Atlantic Ocean meridional overturning circulation; slight worsening in the high-latitude salinity and deep ocean temperature emerge though, providing the motivation for further tuning of the reanalysis system. The dataset is available in NetCDF format at doi:10.1594/PANGAEA.857995.

  12. Global Precipitation Measurement (GPM) Mission Products and Services at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC)

    NASA Technical Reports Server (NTRS)

    Liu, Z.; Ostrenga, D.; Vollmer, B.; Kempler, S.; Deshong, B.; Greene, M.

    2015-01-01

    The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observing System Data and Information System (EOSDIS). The GES DISC is also home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 17 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently or soon to be available:
    - Level-1 GPM Microwave Imager (GMI) and partner radiometer products, and DPR products
    - Level-2 Goddard Profiling Algorithm (GPROF) GMI and partner products, and DPR products
    - Level-3 daily and monthly products, and DPR products
    - Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final)
    A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services that are currently or soon to be available include the Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQs, and a help desk; and monitoring services (e.g., Current Conditions) for applications. The Unified User Interface (UUI) is the next step in the evolution of the GES DISC web site. It attempts to provide seamless access to data, information and services through a single interface, without sending the user to different applications or URLs (e.g., search, access, subset, Giovanni, documents).
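
    Because the GES DISC exposes OPeNDAP, subsets can be read remotely without downloading whole granules: the netCDF4 library accepts an OPeNDAP URL in place of a file name. In the sketch below the URL and index ranges are placeholders, and in practice NASA Earthdata authentication is required.

        from netCDF4 import Dataset

        # placeholder OPeNDAP endpoint; real IMERG URLs live under the GES DISC
        # OPeNDAP server and require Earthdata login credentials
        url = "https://example.gov/opendap/GPM_L3/IMERG_example.nc4"

        with Dataset(url) as nc:               # netCDF4 speaks OPeNDAP natively
            precip = nc.variables["precipitationCal"][0, 100:200, 500:600]

        print(precip.mean())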

  13. DasPy – Open Source Multivariate Land Data Assimilation Framework with High Performance Computing

    NASA Astrophysics Data System (ADS)

    Han, Xujun; Li, Xin; Montzka, Carsten; Kollet, Stefan; Vereecken, Harry; Hendricks Franssen, Harrie-Jan

    2015-04-01

    Data assimilation has become a popular method to integrate observations from multiple sources with land surface models to improve predictions of the water and energy cycles of the soil-vegetation-atmosphere continuum. In recent years, several land data assimilation systems have been developed at different research agencies. Because of limited software availability or adaptability, these systems are not easy to apply for the purpose of multivariate land data assimilation research. Multivariate data assimilation refers to the simultaneous assimilation of observations of multiple model state variables into a simulation model. Our main motivation was to develop an open source multivariate land data assimilation framework (DasPy), implemented in the Python scripting language mixed with C++ and Fortran. The system has been evaluated in several soil moisture, L-band brightness temperature and land surface temperature assimilation studies. The implementation also allows parameter estimation (soil properties and/or leaf area index) on the basis of the joint state and parameter estimation approach. The LETKF (Local Ensemble Transform Kalman Filter) is implemented as the main data assimilation algorithm, and uncertainties in the data assimilation can be represented by perturbed atmospheric forcings, perturbed soil and vegetation properties, and perturbed model initial conditions. CLM4.5 (the Community Land Model) is integrated as the model operator. CMEM (the Community Microwave Emission Modelling Platform), COSMIC (the COsmic-ray Soil Moisture Interaction Code) and a two-source formulation are integrated as observation operators for the assimilation of L-band passive microwave, cosmic-ray soil moisture probe and land surface temperature measurements, respectively. DasPy is parallelized using hybrid MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) techniques. All input and output data flow is organized efficiently using the commonly used NetCDF file format. Online 1D and 2D visualization of data assimilation results is also implemented to facilitate post-simulation analysis. In summary, DasPy is a ready-to-use open source parallel multivariate land data assimilation framework.
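
    The heart of ensemble assimilation of the kind DasPy performs can be shown in a toy example. The Python sketch below is a textbook perturbed-observation ensemble Kalman update for a scalar soil-moisture state, not DasPy's API; all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        N = 32                                   # ensemble size
        xf = rng.normal(0.25, 0.05, N)           # forecast ensemble of soil moisture
        y, r = 0.30, 0.02**2                     # observation and its error variance

        # Kalman gain from ensemble statistics (observation operator H = identity here)
        pf = np.var(xf, ddof=1)
        k = pf / (pf + r)

        # perturbed-observation analysis update
        ya = y + rng.normal(0.0, np.sqrt(r), N)
        xa = xf + k * (ya - xf)

        print(f"forecast mean {xf.mean():.3f} -> analysis mean {xa.mean():.3f}")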

  14. Building the Petascale National Environmental Research Interoperability Data Platform (NERDIP): Minimizing the 'Trough of Disillusionment' and Accelerating Pathways to the 'Plateau of Productivity'

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.

    2015-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) has evolved to become Australia's peak computing centre for national computational and data-intensive Earth system science. More recently, NCI collocated 10 petabytes of 34 major national and international environmental, climate, earth system, geophysics and astronomy data collections to create the National Environmental Research Interoperability Data Platform (NERDIP). Spatial scales of the collections range from global to local ultra-high resolution, whilst sizes range from 3 PB down to a few GB. The data are highly connected to both NCI HPC and cloud resources via low-latency internal networks with massive bandwidth. Now that the collections are collocated on a single data platform, the 'Hype' and expectations around potential use cases for NERDIP are high. Issues are emerging that are not unexpected: access, licensing, ownership, and incompatible data standards. Many communities are standardised within their domain, but achieving true interdisciplinary science will require all communities to move towards open interoperable data formats such as NetCDF4/HDF5; this transition will impact software using proprietary or non-open standards. Before we reach the 'Plateau of Productivity', there needs to be greater 'Enlightenment' of users: encouraging them to realise that this unprecedented Earth system science platform provides a rich mine of opportunities for discovery and innovation across a diverse range of domain-specific and interdisciplinary investigations, including climate and weather research, impact analysis, environment, remote sensing and geophysics, and encouraging them to develop new and innovative interdisciplinary use cases. Such use cases will guide those architecting the system, help minimise the amplitude of the 'Trough of Disillusionment', and ensure greater productivity and uptake of the collections that make NERDIP unique in the next generation of data-intensive science.

  15. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25-5 km horizontal grid spacings. The main advantage of a CRM is that it allows explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlap and convective parameterizations. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to (i) visualize and inter-compare CRM simulations, (ii) diagnose key processes for cloud-precipitation formation and intensity, and (iii) evaluate simulations against NASA's field campaign data and L1/L2 satellite data products, owing to the large data volume (~10 TB) and the complexity of a CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes: (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including those of the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique for visualizing Hadoop-resident data with IDL; (4) a technique for subsetting Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface; (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high-performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, users can conduct large-domain, on-demand tasks without downloading voluminous CRM datasets and various observations from NASA field campaigns and satellites to a local computer, and can inter-compare CRM output and data with GCE and NU-WRF.
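
    The NetCDF-to-CSV conversion in capability (2) can be prototyped serially in a few lines of Python; the file and variable names below are placeholders standing in for NU-WRF or GCE output fields.

        import csv
        import numpy as np
        from netCDF4 import Dataset

        # placeholder file/variable names, standing in for a NU-WRF or GCE output
        with Dataset("crm_output.nc") as nc, \
                open("crm_output.csv", "w", newline="") as f:
            rain = nc.variables["rainrate"][:]        # (time, y, x)
            writer = csv.writer(f)
            writer.writerow(["time", "y", "x", "rainrate"])
            # dump only raining cells, a sensible sparse CSV representation
            for t, j, i in zip(*np.nonzero(rain > 0)):
                writer.writerow([int(t), int(j), int(i), float(rain[t, j, i])])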

  16. Galaxy Zoo: star formation versus spiral arm number

    NASA Astrophysics Data System (ADS)

    Hart, Ross E.; Bamford, Steven P.; Casteels, Kevin R. V.; Kruk, Sandor J.; Lintott, Chris J.; Masters, Karen L.

    2017-06-01

    Spiral arms are common features in low-redshift disc galaxies, and are prominent sites of star formation and dust obscuration. However, spiral structure can take many forms: from galaxies displaying two strong 'grand design' arms to those with many 'flocculent' arms. We investigate how these different arm types are related to a galaxy's star formation and gas properties by making use of visual spiral arm number measurements from Galaxy Zoo 2. We combine ultraviolet and mid-infrared (MIR) photometry from GALEX and WISE to measure the rates and relative fractions of obscured and unobscured star formation in a sample of low-redshift SDSS spirals. Total star formation rate has little dependence on spiral arm multiplicity, but two-armed spirals convert their gas to stars more efficiently. We find significant differences in the fraction of obscured star formation: an additional ~10 per cent of star formation in two-armed galaxies is identified via MIR dust emission, compared to that in many-armed galaxies. The latter are also significantly offset below the IRX-β relation for low-redshift star-forming galaxies. We present several explanations for these differences versus arm number: variations in the spatial distribution, sizes or clearing time-scales of star-forming regions (i.e., molecular clouds), or contrasting recent star formation histories.
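
    The obscured fraction discussed above is simply the ratio of IR-traced to total star formation. A back-of-envelope Python sketch using standard Kennicutt-style UV and IR calibrations; the input luminosities are made up for illustration.

        # Kennicutt (1998)-style calibrations; the luminosities are illustrative inputs
        L_FUV = 3.0e27     # erg s^-1 Hz^-1, rest-frame FUV (uncorrected for dust)
        L_IR = 2.0e43      # erg s^-1, total infrared

        sfr_uv = 1.4e-28 * L_FUV        # unobscured SFR, M_sun yr^-1
        sfr_ir = 4.5e-44 * L_IR         # obscured SFR, M_sun yr^-1

        f_obscured = sfr_ir / (sfr_ir + sfr_uv)
        print(f"SFR_total = {sfr_uv + sfr_ir:.2f} M_sun/yr, "
              f"obscured fraction = {f_obscured:.0%}")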

  17. Structure and decomposition of the silver formate Ag(HCO2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puzan, Anna N., E-mail: anna_puzan@mail.ru; Baumer, Vyacheslav N.; Mateychenko, Pavel V.

    The crystal structure of silver formate, Ag(HCO2), has been determined (orthorhombic, space group Pccn, a=7.1199(5), b=10.3737(4), c=6.4701(3) Å, V=477.88(4) Å^3, Z=8). The structure contains isolated formate ions and Ag2^2+ pairs which form layers in the (001) planes (the shortest Ag-Ag distance is 2.919 Å within a pair, and 3.421 and 3.716 Å between the nearest Ag atoms of adjacent pairs). Silver formate is an unstable compound which decomposes spontaneously over time. Its decomposition was studied using Rietveld analysis of powder diffraction patterns. It was concluded that the diffusion of Ag atoms leads to the formation of plate-like metal particles as nuclei in the (100) planes, which settle parallel to the (001) planes of the silver formate matrix. - Highlights: • Silver formate Ag(HCO2) was synthesized and characterized. • Layered packing of Ag-Ag pairs in the structure was found. • Decomposition of Ag(HCO2) and formation of the metal phase were studied. • Space relationship between the matrix structure and the forming Ag phase.

  18. Dynamic tubulation of mitochondria drives mitochondrial network formation.

    PubMed

    Wang, Chong; Du, Wanqing; Su, Qian Peter; Zhu, Mingli; Feng, Peiyuan; Li, Ying; Zhou, Yichen; Mi, Na; Zhu, Yueyao; Jiang, Dong; Zhang, Senyan; Zhang, Zerui; Sun, Yujie; Yu, Li

    2015-10-01

    Mitochondria form networks. Formation of mitochondrial networks is important for maintaining mitochondrial DNA integrity and interchanging mitochondrial material, whereas disruption of the mitochondrial network affects mitochondrial functions. According to the current view, mitochondrial networks are formed by fusion of individual mitochondria. Here, we report a new mechanism for formation of mitochondrial networks through KIF5B-mediated dynamic tubulation of mitochondria. We found that KIF5B pulls thin, highly dynamic tubules out of mitochondria. Fusion of these dynamic tubules, which is mediated by mitofusins, gives rise to the mitochondrial network. We further demonstrated that dynamic tubulation and fusion is sufficient for mitochondrial network formation, by reconstituting mitochondrial networks in vitro using purified fusion-competent mitochondria, recombinant KIF5B, and polymerized microtubules. Interestingly, KIF5B only controls network formation in the peripheral zone of the cell, indicating that the mitochondrial network is divided into subzones, which may be constructed by different mechanisms. Our data not only uncover an essential mechanism for mitochondrial network formation, but also reveal that different parts of the mitochondrial network are formed by different mechanisms.

  19. Highly efficient star formation in NGC 5253 possibly from stream-fed accretion.

    PubMed

    Turner, J L; Beck, S C; Benford, D J; Consiglio, S M; Ho, P T P; Kovács, A; Meier, D S; Zhao, J-H

    2015-03-19

    Gas clouds in present-day galaxies are inefficient at forming stars. Low star-formation efficiency is a critical parameter in galaxy evolution: it is why stars are still forming nearly 14 billion years after the Big Bang and why star clusters generally do not survive their births, instead dispersing to form galactic disks or bulges. Yet the existence of ancient massive bound star clusters (globular clusters) in the Milky Way suggests that efficiencies were higher when they formed ten billion years ago. A local dwarf galaxy, NGC 5253, has a young star cluster that provides an example of highly efficient star formation. Here we report the detection of the J = 3→2 rotational transition of CO at the location of the massive cluster. The gas cloud is hot, dense, quiescent and extremely dusty. Its gas-to-dust ratio is lower than the Galactic value, which we attribute to dust enrichment by the embedded star cluster. Its star-formation efficiency exceeds 50 per cent, tenfold that of clouds in the Milky Way. We suggest that high efficiency results from the force-feeding of star formation by a streamer of gas falling into the galaxy.

  20. Hydrothermal Synthesis of Hematite-Rich Spherules: Implications for Diagenesis and Hematite Spherule Formation in Outcrops at Meridiani Planum, Mars

    NASA Technical Reports Server (NTRS)

    Golden, D. C.; Ming, D. W.; Morris, R. V.; Graff, T. G.

    2007-01-01

    The Athena science payload onboard the Opportunity rover identified hematite-rich spherules (mean diameter of 4.2 +/- 0.8 mm) embedded in outcrops and occurring as lag deposits at Meridiani Planum. They have formed as diagenetic concretions from the rapid breakdown of pre-existing jarosite and other iron sulfates when chemically distinct groundwater passed through the sediments. Diagenetic, Fe-cemented concretions found in the Jurassic Navajo Formation, Utah and hematite-rich spherules found within sulfate-rich volcanic breccia on Mauna Kea volcano, Hawaii are possible terrestrial analogues for Meridiani spherules. The Navajo Formation concretions form in porous quartz arenite from the dissolution of iron oxides by reducing fluids and subsequent Fe precipitation to form spherical Fe- and Si-rich concretions. The Mauna Kea spherules form by hydrothermal, acid-sulfate alteration of basaltic tephra. The formation of hematite-rich spherules with similar chemical, mineralogical, and morphological properties to the Meridiani spherules is rare on Earth, so little is known about their formation conditions. In this study, we have synthesized in the laboratory hematite-rich spherules that are analogous in nearly all respects to the Meridiani spherules.

  1. A theory of ring formation around Be stars

    NASA Technical Reports Server (NTRS)

    Huang, S.-S.

    1976-01-01

    A theory for the formation of gaseous rings around Be stars is developed which involves the combined effect of stellar rotation and radiation pressure. A qualitative scenario of ring formation is outlined in which the envelope formed about a star from ejected material is in the form of a disk in the equatorial plane, collisions between ejected gas blobs are inevitable, and particles with high angular momenta form a rotating ring around the star. A quantitative description of this process is then formulated by considering the angular momentum and dynamical energy of the ejected matter as well as those of the ring alone, without introducing any other assumptions.

  2. Formation of gold grating structures on fused silica substrates by femtosecond laser irradiation

    NASA Astrophysics Data System (ADS)

    Takami, Akihiro; Nakajima, Yasutaka; Terakawa, Mitsuhiro

    2017-05-01

    Despite the attractive optical properties of gold nanostructures for emerging applications, the formation of sharp laser-induced periodic gold structures has not previously been reported. In this study, we experimentally demonstrate the formation of micro- and nanoscale periodic gold grating structures on fused silica substrates using a femtosecond laser. The experimental and calculated results show good agreement, indicating that the gold grating structures were produced by a beat pattern formed in the gold thin film. We propose that this beat arose from the interference of two surface plasmon polaritons with different periods excited in the gold thin film, and we calculated their periods.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vinegar, Harold J.; Carter, Ernest E.; Son, Jaime Santos

    Methods for forming a barrier around at least a portion of a treatment area in a subsurface formation are described herein. A material including wax may be introduced into one or more wellbores. The material introduced into two or more wells may mix in the formation and congeal to form a barrier to fluid flow.

  4. STAR CLUSTER FORMATION WITH STELLAR FEEDBACK AND LARGE-SCALE INFLOW

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Christopher D.; Jumper, Peter H., E-mail: matzner@astro.utoronto.ca

    2015-12-10

    During star cluster formation, ongoing mass accretion is resisted by stellar feedback in the form of protostellar outflows from the low-mass stars and photo-ionization and radiation pressure feedback from the massive stars. We model the evolution of cluster-forming regions during a phase in which both accretion and feedback are present and use these models to investigate how star cluster formation might terminate. Protostellar outflows are the strongest form of feedback in low-mass regions, but these cannot stop cluster formation if matter continues to flow in. In more massive clusters, radiation pressure and photo-ionization rapidly clear the cluster-forming gas when its column density is too small. We assess the rates of dynamical mass ejection and of evaporation, while accounting for the important effect of dust opacity on photo-ionization. Our models are consistent with the census of protostellar outflows in NGC 1333 and Serpens South and with the dust temperatures observed in regions of massive star formation. Comparing observations of massive cluster-forming regions against our model parameter space, and against our expectations for accretion-driven evolution, we infer that massive-star feedback is a likely cause of gas disruption in regions with velocity dispersions less than a few kilometers per second, but that more massive and more turbulent regions are too strongly bound for stellar feedback to be disruptive.

  5. The rapid assembly of an elliptical galaxy of 400 billion solar masses at a redshift of 2.3.

    PubMed

    Fu, Hai; Cooray, Asantha; Feruglio, C; Ivison, R J; Riechers, D A; Gurwell, M; Bussmann, R S; Harris, A I; Altieri, B; Aussel, H; Baker, A J; Bock, J; Boylan-Kolchin, M; Bridge, C; Calanog, J A; Casey, C M; Cava, A; Chapman, S C; Clements, D L; Conley, A; Cox, P; Farrah, D; Frayer, D; Hopwood, R; Jia, J; Magdis, G; Marsden, G; Martínez-Navajas, P; Negrello, M; Neri, R; Oliver, S J; Omont, A; Page, M J; Pérez-Fournon, I; Schulz, B; Scott, D; Smith, A; Vaccari, M; Valtchanov, I; Vieira, J D; Viero, M; Wang, L; Wardlow, J L; Zemcov, M

    2013-06-20

    Stellar archaeology shows that massive elliptical galaxies formed rapidly about ten billion years ago with star-formation rates above several hundred solar masses per year. Their progenitors are probably the submillimetre-bright galaxies at redshifts z greater than 2. Although the mean molecular gas mass (5 × 10^10 solar masses) of the submillimetre-bright galaxies can explain the formation of typical elliptical galaxies, it is inadequate to form elliptical galaxies that already have stellar masses above 2 × 10^11 solar masses at z ≈ 2. Here we report multi-wavelength high-resolution observations of a rare merger of two massive submillimetre-bright galaxies at z = 2.3. The system is seen to be forming stars at a rate of 2,000 solar masses per year. The star-formation efficiency is an order of magnitude greater than that of normal galaxies, so the gas reservoir will be exhausted and star formation will be quenched in only around 200 million years. At a projected separation of 19 kiloparsecs, the two massive starbursts are about to merge and form a passive elliptical galaxy with a stellar mass of about 4 × 10^11 solar masses. We conclude that gas-rich major galaxy mergers with intense star formation can form the most massive elliptical galaxies by z ≈ 1.5.

  6. Formation in professional education: an examination of the relationship between theories of meaning and theories of the self.

    PubMed

    Benner, Patricia

    2011-08-01

    Being formed through learning a practice is best understood within a constitutive theory of meaning as articulated by Charles Taylor. Disengaged views of the person cannot account for the formative changes in a person's identity and capacities upon learning a professional practice. Representational or correspondence theories of meaning cannot account for formation. Formation occurs over time because students actively seek and take up new concerns and learn new knowledge and skills. Engaged situated reasoning about underdetermined practice situations requires well-formed skillful clinicians caring for particular patients in particular situations.

  7. The Formation of a Sunspot Penumbra Sector in Active Region NOAA 12574

    NASA Astrophysics Data System (ADS)

    Li, Qiaoling; Yan, Xiaoli; Wang, Jincheng; Kong, DeFang; Xue, Zhike; Yang, Liheng; Cao, Wenda

    2018-04-01

    We present a particular case of the formation of a penumbra sector around a developing sunspot in the active region NOAA 12574 on 2016 August 11 by using the high-resolution data observed by the New Solar Telescope at the Big Bear Solar Observatory and the data acquired by the Helioseismic and Magnetic Imager and the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory satellite. Before the new penumbra sector formed, the developing sunspot already had two umbrae with some penumbral filaments. The penumbra sector gradually formed at the junction of two umbrae. We found that the formation of the penumbra sector can be divided into two stages. First, during the initial stage of penumbral formation, the region where the penumbra sector formed always appeared blueshifted in a Dopplergram. The area, mean transverse magnetic field strength, and total magnetic flux of the umbra and penumbra sector all increased with time. The initial penumbral formation was associated with magnetic emergence. Second, when the penumbra sector appeared, the magnetic flux and area of the penumbra sector increased after the umbra’s magnetic flux and area decreased. These results indicate that the umbra provided magnetic flux for penumbral development after the penumbra sector appeared. We also found that the newly formed penumbra sector was associated with sunspot rotation. Based on these findings, we suggest that the penumbra sector was the result of the emerging flux that was trapped in the photosphere at the initial stage of penumbral formation, and when the rudimentary penumbra formed, the penumbra sector developed at the cost of the umbra.

  8. Kinetics of Thermal Denaturation and Aggregation of Bovine Serum Albumin

    PubMed Central

    Borzova, Vera A.; Markossian, Kira A.; Chebotareva, Natalia A.; Kleymenov, Sergey Yu.; Poliansky, Nikolay B.; Muranov, Konstantin O.; Stein-Margolina, Vita A.; Shubin, Vladimir V.; Markov, Denis I.; Kurganov, Boris I.

    2016-01-01

    Thermal aggregation of bovine serum albumin (BSA) has been studied using dynamic light scattering, asymmetric flow field-flow fractionation and analytical ultracentrifugation. The studies were carried out at fixed temperatures (60°C, 65°C, 70°C and 80°C) in 0.1 M phosphate buffer, pH 7.0, at BSA concentration of 1 mg/ml. Thermal denaturation of the protein was studied by differential scanning calorimetry. Analysis of the experimental data shows that at 65°C the stage of protein unfolding and individual stages of protein aggregation are markedly separated in time. This circumstance allowed us to propose the following mechanism of thermal aggregation of BSA. Protein unfolding results in the formation of two forms of the non-native protein with different propensity to aggregation. One of the forms (highly reactive unfolded form, Uhr) is characterized by a high rate of aggregation. Aggregation of Uhr leads to the formation of primary aggregates with the hydrodynamic radius (Rh,1) of 10.3 nm. The second form (low reactive unfolded form, Ulr) participates in the aggregation process by its attachment to the primary aggregates produced by the Uhr form and possesses ability for self-aggregation with formation of stable small-sized aggregates (Ast). At complete exhaustion of Ulr, secondary aggregates with the hydrodynamic radius (Rh,2) of 12.8 nm are formed. At 60°C the rates of unfolding and aggregation are commensurate, at 70°C the rates of formation of the primary and secondary aggregates are commensurate, at 80°C the registration of the initial stages of aggregation is complicated by formation of large-sized aggregates. PMID:27101281

  9. The presence of biofilm forming microorganisms on hydrotherapy equipment and facilities.

    PubMed

    Jarząb, Natalia; Walczak, Maciej

    2017-10-01

    Hydrotherapy equipment provides a perfect environment for the formation and growth of microbial biofilms. Biofilms may reduce the microbiological cleanliness of hydrotherapy equipment and harbour opportunistic pathogens and pathogenic bacteria. The aims of this study were to investigate the ability of microorganisms that colonize hydrotherapy equipment to form biofilms, and to assess the influence of temperature and nutrients on the rate of biofilm formation. Surface swab samples were collected from the whirlpool baths, inhalation equipment and submerged surfaces of a brine pool at the spa center in Ciechocinek, Poland. We isolated and identified microorganisms from the swab samples and measured their ability to form biofilms. Biofilm formation was observed at a range of temperatures, in both nutrient-deficient and nutrient-rich environments. We isolated and identified microorganisms which are known to form biofilms on medical devices (e.g. Stenotrophomonas maltophilia). All isolates were classified as opportunistic pathogens, which can cause infections in humans with weakened immune systems. All isolates showed the ability to form biofilms under laboratory conditions. The potential for biofilm formation was higher in the presence of added nutrients. In addition, the hydrolytic activity of the biofilm was connected with the presence of nutrients.

  10. Rhizobium–legume symbiosis shares an exocytotic pathway required for arbuscule formation

    PubMed Central

    Ivanov, Sergey; Fedorova, Elena E.; Limpens, Erik; De Mita, Stephane; Genre, Andrea; Bonfante, Paola; Bisseling, Ton

    2012-01-01

    Endosymbiotic interactions are characterized by the formation of specialized membrane compartments by the host, in which the microbes are hosted intracellularly. Two well-studied examples, which are of major agricultural and ecological importance, are the widespread arbuscular mycorrhizal symbiosis and the Rhizobium–legume symbiosis. In both symbioses, the specialized host membrane that surrounds the microbes forms a symbiotic interface, which facilitates the exchange of, for example, nutrients in a controlled manner and, therefore, forms the heart of endosymbiosis. Despite their key importance, the molecular and cellular mechanisms underlying the formation of these membrane interfaces are largely unknown. Recent studies strongly suggest that the Rhizobium–legume symbiosis coopted a signaling pathway, including a receptor, from the more ancient arbuscular mycorrhizal symbiosis to form a symbiotic interface. Here, we show that two highly homologous exocytotic vesicle-associated membrane proteins (VAMPs) are required for formation of the symbiotic membrane interface in both interactions. Silencing of these Medicago VAMP72 genes has a minor effect on nonsymbiotic plant development and nodule formation. However, it blocks symbiosome as well as arbuscule formation, whereas root colonization by the microbes is not affected. Identification of these VAMP72s as common symbiotic regulators in exocytotic vesicle trafficking suggests that the ancient exocytotic pathway forming the periarbuscular membrane compartment has also been coopted in the Rhizobium–legume symbiosis. PMID:22566631

  11. 41 CFR 101-30.401-1 - Publications providing Federal catalog data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Logistics Services Center (DLSC) files chosen, assembled, and formatted to meet recognized needs for... produced in microfiche form; however, some are produced in hard copy form. The following publications are... format for all descriptive-type item identifications. The data are arranged in NSN sequence within...

  12. Method and apparatus for forming conformal SiN.sub.x films

    DOEpatents

    Wang, Qi

    2007-11-27

    A silicon nitride film formation method includes: Heating a substrate to be subjected to film formation to a substrate temperature; heating a wire to a wire temperature; supplying silane, ammonia, and hydrogen gases to the heating member; and forming a silicon nitride film on the substrate.

  13. 78 FR 310 - Draft Revision of Guidance for Industry on Providing Regulatory Submissions in Electronic Format...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-03

    ...ApprovalProcess/FormsSubmissionRequirements/ElectronicSubmissions/ucm253101.htm , http://www.regulations.../Drugs/DevelopmentApprovalProcess/FormsSubmissionRequirements/ElectronicSubmissions/ucm253101.htm , http...), in a format that FDA can process, review, and archive. Currently, the Agency can process, review, and...

  14. Solar cell contact formation using laser ablation

    DOEpatents

    Harley, Gabriel; Smith, David D.; Cousins, Peter John

    2015-07-21

    The formation of solar cell contacts using a laser is described. A method of fabricating a back-contact solar cell includes forming a poly-crystalline material layer above a single-crystalline substrate. The method also includes forming a dielectric material stack above the poly-crystalline material layer. The method also includes forming, by laser ablation, a plurality of contact holes in the dielectric material stack, each of the contact holes exposing a portion of the poly-crystalline material layer; and forming conductive contacts in the plurality of contact holes.

  15. Solar cell contact formation using laser ablation

    DOEpatents

    Harley, Gabriel; Smith, David; Cousins, Peter

    2012-12-04

    The formation of solar cell contacts using a laser is described. A method of fabricating a back-contact solar cell includes forming a poly-crystalline material layer above a single-crystalline substrate. The method also includes forming a dielectric material stack above the poly-crystalline material layer. The method also includes forming, by laser ablation, a plurality of contact holes in the dielectric material stack, each of the contact holes exposing a portion of the poly-crystalline material layer; and forming conductive contacts in the plurality of contact holes.

  16. Solar cell contact formation using laser ablation

    DOEpatents

    Harley, Gabriel; Smith, David D.; Cousins, Peter John

    2014-07-22

    The formation of solar cell contacts using a laser is described. A method of fabricating a back-contact solar cell includes forming a poly-crystalline material layer above a single-crystalline substrate. The method also includes forming a dielectric material stack above the poly-crystalline material layer. The method also includes forming, by laser ablation, a plurality of contact holes in the dielectric material stack, each of the contact holes exposing a portion of the poly-crystalline material layer; and forming conductive contacts in the plurality of contact holes.

  17. Fast Molecular Cloud Destruction Requires Fast Cloud Formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mac Low, Mordecai-Mark; Burkert, Andreas; Ibáñez-Mejía, Juan C., E-mail: mordecai@amnh.org, E-mail: burkert@usm.lmu.de, E-mail: ibanez@ph1.uni-koeln.de

    A large fraction of the gas in the Galaxy is cold, dense, and molecular. If all this gas collapsed under the influence of gravity and formed stars in a local free-fall time, the star formation rate in the Galaxy would exceed that observed by more than an order of magnitude. Other star-forming galaxies behave similarly. Yet, observations and simulations both suggest that the molecular gas is indeed gravitationally collapsing, albeit hierarchically. Prompt stellar feedback offers a potential solution to the low observed star formation rate if it quickly disrupts star-forming clouds during gravitational collapse. However, this requires that molecular clouds must be short-lived objects, raising the question of how so much gas can be observed in the molecular phase. This can occur only if molecular clouds form as quickly as they are destroyed, maintaining a global equilibrium fraction of dense gas. We therefore examine cloud formation timescales. We first demonstrate that supernova and superbubble sweeping cannot produce dense gas at the rate required to match the cloud destruction rate. On the other hand, Toomre gravitational instability can reach the required production rate. We thus argue that, although dense, star-forming gas may last only around a single global free-fall time; the dense gas in star-forming galaxies can globally exist in a state of dynamic equilibrium between formation by gravitational instability and disruption by stellar feedback. At redshift z ≳ 2, the Toomre instability timescale decreases, resulting in a prediction of higher molecular gas fractions at early times, in agreement with the observations.

  18. Radiation hydrodynamics simulations of the formation of direct-collapse supermassive stellar systems

    NASA Astrophysics Data System (ADS)

    Chon, Sunmyon; Hosokawa, Takashi; Yoshida, Naoki

    2018-04-01

    Formation of supermassive stars (SMSs) with mass ≳ 10⁴ M⊙ is a promising pathway to seed the formation of supermassive black holes in the early universe. The so-called direct-collapse (DC) model postulates that such an SMS forms in a hot gas cloud irradiated by a nearby star-forming galaxy. We study DC SMS formation in a fully cosmological context using three-dimensional radiation hydrodynamics simulations. We initialize our simulations using the outputs of the cosmological simulation of Chon et al., where two DC gas clouds are identified. The long-term evolution over a hundred thousand years is followed from the formation of embryo protostars through their growth to SMSs. We show that the strength of the tidal force by a nearby galaxy determines the multiplicity of the formed stars and affects the protostellar growth. In one case, where a collapsing cloud is significantly stretched by the strong tidal force, multiple star-disc systems are formed via filament fragmentation. Small-scale fragmentation occurs in each circumstellar disc, and more than 10 stars with masses of a few × 10³ M⊙ are finally formed. Interestingly, about half of them are found as massive binary stars. In the other case, the gas cloud collapses nearly spherically under a relatively weak tidal field, and a single star-disc system is formed. Only a few SMSs with masses ~10⁴ M⊙ are formed after a hundred thousand years of evolution, and the SMSs are expected to grow further by gas accretion and to leave massive black holes at the end of their lives.

  19. Implications of Martian Phyllosilicate Formation Conditions to the Early Climate on Mars

    NASA Astrophysics Data System (ADS)

    Bishop, J. L.; Baker, L.; Fairén, A. G.; Michalski, J. R.; Gago-Duport, L.; Velbel, M. A.; Gross, C.; Rampe, E. B.

    2017-12-01

    We propose that short-term warmer and wetter environments, occurring sporadically in a generally cold early Mars, enabled formation of phyllosilicate-rich outcrops on the surface of Mars without requiring long-term warm and wet conditions. We are investigating phyllosilicate formation mechanisms including CO₂ and H₂O budgets to provide constraints on the early martian climate. We have evaluated the nature and stratigraphy of phyllosilicate-bearing surface units on Mars based on i) phyllosilicate-forming environments on Earth, ii) phyllosilicate reactions in the lab, and iii) modeling experiments involving phyllosilicates and short-range ordered (SRO) materials. The type of phyllosilicates that form on Mars depends on temperature, water/rock ratio, acidity, salinity and available ions. Mg-rich trioctahedral smectite mixtures are more consistent with subsurface formation environments (crustal, hydrothermal or alkaline lakes) up to 400 °C and are not associated with martian surface environments. In contrast, clay profiles dominated by dioctahedral Al/Fe-smectites are typically formed in subaqueous or subaerial surface environments. We propose models describing formation of smectite-rich outcrops and laterally extensive vertical profiles of Fe/Mg-smectites, sulfates, and Al-rich clay assemblages formed in surface environments. Further, the presence of abundant SRO materials without phyllosilicates could mark the end of the last warm and wet episode on Mars supporting smectite formation. Climate Implications for Early Mars: Clay formation reactions proceed extremely slowly at cool temperatures. The thick smectite outcrops observed on Mars through remote sensing would require standing water on Mars for hundreds of millions of years if they formed in waters at 10–15 °C. However, warmer temperatures could have enabled faster production of these smectite-rich beds. Sporadic warming episodes to 30–40 °C could have enabled formation of these smectites over only tens or hundreds of thousands of years instead. Our analyses of the phyllosilicate record on Mars point to a scenario of brief warm and wet conditions that accounts for formation of substantial smectite clays in many locations, geologic features resulting from liquid water across the planet, and a generally cold and dry climate.

  20. Formation of methyl formate in comets by irradiation of methanol-bearing ices

    NASA Astrophysics Data System (ADS)

    Modica, P.; Palumbo, M. E.; Strazzulla, G.

    2012-12-01

    Methyl formate is a complex organic molecule considered potentially relevant as a precursor of biologically active molecules. It has been observed in several astrophysical environments, such as hot cores, hot corinos, and comets. The processes that drive the formation of molecules in cometary ices are poorly understood. In particular it is not yet clear if molecules are directly accreted from the pre-solar nebula to form comets or are formed after accretion. The present work analyzes the possible role of cosmic ion irradiation and radioactive decay in methyl formate formation in methanol-bearing ices. The results indicate that cosmic ion irradiation can account for about 12% of the methyl formate observed in comet Hale-Bopp, while radioactive decay can account for about 6% of this amount. The need for new data from ground-based and space-based observational projects, as well as from laboratory experiments, is outlined.

  1. Ensemble of European regional climate simulations for the winter of 2013 and 2014 from HadAM3P-RM3P

    NASA Astrophysics Data System (ADS)

    Schaller, Nathalie; Sparrow, Sarah N.; Massey, Neil R.; Bowery, Andy; Miller, Jonathan; Wilson, Simon; Wallom, David C. H.; Otto, Friederike E. L.

    2018-04-01

    Large data sets used to study the impact of anthropogenic climate change on the 2013/14 floods in the UK are provided. The data consist of perturbed-initial-conditions simulations using the Weather@Home regional climate modelling framework. Two base conditions are available: Actual, with present-day atmospheric forcings (anthropogenic greenhouse gases and human-induced aerosols), and Natural, with these forcings removed. The data set is made up of 13 different ensembles (2 actual and 11 natural), each having more than 7500 members. The data are available as NetCDF V3 files representing monthly data within the period of interest (1 December 2013 to 15 February 2014), both for a specified European region at 50 km horizontal resolution and globally at N96 resolution. The data are stored within the UK Natural Environment Research Council Centre for Environmental Data Analysis repository.
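
    As a quick illustration of how such an archive can be consumed, the sketch below opens one monthly member file with xarray and averages a field over the month. The file name and the variable name "precipitation" are assumptions for illustration, not the archive's actual naming scheme.

    ```python
    # Minimal sketch: read one Weather@Home monthly NetCDF-3 member file.
    # The file name and the variable name are hypothetical.
    import xarray as xr

    ds = xr.open_dataset("actual_member_0001_dec2013.nc")  # NetCDF-3 opens transparently
    print(ds.data_vars)                      # inspect what the file actually contains

    precip = ds["precipitation"]             # hypothetical variable name
    monthly_mean = precip.mean(dim="time")   # collapse the month to a single field
    monthly_mean.to_netcdf("precip_dec2013_mean.nc")
    ```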

  2. A visualization tool to support decision making in environmental and biological planning

    USGS Publications Warehouse

    Romañach, Stephanie S.; McKelvy, James M.; Conzelmann, Craig; Suir, Kevin J.

    2014-01-01

    Large-scale ecosystem management involves consideration of many factors for informed decision making. The EverVIEW Data Viewer is a cross-platform desktop decision support tool to help decision makers compare simulation model outputs from competing plans for restoring Florida's Greater Everglades. The integration of NetCDF metadata conventions into EverVIEW allows end-users from multiple institutions within and beyond the Everglades restoration community to share information and tools. Our development process incorporates continuous interaction with targeted end-users for increased likelihood of adoption. One of EverVIEW's signature features is side-by-side map panels, which can be used to simultaneously compare species or habitat impacts from alternative restoration plans. Other features include examination of potential restoration plan impacts across multiple geographic or tabular displays, and animation through time. As a result of an iterative, standards-driven approach, EverVIEW is relevant to large-scale planning beyond Florida, and is used in multiple biological planning efforts in the United States.

  3. McrEngine: A Scalable Checkpointing System Using Data-Aware Aggregation and Compression

    DOE PAGES

    Islam, Tanzima Zerin; Mohror, Kathryn; Bagchi, Saurabh; ...

    2013-01-01

    High performance computing (HPC) systems use checkpoint-restart to tolerate failures. Typically, applications store their states in checkpoints on a parallel file system (PFS). As applications scale up, checkpoint-restart incurs high overheads due to contention for PFS resources. The high overheads force large-scale applications to reduce checkpoint frequency, which means more compute time is lost in the event of failure. We alleviate this problem through a scalable checkpoint-restart system, mcrEngine. McrEngine aggregates checkpoints from multiple application processes with knowledge of the data semantics available through widely-used I/O libraries, e.g., HDF5 and netCDF, and compresses them. Our novel scheme improves compressibility of checkpoints by up to 115% over simple concatenation and compression. Our evaluation with large-scale application checkpoints shows that mcrEngine reduces checkpointing overhead by up to 87% and restart overhead by up to 62% over a baseline with no aggregation or compression.
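
    The core idea of data-aware aggregation can be sketched in a few lines: merging same-named variables across checkpoints places similar data side by side before compression. The toy below (not mcrEngine itself; variable names and sizes are made up) contrasts that regrouping with naive concatenation.

    ```python
    # Toy illustration of data-aware aggregation vs. naive concatenation.
    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    base_t = np.linspace(280.0, 320.0, 10000)   # temperature-like field
    base_p = np.linspace(900.0, 1100.0, 10000)  # pressure-like field
    checkpoints = [
        {"temperature": base_t + rng.normal(0, 0.1, 10000),
         "pressure": base_p + rng.normal(0, 0.1, 10000)}
        for _ in range(4)
    ]

    # Naive scheme: concatenate each checkpoint's variables in file order.
    naive = b"".join(v.tobytes() for c in checkpoints for v in c.values())

    # Data-aware scheme: merge the same variable across checkpoints first,
    # so the compressor sees long runs of similar data.
    aware = b"".join(
        np.concatenate([c[name] for c in checkpoints]).tobytes()
        for name in checkpoints[0]
    )

    # Compressed sizes depend on the data; grouping similar data adjacently
    # is what gives the paper's reported compressibility gains.
    print("naive compressed bytes:", len(zlib.compress(naive)))
    print("aware compressed bytes:", len(zlib.compress(aware)))
    ```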

  4. Efficient and Flexible Climate Analysis with Python in a Cloud-Based Distributed Computing Framework

    NASA Astrophysics Data System (ADS)

    Gannon, C.

    2017-12-01

    As climate models become progressively more advanced, and spatial resolution further improved through various downscaling projects, climate projections at a local level are increasingly insightful and valuable. However, the raw size of climate datasets presents numerous hurdles for analysts wishing to develop customized climate risk metrics or perform site-specific statistical analysis. Four Twenty Seven, a climate risk consultancy, has implemented a Python-based distributed framework to analyze large climate datasets in the cloud. With the freedom afforded by efficiently processing these datasets, we are able to customize and continually develop new climate risk metrics using the most up-to-date data. Here we outline our process for using Python packages such as XArray and Dask to evaluate netCDF files in a distributed framework, StarCluster to operate in a cluster-computing environment, cloud computing services to access publicly hosted datasets, and how this setup is particularly valuable for generating climate change indicators and performing localized statistical analysis.
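
    A minimal sketch of that workflow, with assumed file paths and the variable name "tasmax" (and assuming the data are in degrees Celsius), might look like this:

    ```python
    # Open many downscaled NetCDF files lazily with Dask-backed chunks and
    # compute a simple localized climate indicator.
    import xarray as xr

    ds = xr.open_mfdataset("downscaled/tasmax_*.nc", chunks={"time": 365})

    # Days per year above 35 degC at each grid cell; nothing is read from
    # disk until .compute() triggers the parallel evaluation.
    hot_days = (ds["tasmax"] > 35.0).groupby("time.year").sum("time")
    result = hot_days.compute()
    print(result)
    ```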

  5. Collaboration tools and techniques for large model datasets

    USGS Publications Warehouse

    Signell, R.P.; Carniel, S.; Chiggiato, J.; Janekovic, I.; Pullen, J.; Sherwood, C.R.

    2008-01-01

    In MREA and many other marine applications, it is common to have multiple models running with different grids, run by different institutions. Techniques and tools are described for low-bandwidth delivery of data from large multidimensional datasets, such as those from meteorological and oceanographic models, directly into generic analysis and visualization tools. Output is stored using the NetCDF CF Metadata Conventions, and then delivered to collaborators over the web via OPeNDAP. OPeNDAP datasets served by different institutions are then organized via THREDDS catalogs. Tools and procedures then enable scientists to explore data on the original model grids using software they are familiar with. The approach is also low-bandwidth, enabling users to extract just the data they require, an important feature for access from ships or remote areas. The entire implementation is simple enough to be handled by modelers working with their webmasters - no advanced programming support is necessary. © 2007 Elsevier B.V. All rights reserved.
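
    The access pattern described here can be sketched as follows; the OPeNDAP URL and variable names are placeholders, since each institution serves its own endpoints through THREDDS catalogs.

    ```python
    # Open a remote CF-compliant dataset over OPeNDAP and pull only a subset.
    import xarray as xr

    url = "http://example.org/thredds/dodsC/ocean/model_output"  # hypothetical
    ds = xr.open_dataset(url)  # only metadata is fetched at this point

    # Only this small slice crosses the network: the low-bandwidth benefit.
    subset = ds["sst"].sel(lat=slice(40, 46), lon=slice(10, 16)).isel(time=-1)
    subset.load()
    subset.to_netcdf("sst_subset.nc")
    ```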

  6. A flexible tool for diagnosing water, energy, and entropy budgets in climate models

    NASA Astrophysics Data System (ADS)

    Lembo, Valerio; Lucarini, Valerio

    2017-04-01

    We have developed a new flexible software package for studying the global energy budget, the hydrological cycle, and the material entropy production of global climate models. The program receives as input radiative, latent, and sensible energy fluxes, with the requirement that the variable names agree with the Climate and Forecast (CF) conventions for NetCDF datasets. Annual mean maps, meridional sections, and time series are computed by means of the Climate Data Operators (CDO) collection of command-line operators developed at the Max Planck Institute for Meteorology (MPI-M). If a land-sea mask is provided, the program also computes the required quantities separately over the continents and oceans. Depending on the user's choice, the program can also call MATLAB to compute meridional heat transports and the locations and intensities of their peaks in the two hemispheres. We are currently planning to adapt the program for inclusion in the Earth System Model eValuation Tool (ESMValTool) community diagnostics.
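
    Diagnostics like these reduce to chains of CDO operators. A hedged sketch of driving CDO from Python is below; "fluxes.nc" is an assumed input file, while fldmean, timmean, and zonmean are standard CDO operators.

    ```python
    # Drive standard CDO operators from Python for budget-style diagnostics.
    import subprocess

    def cdo(*args: str) -> None:
        """Run one CDO command, raising if it fails."""
        subprocess.run(["cdo", *args], check=True)

    cdo("fldmean", "fluxes.nc", "fluxes_fldmean.nc")       # global mean per time step
    cdo("timmean", "fluxes_fldmean.nc", "fluxes_mean.nc")  # then the time mean
    cdo("zonmean", "fluxes.nc", "fluxes_zonmean.nc")       # zonal mean (meridional section)
    ```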

  7. MWR3C physical retrievals of precipitable water vapor and cloud liquid water path

    DOE Data Explorer

    Cadeddu, Maria

    2016-10-12

    The data set contains physical retrievals of PWV and cloud LWP retrieved from MWR3C measurements during the MAGIC campaign. Additional data used in the retrieval process include radiosonde and ceilometer measurements. The retrieval is based on an optimal estimation technique that starts from a first guess and iteratively repeats the forward model calculations until a predefined convergence criterion is satisfied. The first guess is a vector of [PWV, LWP] taken from the neural network retrieval fields in the NetCDF file. When convergence is achieved, the 'a posteriori' covariance is computed and its square root is written to the file as the retrieval 1-sigma uncertainty. The closest radiosonde profile is used for the radiative transfer calculations, and ceilometer data are used to constrain the cloud base height. The RMS error between the brightness temperatures is computed at the last iteration as a consistency check and is written in the last column of the output file.
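
    The iteration described here is the standard Gauss-Newton optimal-estimation scheme. The sketch below shows its generic form with a toy linear forward model mapping [PWV, LWP] to three brightness temperatures; all numbers are invented for illustration, and this is not the MWR3C production retrieval.

    ```python
    # Generic Gauss-Newton optimal estimation (Rodgers-style).
    import numpy as np

    def retrieve(y, x_a, S_a, S_e, forward, jacobian, max_iter=20, tol=1e-6):
        """Iterate until the state update is small; return state and covariance."""
        x = x_a.copy()
        Sa_inv, Se_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
        for _ in range(max_iter):
            K = jacobian(x)
            S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)  # a posteriori covariance
            x_new = x_a + S_hat @ K.T @ Se_inv @ (y - forward(x) + K @ (x - x_a))
            if np.max(np.abs(x_new - x)) < tol:               # convergence criterion
                x = x_new
                break
            x = x_new
        return x, S_hat

    A = np.array([[2.0, 30.0], [1.0, 120.0], [0.5, 200.0]])  # toy Jacobian
    y = A @ np.array([2.5, 0.08]) + 0.1                      # simulated observations
    x_a = np.array([2.0, 0.05])                              # first guess (e.g. neural net)
    S_a = np.diag([1.0, 0.01])                               # prior covariance
    S_e = 0.25 * np.eye(3)                                   # observation covariance

    x_hat, S_hat = retrieve(y, x_a, S_a, S_e, lambda x: A @ x, lambda x: A)
    print("retrieved:", x_hat, "1-sigma:", np.sqrt(np.diag(S_hat)))
    ```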

  8. Weather forecasting with open source software

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Dörnbrack, Andreas

    2013-04-01

    To forecast the weather situation during aircraft-based atmospheric field campaigns, we employ a tool chain of existing and self-developed open source software tools and open standards. Of particular value are the Python programming language with its extension libraries NumPy, SciPy, PyQt4, Matplotlib and the basemap toolkit, the NetCDF standard with the Climate and Forecast (CF) Metadata conventions, and the Open Geospatial Consortium Web Map Service standard. These open source libraries and open standards helped to implement the "Mission Support System", a Web Map Service based tool to support weather forecasting and flight planning during field campaigns. The tool has been implemented in Python and has also been released as open source (Rautenhaus et al., Geosci. Model Dev., 5, 55-71, 2012). In this presentation we discuss the usage of free and open source software for weather forecasting in the context of research flight planning, and highlight how the field campaign work benefits from using open source tools and open standards.
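
    The WMS side of such a tool chain ultimately comes down to HTTP GetMap requests. A hedged sketch follows; the endpoint URL and layer name are hypothetical placeholders, while the query parameters are standard WMS 1.1.1.

    ```python
    # Fetch one forecast map layer from an OGC Web Map Service via GetMap.
    import requests

    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": "temperature_850hPa",   # hypothetical layer name
        "SRS": "EPSG:4326",
        "BBOX": "-30,30,40,70",           # min lon, min lat, max lon, max lat
        "WIDTH": "800",
        "HEIGHT": "600",
        "FORMAT": "image/png",
    }
    resp = requests.get("http://example.org/mss/wms", params=params, timeout=30)
    resp.raise_for_status()
    with open("forecast_map.png", "wb") as fh:
        fh.write(resp.content)
    ```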

  9. Methionine peptide formation under primordial earth conditions.

    PubMed

    Li, Feng; Fitz, Daniel; Fraser, Donald G; Rode, Bernd M

    2008-01-01

    According to recent research on the origin of life it seems more and more likely that amino acids and peptides were among the first biomolecules formed on earth and that a peptide/protein world was thus a key starting point in evolution towards life. Salt-induced Peptide Formation (SIPF) has repeatedly been shown to be the most universal and plausible peptide-forming reaction currently known under prebiotic conditions and forms peptides from amino acids with the help of copper ions and sodium chloride. In this paper we present experimental results for salt-induced peptide formation from methionine. This is the first time that a sulphur-containing amino acid was investigated in this reaction. The possible catalytic effects of glycine and L-histidine in this reaction were also investigated and a possible distinction between the L- and D-forms of methionine was studied as well.

  10. Low Temperature Sheet Forming

    NASA Astrophysics Data System (ADS)

    Voges-Schwieger, Kathrin; Hübner, Sven; Behrens, Bernd-Arno

    2011-05-01

    Metastable austenitic stainless steels change their lattice during forming operations by strain-induced alpha'-martensite formation. Temperatures below T = 20 °C can accelerate the phase transformation while temperatures above T = 60 °C may suppress the formation of martensite during the forming operation. In past investigations, the effect of high-strength martensitic regions in an austenitic ductile lattice was used in crash-relevant parts for transportation vehicles. The local martensitic regions act as reinforcements leading to an increase in crash energy absorption. Moreover, they control the folding behavior as well as the force-distance characteristic and increase the buckling resistance. This paper deals with a concerted thermomechanical drawing process to increase the local formation of alpha'-martensite caused by low temperatures.

  11. ROCKY PLANETESIMAL FORMATION VIA FLUFFY AGGREGATES OF NANOGRAINS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arakawa, Sota; Nakamoto, Taishi, E-mail: arakawa.s.ac@m.titech.ac.jp

    2016-12-01

    Several pieces of evidence suggest that silicate grains in primitive meteorites are not interstellar grains but condensates formed in the early solar system. Moreover, the size distribution of matrix grains in chondrites implies that these condensates might be formed as nanometer-sized grains. Therefore, we propose a novel scenario for rocky planetesimal formation in which nanometer-sized silicate grains are produced by evaporation and recondensation events in the early solar nebula, and rocky planetesimals are formed via aggregation of these nanograins. We reveal that silicate nanograins can grow into rocky planetesimals via direct aggregation without catastrophic fragmentation and serious radial drift, and our results provide a suitable condition for protoplanet formation in our solar system.

  12. Current-sheet formation in two-dimensional coronal fields

    NASA Astrophysics Data System (ADS)

    Billinghurst, M. N.; Craig, I. J. D.; Sneyd, A. D.

    1993-11-01

    The formation of current sheets by shearing motions in line-tied twin-lobed fields is examined. A general analytic argument shows that current sheets form along the fieldline bounding the two lobes in the case of both symmetric and asymmetric footpoint motions. In the case of strictly antisymmetric motions, however, no current sheets can form. These findings are reinforced by magnetic relaxation experiments involving sheared two-lobed fields represented by Clebsch variables. It is pointed out that, although current singularities cannot be expected to form when the line-tying assumption is relaxed, the two-lobed geometry is still consistent with the formation of highly localised currents - and strong resistive dissipation - along field lines close to the bounding fieldline.

  13. Mechanistic aspects of the tyrosinase oxidation of hydroquinone.

    PubMed

    Ramsden, Christopher A; Riley, Patrick A

    2014-06-01

    Contradictory reports on the behaviour of hydroquinone as a tyrosinase substrate are reconciled in terms of the ability of the initially formed ortho-quinone to tautomerise to the thermodynamically more stable para-quinone isomer. Oxidation of phenols by native tyrosinase requires activation by in situ formation of a catechol formed via an enzyme-generated ortho-quinone. In the special case of hydroquinone, catechol formation is precluded by rapid tautomerisation of the ortho-quinone precursor. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Effect of Annealing on the Passive Film Stability and Corrosion Resistance of New Families of Iron-Based Amorphous Metals

    DTIC Science & Technology

    2011-06-01

    metallic glass easier to create and more stable once formed, thus improving the corrosion resistance. Adding titanium will enable the formation of an extremely...research, it was hypothesized that additions of titanium could enable the formation of a protective titanium oxide film on the surface of the alloy

  15. Polarized protein transport and lumen formation during epithelial tissue morphogenesis.

    PubMed

    Blasky, Alex J; Mangan, Anthony; Prekeris, Rytis

    2015-01-01

    One of the major challenges in biology is to explain how complex tissues and organs arise from the collective action of individual polarized cells. The best-studied model of this process is the cross talk between individual epithelial cells during their polarization to form the multicellular epithelial lumen during tissue morphogenesis. Multiple mechanisms of apical lumen formation have been proposed. Some epithelial lumens form from preexisting polarized epithelial structures. However, de novo lumen formation from nonpolarized cells has recently emerged as an important driver of epithelial tissue morphogenesis, especially during the formation of small epithelial tubule networks. In this review, we discuss the latest findings regarding the mechanisms and regulation of de novo lumen formation in vitro and in vivo.

  16. Proteus vulgaris and Proteus mirabilis Decrease Candida albicans Biofilm Formation by Suppressing Morphological Transition to Its Hyphal Form.

    PubMed

    Lee, Kyoung Ho; Park, Su Jung; Choi, Sun Ju; Park, Joo Young

    2017-11-01

    Candida albicans (C. albicans) and Proteus species are causative agents in a variety of opportunistic nosocomial infections, and their ability to form biofilms is known to be a virulence factor. In this study, the influence of co-cultivation with Proteus vulgaris (P. vulgaris) and Proteus mirabilis (P. mirabilis) on C. albicans biofilm formation and its underlying mechanisms were examined. XTT reduction assays were adopted to measure biofilm formation, and viable colony counts were performed to quantify yeast growth. Real-time reverse transcriptase polymerase chain reaction was used to evaluate the expression of yeast-specific genes (rhd1 and rbe1), filament formation inhibiting genes (tup1 and nrg1), and hyphae-related genes (als3, ece1, hwp1, and sap5). Candida biofilm formation was markedly inhibited by treatment with either living or heat-killed P. vulgaris and P. mirabilis. Proteus-cultured supernatant also inhibited Candida biofilm formation. Likewise, treatment with live P. vulgaris or P. mirabilis or with Proteus-cultured supernatant decreased expression of hyphae-related C. albicans genes, while the expression of yeast-specific genes and the filament formation inhibiting genes of C. albicans were increased. Heat-killed P. vulgaris and P. mirabilis treatment, however, did not affect the expression of C. albicans morphology-related genes. These results suggest that secretory products from P. vulgaris and P. mirabilis regulate the expression of genes related to morphologic changes in C. albicans such that transition from the yeast form to the hyphal form can be inhibited. © Copyright: Yonsei University College of Medicine 2017

  17. Bacterial floc mediated rapid streamer formation in creeping flows

    NASA Astrophysics Data System (ADS)

    Hassanpourfard, Mahtab; Nikakhtari, Zahra; Ghosh, Ranajay; Das, Siddhartha; Thundat, Thomas; Kumar, Aloke

    2015-11-01

    One of the contentious problems regarding the interaction of low Reynolds number (Re << 1) fluid flow with bacterial biomass is the formation of filamentous structures called streamers. Recently, we discovered that streamers can form through flow-induced deformation of pre-formed bacterial flocs over extremely small timescales (less than a second). However, these streamers are different from the ones mediated by biofilms. To optically probe the inception of streamer formation, bacterial flocs were embedded with 200 nm red fluorescent polystyrene beads that served as tracers. We also showed that at their inception the deformation of the flocs is dominated by large recoverable strains, indicating significant elasticity. These strains subsequently increase tremendously to produce filamentous streamers. At timescales longer than the streamer formation timescale, the streamers showed a viscous response. Finally, rapid clogging of microfluidic devices occurred after these streamers formed.

  18. An initial study of void formation during solidification of aluminum in normal and reduced-gravity

    NASA Technical Reports Server (NTRS)

    Chiaramonte, Francis P.; Foerster, George; Gotti, Daniel J.; Neumann, Eric S.; Johnston, J. C.; De Witt, Kenneth J.

    1992-01-01

    Void formation due to volumetric shrinkage during aluminum solidification was observed in real time using a radiographic viewing system in normal and reduced gravity. An end chill directional solidification furnace with water quench was developed to solidify aluminum samples during the approximately 16 seconds of reduced gravity (+/- 0.02g) achieved by flying an aircraft through a parabolic trajectory. Void formation was recorded for two cases: first, a nonwetting system; and second, a wetting system where wetting occurs between the aluminum and crucible lid. The void formation in the nonwetting case is similar in normal and reduced gravity, with a single vapor cavity forming at the top of the crucible. In the wetting case in reduced gravity, surface tension causes two voids to form in the top corners of the crucible, but in normal gravity only one large void forms across the top.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lashuel, Hilal A.; Aljabari, Bayan; Sigurdsson, Einar M.

    We demonstrate herein that human macrophage migration inhibitory factor (MIF), a pro-inflammatory cytokine expressed in the brain and not previously considered to be amyloidogenic, forms amyloid fibrils similar to those derived from the disease associated amyloidogenic proteins {beta}-amyloid and {alpha}-synuclein. Acid denaturing conditions were found to readily induce MIF to undergo amyloid fibril formation. MIF aggregates to form amyloid-like structures with a morphology that is highly dependent on pH. The mechanism of MIF amyloid formation was probed by electron microscopy, turbidity, Thioflavin T binding, circular dichroism spectroscopy, and analytical ultracentrifugation. The fibrillar structures formed by MIF bind Congo red and exhibit the characteristic green birefringence under polarized light. These results are consistent with the notion that amyloid fibril formation is not an exclusive property of a select group of amyloidogenic proteins, and contribute to a better understanding of the factors which govern protein conformational changes and amyloid fibril formation in vivo.

  20. Formative Assessment: Simply, No Additives

    ERIC Educational Resources Information Center

    Roskos, Kathleen; Neuman, Susan B.

    2012-01-01

    Among the types of assessment the closest to daily reading instruction is formative assessment. In contrast to summative assessment, which occurs after instruction, formative assessment involves forming judgments frequently in the flow of instruction. Key features of formative assessment include identifying gaps between where students are and…

  1. Electron Energy Loss Spectral Imaging of TiC Formed by Supernovae: A Scanning Transmission Electron Microscopy Study of Grain Formation and Alteration Mechanisms

    NASA Astrophysics Data System (ADS)

    Daulton, T. L.; Bernatowicz, T. J.; Croat, T. K.

    2012-03-01

    Micrometer-sized spherules of graphite formed by supernovae contain numerous TiC and Fe-Ni subgrains. These subgrains often have disordered surface rims. The mechanism(s) of rim formation on these subgrains is studied by transmission electron microscopy.

  2. 2 CFR 25.210 - Authority to modify agency application forms or formats.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 2 Grants and Agreements 1 2014-01-01 2014-01-01 false Authority to modify agency application forms or formats. 25.210 Section 25.210 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements OFFICE OF MANAGEMENT AND BUDGET GOVERNMENTWIDE GUIDANCE FOR GRANTS AND AGREEMENTS...

  3. 2 CFR 25.210 - Authority to modify agency application forms or formats.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 2 Grants and Agreements 1 2013-01-01 2013-01-01 false Authority to modify agency application forms or formats. 25.210 Section 25.210 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements OFFICE OF MANAGEMENT AND BUDGET GOVERNMENTWIDE GUIDANCE FOR GRANTS AND AGREEMENTS...

  4. 2 CFR 25.210 - Authority to modify agency application forms or formats.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 2 Grants and Agreements 1 2012-01-01 2012-01-01 false Authority to modify agency application forms or formats. 25.210 Section 25.210 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements OFFICE OF MANAGEMENT AND BUDGET GOVERNMENTWIDE GUIDANCE FOR GRANTS AND AGREEMENTS...

  5. 2 CFR 25.210 - Authority to modify agency application forms or formats.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false Authority to modify agency application forms or formats. 25.210 Section 25.210 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements OFFICE OF MANAGEMENT AND BUDGET GOVERNMENTWIDE GUIDANCE FOR GRANTS AND AGREEMENTS...

  6. Cryptography Would Reveal Alterations In Photographs

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L.

    1995-01-01

    Public-key decryption method proposed to guarantee authenticity of photographic images represented in form of digital files. In method, digital camera generates original data from image in standard public format; also produces coded signature to verify standard-format image data. Scheme also helps protect against other forms of lying, such as attaching false captions.

  7. SDSS-IV MaNGA: Spatially Resolved Star Formation Main Sequence and LI(N)ER Sequence

    NASA Astrophysics Data System (ADS)

    Hsieh, B. C.; Lin, Lihwai; Lin, J. H.; Pan, H. A.; Hsu, C. H.; Sánchez, S. F.; Cano-Díaz, M.; Zhang, K.; Yan, R.; Barrera-Ballesteros, J. K.; Boquien, M.; Riffel, R.; Brownstein, J.; Cruz-González, I.; Hagen, A.; Ibarra, H.; Pan, K.; Bizyaev, D.; Oravetz, D.; Simmons, A.

    2017-12-01

    We present our study on the spatially resolved Hα and M_* relation for 536 star-forming and 424 quiescent galaxies taken from the MaNGA survey. We show that the star formation rate surface density (Σ_SFR), derived based on the Hα emissions, is strongly correlated with the M_* surface density (Σ_*) on kiloparsec scales for star-forming galaxies and can be directly connected to the global star-forming sequence. This suggests that the global main sequence may be a consequence of a more fundamental relation on small scales. On the other hand, our result suggests that ∼20% of quiescent galaxies in our sample still have star formation activities in the outer region with lower specific star formation rate (SSFR) than typical star-forming galaxies. Meanwhile, we also find a tight correlation between Σ_Hα and Σ_* for LI(N)ER regions, named the resolved "LI(N)ER" sequence, in quiescent galaxies, which is consistent with the scenario that LI(N)ER emissions are primarily powered by the hot, evolved stars as suggested in the literature.

  8. Dynamical Formation and Merger of Binary Black Holes

    NASA Astrophysics Data System (ADS)

    Stone, Nicholas

    2017-01-01

    The advent of gravitational wave (GW) astronomy began with Advanced LIGO's 2015 discovery of GWs from coalescing black hole (BH) binaries. GW astronomy holds great promise for testing general relativity, but also for investigating open astrophysical questions not amenable to traditional electromagnetic observations. One such question concerns the origin of stellar mass BH binaries in the universe: do these form primarily from evolution of isolated binaries of massive stars, or do they form through more exotic dynamical channels? The best studied dynamical formation channel involves multibody interactions of BHs and stars in dense globular cluster environments, but many other dynamical scenarios have recently been proposed, ranging from the Kozai effect in hierarchical triple systems to BH binary formation in the outskirts of Toomre-unstable accretion disks surrounding supermassive black holes. The BH binaries formed through these processes will have different distributions of observable parameters (e.g. mass ratios, spins) than BH binaries formed through the evolution of isolated binary stars. In my talk I will overview these and other dynamical formation scenarios, and summarize the key observational tests that will enable Advanced LIGO or other future detectors to determine what formation pathway creates the majority of binary BHs in the universe. NCS thanks NASA, which has funded his work through Einstein postdoctoral grant PF5-160145.

  9. Lysosomal enzyme cathepsin B enhances the aggregate forming activity of exogenous α-synuclein fibrils.

    PubMed

    Tsujimura, Atsushi; Taguchi, Katsutoshi; Watanabe, Yoshihisa; Tatebe, Harutsugu; Tokuda, Takahiko; Mizuno, Toshiki; Tanaka, Masaki

    2015-01-01

    The formation of intracellular aggregates containing α-synuclein (α-Syn) is one of the key steps in the progression of Parkinson's disease and dementia with Lewy bodies. Recently, it was reported that pathological α-Syn fibrils can undergo cell-to-cell transmission and form Lewy body-like aggregates. However, little is known about how they form α-Syn aggregates from fibril seeds. Here, we developed an assay to study the process of aggregate formation using fluorescent protein-tagged α-Syn-expressing cells and examined the aggregate forming activity of exogenous α-Syn fibrils. α-Syn fibril-induced formation of intracellular aggregates was suppressed by a cathepsin B specific inhibitor, but not by a cathepsin D inhibitor. α-Syn fibrils pretreated with cathepsin B in vitro enhanced seeding activity in cells. Knockdown of cathepsin B also reduced fibril-induced aggregate formation. Moreover, using LAMP-1 immunocytochemistry and live-cell imaging, we observed that these aggregates initially occurred in the lysosome. They then rapidly grew larger and moved outside the boundary of the lysosome within one day. These results suggest that the lysosomal protease cathepsin B is involved in triggering intracellular aggregate formation by α-Syn fibrils. Copyright © 2015. Published by Elsevier Inc.

  10. In vitro biofilm forming potential of Streptococcus suis isolated from human and swine in China.

    PubMed

    Dawei, Guo; Liping, Wang; Chengping, Lu

    2012-07-01

    Streptococcus suis is a swine pathogen and also a zoonotic agent. The formation of biofilms allows S. suis to become persistent colonizers and resist clearance by the host immune system and antibiotics. In this study, biofilm forming potentials of various S. suis strains were characterized by confocal laser scanning microscopy (CLSM), scanning electron microscopy (SEM) and tissue culture plates stained with crystal violet. In addition, the effects of five antimicrobial agents on biofilm formation were assayed in this study. S. suis produced biofilms on smooth and rough surfaces. The nutritional contents including glucose and NaCl in the growth medium modulated biofilm formation. There was a significant difference in biofilm-forming ability among all 46 S. suis strains. The biofilm-forming potential of S. suis serotype 9 was stronger than that of type 2 and all other types. However, biofilm formation was inhibited by five commonly used antimicrobial agents, penicillin, erythromycin, azithromycin, ciprofloxacin, and ofloxacin at subinhibitory concentrations, among which the inhibition by ciprofloxacin and ofloxacin was stronger than that by the other three antimicrobial agents. Our study provides a detailed analysis of biofilm formation potential in S. suis, which is a step towards understanding its role in pathogenesis and should eventually lead to a better understanding of how to eradicate S. suis growing as biofilms with antibiotic therapy.

  11. Simulating the formation of carbon-rich molecules on an idealized graphitic surface

    NASA Astrophysics Data System (ADS)

    Marshall, David W.; Sadeghpour, H. R.

    2016-01-01

    There is accumulating evidence for the presence of complex molecules, including carbon-bearing and organic molecules, in the interstellar medium. Much of this evidence comes to us from studies of chemical composition, photo- and mass spectroscopy in cometary, meteoritic and asteroid samples, indicating a need to better understand the surface chemistry of astrophysical objects. There is also considerable interest in the origins of life-forming and life-sustaining molecules on the Earth. Here, we perform reactive molecular dynamics simulations to probe the formation of carbon-rich molecules and clusters on carbonaceous surfaces resembling dust grains and meteoroids. Our results show that large chains form on graphitic surfaces at low temperatures (100-500 K) and smaller fullerene-like molecules form at higher temperatures (2000-3000 K). The formation is faster on the surface than in the gas at low temperatures but slower at high temperatures as surface interactions prevent small clusters from coagulation. We find that for efficient formation of molecular complexity, mobility about the surface is important and helps to build larger carbon chains on the surface than in the gas phase at low temperatures. Finally, we show that the temperature of the surface strongly determines what kind of structures forms and that low turbulent environments are needed for efficient formation.

  12. Struvite formation and decomposition characteristics for ammonia and phosphorus recovery: A review of magnesium-ammonia-phosphate interactions.

    PubMed

    Tansel, Berrin; Lunn, Griffin; Monje, Oscar

    2018-03-01

    Struvite (MgNH₄PO₄·6H₂O) forms in aqueous systems with high ammonia and phosphate concentrations. However, conditions that result in struvite formation are highly dependent on the ionic compositions, temperature, pH, and ion speciation characteristics. The primary ions involved in struvite formation have complex interactions and can form different crystals depending on the ionic levels, pH, and temperature. Struvite, struvite analogues (with substitution of monovalent cations for NH₄⁺ or divalent cations for Mg²⁺), and other crystals can form simultaneously, resulting in changes in crystal morphology during crystal growth. This review provides the results from experimental and theoretical studies on struvite formation and decomposition. Characteristics of monovalent cations substituting for NH₄⁺ and divalent cations substituting for Mg²⁺ were evaluated with respect to the formation of struvite and its analogues. Struvite crystals forming in wastewater systems are likely to contain crystals other than struvite due to ionic interactions, pH changes, temperature effects, and clustering of ions during nucleation and crystal growth. Decomposition of struvite occurs following a series of reactions depending on the rate of heating, temperature, and availability of water during heating. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. A Massive Galaxy in Its Core Formation Phase Three Billion Years After the Big Bang

    NASA Technical Reports Server (NTRS)

    Nelson, Erica; van Dokkum, Pieter; Franx, Marijn; Brammer, Gabriel; Momcheva, Ivelina; Schreiber, Natascha M. Forster; da Cunha, Elisabete; Tacconi, Linda; Bezanson, Rachel; Kirkpatrick, Allison; et al.

    2014-01-01

    Most massive galaxies are thought to have formed their dense stellar cores at early cosmic epochs. However, cores in their formation phase have not yet been observed. Previous studies have found galaxies with high gas velocity dispersions or small apparent sizes, but so far no objects have been identified with both the stellar structure and the gas dynamics of a forming core. Here we present a candidate core in formation 11 billion years ago, at z = 2.3. GOODS-N-774 has a stellar mass of 1.0 × 10¹¹ solar masses, a half-light radius of 1.0 kpc, and a star formation rate of 90 (+45/−20) solar masses per year. The star-forming gas has a velocity dispersion of 317 ± 30 km/s, amongst the highest ever measured. It is similar to the stellar velocity dispersions of the putative descendants of GOODS-N-774: compact quiescent galaxies at z ≈ 2 and giant elliptical galaxies in the nearby Universe. Galaxies such as GOODS-N-774 appear to be rare; however, from the star formation rate and size of the galaxy we infer that many star-forming cores may be heavily obscured, and could be missed in optical and near-infrared surveys.

  14. Formation of ultra-compact dwarf galaxies from supergiant molecular clouds

    NASA Astrophysics Data System (ADS)

    Goodman, Morgan; Bekki, Kenji

    2018-05-01

    The origin of ultra-compact dwarf galaxies (UCDs) is not yet clear. One possible formation path of UCDs is the threshing of a nucleated elliptical dwarf galaxy (dE,N); however, it remains unclear how such massive nuclear stellar systems were formed in dwarf galaxies. To better establish the early history of UCDs, we investigate the formation of UCD progenitor clusters from supergiant molecular clouds (SGMCs), using hydrodynamical simulations. In this study we focus on SGMCs with masses of 10⁷–10⁸ M⊙ that can form massive star clusters displaying physical properties similar to those of UCDs. We find that the clusters have extended star formation histories with two phases, producing multiple distinct stellar populations, and that the star formation rate is dependent on the feedback effects of SNe and AGB stars. The later generations of stars formed in these clusters are more compact, leading to a clearly nested structure, and these stars will be more He-rich than those of the first generation, leading to a slight colour gradient. The simulated clusters demonstrate scaling relations between R_eff and M and between σ_v and M consistent with those observed in UCDs and strongly consistent with those of the original SGMC. We discuss whether SGMCs such as these can be formed through the merging of self-gravitating molecular clouds in galaxies at high z.

  15. DNA G-Wire Formation Using an Artificial Peptide is Controlled by Protease Activity.

    PubMed

    Usui, Kenji; Okada, Arisa; Sakashita, Shungo; Shimooka, Masayuki; Tsuruoka, Takaaki; Nakano, Shu-Ichi; Miyoshi, Daisuke; Mashima, Tsukasa; Katahira, Masato; Hamada, Yoshio

    2017-11-16

    The development of a switching system for guanine nanowire (G-wire) formation by external signals is important for nanobiotechnological applications. Here, we demonstrate a DNA nanostructural switch (G-wire ↔ particles) using a designed peptide and a protease. The peptide consists of a PNA sequence that induces DNA to form DNA-PNA hybrid G-quadruplex structures, and a protease substrate sequence acting as a switching module dependent on the activity of a particular protease. Micro-scale analyses by TEM and AFM showed that G-rich DNA alone forms G-wires in the presence of Ca²⁺, and that the peptide disrupted this formation, resulting in the formation of particles. The addition of the protease and digestion of the peptide regenerated the G-wires. Macro-scale analyses by DLS, zeta potential, CD, and gel filtration agreed with the microscopic observations. These results imply that the secondary structure change (DNA G-quadruplex ↔ DNA/PNA hybrid structure) induces a change in the well-formed nanostructure (G-wire ↔ particles). Our findings demonstrate a system for controlling the formation of DNA G-wire structures, dependent on protease activity, using designed peptides. Such systems hold promise for regulating nanowire formation for various applications, including electronic circuits for use in nanobiotechnologies.

  16. Method for closing a drift between adjacent in situ oil shale retorts

    DOEpatents

    Hines, Alex E.

    1984-01-01

    A row of horizontally spaced-apart in situ oil shale retorts is formed in a subterranean formation containing oil shale. Each row of retorts is formed by excavating development drifts at different elevations through opposite side boundaries of a plurality of retorts in the row of retorts. Each retort is formed by explosively expanding formation toward one or more voids within the boundaries of the retort site to form a fragmented permeable mass of formation particles containing oil shale in each retort. Following formation of each retort, the retort development drifts on the advancing side of the retort are closed off by covering formation particles within the development drift with a layer of crushed oil shale particles having a particle size smaller than the average particle size of oil shale particles in the adjacent retort. In one embodiment, the crushed oil shale particles are pneumatically loaded into the development drift to pack the particles tightly all the way to the top of the drift and throughout the entire cross section of the drift. The closure between adjacent retorts provided by the finely divided oil shale provides sufficient resistance to gas flow through the development drift to effectively inhibit gas flow through the drift during subsequent retorting operations.

  17. Formation of new stellar populations from gas accreted by massive young star clusters.

    PubMed

    Li, Chengyuan; de Grijs, Richard; Deng, Licai; Geller, Aaron M; Xin, Yu; Hu, Yi; Faucher-Giguère, Claude-André

    2016-01-28

    Stars in clusters are thought to form in a single burst from a common progenitor cloud of molecular gas. However, massive, old 'globular' clusters--those with ages greater than ten billion years and masses several hundred thousand times that of the Sun--often harbour multiple stellar populations, indicating that more than one star-forming event occurred during their lifetimes. Colliding stellar winds from late-stage, asymptotic-giant-branch stars are often suggested to be triggers of second-generation star formation. For this to occur, the initial cluster masses need to be greater than a few million solar masses. Here we report observations of three massive relatively young star clusters (1-2 billion years old) in the Magellanic Clouds that show clear evidence of burst-like star formation that occurred a few hundred million years after their initial formation era. We show that such clusters could have accreted sufficient gas to form new stars if they had orbited in their host galaxies' gaseous disks throughout the period between their initial formation and the more recent bursts of star formation. This process may eventually give rise to the ubiquitous multiple stellar populations in globular clusters.

  18. Kinematic evidence for feedback-driven star formation in NGC 1893

    NASA Astrophysics Data System (ADS)

    Lim, Beomdu; Sung, Hwankyung; Bessell, Michael S.; Lee, Sangwoo; Lee, Jae Joon; Oh, Heeyoung; Hwang, Narae; Park, Byeong-Gon; Hur, Hyeonoh; Hong, Kyeongsoo; Park, Sunkyung

    2018-06-01

    OB associations are the prevailing star-forming sites in the Galaxy. Up to now, how OB associations form has remained a mystery. A possible process is self-regulating star formation driven by feedback from massive stars. However, although a number of observational studies have uncovered various signposts of feedback-driven star formation, the effectiveness of such feedback has been questioned. Stellar and gas kinematics are a promising means of capturing the relative motion of newborn stars and gas away from ionizing sources. We present high-resolution spectroscopy of stars and gas in the young open cluster NGC 1893. Our findings show that newborn stars and the tadpole nebula Sim 130 are moving away from the central cluster containing two O-type stars, and that the time-scale of sequential star formation is about 1 Myr within a distance of 9 pc. The newborn stars formed by feedback from massive stars account for at least 18 per cent of the total stellar population in the cluster, suggesting that this process can play an important role in the formation of OB associations. These results support the self-regulating star formation model.

  19. 3-D Imagery Cockpit Display Development

    DTIC Science & Technology

    1990-08-01

    and ownship appeared in the lower right corner of the CLF, in the form XXX NM. The third data line showed individual aircraft altitude in the form XX K...data line showed aircraft heading in the form XXX ° . 4.2.5.3 Close-Look Format Options The Close-Look Format option switches are shown in Figure 4.2...vector, it is Need for declutter levels, redundant. Need range readout. Needs basic altitude reference. Low altitude warning system with sexy female

  20. Forming metal-intermetallic or metal-ceramic composites by self-propagating high-temperature reactions

    DOEpatents

    Rawers, James C.; Alman, David E.; Petty, Jr., Arthur V.

    1996-01-01

    Industrial applications of composites often require that the final product have a complex shape. In this invention, intermetallic or ceramic phases are formed from sheets of unreacted elemental metals. The process described in this invention allows the final product shape to be formed prior to the formation of the composite. This saves energy and allows the formation of shaped articles of metal-intermetallic composites composed of brittle materials that cannot be deformed without breaking.

  1. Clumpy Disks as a Testbed for Feedback-regulated Galaxy Formation

    NASA Astrophysics Data System (ADS)

    Mayer, Lucio; Tamburello, Valentina; Lupi, Alessandro; Keller, Ben; Wadsley, James; Madau, Piero

    2016-10-01

    We study the dependence of fragmentation in massive gas-rich galaxy disks at z > 1 on stellar feedback schemes and hydrodynamical solvers, employing the GASOLINE2 SPH code and the Lagrangian meshless code GIZMO in finite-mass mode. Non-cosmological galaxy disk runs with the standard delayed-cooling blastwave feedback are compared with runs adopting a new superbubble feedback, which produces winds by modeling the detailed physics of supernova-driven bubbles and leads to efficient self-regulation of star formation. We find that, with blastwave feedback, massive star-forming clumps form in comparable numbers and with very similar masses in GASOLINE2 and GIZMO. Typical clump masses are in the range 10⁷-10⁸ M⊙, lower than in most previous works, while giant clumps with masses above 10⁹ M⊙ are exceedingly rare. By contrast, superbubble feedback does not produce massive star-forming bound clumps, as galaxies never undergo a phase of violent disk instability. In this scheme, only sporadic, unbound star-forming overdensities lasting a few tens of Myr can arise, triggered by non-linear perturbations from massive satellite companions. We conclude that there is severe tension between explaining massive star-forming clumps observed at z > 1 primarily as the result of disk fragmentation driven by gravitational instability and the prevailing view of feedback-regulated galaxy formation. The link between disk stability and star formation efficiency should thus be regarded as a key testing ground for galaxy formation theory.

  2. Arbitrary conditional discriminative functions of meaningful stimuli and enhanced equivalence class formation.

    PubMed

    Nedelcu, Roxana I; Fields, Lanny; Arntzen, Erik

    2015-03-01

    Equivalence class formation by college students was influenced by the prior acquisition of conditional discriminative functions by one of the abstract stimuli (C) in the to-be-formed classes. Participants in the GR-0, GR-1, and GR-5 groups attempted to form classes under the simultaneous protocol, after mastering 0, 1, or 5 conditional relations between C and other abstract stimuli (V, W, X, Y, Z) that were not included in the to-be-formed classes (ABCDE). Participants in the GR-many group attempted to form classes that contained four abstract stimuli and one meaningful picture as the C stimulus. In the GR-0, GR-1, GR-5, and GR-many groups, classes were formed by 17, 25, 58, and 67% of participants, respectively. Thus, the likelihood of class formation was enhanced by the prior formation of five C-based conditional relations (the GR-5 vs. GR-0 condition), or by the inclusion of a meaningful stimulus as a class member (the GR-many vs. GR-0 condition). The GR-5 and GR-many conditions produced very similar yields, indicating that class formation was enhanced to a similar degree by including a meaningful stimulus or an abstract stimulus that had become a member of five conditional relations prior to equivalence class formation. Finally, the low and high yields produced by the GR-1 and GR-5 conditions showed that the class-enhancement effect of the GR-5 condition was due to the number of conditional relations established during preliminary training and not to the sheer amount of reinforcement provided while learning these conditional relations. Class enhancement produced by meaningful stimuli, then, can be attributed to their acquired conditional discriminative functions as well as their discriminative, connotative, and denotative properties. © Society for the Experimental Analysis of Behavior.

  3. A Higher Efficiency of Converting Gas to Stars Pushes Galaxies at z ~ 1.6 Well Above the Star-forming Main Sequence

    NASA Astrophysics Data System (ADS)

    Silverman, J. D.; Daddi, E.; Rodighiero, G.; Rujopakarn, W.; Sargent, M.; Renzini, A.; Liu, D.; Feruglio, C.; Kashino, D.; Sanders, D.; Kartaltepe, J.; Nagao, T.; Arimoto, N.; Berta, S.; Béthermin, M.; Koekemoer, A.; Lutz, D.; Magdis, G.; Mancini, C.; Onodera, M.; Zamorani, G.

    2015-10-01

    Local starbursts have a higher efficiency of converting gas into stars, as compared to typical star-forming galaxies at a given stellar mass, possibly indicative of different modes of star formation. With the peak epoch of galaxy formation occurring at z > 1, it remains to be established whether such an efficient mode of star formation is occurring at high redshift. To address this issue, we measure the molecular gas content of seven high-redshift (z ~ 1.6) starburst galaxies with the Atacama Large Millimeter/submillimeter Array and IRAM/Plateau de Bure Interferometer. Our targets are selected from the sample of Herschel far-infrared-detected galaxies having star formation rates (~300-800 M⊙ yr⁻¹) elevated (≳4×) above the star-forming main sequence (MS) and included in the FMOS-COSMOS near-infrared spectroscopic survey of star-forming galaxies at z ~ 1.6 with Subaru. We detect CO emission in all cases at high levels of significance, indicative of high gas fractions (~30%-50%). Even more compelling, we firmly establish with a clean and systematic selection that starbursts, identified as MS outliers, at high redshift generally have a lower ratio of CO to total infrared luminosity as compared to typical MS star-forming galaxies, although with a smaller offset than expected based on past studies of local starbursts. We put forward a hypothesis that there exists a continuous increase in star formation efficiency with elevation from the MS with galaxy mergers as a possible physical driver. Along with a heightened star formation efficiency, our high-redshift sample is similar in other respects to local starbursts, such as being metal rich and having a higher ionization state of the interstellar medium.
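
    As a reading aid (standard definitions, not equations from the paper): the CO line luminosity traces molecular gas through an assumed conversion factor α_CO, and the star formation rate is calibrated from the infrared luminosity, so at fixed L_IR a lower CO-to-IR ratio implies a higher star formation efficiency, i.e. a shorter gas depletion time:

    \[
    M_{\mathrm{gas}} = \alpha_{\mathrm{CO}}\,L'_{\mathrm{CO}},
    \qquad
    \mathrm{SFE} \equiv \frac{\mathrm{SFR}}{M_{\mathrm{gas}}} = t_{\mathrm{dep}}^{-1}
    \;\propto\; \frac{L_{\mathrm{IR}}}{L'_{\mathrm{CO}}}
    \]

    The observed drop in CO luminosity relative to L_IR above the MS therefore corresponds directly to the rise in star formation efficiency the authors describe, modulo the assumed α_CO.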

  4. Amyloid fibril formation in vitro from halophilic metal binding protein: Its high solubility and reversibility minimized formation of amorphous protein aggregations

    PubMed Central

    Tokunaga, Yuhei; Matsumoto, Mitsuharu; Tokunaga, Masao; Arakawa, Tsutomu; Sugimoto, Yasushi

    2013-01-01

    Halophilic proteins are characterized by high net negative charge and a relatively small fraction of hydrophobic amino acids, rendering them aggregation resistant. These properties are also shared by the histidine-rich metal binding protein (HP) from the moderate halophile Chromohalobacter salexigens used in this study. Here, we examined how halophilic proteins form amyloid fibrils in vitro. His-tagged HP, incubated at pH 2.0 and 58°C, readily formed amyloid fibrils, as observed by thioflavin fluorescence, CD spectra, and transmission electron or atomic force microscopy. Under these harsh low-pH conditions, however, His-HP was promptly hydrolyzed to smaller peptides most likely responsible for the rapid formation of amyloid fibrils. Three major acid-hydrolyzed peptides were isolated from the fibrils and turned out to readily form fibrils themselves. Synthetic peptides matching the segments of these peptide sequences predicted to form fibrils by the Waltz software also formed fibrils. Amyloid fibrils also formed readily from full-length His-HP when incubated with 10-20% 2,2,2-trifluoroethanol at pH 7.8 and 25°C without peptide bond cleavage. PMID:24038709

  5. Metabolic changes associated with shoot formation in tobacco callus cultures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grady, K.L.

    1982-08-01

    Callus tissue derived from Nicotiana tabacum L. stem pith parenchyma cells was grown either on medium which maintains the callus in an undifferentiated state, or on medium which induces the formation of shoots. Two complementary types of studies were performed with the goal of establishing metabolic markers for the initiation of shoot formation: one designed to characterize the flow of radioactive sucrose into various metabolic pools, and one which allowed measurement of intermediary metabolite concentrations. In the former, callus tissue was incubated in (U-¹⁴C)sucrose for periods up to one hour, and patterns of metabolite labelling in tissue grown on shoot-forming and non-shoot-forming media were compared. In the latter studies, tissue was grown for an entire subculture period on non-shoot-forming medium labelled with (U-¹⁴C)sucrose, then subcultured to labelled non-shoot-forming or shoot-forming media, and sampled at intervals during the first week of growth. 189 references.

  6. Formation of structures around HII regions: ionization feedback from massive stars

    NASA Astrophysics Data System (ADS)

    Tremblin, P.; Audit, E.; Minier, V.; Schmidt, W.; Schneider, N.

    2015-03-01

    We present a new model for the formation of dense clumps and pillars around HII regions, based on shock curvature at the interface between an HII region and a molecular cloud. UV radiation leads to the formation of an ionization front with a shock ahead of it. The gas is compressed between them, forming a dense shell at the interface. This shell may be curved owing to the initial shape of the interface or to density modulations caused by the turbulence of the molecular cloud. Low curvature leads to instabilities in the shell that form dense clumps, while sufficiently curved shells collapse on themselves to form pillars. When turbulence is high compared to the ionized-gas pressure, bubbles of cold gas have sufficient kinetic energy to penetrate into the HII region and detach themselves from the parent cloud, forming cometary globules. Using computational simulations, we show that these new models are extremely efficient at forming dense clumps and stable, growing elongated structures, pillars, in which star formation might occur (see Tremblin et al. 2012a). The inclusion of turbulence in the model shows its importance in the formation of cometary globules (see Tremblin et al. 2012b). Globally, the density enhancement in the simulations is one to two orders of magnitude higher than in the classical "collect and collapse" scenario. The code used for the simulations is HERACLES, which includes hydrodynamics with various equations of state, radiative transfer, gravity, cooling, and heating. Our recent observations with Herschel (see Schneider et al. 2012a) and SOFIA (see Schneider et al. 2012b), together with Spitzer archival data, revealed many more of these structures in regions where OB stars have already formed, such as the Rosette Nebula, Cygnus X, M16, and Vela, suggesting that the UV radiation from massive stars plays an important role in their formation. We present a first comparison between the simulations described above and recent observations of these regions.

  7. 7 CFR 1755.407 - Data formats.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Data formats. 1755.407 Section 1755.407 Agriculture... TELECOMMUNICATIONS POLICIES ON SPECIFICATIONS, ACCEPTABLE MATERIALS, AND STANDARD CONTRACT FORMS § 1755.407 Data formats. The following suggested formats listed in this section may be used for recording the test data...

  8. 7 CFR 1755.407 - Data formats.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Data formats. 1755.407 Section 1755.407 Agriculture... TELECOMMUNICATIONS POLICIES ON SPECIFICATIONS, ACCEPTABLE MATERIALS, AND STANDARD CONTRACT FORMS § 1755.407 Data formats. The following suggested formats listed in this section may be used for recording the test data...

  9. 7 CFR 1755.407 - Data formats.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Data formats. 1755.407 Section 1755.407 Agriculture... TELECOMMUNICATIONS POLICIES ON SPECIFICATIONS, ACCEPTABLE MATERIALS, AND STANDARD CONTRACT FORMS § 1755.407 Data formats. The following suggested formats listed in this section may be used for recording the test data...

  10. 7 CFR 1755.407 - Data formats.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Data formats. 1755.407 Section 1755.407 Agriculture... TELECOMMUNICATIONS POLICIES ON SPECIFICATIONS, ACCEPTABLE MATERIALS, AND STANDARD CONTRACT FORMS § 1755.407 Data formats. The following suggested formats listed in this section may be used for recording the test data...

  11. 7 CFR 1755.407 - Data formats.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Data formats. 1755.407 Section 1755.407 Agriculture... TELECOMMUNICATIONS POLICIES ON SPECIFICATIONS, ACCEPTABLE MATERIALS, AND STANDARD CONTRACT FORMS § 1755.407 Data formats. The following suggested formats listed in this section may be used for recording the test data...

  12. Products of BVOC oxidation: ozone and organic aerosols

    NASA Astrophysics Data System (ADS)

    Wildt, Jürgen; Andres, Stefanie; Carriero, Giulia; Ehn, Mikael; Fares, Silvano; Hoffmann, Thorsten; Hacker, Lina; Kiendler-Scharr, Astrid; Kleist, Einhard; Paoletti, Elena; Pullinen, Iida; Rohrer, Franz; Rudich, Yinon; Springer, Monika; Tillmann, Ralf; Wahner, Andreas; Wu, Cheng; Mentel, Thomas

    2015-04-01

    Biogenic volatile organic compounds (BVOC) are important precursors in photochemical O3 and secondary organic aerosol (SOA) formation. We conducted a series of laboratory experiments on the OH-induced oxidation of monoterpenes to elucidate the pathways and efficiencies of O3 and SOA formation. Under high-NOx conditions ([BVOC] / [NOx] < 7 ppbC / ppb), photochemical ozone formation was observed. For α-pinene as an individual BVOC, as well as for the monoterpene mixes emitted from different plant species, we observed increasing ozone formation with increasing [NOx]. Between 2 and 3 O3 molecules were formed per monoterpene when ozone formation was BVOC limited. Under such high-NOx conditions, new particle formation was suppressed. Increasing [BVOC] / [NOx] ratios increased the efficiency of new particle formation, indicating that peroxy radicals are the key intermediates in both photochemical ozone formation and new particle formation. The classical chemistry of peroxy radicals is well established (e.g., in the Master Chemical Mechanism). Peroxy radicals are produced by the addition of molecular oxygen to the alkyl radical formed after OH attack on the BVOC. They either react with NO, which leads to ozone formation, or they react with other peroxy radicals and form chemically stable products (hydroperoxides, alcohols, and ketones). Much less is known about such reactions for highly oxidized peroxy radicals (HOPR). Such HOPR were observed during the ozonolysis of several volatiles and, in the case of monoterpenes as precursors, they can contain more than 12 oxygen atoms (Mentel et al., 2015). Although the OH-initiated formation of HOPR is not yet fully understood, their basic gas-phase reactions seem to follow classical photochemical rules. In reactions with NO they can act as precursors for O3, and in reactions with other HOPR or with classical, less oxidized peroxy radicals they can form highly oxidized stable products and alkoxy radicals. In addition, HOPR-HOPR reactions lead to the formation of dimers that, in the case of monoterpenes as reactants, consist of a skeleton with 20 carbon atoms. These dimers seem to play a major role in new particle formation, and their existence may explain the observations of Wildt et al. (2014), who found a power-law dependence with an exponent approaching -2 between new particle formation and ozone formation. The monomer products of HOPR-HOPR reactions play a dominant role in SOA mass formation because their vapour pressures are low enough to allow condensation on pre-existing particulate matter (Ehn et al., 2014). Furthermore, the minor impact of NOx on particle mass formation (Wildt et al., 2014) is explainable by similar yields of alkoxy radicals in HOPR-HOPR and HOPR-NO reactions, respectively.
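
    The classical peroxy-radical steps invoked above can be written out explicitly; this is the textbook scheme rather than one taken from the abstract (R denotes the organic backbone; for monoterpenes the initial OH attack is predominantly addition to a C=C double bond rather than H abstraction):

    \begin{align*}
    \mathrm{BVOC + OH} &\rightarrow \mathrm{R^{\bullet}} \\
    \mathrm{R^{\bullet} + O_2 + M} &\rightarrow \mathrm{RO_2^{\bullet} + M} \\
    \mathrm{RO_2^{\bullet} + NO} &\rightarrow \mathrm{RO^{\bullet} + NO_2} \\
    \mathrm{NO_2 + h\nu} &\rightarrow \mathrm{NO + O(^3P)} \\
    \mathrm{O(^3P) + O_2 + M} &\rightarrow \mathrm{O_3 + M}
    \end{align*}

    Each RO2 + NO conversion yields one NO2 and hence, after photolysis, one O3; since the oxidation of a single monoterpene passes through several peroxy-radical generations, yields of 2-3 O3 molecules per monoterpene under BVOC-limited conditions are consistent with this scheme, while RO2 + R'O2 termination (to hydroperoxides, alcohols, ketones, or dimers) competes when NOx is low.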

  13. Disruption of Giant Molecular Clouds by Massive Star Clusters

    NASA Astrophysics Data System (ADS)

    Harper-Clark, Elizabeth

    The lifetime of a Giant Molecular Cloud (GMC) and the total mass of stars that form within it are crucial to understanding star formation rates across a whole galaxy. In particular, the stars within a GMC may dictate its disruption and the quenching of further star formation. Indeed, observations show that the Milky Way contains GMCs with extensive expanding bubbles while their most massive stars are still alive. Simulating entire GMCs is challenging, owing to the variety of physics that must be included and the computational power required to accurately follow a GMC over tens of millions of years. Using the radiative-magneto-hydrodynamic code Enzo, I have run many simulations of GMCs. I obtain robust results for the fraction of gas converted into stars and the lifetimes of the GMCs: (A) In simulations with no stellar outputs (or "feedback"), clusters form at a rate of 30% of GMC mass per free-fall time; the GMCs were not disrupted but contained forming stars. (B) Including ionized-gas pressure or radiation pressure in the simulations, both separately and together, star formation was quenched at between 5% and 21% of the original GMC mass. The clouds were fully disrupted within two dynamical times after the first cluster formed. Radiation pressure contributed the most to the disruption of the GMC and fully quenched star formation even without ionization. (C) Simulations that included supernovae showed that they are not dynamically important to GMC disruption and have only minor effects on subsequent star formation. (D) The inclusion of a magnetic field of a few microgauss across the cloud slightly reduced the star formation rate but accelerated GMC disruption by reducing bubble-shell disruption and leaking. These simulations show that newborn stars quench further star formation and completely disrupt the parent GMC. The low star formation rate and the short GMC lifetimes shown here can explain the low star formation rate across the whole galaxy.
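
    For reference, the "per free-fall time" normalization used in result (A) can be made explicit; these definitions are standard in the star formation literature rather than spelled out in the abstract:

    \[
    t_{\mathrm{ff}} = \sqrt{\frac{3\pi}{32\,G\,\bar{\rho}}},
    \qquad
    \epsilon_{\mathrm{ff}} = \frac{\dot{M}_{\star}\,t_{\mathrm{ff}}}{M_{\mathrm{GMC}}}
    \]

    On this scale, result (A) corresponds to ε_ff ≈ 0.3 in the no-feedback runs, far above the values of order 1% commonly inferred for observed clouds, consistent with the conclusion that stellar feedback is what brings the simulated star formation rate down to realistic levels.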

  14. Phenotypic and genotypic characteristics associated with biofilm formation in clinical isolates of atypical enteropathogenic Escherichia coli (aEPEC) strains.

    PubMed

    Nascimento, Heloisa H; Silva, Lucas E P; Souza, Renata T; Silva, Neusa P; Scaletsky, Isabel C A

    2014-07-10

    Biofilm formation by enteropathogenic Escherichia coli (EPEC) has recently been described in the prototype typical EPEC E2348/69 strain and in an atypical EPEC O55:H7 strain. In this study, we sought to evaluate biofilm formation in a collection of 126 atypical EPEC strains of different serotypes, isolated from 92 diarrheic and 34 nondiarrheic children. The association between biofilm formation and adhesin-related genes was also investigated. Biofilm formation occurred in 37 (29%) strains of different serotypes when the assays were performed at 26°C and 37°C for 24 h. Among these, four strains (A79, A87, A88, and A111) formed a stronger biofilm than did the others. The frequency of biofilm producers was higher among isolates from patients than among isolates from controls (34.8% vs 14.7%; P = 0.029). An association was found between biofilm formation and expression of type 1 fimbriae and curli (P < 0.05). Unlike the previously described aEPEC O55:H7, one aEPEC O119:HND strain (A111) formed a strong biofilm and pellicle at the air-liquid interface but did not express curli. Transposon mutagenesis was used to identify biofilm-deficient mutants. Transposon insertion sequences of six mutants revealed similarity with type 1 fimbriae (fimC, fimD, and fimH), diguanylate cyclase, ATP synthase F1, beta subunit (atpD), and the uncharacterized YjiC protein. All of these mutants were deficient in biofilm formation. This study showed that the ability to adhere to abiotic surfaces and form biofilm is present in an array of aEPEC strains. Moreover, it seems that the ability to form biofilms is associated with the presence of type 1 fimbriae and diguanylate cyclase. Characterization of additional biofilm formation mutants may reveal other mechanisms involved in biofilm formation and bring new insights into aEPEC adhesion and pathogenesis.

  15. Effect of Formative Quizzes on Teacher Candidates' Learning in General Chemistry

    ERIC Educational Resources Information Center

    Yalaki, Yalcin; Bayram, Zeki

    2015-01-01

    Formative assessment, or assessment for learning, is one of the most emphasized educational innovations around the world. Two common strategies that can be used in formative assessment are the use of summative tests for formative purposes and comment-only marking. We utilized these strategies in the form of formative quizzes in a general…

  16. Biofilm formation is not associated with worse outcome in Acinetobacter baumannii bacteraemic pneumonia.

    PubMed

    Wang, Yung-Chih; Huang, Tzu-Wen; Yang, Ya-Sung; Kuo, Shu-Chen; Chen, Chung-Ting; Liu, Chang-Pan; Liu, Yuag-Meng; Chen, Te-Li; Chang, Feng-Yee; Wu, Shih-Hsiung; How, Chorng-Kuang; Lee, Yi-Tzu

    2018-05-08

    The effect of biofilm formation on bacteraemic pneumonia caused by A. baumannii is unknown. We conducted a 4-year multi-center retrospective study to analyze 71 and 202 patients with A. baumannii bacteraemic pneumonia caused by biofilm-forming and non-biofilm-forming isolates, respectively. The clinical features and outcomes of patients were investigated. Biofilm formation was determined by a microtitre plate assay. The antimicrobial susceptibilities of biofilm-associated cells were assessed using the minimum biofilm eradication concentration (MBEC) assay. Whole-genome sequencing was conducted to identify biofilm-associated genes and their promoters. Quantitative reverse transcription polymerase chain reaction was performed to confirm the expression difference of biofilm-associated genes. There was no significant difference in the clinical characteristics or the outcomes between patients infected with biofilm-forming and non-biofilm-forming strains. Compared with non-biofilm-forming isolates, biofilm-forming isolates exhibited lower resistance to most antimicrobials tested, including imipenem, meropenem, ceftazidime, ciprofloxacin and gentamicin; however, the MBEC assay confirmed the increased antibiotic resistance of the biofilm-embedded bacteria. Biofilm-associated genes and their promoters were detected in most isolates, including the non-biofilm-forming strains. Biofilm-forming isolates showed higher levels of expression of the biofilm-associated genes than non-biofilm-forming isolates. The biofilm-forming ability of A. baumannii isolates might not be associated with worse outcomes in patients with bacteraemic pneumonia.

  17. Biofilm Formation Characteristics of Pseudomonas lundensis Isolated from Meat.

    PubMed

    Liu, Yong-Ji; Xie, Jing; Zhao, Li-Jun; Qian, Yun-Fang; Zhao, Yong; Liu, Xiao

    2015-12-01

    Biofilm formation by spoilage and pathogenic bacteria on food or food-contact surfaces has attracted increasing attention. These events may lead to a higher risk of food spoilage and foodborne disease transmission. While Pseudomonas lundensis is one of the most important bacteria causing spoilage of chilled meat, its capability for biofilm formation has seldom been reported. Here, we investigated the biofilm formation characteristics of P. lundensis, mainly using crystal violet staining and confocal laser scanning microscopy (CLSM). The swarming and swimming motility, biofilm formation at different temperatures (30, 10, and 4 °C), and the protease activity of the target strain were also assessed. The results showed that P. lundensis exhibited typical surface-associated motility and was quite capable of forming biofilms at different temperatures (30, 10, and 4 °C). The strain began to adhere to the contact surfaces and form biofilms within the first 4 to 6 h. Biofilms began to form in massive amounts after 12 h at 30 °C, and the extracellular polysaccharides increased as the biofilm structure developed. Compared with 30 °C, more biofilm was formed at 4 and 10 °C, even at low bacterial density. The protease activity in the biofilm was significantly correlated with biofilm formation. Moreover, the protease activity in the biofilm was significantly higher than that of the corresponding planktonic cultures after 12 h of culture at 30 °C. © 2015 Institute of Food Technologists®

  18. NW-MILO Acoustic Data Collection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Myers, Joshua R.; Maxwell, Adam R.

    2010-02-17

    There is an enduring requirement to improve our ability to detect potential threats and discriminate them from the legitimate commercial and recreational activity ongoing in the nearshore/littoral portion of the maritime domain. The Northwest Maritime Information and Littoral Operations (NW-MILO) Program at PNNL's Coastal Security Institute in Sequim, Washington is establishing a methodology to detect and classify these threats - in part through developing a better understanding of acoustic signatures in a near-shore environment. The purpose of the acoustic data collection described here is to investigate the acoustic signatures of small vessels. The data is being recorded continuously, 24 hours a day, along with radar track data and imagery. The recording began in August 2008, and to date the data contains tens of thousands of signals from small vessels recorded in a variety of environmental conditions. The quantity and variety of this data collection, with the supporting imagery and radar track data, makes it particularly useful for the development of robust acoustic signature models and advanced algorithms for signal classification and information extraction. The underwater acoustic sensing system is part of a multi-modal sensing system operating near the mouth of Sequim Bay. Sequim Bay opens onto the Strait of Juan de Fuca, which contains part of the border between the U.S. and Canada. Table 1 lists the specific components used for the NW-MILO system. The acoustic sensor is a hydrophone permanently deployed at a mean depth of about 3 meters. In addition to the hydrophone, the other sensors in the system are a marine radar, an electro-optical (EO) camera, and an infrared (IR) camera. The radar is integrated with a vessel tracking system (VTS) that provides position, speed, and heading information. The data from all the sensors is recorded and saved to a central server. The data has been validated in terms of its usability for characterizing the signatures of small vessels. The sampling rate of 8 kHz and low-pass filtering to 2 kHz result in an alias-free signal in the frequency band appropriate for small vessels. Calibration was performed using a Lubell underwater speaker so that the raw data signal levels can be converted to sound pressure. Background noise is present due to a nearby pump and tidal currents. More study is needed to fully characterize the noise, but it does not pose an obstacle to using the acoustic data for the purposes of vessel detection and signature analysis. The detection range for a small vessel was estimated using the calibrated voltage response of the system and a cylindrical spreading model for transmission loss. The sound pressure of a typical vessel with an outboard motor was found to be around 140 dB µPa, and could theoretically be detected from 10 km away. In practical terms, a small vessel could reliably be detected from 3-5 km away. The data is archived in netCDF files, a standard scientific file format that is "self-describing". This means that each data file contains the metadata - timestamps, units, origin, etc. - needed to make the data meaningful and portable. Other file formats, such as XML, are also supported. A visualization tool has been developed to view the acoustic data in the form of spectrograms, along with the coincident radar track data and camera images.
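
    Because the archive described above stores its acoustic records in self-describing netCDF files, a short sketch may help illustrate how such data could be read and rendered as a spectrogram. This is a hypothetical example: the file name, the variable name "hydrophone", and the "sample_rate" attribute are assumptions for illustration, not the actual NW-MILO schema.

        import matplotlib.pyplot as plt
        import numpy as np
        from netCDF4 import Dataset
        from scipy.signal import spectrogram

        # Hypothetical file and variable names; the real NW-MILO schema is not published here.
        with Dataset("hydrophone_20080815.nc") as nc:
            var = nc.variables["hydrophone"]
            pressure = np.asarray(var[:])       # calibrated sound-pressure samples
            fs = float(var.sample_rate)         # e.g., 8000 Hz per the abstract

        # Compute a spectrogram; the abstract notes the signal is low-pass filtered to 2 kHz.
        f, t, Sxx = spectrogram(pressure, fs=fs, nperseg=1024, noverlap=512)

        # Plot power in dB; the small offset avoids log10(0) in silent bins.
        plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="auto")
        plt.ylim(0, 2000)                       # band of interest for small-vessel signatures
        plt.xlabel("Time (s)")
        plt.ylabel("Frequency (Hz)")
        plt.colorbar(label="Power (dB)")
        plt.show()

    Because netCDF files carry their own metadata, a reader script like this can discover units and timestamps from the file itself rather than relying on external documentation, which is precisely the portability argument made in the abstract.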

  19. The Formation of Mini-Neptunes

    NASA Astrophysics Data System (ADS)

    Venturini, Julia; Helled, Ravit

    2017-10-01

    Mini-Neptunes seem to be common planets. In this work we investigate the possible formation histories and predicted occurrence rates of mini-Neptunes, assuming that the planets form beyond the iceline. We consider pebble and planetesimal accretion, accounting for envelope enrichment and two different opacity conditions. We find that the formation of mini-Neptunes is a relatively frequent outcome when envelope enrichment by volatiles is included, and that there is a “sweet spot” for mini-Neptune formation at a relatively low solid accretion rate of ~10⁻⁶ M⊕ yr⁻¹. This rate is typical for low/intermediate-mass protoplanetary disks and/or disks with low metallicities. With pebble accretion, envelope enrichment and high opacity favor the formation of mini-Neptunes, with more efficient formation at large semimajor axes (~30 au) and low disk viscosities. For planetesimal accretion, such planets can also form without enrichment, with the opacity being a key aspect of the growth history and favorable formation location. Finally, we show that the formation of Neptune-like planets remains a challenge for planet formation theories.

  20. Distance-dependent duplex DNA destabilization proximal to G-quadruplex/i-motif sequences

    PubMed Central

    König, Sebastian L. B.; Huppert, Julian L.; Sigel, Roland K. O.; Evans, Amanda C.

    2013-01-01

    G-quadruplexes and i-motifs are complementary examples of non-canonical nucleic acid substructure conformations. G-quadruplex thermodynamic stability has been extensively studied for a variety of base sequences, but the degree of duplex destabilization that adjacent quadruplex structure formation can cause has yet to be fully addressed. Stable in vivo formation of these alternative nucleic acid structures is likely to be highly dependent on whether sufficient spacing exists between neighbouring duplex- and quadruplex-/i-motif-forming regions to accommodate quadruplexes or i-motifs without disrupting duplex stability. Prediction of putative G-quadruplex-forming regions is likely to be assisted by further understanding of what distance (number of base pairs) is required for duplexes to remain stable as quadruplexes or i-motifs form. Using oligonucleotide constructs derived from the precedented G-quadruplex- and i-motif-forming bcl-2 P1 promoter region, initial biophysical stability studies indicate that the formation of G-quadruplex and i-motif conformations does destabilize proximal duplex regions. The undermining effect that quadruplex formation can have on duplex stability is mitigated with increased distance from the duplex region: a spacing of five base pairs or more is sufficient to maintain duplex stability proximal to predicted quadruplex/i-motif-forming regions. PMID:23771141
