NAVAIR Portable Source Initiative (NPSI) Standard for Reusable Source Dataset Metadata (RSDM) V2.4
2012-09-26
defining a raster file format: <RasterFileFormat> <FormatName>TIFF</FormatName> <Order>BIP</Order> <DataType>8-BIT_UNSIGNED</DataType> ...interleaved by line (BIL); band interleaved by pixel (BIP). The element RasterFileFormatType/DataType is defined as a restriction of xsd:string.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Temple, Brian Allen; Armstrong, Jerawan Chudoung
This document is a mid-year report on a deliverable for the PYTHON Radiography Analysis Tool (PyRAT) for project LANL12-RS-107J in FY15. The deliverable is number 2 in the work package and is titled “Add the ability to read in more types of image file formats in PyRAT”. Currently, PyRAT can read only uncompressed TIFF files. Expanding the set of file formats that PyRAT can read will make the tool easier to use in more situations. The file formats added include JPEG/JPG, PNG, and formatted ASCII files.
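As a companion to the format list above, here is a minimal sketch, assuming Pillow and NumPy, of funneling TIFF/JPEG/PNG and formatted ASCII inputs into a single array type; the function and file names are hypothetical, and this is not PyRAT's actual code.

```python
# Hypothetical sketch (not PyRAT's actual code): load TIFF, JPEG/JPG,
# PNG, or formatted ASCII radiographs into a common NumPy array form.
import numpy as np
from PIL import Image  # Pillow reads TIFF, JPEG, and PNG

def load_radiograph(path):
    """Return the image as a 2-D float array, whatever the source format."""
    if path.lower().endswith((".txt", ".asc")):
        # Formatted ASCII: whitespace-delimited rows of pixel values.
        return np.loadtxt(path)
    with Image.open(path) as im:
        return np.asarray(im.convert("F"))  # single-band, 32-bit float pixels

img = load_radiograph("shot42.png")  # hypothetical file name
print(img.shape, img.dtype)
```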
10 CFR 2.1011 - Management of electronic information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...
10 CFR 2.1011 - Management of electronic information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... participants shall make textual (or, where non-text, image) versions of their documents available on a web... of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or PNG (Portable...
10 CFR 2.1011 - Management of electronic information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... production and service: (i) The participants shall make textual (or, where non-text, image) versions of their... set and be in one of the following acceptable formats: ASCII, native word processing (Word, WordPerfect), PDF Normal, or HTML. (iv) Image files must be formatted as TIFF CCITT G4 for bi-tonal images or...
1998-07-01
all the MS Word files into FrameMaker + SGML format and use the FrameMaker application to SGML-tag all of the data in accordance with the Army TM...Document Type Definitions (DTDs) in MIL-STD-2361. The edited SGML-tagged files are saved as PDF files for delivery to the field. The FrameMaker ...as TIFF files and being imported into FrameMaker prior to saving the TMs as PDF files. Since the hardware to be used by the AN/PPS-5 technician is
Kim, J H; Kang, S W; Kim, J-r; Chang, Y S
2014-01-01
Purpose To evaluate the effect of image compression of spectral-domain optical coherence tomography (OCT) images in the examination of eyes with exudative age-related macular degeneration (AMD). Methods Thirty eyes from 30 patients who were diagnosed with exudative AMD were included in this retrospective observational case series. The horizontal OCT scans centered at the center of the fovea were conducted using spectral-domain OCT. The images were exported to Tag Image File Format (TIFF) and 100, 75, 50, 25 and 10% quality of Joint Photographic Experts Group (JPEG) format. OCT images were taken before and after intravitreal ranibizumab injections, and after relapse. The prevalence of subretinal and intraretinal fluids was determined. Differences in choroidal thickness between the TIFF and JPEG images were compared with the intra-observer variability. Results The prevalence of subretinal and intraretinal fluids was comparable regardless of the degree of compression. However, the chorio–scleral interface was not clearly identified in many images with a high degree of compression. In images with 25 and 10% quality of JPEG, the difference in choroidal thickness between the TIFF images and the respective JPEG images was significantly greater than the intra-observer variability of the TIFF images (P=0.029 and P=0.024, respectively). Conclusions In OCT images of eyes with AMD, 50% of the quality of the JPEG format would be an optimal degree of compression for efficient data storage and transfer without sacrificing image quality. PMID:24788012
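For illustration only, the export step described in the Methods might look like the following Pillow sketch, which writes a lossless TIFF reference and JPEG copies at the study's five quality settings; the file names are placeholders.

```python
# Export one OCT frame as a lossless TIFF plus JPEG copies at the
# 100/75/50/25/10% quality settings used in the study.
from PIL import Image

frame = Image.open("oct_scan.tif").convert("L")  # hypothetical source image
frame.save("oct_ref.tif")                        # lossless TIFF reference
for q in (100, 75, 50, 25, 10):
    frame.save(f"oct_q{q}.jpg", quality=q)       # lossy JPEG copies
```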
NASA Astrophysics Data System (ADS)
Haran, T. M.; Brodzik, M. J.; Nordgren, B.; Estilow, T.; Scott, D. J.
2015-12-01
An increasing number of new Earth science datasets are being produced by data providers in self-describing, machine-independent file formats including Hierarchical Data Format version 5 (HDF5) and Network Common Data Form version 4 (netCDF-4). Furthermore, data providers may be producing netCDF-4 files that follow the conventions for Climate and Forecast metadata version 1.6 (CF 1.6) which, for datasets mapped to a projected raster grid covering all or a portion of the earth, include the Coordinate Reference System (CRS) used to define how latitude and longitude are mapped to grid coordinates, i.e., columns and rows, and vice versa. One problem that users may encounter is that their preferred visualization and analysis tool may not yet include support for one of these newer formats. Moreover, data distributors such as NASA's NSIDC DAAC may not yet include support for on-the-fly conversion of data files for all data sets produced in a new format to a preferred older distributed format. There do exist open source solutions to this dilemma in the form of software packages that can translate files in one of the new formats to one of the preferred formats. However, these software packages require that the file to be translated conform to the specifications of its respective format. Although an online CF-Convention compliance checker is available from cfconventions.org, a recent NSIDC user services incident described here in detail involved an NSIDC-supported data set that passed the (then current) CF Checker Version 2.0.6 but was in fact lacking two variables necessary for conformance. This problem was not detected until GDAL, a software package which relied on the missing variables, was employed by a user in an attempt to translate the data into a different file format, namely GeoTIFF. This incident indicates that testing a candidate data product with one or more software products written to accept the advertised conventions is a practice that improves interoperability. Differences between data file contents and software package expectations are exposed, affording an opportunity to improve conformance of software, data, or both. The incident can also serve as a demonstration that data providers, distributors, and users can work together to improve data product quality and interoperability.
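For concreteness, the translation step that exposed the problem can be outlined with GDAL's Python bindings; the netCDF file and variable names below are hypothetical.

```python
# Translate one CF variable of a netCDF-4 file to GeoTIFF, using GDAL's
# NETCDF subdataset syntax. If the CF grid_mapping variables are missing,
# the output GeoTIFF can end up without usable georeferencing, which is
# essentially the incident described above.
from osgeo import gdal

src = gdal.Open('NETCDF:"sea_ice_conc.nc":ice_conc')  # hypothetical names
gdal.Translate("sea_ice_conc.tif", src, format="GTiff")
```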
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, Michael J.
SchemaOnRead provides tools for implementing schema-on-read including a single function call (e.g., schemaOnRead("filename")) that reads text (TXT), comma separated value (CSV), raster image (BMP, PNG, GIF, TIFF, and JPG), R data (RDS), HDF5, NetCDF, spreadsheet (XLS, XLSX, ODS, and DIF), Weka Attribute-Relation File Format (ARFF), Epi Info (REC), Pajek network (PAJ), R network (NET), Hypertext Markup Language (HTML), SPSS (SAV), Systat (SYS), and Stata (DTA) files. It also recursively reads folders (e.g., schemaOnRead("folder")), returning a nested list of the contained elements.
Digital seismic-reflection data from western Rhode Island Sound, 1980
McMullen, K.Y.; Poppe, L.J.; Soderberg, N.K.
2009-01-01
During 1980, the U.S. Geological Survey (USGS) conducted a seismic-reflection survey in western Rhode Island Sound aboard the Research Vessel Neecho. Data from this survey were recorded in analog form and archived at the USGS Woods Hole Science Center's Data Library. Due to recent interest in the geology of Rhode Island Sound and in an effort to make the data more readily accessible while preserving the original paper records, the seismic data from this cruise were scanned and converted to Tagged Image File Format (TIFF) images and SEG-Y data files. Navigation data were converted from U.S. Coast Guard Long Range Aids to Navigation (LORAN-C) time delays to latitudes and longitudes, which are available in Environmental Systems Research Institute, Inc. (ESRI) shapefile format and as eastings and northings in space-delimited text format.
Processed Thematic Mapper Satellite Imagery for Selected Areas within the U.S.-Mexico Borderlands
Dohrenwend, John C.; Gray, Floyd; Miller, Robert J.
2000-01-01
The study is summarized in the Adobe Acrobat Portable Document Format (PDF) file OF00-309.PDF. This publication also contains satellite full-scene images of selected areas along the U.S.-Mexico border. These images are presented as high-resolution images in JPEG format (IMAGES). The folder LOCATIONS contains TIFF images showing the exact positions of easily identified reference locations for each of the Landsat TM scenes located at least partly within the U.S. A reference location table (BDRLOCS.DOC in MS Word format) lists the latitude and longitude of each reference location with a nominal precision of 0.001 minute of arc.
2015-12-24
SNR: Signal to Noise Ratio; SPICE: Simulation Program with Integrated Circuit Emphasis; TIFF: Tagged Image File Format; USC: University of Southern California. ...sources can create errors in digital circuits. These effects can be simulated using Simulation Program with Integrated Circuit Emphasis (SPICE) or...compute summary statistics. 4.1 Circuit Simulations: Noisy analog circuits can be simulated in SPICE or Cadence Spectre software via noisy voltage
ImageJ: Image processing and analysis in Java
NASA Astrophysics Data System (ADS)
Rasband, W. S.
2012-06-01
ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.
NASA Astrophysics Data System (ADS)
Paget, A. C.; Brodzik, M. J.; Long, D. G.; Hardman, M.
2016-02-01
The historical record of satellite-derived passive microwave brightness temperatures comprises data from multiple imaging radiometers (SMMR, SSM/I-SSMIS, AMSR-E), spanning nearly 40 years of Earth observations from 1978 to the present. Passive microwave data are used to monitor time series of many climatological variables, including ocean wind speeds, cloud liquid water, sea ice concentrations, and ice velocity. Gridded versions of passive microwave data have been produced using various map projections (polar stereographic, Lambert azimuthal equal-area, cylindrical equal-area, quarter-degree Plate Carrée) and data formats (flat binary, HDF). However, none of the currently available versions can be rendered in the common visualization standard, geoTIFF, without requiring cartographic reprojection. Furthermore, the reprojection details are complicated and often require expert knowledge of obscure software package options. We are producing a consistently calibrated, completely reprocessed data set of this valuable multi-sensor satellite record, using EASE-Grid 2.0, an improved equal-area projection definition that will require no reprojection for translation into geoTIFF. Our approach has been twofold: 1) define the projection ellipsoid to match the reference datum of the satellite data, and 2) include required file-level metadata for standard projection software to correctly render the data in the geoTIFF standard. The Calibrated, Enhanced Resolution Brightness Temperature (CETB) Earth System Data Record (ESDR) leverages image reconstruction techniques to enhance gridded spatial resolution to 3 km and uses newly available intersensor calibrations to improve the quality of derived geophysical products. We expect that our attention to easy geoTIFF compatibility will foster higher-quality analysis with the CETB product by enabling easy and correct intercomparison with other gridded and in situ data.
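A minimal sketch of step 2, assuming GDAL's Python bindings: EPSG:6933 is the global cylindrical EASE-Grid 2.0 definition, while the grid dimensions, corner coordinates, and cell size below are approximate illustrations, not the CETB product's actual parameters.

```python
import numpy as np
from osgeo import gdal, osr

tb = np.zeros((584, 1388), dtype=np.uint16)  # placeholder brightness temperatures

driver = gdal.GetDriverByName("GTiff")
ds = driver.Create("cetb_tb.tif", tb.shape[1], tb.shape[0], 1, gdal.GDT_UInt16)
# Approximate upper-left corner (meters) and 25 km cell size for the global grid.
ds.SetGeoTransform((-17367530.45, 25025.26, 0.0, 7307375.92, 0.0, -25025.26))
srs = osr.SpatialReference()
srs.ImportFromEPSG(6933)             # EASE-Grid 2.0 global projection
ds.SetProjection(srs.ExportToWkt())  # file-level CRS metadata for renderers
ds.GetRasterBand(1).WriteArray(tb)
ds = None                            # close and flush to disk
```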
Sanford, Jordan M.; Harrison, Arnell S.; Wiese, Dana S.; Flocks, James G.
2009-01-01
In June of 1990 and July of 1991, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the shallow geologic framework of the Mississippi-Alabama-Florida shelf in the northern Gulf of Mexico, from Mississippi Sound to the Florida Panhandle. Work was done onboard the Mississippi Mineral Resources Institute R/V Kit Jones as part of a project to study coastal erosion and offshore sand resources. This report is part of a series to digitally archive the legacy analog data collected from the Mississippi-Alabama SHelf (MASH). The MASH data rescue project is a cooperative effort by the USGS and the Minerals Management Service (MMS). This report serves as an archive of high-resolution scanned Tagged Image File Format (TIFF) and Graphics Interchange Format (GIF) images of the original boomer paper records, navigation files, trackline maps, Geographic Information System (GIS) files, cruise logs, and formal Federal Geographic Data Committee (FGDC) metadata.
Jones, William R.; Garber, Adrienne
2013-01-01
The Coastal Wetlands Planning, Protection and Restoration Act (CWPPRA) funds over 100 wetland restoration projects across Louisiana. Integral to the success of CWPPRA is its long-term monitoring program, which enables State and Federal agencies to determine the effectiveness of each restoration effort. One component of this monitoring program is the classification of high-resolution, color-infrared aerial photography at the U.S. Geological Survey’s National Wetlands Research Center in Lafayette, Louisiana. Color-infrared aerial photography (9- by 9-inch) is obtained before project construction and several times after construction. Each frame is scanned on a photogrammetric scanner that produces a high-resolution image in Tagged Image File Format (TIFF). By using image-processing software, these TIFF files are then orthorectified and mosaicked to produce a seamless image of a project area and its associated reference area (a control site near the project that has common environmental features, such as marsh type, soil types, and water salinities). The project and reference areas are then classified according to pixel value into two distinct classes, land and water. After initial land and water ratios have been established by using photography obtained before and after project construction, subsequent comparisons can be made over time to determine land-water change.
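The two-class step might be sketched as follows, assuming a single band in which water is darker than emergent marsh; the file name and threshold are illustrative, not the Center's operational procedure.

```python
# Threshold a grayscale rendering of the orthomosaic into land and water.
import numpy as np
from PIL import Image

mosaic = np.asarray(Image.open("project_mosaic.tif").convert("L"))
water = mosaic < 60   # pixels darker than an illustrative threshold
land = ~water
print(f"water fraction: {water.mean():.1%}, land fraction: {land.mean():.1%}")
```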
Harvey, Craig A.; Kolpin, Dana W.; Battaglin, William A.
1996-01-01
A geographic information system (GIS) procedure was developed to compile low-altitude aerial photography, digitized data, and land-use data from U.S. Department of Agriculture Consolidated Farm Service Agency (CFSA) offices into a high-resolution (approximately 5 meters) land-use GIS data set. The aerial photography consisted of 35-mm slides, which were scanned into Tagged Image File Format (TIFF) images. These TIFF images were then imported into the GIS, where they were registered into a geographically referenced coordinate system. Boundaries between land uses were delineated from these GIS data sets using on-screen digitizing techniques. Crop types were determined using information obtained from the U.S. Department of Agriculture CFSA offices. Crop information not supplied by the CFSA was attributed by manual classification procedures. Automated methods to provide delineation of the field boundaries and land-use classification were investigated. It was determined that, using these data sources, automated methods were less efficient and accurate than manual methods of delineating field boundaries and classifying land use.
Proposed color workflow solution from mobile and website to printing
NASA Astrophysics Data System (ADS)
Qiao, Mu; Wyse, Terry
2015-03-01
With the recent introduction of mobile devices and developments in client-side application technologies, there is an explosion of the parameter matrix for color management: hardware platform (computer vs. mobile), operating system (Windows, Mac OS, Android, iOS), client application (Flash, IE, Firefox, Safari, Chrome), and file format (JPEG, TIFF, PDF of various versions). In a modern digital print shop, multiple print solutions are used: digital presses, wide-format inkjet, and dye-sublimation inkjet produce a wide variety of customizable products, from photo books, personalized greeting cards, and canvas prints to mobile phone cases and more. In this paper, we outline a strategy that spans from client-side application and print file construction to color setup on the printer, to manage consistency and also achieve what-you-see-is-what-you-get for customers who are using a wide variety of technologies in viewing and ordering products.
NASA Astrophysics Data System (ADS)
Dunham, G.; Harding, E. C.; Loisel, G. P.; Lake, P. W.; Nielsen-Weber, L. B.
2016-11-01
Fuji TR image plate is frequently used as a replacement detector medium for x-ray imaging and spectroscopy diagnostics at NIF, Omega, and Z facilities. However, the familiar Fuji BAS line of image plate scanners is no longer supported by the industry, and so a replacement scanning system is needed. While the General Electric Typhoon line of scanners could replace the Fuji systems, the shift away from photo stimulated luminescence units to 16-bit grayscale Tag Image File Format (TIFF) leaves a discontinuity when comparing data collected from both systems. For the purposes of quantitative spectroscopy, a known unit of intensity applied to the grayscale values of the TIFF is needed. The DITABIS Super Micron image plate scanning system was tested and shown to potentially rival the resolution and dynamic range of Kodak RAR 2492 x-ray film. However, the absolute sensitivity of the scanner is unknown. In this work, a methodology to cross calibrate Fuji TR image plate and the absolutely calibrated Kodak RAR 2492 x-ray film is presented. Details of the experimental configurations used are included. An energy dependent scale factor to convert Fuji TR IP scanned on a DITABIS Super Micron scanner from 16-bit grayscale TIFF to intensity units (i.e., photons per square micron) is discussed.
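Applying such a factor reduces to per-pixel arithmetic; in this hedged sketch the factor value and file name are placeholders, not the paper's measured calibration.

```python
# Convert a 16-bit grayscale TIFF scan to intensity units using an
# energy-dependent scale factor k(E).
import numpy as np
from PIL import Image

gray = np.asarray(Image.open("ip_scan.tif"), dtype=np.float64)  # scanner counts
k_at_8keV = 2.5e-3            # placeholder: photons/um^2 per grayscale count
intensity = k_at_8keV * gray  # photons per square micron at this energy
```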
Enhanced Historical Land-Use and Land-Cover Data Sets of the U.S. Geological Survey
Price, Curtis V.; Nakagaki, Naomi; Hitt, Kerie J.; Clawges, Rick M.
2007-01-01
Historical land-use and land-cover data, available from the U.S. Geological Survey (USGS) for the conterminous United States and Hawaii, have been enhanced for use in geographic information systems (GIS) applications. The original digital data sets were created by the USGS in the late 1970s and early 1980s and were later converted by the USGS and the U.S. Environmental Protection Agency (USEPA) to a geographic information system (GIS) format in the early 1990s. These data have been available on USEPA's Web site since the early 1990s and have been used for many national applications, despite minor coding and topological errors. During the 1990s, a group of USGS researchers made modifications to the data set for use in the National Water-Quality Assessment Program. These edited files have been further modified to create a more accurate, topologically clean, and seamless national data set. Several different methods, including custom editing software and several batch processes, were applied to create this enhanced version of the national data set. The data sets are included in this report in the commonly used shapefile and Tagged Image File Format (TIFF) formats. In addition, this report includes two polygon data sets (in shapefile format) representing (1) land-use and land-cover source documentation extracted from the previously published USGS data files, and (2) the extent of each polygon data file.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillion, D.
This code enables one to display, take line-outs on, and perform various transformations on an image created by an array of integer*2 data. Uncompressed eight-bit TIFF files created on either the Macintosh or the IBM PC may also be read in and converted to a 16-bit signed integer image. This code is designed to handle all the formats used for PDS (photo-densitometer) files at the Lawrence Livermore National Laboratory. These formats are all explained by the application code. The image may be zoomed infinitely and the gray-scale mapping can be easily changed. Line-outs may be horizontal or vertical with arbitrary width, angled with arbitrary end points, or taken along any path. This code is usually used to examine spectrograph data. Spectral lines may be identified and a polynomial fit from position to wavelength may be found. The image array can be remapped so that the pixels all have the same change-of-lambda width. It is not necessary to do this, however. Line-outs may be printed, saved as Cricket tab-delimited files, or saved as PICT2 files. The plots may be linear, semilog, or logarithmic with nice values and proper scientific notation. Typically, spectral lines are curved.
Scanning technology selection impacts acceptability and usefulness of image-rich content
Alpi, Kristine M.; Brown, James C.; Neel, Jennifer A.; Grindem, Carol B.; Linder, Keith E.; Harper, James B.
2016-01-01
Objective Clinical and research usefulness of articles can depend on image quality. This study addressed whether scans of figures in black and white (B&W), grayscale, or color, or portable document format (PDF) to tagged image file format (TIFF) conversions as provided by interlibrary loan or document delivery were viewed as acceptable or useful by radiologists or pathologists. Methods Residency coordinators selected eighteen figures from studies from radiology, clinical pathology, and anatomic pathology journals. With original PDF controls, each figure was prepared in three or four experimental conditions: PDF conversion to TIFF, and scans from print in B&W, grayscale, and color. Twelve independent observers indicated whether they could identify the features and whether the image quality was acceptable. They also ranked all the experimental conditions of each figure in terms of usefulness. Results Of 982 assessments of 87 anatomic pathology, 83 clinical pathology, and 77 radiology images, 471 (48%) were unidentifiable. Unidentifiability of originals (4%) and conversions (10%) was low. For scans, unidentifiability ranged from 53% for color, to 74% for grayscale, to 97% for B&W. Of 987 responses about acceptability (n=405), 41% were said to be unacceptable, 97% of B&W, 66% of grayscale, 41% of color, and 1% of conversions. Hypothesized order (original, conversion, color, grayscale, B&W) matched 67% of rankings (n=215). Conclusions PDF to TIFF conversion provided acceptable content. Color images are rarely useful in grayscale (12%) or B&W (less than 1%). Acceptability of grayscale scans of noncolor originals was 52%. Digital originals are needed for most images. Print images in color or grayscale should be scanned using those modalities. PMID:26807048
Prototype of Partial Cutting Tool of Geological Map Images Distributed by Geological Web Map Service
NASA Astrophysics Data System (ADS)
Nonogaki, S.; Nemoto, T.
2014-12-01
Geological maps and topographical maps play an important role in disaster assessment, resource management, and environmental preservation. Recently, this map information has been distributed in accordance with Web service standards such as Web Map Service (WMS) and Web Map Tile Service (WMTS). In this study, a partial cutting tool for geological map images distributed by a geological WMTS was implemented with Free and Open Source Software. The tool consists mainly of two functions: a display function and a cutting function. The former was implemented using OpenLayers; the latter was implemented using the Geospatial Data Abstraction Library (GDAL). All other small functions were implemented in PHP and Python. As a result, this tool allows not only displaying a WMTS layer in a web browser but also generating a geological map image of the intended area and zoom level. At this moment, the available WMTS layers are limited to the ones distributed by the WMTS for the Seamless Digital Geological Map of Japan. The geological map image can be saved in GeoTIFF format and WebGL format. GeoTIFF is a georeferenced raster format that is available in many kinds of Geographical Information Systems. WebGL is useful for confirming the relationship between geology and geography in 3D. In conclusion, the partial cutting tool developed in this study should contribute to better conditions for promoting the utilization of geological information. Future work is to increase the number of available WMTS layers and the types of output file formats.
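A hedged sketch of the cutting function using GDAL's WMTS driver and gdal.Translate; the capabilities URL, layer name, and clip window are placeholders rather than the actual Seamless Digital Geological Map endpoint.

```python
# Open a WMTS layer and clip a window of it to GeoTIFF.
from osgeo import gdal

src = gdal.Open("WMTS:https://example.org/wmts/WMTSCapabilities.xml,layer=geology")
gdal.Translate("clip.tif", src, format="GTiff",
               projWin=[15550000, 4320000, 15560000, 4310000])  # ulx, uly, lrx, lry in the layer's CRS
```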
Dwyer, John L.; Schmidt, Gail L.; Qu, J.J.; Gao, W.; Kafatos, M.; Murphy , R.E.; Salomonson, V.V.
2006-01-01
The MODIS Reprojection Tool (MRT) is designed to help individuals work with MODIS Level-2G, Level-3, and Level-4 land data products. These products are referenced to a global tiling scheme in which each tile is approximately 10° latitude by 10° longitude and non-overlapping (Fig. 9.1). If desired, the user may reproject only selected portions of the product (spatial or parameter subsetting). The software may also be used to convert MODIS products to file formats (generic binary and GeoTIFF) that are more readily compatible with existing software packages. The MODIS land products distributed by the Land Processes Distributed Active Archive Center (LP DAAC) are in the Hierarchical Data Format - Earth Observing System (HDF-EOS), developed by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign for the NASA EOS Program. Each HDF-EOS file is comprised of one or more science data sets (SDSs) corresponding to geophysical or biophysical parameters. Metadata are embedded in the HDF file as well as contained in a .met file that is associated with each HDF-EOS file. The MRT supports 8-bit, 16-bit, and 32-bit integer data (both signed and unsigned), as well as 32-bit float data. The data type of the output is the same as the data type of each corresponding input SDS.
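An MRT-like conversion can also be sketched with GDAL's Python bindings; the subdataset string follows GDAL's HDF4_EOS syntax, but the file, grid, and field names here are illustrative.

```python
# Reproject one science data set of an HDF-EOS tile to a GeoTIFF in
# geographic coordinates (the MRT performs the analogous reprojection).
from osgeo import gdal

sds = 'HDF4_EOS:EOS_GRID:"MOD13A2.hdf":MODIS_Grid_16DAY_1km_VI:1 km 16 days NDVI'
gdal.Warp("ndvi.tif", gdal.Open(sds), dstSRS="EPSG:4326", format="GTiff")
```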
Analyzing huge pathology images with open source software.
Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc
2013-06-06
Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides pose a technical challenge, since the images often occupy several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. The tools are cross-platform, independent of proprietary libraries, and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.
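As a point of comparison, region-wise access to a virtual slide without loading it into RAM can be sketched with the OpenSlide Python bindings, which read NDPI directly; the file name and coordinates are illustrative.

```python
# Read one tile of a huge virtual slide without decoding the whole image.
import openslide

slide = openslide.OpenSlide("oligodendroglioma.ndpi")
print(slide.dimensions)  # full-resolution (width, height)
tile = slide.read_region((25600, 12800), 0, (1024, 1024))  # RGBA PIL image
tile.convert("RGB").save("tile.tif")
```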
The Open Microscopy Environment: open image informatics for the biological sciences
NASA Astrophysics Data System (ADS)
Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.
2016-07-01
Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
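Outside the OMERO stack, OME-TIFF files can also be inspected with the Python tifffile package; a minimal sketch with a hypothetical file name:

```python
# Read the pixel data and the OME-XML metadata of an OME-TIFF.
import tifffile

with tifffile.TiffFile("plate1.ome.tif") as tif:
    pixels = tif.asarray()      # image planes as a NumPy array
    ome_xml = tif.ome_metadata  # OME Data Model metadata as XML text
print(pixels.shape)
```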
Harrison, Arnell S.; Dadisman, Shawn V.; Kindinger, Jack G.; Morton, Robert A.; Blum, Mike D.; Wiese, Dana S.; Subiño, Janice A.
2007-01-01
In June of 1996, the U.S. Geological Survey conducted geophysical surveys from Nueces to Copano Bays, Texas. This report serves as an archive of unprocessed digital boomer seismic reflection data, trackline maps, navigation files, GIS information, cruise log, and formal FGDC metadata. Filtered and gained digital images of the seismic profiles and high resolution scanned TIFF images of the original paper printouts are also provided. The archived trace data are in standard Society of Exploration Geophysicists (SEG) SEG-Y format (Barry and others, 1975) and may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU). Example SU processing scripts and USGS software for viewing the SEG-Y files (Zihlman, 1992) are also provided.
A seamless, high-resolution digital elevation model (DEM) of the north-central California coast
Foxgrover, Amy C.; Barnard, Patrick L.
2012-01-01
A seamless, 2-meter resolution digital elevation model (DEM) of the north-central California coast has been created from the most recent high-resolution bathymetric and topographic datasets available. The DEM extends approximately 150 kilometers along the California coastline, from Half Moon Bay north to Bodega Head. Coverage extends inland to an elevation of +20 meters and offshore to at least the 3 nautical mile limit of state waters. This report describes the procedures of DEM construction, details the input data sources, and provides the DEM for download in both ESRI Arc ASCII and GeoTIFF file formats with accompanying metadata.
NASA Technical Reports Server (NTRS)
Godfrey, Gary S.
2003-01-01
This project illustrates an animation of the orbiter mate to the external tank, an animation of the OMS POD installation to the orbiter, and a simulation of the landing gear mechanism at the Kennedy Space Center. A detailed storyboard was created to reflect each animation or simulation. Solid models were collected and translated into Pro/Engineer's prt and asm formats. These solid models included computer files of the orbiter, external tank, solid rocket booster, mobile launch platform, transporter, vehicle assembly building, OMS POD fixture, and landing gear. A depository of the above solid models was established, and the solid models were translated into several formats. This depository contained the following files: stl for stereolithography, stp for neutral file work, shrinkwrap for compression, tiff for Photoshop work, jpeg for Internet use, and prt and asm for Pro/Engineer use. Solid models were created of the material handling sling, bay 3 platforms, and orbiter contact points. Animations were developed using mechanisms to reflect each storyboard. Every effort was made to build all models technically correct for engineering use. The result was an animated routine that could be used by NASA for training material handlers and uncovering engineering safety issues.
López, Carlos; Lejeune, Marylène; Escrivà, Patricia; Bosch, Ramón; Salvadó, Maria Teresa; Pons, Lluis E.; Baucells, Jordi; Cugat, Xavier; Álvaro, Tomás; Jaén, Joaquín
2008-01-01
This study investigates the effects of digital image compression on the automatic quantification of immunohistochemical nuclear markers. We examined 188 images with a previously validated computer-assisted analysis system. A first group was composed of 47 images captured in TIFF format, and the other three groups contained the same images converted from TIFF to JPEG format with 3×, 23× and 46× compression. Counts of the TIFF format images were compared with the other three groups. Overall, differences in the counts increased with the percentage of compression. Low-complexity images (≤100 cells/field, without clusters or with small-area clusters) had small differences (<5 cells/field in 95–100% of cases) and high-complexity images showed substantial differences (<35–50 cells/field in 95–100% of cases). Compression does not compromise the accuracy of immunohistochemical nuclear marker counts obtained by computer-assisted analysis systems for digital images with low complexity and could be an efficient method for storing these images. PMID:18755997
Brabb, Earl E.; Colgan, Joseph P.; Best, Timothy C.
2000-01-01
Introduction Debris flows, debris avalanches, mud flows and lahars are fast-moving landslides that occur in a wide variety of environments throughout the world. They are particularly dangerous to life and property because they move quickly, destroy objects in their paths, and often strike without warning. This map represents a significant effort to compile the locations of known debris flows in the United States and predict where future flows might occur. The files 'dfipoint.e00' and 'dfipoly.e00' contain the locations of over 6600 debris flows from published and unpublished sources. The locations are referenced by numbers that correspond to entries in a bibliography, which is part of the pamphlet 'mf2329pamphlet.pdf'. The areas of possible future debris flows are shown in the file 'susceptibility.tif', which is a georeferenced TIFF file that can be opened in an image editing program or imported into a GIS system like ARC/INFO. All other databases are in ARC/INFO export (.e00) format.
Rea, A.H.; Becker, C.J.
1997-01-01
This compact disc contains 25 digital map data sets covering the State of Oklahoma that may be of interest to the general public, private industry, schools, and government agencies. Fourteen data sets are statewide. These data sets include: administrative boundaries; 104th U.S. Congressional district boundaries; county boundaries; latitudinal lines; longitudinal lines; geographic names; indexes of U.S. Geological Survey 1:100,000, and 1:250,000-scale topographic quadrangles; a shaded-relief image; Oklahoma State House of Representatives district boundaries; Oklahoma State Senate district boundaries; locations of U.S. Geological Survey stream gages; watershed boundaries and hydrologic cataloging unit numbers; and locations of weather stations. Eleven data sets are divided by county and are located in 77 county subdirectories. These data sets include: census block group boundaries with selected demographic data; city and major highways text; geographic names; land surface elevation contours; elevation points; an index of U.S. Geological Survey 1:24,000-scale topographic quadrangles; roads, streets and address ranges; highway text; school district boundaries; streams, river and lakes; and the public land survey system. All data sets are provided in a readily accessible format. Most data sets are provided in Digital Line Graph (DLG) format. The attributes for many of the DLG files are stored in related dBASE(R)-format files and may be joined to the data set polygon attribute or arc attribute tables using dBASE(R)-compatible software. (Any use of trade names in this publication is for descriptive purposes only and does not imply endorsement by the U.S. Government.) Point attribute tables are provided in dBASE(R) format only, and include the X and Y map coordinates of each point. Annotation (text plotted in map coordinates) are provided in AutoCAD Drawing Exchange format (DXF) files. The shaded-relief image is provided in TIFF format. All data sets except the shaded-relief image also are provided in ARC/INFO export-file format.
Providing Internet Access to High-Resolution Lunar Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
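Client interaction with such a server reduces to standard WMS requests; a hedged sketch of a GetMap call in which the host and layer name are placeholders (IAU2000:30100 is the lunar geographic coordinate code commonly used by planetary WMS servers):

```python
# Request a TIFF map tile from a WMS endpoint.
import requests

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "lunar_base_mosaic",  # placeholder layer name
    "STYLES": "",
    "SRS": "IAU2000:30100",         # lunar geographic lon/lat
    "BBOX": "-10,20,10,40",
    "WIDTH": "1024", "HEIGHT": "1024",
    "FORMAT": "image/tiff",
}
resp = requests.get("https://onmoon.example.gov/wms", params=params, timeout=60)
with open("moon_tile.tif", "wb") as f:
    f.write(resp.content)
```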
Providing Internet Access to High-Resolution Mars Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
Jones, William R.; Garber, Adrienne
2012-01-01
The Coastal Wetlands Planning, Protection and Restoration Act (CWPPRA) funds over 100 wetland restoration projects across Louisiana. Integral to the success of CWPPRA is its long-term monitoring program, which enables State and Federal agencies to determine the effectiveness of each restoration effort. One component of this monitoring program is the analysis of high-resolution, color-infrared aerial photography at the U.S. Geological Survey's National Wetlands Research Center in Lafayette, Louisiana. Color-infrared aerial photography (9- by 9-inch) is obtained before project construction and several times after construction. Each frame is scanned on a photogrammetric scanner that produces a high-resolution image in Tagged Image File Format (TIFF). By using image-processing software, these TIFF files are then orthorectified and mosaicked to produce a seamless image of a project area and its associated reference area (a control site near the project that has common environmental features, such as marsh type, soil types, and water salinities). The project and reference areas are then classified according to pixel value into two distinct classes, land and water. After initial land and water ratios have been established by using photography obtained before and after project construction, subsequent comparisons can be made over time to determine land-water change. Several challenges are associated with the land-water interpretation process. Primarily, land-water classifications are often complicated by the presence of floating aquatic vegetation that occurs throughout the freshwater systems of coastal Louisiana and that is sometimes difficult to differentiate from emergent marsh. Other challenges include tidal fluctuations and water movement from strong winds, which may result in flooding and inundation of emergent marsh during certain conditions. Compensating for these events is difficult but possible by using other sources of imagery to verify marsh conditions for other dates in time.
Tularosa Basin Play Fairway Analysis: Weights of Evidence; Mineralogy, and Temperature Anomaly Maps
Adam Brandt
2015-11-15
This submission has two shapefiles and a TIFF image. A weights-of-evidence analysis was applied to data representing heat of the earth and fracture permeability using training sites around the Southwest; the result is shown in the TIFF image. A shapefile of surface temperature anomalies was derived from the statistical analysis of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) thermal infrared data which had been converted to surface temperatures; these anomalies have not been field checked. The second shapefile shows outcrop mineralogy, which was originally mapped by the New Mexico Bureau of Geology and Mineral Resources and supplemented with mineralogic information related to rock fracability risk for EGS. Further metadata can be found within each file.
LVFS: A Big Data File Storage Bridge for the HPC Community
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Mauoka, E.; Fonseca, L. F.
2015-12-01
Merging Big Data capabilities into High Performance Computing architecture starts at the file storage level. Heterogeneous storage systems are emerging which offer enhanced features for dealing with Big Data, such as the IBM GPFS storage system's integration into Hadoop Map-Reduce. Taking advantage of these capabilities requires file storage systems to be adaptive and accommodate these new storage technologies. We present the extension of the Lightweight Virtual File System (LVFS), currently running as the production system for the MODIS Level 1 and Atmosphere Archive and Distribution System (LAADS), to incorporate a flexible plugin architecture which allows easy integration of new HPC hardware and/or software storage technologies without disrupting workflows or system architectures and with only minimal impact on existing tools. We consider two essential aspects provided by the LVFS plugin architecture needed for the future HPC community. First, it allows for the seamless integration of new and emerging hardware technologies which are significantly different from existing technologies, such as Seagate's Kinetic disks and Intel's 3D XPoint non-volatile storage. Second is the transparent and instantaneous conversion between new software technologies and various file formats. With most current storage systems, a switch in file format would require costly reprocessing and nearly double the storage requirements. We will install LVFS on UMBC's IBM iDataPlex cluster with a heterogeneous storage architecture utilizing local, remote, and Seagate Kinetic storage as a case study. LVFS merges different kinds of storage architectures to show users a uniform layout and, therefore, prevents any disruption in workflows, architecture design, or tool usage. We will show how LVFS converts to GeoTIFF, for visualization, the HDF surface-flux data produced by applying machine learning algorithms to XCO2 Level 2 data from the OCO-2 satellite.
Sanford, Jordan M.; Harrison, Arnell S.; Wiese, Dana S.; Flocks, James G.
2009-01-01
In April and July of 1981, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the shallow geologic framework of the Alabama-Mississippi-Louisiana Shelf in the northern Gulf of Mexico. Work was conducted onboard the Texas A&M University R/V Carancahua and the R/V Gyre to develop a geologic understanding of the study area and to locate potential hazards related to offshore oil and gas production. While the R/V Carancahua only collected boomer data, the R/V Gyre used a 400-Joule minisparker, 3.5-kilohertz (kHz) subbottom profiler, 12-kHz precision depth recorder, and two air guns. The authors selected the minisparker data set because, unlike with the boomer data, it provided the most complete record. This report is part of a series to digitally archive the legacy analog data collected from the Mississippi-Alabama SHelf (MASH). The MASH data rescue project is a cooperative effort by the USGS and the Minerals Management Service (MMS). This report serves as an archive of high-resolution scanned Tagged Image File Format (TIFF) and Graphics Interchange Format (GIF) images of the original boomer and minisparker paper records, navigation files, trackline maps, Geographic Information System (GIS) files, cruise logs, and formal Federal Geographic Data Committee (FGDC) metadata.
Wong, Florence L.; Grim, Muriel S.
2015-01-01
Contours and derivative raster files of depth-to-basement, sediment-thickness, and bathymetry data for the area offshore of Washington, Oregon, and California are provided here as GIS-ready shapefiles and GeoTIFF files. The data were used to generate paper maps in 1992 and 1993 from 1984 surveys of the U.S. Exclusive Economic Zone by the U.S. Geological Survey for depth to basement and sediment thickness, and from older data for the bathymetry.
IIPImage: Large-image visualization
NASA Astrophysics Data System (ADS)
Pillay, Ruven
2014-08-01
IIPImage is an advanced high-performance feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images, and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
Image editing with Adobe Photoshop 6.0.
Caruso, Ronald D; Postel, Gregory C
2002-01-01
The authors introduce Photoshop 6.0 for radiologists and demonstrate basic techniques of editing gray-scale cross-sectional images intended for publication and for incorporation into computerized presentations. For basic editing of gray-scale cross-sectional images, the Tools palette and the History/Actions palette pair should be displayed. The History palette may be used to undo a step or series of steps. The Actions palette is a menu of user-defined macros that save time by automating an action or series of actions. Converting an image to 8-bit gray scale is the first editing function. Cropping is the next action. Both decrease file size. Use of the smallest file size necessary for the purpose at hand is recommended. Final file size for gray-scale cross-sectional neuroradiologic images (8-bit, single-layer TIFF [tagged image file format] at 300 pixels per inch) intended for publication varies from about 700 Kbytes to 3 Mbytes. Final file size for incorporation into computerized presentations is about 10-100 Kbytes (8-bit, single-layer, gray-scale, high-quality JPEG [Joint Photographic Experts Group]), depending on source and intended use. Editing and annotating images before they are inserted into presentation software is highly recommended, both for convenience and flexibility. Radiologists should find that image editing can be carried out very rapidly once the basic steps are learned and automated.
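The same basic workflow can be scripted outside Photoshop; a minimal Pillow sketch (file names and crop box are illustrative) covering grayscale conversion, cropping, and the two output sizes discussed above:

```python
# Convert to 8-bit grayscale, crop, then save a 300-ppi TIFF for print
# and a compact high-quality JPEG for presentations.
from PIL import Image

im = Image.open("axial_ct.tif").convert("L")  # 8-bit grayscale
im = im.crop((40, 40, 552, 552))              # trim empty background
im.save("figure1.tif", dpi=(300, 300))        # publication copy
im.save("slide1.jpg", quality=90)             # presentation copy
```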
NASA Astrophysics Data System (ADS)
Bruno, L. S.; Rodrigo, B. P.; Lucio, A. de C. Jorge
2016-10-01
This paper presents a system that applies a Multilayer Perceptron neural network to the segmentation of drone-acquired agricultural images. The application allows a supervising user to train the classes that will later be interpreted by the neural network; these classes are generated manually from pre-selected attributes in the application. After attribute selection, a segmentation process extracts the relevant information for different types of images, RGB or hyperspectral. The application also extracts geographic coordinates from the image metadata, georeferencing every pixel in the image. Despite the large memory consumption of regions of interest in hyperspectral images, segmentation remains possible using bands chosen by the user, which can be combined in different ways to obtain different results.
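A hedged sketch of this per-pixel supervised classification idea, using scikit-learn's MLP rather than the authors' implementation; array shapes and the helper name are illustrative.

```python
# Sketch: train an MLP on user-labeled pixels, then classify every pixel.
import numpy as np
from sklearn.neural_network import MLPClassifier

# image: H x W x B array of band values (RGB or selected hyperspectral bands)
# train_mask: H x W integer array, 0 = unlabeled, 1..K = user-trained classes
def segment(image, train_mask):
    H, W, B = image.shape
    pixels = image.reshape(-1, B).astype(np.float32)
    labels = train_mask.ravel()

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    clf.fit(pixels[labels > 0], labels[labels > 0])  # train on labeled pixels
    return clf.predict(pixels).reshape(H, W)         # classify every pixel
```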
A software platform for the analysis of dermatology images
NASA Astrophysics Data System (ADS)
Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon
2017-11-01
The purpose of this paper is to present a software platform, developed in the Python programming environment, that can be used for the processing and analysis of dermatology images. The platform provides the capability of reading a file that contains a dermatology image and supports image formats such as Windows bitmap, JPEG, JPEG2000, Portable Network Graphics (PNG), and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image; the automated selection of an ROI includes filtering for smoothing the image, followed by thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images from other anatomical parts, such as breast or lung, after proper retraining of the classification algorithms.
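A minimal sketch of the automated ROI step described (smoothing followed by thresholding), using scikit-image instead of the authors' code; the file name, sigma, and threshold direction are illustrative.

```python
# Sketch: smooth, then threshold to isolate a candidate lesion region.
from skimage import io, color, filters

img = io.imread("lesion.jpg")
gray = color.rgb2gray(img)                      # work on intensity only
smooth = filters.gaussian(gray, sigma=2.0)      # suppress noise first
mask = smooth < filters.threshold_otsu(smooth)  # assume lesion darker than skin
```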
Java Image I/O for VICAR, PDS, and ISIS
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Levoe, Steven R.
2011-01-01
This library, written in Java, supports input and output of images and metadata (labels) in the VICAR, PDS image, and ISIS-2 and ISIS-3 file formats. Three levels of access exist. The first level comprises low-level, direct access to the file. This allows an application to read and write specific image tiles, lines, or pixels and to manipulate the label data directly. This layer is analogous to the C-language "VICAR Run-Time Library" (RTL), which is the image I/O library for the (C/C++/Fortran) VICAR image processing system from JPL MIPL (Multimission Image Processing Lab). This low-level library can also be used to read and write labeled, uncompressed images stored in formats similar to VICAR, such as ISIS-2 and -3, and a subset of PDS (image format). The second level of access involves two codecs based on Java Advanced Imaging (JAI) that provide access to VICAR and PDS images in a file-format-independent manner. JAI is supplied by Sun Microsystems as an extension to desktop Java and has a number of codecs for formats such as GIF, TIFF, and JPEG. Although Sun has deprecated the codec mechanism (replaced by IIO), it is still used in many places. The VICAR and PDS codecs allow any program written using the JAI codec specification to use VICAR or PDS images automatically, with no specific knowledge of the VICAR or PDS formats. Support for metadata (labels) is included but is format-dependent. The PDS codec, when processing PDS images with an embedded VICAR label ("dual-labeled images," such as those used for MER), presents the VICAR label in a new way that is compatible with the VICAR codec. The third level of access involves VICAR, PDS, and ISIS Image I/O plugins. The Java core includes an "Image I/O" (IIO) package that is similar in concept to the JAI codec, but is newer and more capable. Applications written to the IIO specification can use any image format for which a plug-in exists, with no specific knowledge of the format itself.
Polar2Grid 2.0: Reprojecting Satellite Data Made Easy
NASA Astrophysics Data System (ADS)
Hoese, D.; Strabala, K.
2015-12-01
Polar-orbiting multi-band meteorological sensors such as those on the Suomi National Polar-orbiting Partnership (SNPP) satellite pose substantial challenges for taking imagery the last mile to forecast offices, scientific analysis environments, and the general public. To do this quickly and easily, the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin has created an open-source, modular application system, Polar2Grid. This bundled solution automates tools for converting various satellite products, like those from VIIRS and MODIS, into a variety of output formats, including GeoTIFFs, AWIPS-compatible NetCDF files, and TIFF images compatible with the NinJo forecasting workstation. In addition to traditional visible and infrared imagery, Polar2Grid includes three perceptual enhancements for the VIIRS Day-Night Band (DNB) and provides the capability to create sharpened true-color, sharpened false-color, and user-defined RGB images. Polar2Grid performs conversions and projections in seconds on large swaths of data. Polar2Grid is currently providing VIIRS imagery over the continental United States, as well as Alaska and Hawaii, from various direct-broadcast antennas to operational forecasters at NOAA National Weather Service (NWS) offices in their AWIPS terminals, within minutes of an overpass of the Suomi NPP satellite. Three years after Polar2Grid development started, the Polar2Grid team is now releasing version 2.0 of the software, which supports more sensors, generates more products, and provides all of its features in an easy-to-use command-line interface.
Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W.; Gautier, Virginie W.
2015-01-01
We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569
Data publication and sharing using the SciDrive service
NASA Astrophysics Data System (ADS)
Mishin, Dmitry; Medvedev, D.; Szalay, A. S.; Plante, R. L.
2014-01-01
Despite recent years' progress in scientific data storage, the problem of a public data storage and sharing system for relatively small scientific datasets remains. These are the collections forming the "long tail" of the power-law distribution of dataset sizes. The aggregate size of the long-tail data is comparable to the size of all data collections from large archives, and the value of the data is significant. The SciDrive project's main goal is to provide the scientific community with a place to reliably and freely store such data and to make it accessible to the broad scientific community. The primary target audience of the project is the astronomy community, and it will be extended to other fields. We aim to create a simple way of publishing a dataset that can then be shared with other people. The data owner controls the permissions to modify and access the data and can assign a group of users or open the access to everyone. The data contained in the dataset is automatically recognized by a background process, and known data formats are extracted according to the user's settings. Currently, tabular data can be automatically extracted to the user's MyDB table, where the user can make SQL queries against the dataset and merge it with other public CasJobs resources. Other data formats can be processed using a set of plugins that upload the data or metadata to user-defined side services. The current implementation targets some of the data formats commonly used by the astronomy communities, including FITS, ASCII and Excel tables, TIFF images, and yt simulation data archives. Along with generic metadata, format-specific metadata is also processed; for example, basic information about celestial objects is extracted from FITS files and TIFF images, if present. A 100 TB implementation has just been put into production at Johns Hopkins University. The system features a public data storage REST service supporting the VOSpace 2.0 and Dropbox protocols, an HTML5 web portal, a command-line client, and a standalone Java client to synchronize a local folder with the remote storage. We use the VAO SSO (Single Sign On) service from NCSA for user authentication, which provides free registration for everyone.
PredictABEL: an R package for the assessment of risk prediction models.
Kundu, Suman; Aulchenko, Yurii S; van Duijn, Cornelia M; Janssens, A Cecile J W
2011-04-01
The rapid identification of genetic markers for multifactorial diseases from genome-wide association studies is fuelling interest in investigating the predictive ability and health care utility of genetic risk models. Various measures are available for the assessment of risk prediction models, each addressing a different aspect of performance and utility. We developed PredictABEL, a package in R that covers descriptive tables, measures, and figures used in the analysis of risk prediction studies, such as measures of model fit, predictive ability, and clinical utility, as well as risk distributions, the calibration plot, and the receiver operating characteristic (ROC) plot. Tables and figures are saved as separate files in a user-specified format, including publication-quality EPS and TIFF formats. All figures are available in a ready-made layout, but they can be customized to the preferences of the user. The package has been developed for the analysis of genetic risk prediction studies, but can also be used for studies that only include non-genetic risk factors. PredictABEL is freely available at the websites of GenABEL ( http://www.genabel.org ) and CRAN ( http://cran.r-project.org/).
NASA Astrophysics Data System (ADS)
Rivers, M. L.; Gualda, G. A.
2009-05-01
One of the challenges in tomography is the availability of suitable software for image processing and analysis in 3D. We present here 'tomo_display' and 'vol_tools', two packages created in IDL that enable reconstruction, processing, and visualization of tomographic data. They complement in many ways the capabilities offered by Blob3D (Ketcham 2005 - Geosphere, 1: 32-41, DOI: 10.1130/GES00001.1) and, in combination, allow users without programming knowledge to perform all steps necessary to obtain qualitative and quantitative information from tomographic data. The package 'tomo_display' was created and is maintained by Mark Rivers. It allows the user to: (1) preprocess and reconstruct parallel-beam tomographic data, including removal of anomalous pixels, ring-artifact reduction, and automated determination of the rotation center, and (2) visualize both raw and reconstructed data, either as individual frames or as a series of sequential frames. The package 'vol_tools' consists of a series of small programs created and maintained by Guilherme Gualda to perform specific tasks not included in other packages. Existing modules include simple tools for cropping volumes, generating histograms of intensity, measuring sample volume (useful for porous samples like pumice), and computing volume differences (for differential absorption tomography). The module 'vol_animate' can be used to generate 3D animations using rendered isosurfaces around objects. Both packages use the same NetCDF-format '.volume' files, created using code written by Mark Rivers. Currently, only 16-bit integer volumes are created and read by the packages, but floating-point and 8-bit data can easily be stored in the NetCDF format as well. A simple GUI to convert sequences of TIFFs into '.volume' files is available within 'vol_tools'. Both 'tomo_display' and 'vol_tools' include options to (1) generate onscreen output that allows for dynamic visualization in 3D, (2) save sequences of TIFFs to disk, and (3) generate MPEG movies for inclusion in presentations, publications, websites, etc. Both are freely available as run-time ('.sav') versions that can be run using the free IDL Virtual Machine™, available from ITT Visual Information Solutions: http://www.ittvis.com/ProductServices/IDL/VirtualMachine.aspx The run-time versions of 'tomo_display' and 'vol_tools' can be downloaded from: http://cars.uchicago.edu/software/idl/tomography.html http://sites.google.com/site/voltools/
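The TIFF-sequence-to-NetCDF conversion step can be sketched in Python; the dimension and variable names below are hypothetical and do not reproduce the packages' actual '.volume' layout.

```python
# Hedged sketch: pack sequential TIFF slices into a single NetCDF volume,
# analogous in spirit to the '.volume' files described above.
import glob
import numpy as np
import tifffile
from netCDF4 import Dataset

slices = [tifffile.imread(f) for f in sorted(glob.glob("slices/*.tif"))]
vol = np.stack(slices).astype(np.int16)  # 16-bit integers, as the packages use

with Dataset("sample.volume", "w") as nc:
    nz, ny, nx = vol.shape
    nc.createDimension("z", nz)
    nc.createDimension("y", ny)
    nc.createDimension("x", nx)
    v = nc.createVariable("VOLUME", "i2", ("z", "y", "x"))  # hypothetical name
    v[:] = vol
```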
Naval Research Lab Review 1999
1999-01-01
The Center offers high-quality output from computer-generated files in EPS, PostScript, PICT, TIFF, Photoshop, and PowerPoint formats, as well as photographic-quality color output. Information concerning the research described in this NRL Review can be obtained from the Public Affairs Office, Code 1230, (202) 767-2541.
Implementing the HDF-EOS5 software library for data products in the UNAVCO InSAR archive
NASA Astrophysics Data System (ADS)
Baker, Scott; Meertens, Charles; Crosby, Christopher
2017-04-01
UNAVCO is a non-profit university-governed consortium that operates the U.S. National Science Foundation (NSF) Geodesy Advancing Geosciences and EarthScope (GAGE) facility and provides operational support to the Western North America InSAR Consortium (WInSAR). The Seamless Synthetic Aperture Radar Archive (SSARA) is a distributed access system for SAR data and higher-level data products. Under the NASA-funded SSARA project, a user-contributed InSAR archive for interferograms, time series, and other derived data products was developed at UNAVCO. The InSAR archive development has led to the adoption of the HDF-EOS5 data model, file format, and library. The HDF-EOS software library was designed to support NASA Earth Observation System (EOS) science data products and provides data structures for radar-geometry (Swath) and geocoded (Grid) data based on the HDF5 data model and file format provided by The HDF Group. HDF-EOS5 inherits the benefits of HDF5 (open-source software support, internal compression, portability, support for structural data, self-describing file metadata, enhanced performance, and XML support) and provides a way to standardize InSAR data products. Instrument- and datatype-independent services, such as subsetting, can be applied to files across a wide variety of data products through the same library interface. The library allows integration with GIS software packages such as ArcGIS and GDAL, conversion to other data formats like NetCDF and GeoTIFF, and is extensible with new data structures to support future requirements. UNAVCO maintains a GitHub repository that provides example software for creating data products from popular InSAR processing packages like GMT5SAR and ISCE, as well as examples for reading and converting the data products into other formats. Digital object identifiers (DOIs) have been incorporated into the InSAR archive, allowing users to assign a permanent location to their processed results and easily reference the final data products. A metadata attribute is added to the HDF-EOS5 file when a DOI is minted for a data product. These data products are searchable through the SSARA federated query, providing access to processed data for both expert and non-expert InSAR users. The archive facilitates timely distribution of processed data, which is particularly important for geohazards and event response.
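A hedged sketch of writing a geocoded grid into an HDF5 file laid out like an HDF-EOS5 Grid, using h5py directly rather than the HDF-EOS5 C library; the group names follow the HDF-EOS5 convention, but the dataset and attribute names here are hypothetical.

```python
# Sketch: an HDF-EOS5-style grid written with plain h5py.
import numpy as np
import h5py

unwrapped = np.zeros((1200, 1500), dtype=np.float32)  # placeholder phase grid

with h5py.File("insar_product.h5", "w") as f:
    grid = f.create_group("/HDFEOS/GRIDS/interferogram")
    data = grid.create_group("Data Fields")
    dset = data.create_dataset("unwrappedPhase", data=unwrapped,
                               compression="gzip")  # internal compression
    dset.attrs["units"] = "radians"
    f.attrs["doi"] = "10.xxxx/example"  # DOI recorded as file metadata
```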
Kato, A; Ohno, N
2009-03-01
The study of dental morphology is essential to phylogeny. Advances in three-dimensional (3D) measurement devices have enabled us to make 3D images of teeth without destruction of samples. However, obtaining raw fundamental data on tooth shape requires complex equipment and techniques, so an online database of 3D tooth models is indispensable. We aimed to explore a basic methodology for constructing 3D tooth models, with application to data sharing. Geometric information on a human permanent upper left incisor was obtained using micro-computed tomography (micro-CT). Enamel, dentine, and pulp were segmented by thresholding of different gray-scale intensities. Segmented data were separately exported in STereoLithography (STL) format. STL data were converted to Wavefront OBJ (OBJect), as many 3D computer graphics programs support the Wavefront OBJ format. Data were also converted to QuickTime Virtual Reality (QTVR) format, which allows the image to be viewed from any direction. In addition to the Wavefront OBJ and QTVR data, the original CT series were provided as 16-bit Tag Image File Format (TIFF) images on the website. In conclusion, 3D tooth models were constructed in general-purpose data formats using micro-CT and commercially available programs. Tooth models that can be used widely would benefit all those who study dental morphology.
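A sketch of the segmentation-to-mesh step in Python (scikit-image instead of the commercial programs used in the study, skipping the STL intermediate): threshold the CT volume, extract an isosurface, and write Wavefront OBJ. The file names and threshold are illustrative.

```python
# Sketch: gray-level segmentation -> marching cubes -> Wavefront OBJ.
import numpy as np
import tifffile
from skimage import measure

volume = tifffile.imread("incisor_ct_stack.tif")  # 16-bit TIFF series
enamel = volume > 30000                           # illustrative threshold

verts, faces, _, _ = measure.marching_cubes(enamel.astype(np.uint8), level=0.5)

with open("enamel.obj", "w") as obj:
    for x, y, z in verts:
        obj.write(f"v {x} {y} {z}\n")
    for a, b, c in faces + 1:                     # OBJ indices are 1-based
        obj.write(f"f {a} {b} {c}\n")
```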
User's guide for mapIMG 3--Map image re-projection software package
Finn, Michael P.; Mattli, David M.
2012-01-01
Version 0.0 (1995), Dan Steinwand, U.S. Geological Survey (USGS)/Earth Resources Observation Systems (EROS) Data Center (EDC)--Version 0.0 was a command-line version for UNIX that required four arguments: the input metadata, the output metadata, the input data file, and the output destination path. Version 1.0 (2003), Stephen Posch and Michael P. Finn, USGS/Mid-Continent Mapping Center (MCMC)--Version 1.0 added a GUI that was built using the Qt library for cross-platform development. Version 1.01 (2004), Jason Trent and Michael P. Finn, USGS/MCMC--Version 1.01 suggested bounds for the parameters of each projection. Support was added for larger input files, storage of the last-used input and output folders, and TIFF/GeoTIFF input images. Version 2.0 (2005), Robert Buehler, Jason Trent, and Michael P. Finn, USGS/National Geospatial Technical Operations Center (NGTOC)--Version 2.0 added resampling methods (mean, mode, min, max, and sum), updated the GUI design, and added the viewer/pre-viewer. The metadata style was changed to XML, and a new naming convention was adopted. Version 3.0 (2009), David Mattli and Michael P. Finn, USGS/Center of Excellence for Geospatial Information Science (CEGIS)--Version 3.0 brings optimized resampling methods, an updated GUI, support for less-than-global datasets, and UTM support, and the whole codebase was ported to Qt4.
Visualization of GPM Standard Products at the Precipitation Processing System (PPS)
NASA Astrophysics Data System (ADS)
Kelley, O.
2010-12-01
Many of the standard data products for the Global Precipitation Measurement (GPM) constellation of satellites will be generated at and distributed by the Precipitation Processing System (PPS) at NASA Goddard. PPS will provide several means to visualize these data products. These visualization tools will be used internally by PPS analysts to investigate potential anomalies in the data files, and they will also be made available to researchers. Currently, a free data viewer called THOR, the Tool for High-resolution Observation Review, can be downloaded and installed on Linux, Windows, and Mac OS X systems. THOR can display swath and grid products and, to a limited degree, the low-level data packets that the satellite itself transmits to the ground system. Observations collected since the 1997 launch of the Tropical Rainfall Measuring Mission (TRMM) satellite can be downloaded from the PPS FTP archive, and in the future many of the GPM standard products will also be available from this FTP site. To provide easy access to this 80-terabyte and growing archive, PPS currently operates an online ordering tool called STORM that provides geographic and time searches, browse-image display, and the ability to order user-specified subsets of standard data files. Prior to the anticipated 2013 launch of the GPM core satellite, PPS will expand its visualization tools by integrating an online version of THOR within STORM to provide on-the-fly image creation of any portion of an archived data file at a user-specified degree of magnification. PPS will also provide OPeNDAP access to the data archive and OGC WMS image creation for both swath and gridded data products. During the GPM era, PPS will continue to provide real-time globally gridded 3-hour rainfall estimates to the public in a compact binary format (3B42RT) and in a GIS format (2-byte TIFF images plus ESRI world files).
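The TIFF-plus-world-file pairing mentioned above is easy to reproduce: an ESRI world file is six plain-text lines giving pixel size, rotation terms, and the map coordinates of the upper-left pixel center. A minimal sketch follows; the 0.25-degree grid extent is illustrative, not taken from the 3B42RT specification.

```python
# Sketch: write an ESRI world file (.tfw) for a 0.25-degree lat/lon grid.
def write_world_file(path, px=0.25, ulx=-179.875, uly=59.875):
    # x pixel size, x/y rotation (0), negative y pixel size, UL pixel center
    lines = [px, 0.0, 0.0, -px, ulx, uly]
    with open(path, "w") as fh:
        fh.write("\n".join(str(v) for v in lines) + "\n")

write_world_file("3B42RT.tfw")  # grid extent here is illustrative
```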
López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón
2009-10-01
The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by the different levels of JPEG compression, and the implications of these alterations for automated nuclear counts, and further, develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in TIFF images and those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
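A hedged sketch of the correction idea described above: roundness (4*pi*area / perimeter^2) measured on a compressed image is mapped back toward its uncompressed estimate with a linear model fitted per compression level. The coefficients below are placeholders, not the published factors.

```python
# Sketch: roundness measurement plus a per-compression-level linear correction.
import math

def roundness(area, perimeter):
    return 4.0 * math.pi * area / perimeter ** 2

# corrected = slope * measured + intercept, fitted from TIFF-vs-JPEG pairs;
# these coefficients are illustrative placeholders.
CORRECTION = {"jpeg_1_23": (1.04, -0.02), "jpeg_1_46": (1.09, -0.05)}

def corrected_roundness(measured, level):
    slope, intercept = CORRECTION[level]
    return slope * measured + intercept
```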
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reyhan, M; Yue, N
Purpose: To validate an automated image processing algorithm designed to detect the center of radiochromic film used for in vivo film dosimetry against the current gold standard of manual selection. Methods: An image processing algorithm was developed to automatically select the region of interest (ROI) in *.tiff images that contain multiple pieces of radiochromic film (0.5×1.3 cm²). After a user has linked a calibration file to the processing algorithm and selected a *.tiff file for processing, an ROI is automatically detected for all films by a combination of thresholding and erosion, which removes edges and any additional markings for orientation. Calibration is applied to the mean pixel values from the ROIs, and a *.tiff image is output displaying the original image with an overlay of the ROIs and the measured doses. Validation of the algorithm was determined by comparing in vivo dose determined using the current gold standard (manually drawn ROIs) versus automated ROIs for n=420 scanned films. Bland-Altman analysis, a paired t-test, and linear regression were performed to demonstrate agreement between the processes. Results: The measured doses ranged from 0.2 to 886.6 cGy. Bland-Altman analysis of the two techniques (automatic minus manual) revealed a bias of -0.28 cGy and a 95% confidence interval of (-6.1 cGy, 5.5 cGy). These values demonstrate excellent agreement between the two techniques. Paired t-test results showed no statistical difference between the two techniques (p=0.98). Linear regression with a forced zero intercept demonstrated that Automatic = 0.997 × Manual, with a Pearson correlation coefficient of 0.999. The minimal differences between the two techniques may be explained by the fact that the hand-drawn ROIs were not identical to the automatically selected ones. The average processing time was 6.7 seconds in MATLAB on an Intel Core 2 Duo processor. Conclusion: An automated image processing algorithm has been developed and validated, which will help minimize user interaction and processing time for radiochromic film used for in vivo dosimetry.
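A minimal sketch of the thresholding-and-erosion ROI step, written in Python with SciPy rather than the authors' MATLAB; the threshold value and file name are illustrative.

```python
# Sketch: threshold a film scan, erode to drop edges and orientation marks,
# then take the mean pixel value of each remaining connected region.
import numpy as np
from scipy import ndimage
import tifffile

scan = tifffile.imread("films.tif")
if scan.ndim == 3:
    scan = scan.mean(axis=2)                 # collapse RGB to intensity

film = scan < 40000                           # exposed film is darker (16-bit)
film = ndimage.binary_erosion(film, iterations=5)  # trim edges/markings

labels, n = ndimage.label(film)
means = ndimage.mean(scan, labels=labels, index=range(1, n + 1))
# `means` would then pass through the calibration curve to yield dose per film
```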
A web-based subsetting service for regional scale MODIS land products
DOE Office of Scientific and Technical Information (OSTI.GOV)
SanthanaVannan, Suresh K; Cook, Robert B; Holladay, Susan K
2009-12-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) sensor has provided valuable information on various aspects of the Earth system since March 2000. The spectral, spatial, and temporal characteristics of MODIS products have made them an important data source for analyzing key science questions relating to Earth system processes at regional, continental, and global scales. The size of the MODIS products and their native HDF-EOS format are not optimal for use in field investigations at individual sites (100 × 100 km or smaller). In order to make MODIS data readily accessible for field investigations, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) has developed an online system that provides MODIS land products in an easy-to-use format and in file sizes more appropriate to field research. This system provides MODIS land products in a nonproprietary comma-delimited ASCII format and in GIS-compatible formats (GeoTIFF and ASCII grid). Web-based visualization tools are also available as part of this system, and these tools provide a quick snapshot of the data. Quality-control tools and a multitude of data-delivery options are available to meet the demands of various user communities. This paper describes the important features and design goals of the system, particularly in the context of data archive and distribution for regional-scale analysis. The paper also discusses the ways in which data from this system can be used for validation, data intercomparison, and modeling efforts.
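A hypothetical sketch of what requesting a site-scale subset from such a service looks like; the endpoint, path, and parameter names below are illustrative stand-ins, not the ORNL DAAC's documented API.

```python
# Sketch: request a small MODIS subset over HTTP (endpoint is hypothetical).
import requests

resp = requests.get(
    "https://modis.example.org/subset",      # hypothetical endpoint
    params={"product": "MOD13Q1", "lat": 35.96, "lon": -84.29,
            "start": "2008-01-01", "end": "2008-12-31", "format": "csv"},
    timeout=60,
)
resp.raise_for_status()
with open("subset.csv", "wb") as fh:
    fh.write(resp.content)
```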
A Pyramid Scheme for Constructing Geologic Maps on Geobrowsers
NASA Astrophysics Data System (ADS)
Whitmeyer, S. J.; de Paor, D. G.; Daniels, J.; Jeremy, N.; Michael, R.; Santangelo, B.
2008-12-01
Hundreds of geologic maps have been draped onto Google Earth (GE) using the ground-overlay tag of Keyhole Markup Language (KML), and dozens have been published on academic and survey web pages as downloadable KML or KMZ (zipped KML) files. The vast majority of these are small KML documents that link to single, large - often very large - image files (JPEGs, TIFFs, etc.). Files that exceed 50 MB in size defeat the purpose of GE as an interactive, responsive, and therefore fast virtual terrain medium. KML supports super-overlays (also known as image pyramids), which break large graphic files into manageable tiles that load only when they are in the visible region at a sufficient level of detail (LOD), and several automatic tile-generating applications have been written. The process of exporting map data from applications such as ArcGIS® to KML format is becoming more manageable but still poses challenges. Complications arise, for example, because of differences between grid north at a point on a map and true north at the equivalent location on the virtual globe. In our recent field season, we devised ways of overcoming many of these obstacles in order to generate responsive, panable, zoomable geologic maps in which data are layered in a pyramid structure similar to the image pyramid used for the default GE terrain. The structure of our KML code for each level of the pyramid is self-similar: (i) check whether the current tile is in the visible region; (ii) if so, render the current overlay; (iii) add the current data level; and (iv) using four network links, check the visibility and LOD of four nested tiles. By using this pyramid structure, we provide the user with access to geologic and map data at multiple levels of observation. For example, when the viewpoint is distant, regional structures and stratigraphy (e.g., lithological groups and terrane boundaries) are visible. As the user zooms to lower elevations, formations and ultimately individual outcrops come into focus. The pyramid structure is ideally suited to geologic data, which tend to be unevenly exposed across the Earth's surface.
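A sketch of emitting one self-similar pyramid level as KML in Python: a Region with LOD limits gates this tile's ground overlay, and four NetworkLinks point at the nested child tiles. Tile names and the LOD pixel bound are illustrative, not the authors' values.

```python
# Sketch: generate one super-overlay level; bbox = (north, south, east, west).
def tile_kml(name, bbox, children):
    def box(b):
        n, s, e, w = b
        return (f"<north>{n}</north><south>{s}</south>"
                f"<east>{e}</east><west>{w}</west>")
    links = "\n".join(
        f"  <NetworkLink><name>{c}</name>"
        f"<Region><LatLonAltBox>{box(cb)}</LatLonAltBox>"
        "<Lod><minLodPixels>128</minLodPixels></Lod></Region>"
        f"<Link><href>{c}.kml</href>"
        "<viewRefreshMode>onRegion</viewRefreshMode></Link></NetworkLink>"
        for c, cb in children)  # children: list of (tile_name, bbox) pairs
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"><Document>
  <Region><LatLonAltBox>{box(bbox)}</LatLonAltBox>
    <Lod><minLodPixels>128</minLodPixels></Lod></Region>
  <GroundOverlay><Icon><href>{name}.png</href></Icon>
    <LatLonBox>{box(bbox)}</LatLonBox></GroundOverlay>
{links}
</Document></kml>"""
```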
CT Scans of Cores Metadata, Barrow, Alaska 2015
Katie McKnight; Tim Kneafsey; Craig Ulrich
2015-03-11
Individual ice cores were collected from the Barrow Environmental Observatory in Barrow, Alaska, throughout 2013 and 2014. Cores were drilled along different transects to sample polygonal features (i.e., the trough, center, and rim of high-, transitional-, and low-centered polygons). Most cores were drilled to around 1 meter in depth, and a few deep cores were drilled to around 3 meters in depth. Three-dimensional images of the frozen cores were constructed using a medical X-ray computed tomography (CT) scanner. TIFF files can be uploaded to ImageJ (an open-source imaging software) to examine soil structure and densities within each core.
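As an alternative to interactive inspection in ImageJ, such TIFF stacks can also be read into NumPy for programmatic density analysis; a small sketch with an illustrative file name follows.

```python
# Sketch: load a CT TIFF stack and compute a mean-density depth profile.
import tifffile

core = tifffile.imread("core_scan.tif")   # (slices, rows, cols) array
print(core.shape, core.dtype)
density_profile = core.mean(axis=(1, 2))  # mean CT number per depth slice
```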
EPA Remote Sensing Information Gateway
NASA Astrophysics Data System (ADS)
Paulsen, H. K.; Szykman, J. J.; Plessel, T.; Freeman, M.; Dimmick, F.
2009-12-01
The Remote Sensing Information Gateway (RSIG) was developed by the U.S. Environmental Protection Agency (EPA) to assist researchers in easily obtaining and combining a variety of environmental datasets related to air-quality research. Current datasets available include, but are not limited to, surface PM2.5 and O3 data, satellite-derived aerosol optical depth, and 3-dimensional output from the U.S. EPA's Models-3/Community Multiscale Air Quality (CMAQ) modeling system. The presentation will include a demonstration that illustrates several scenarios of how researchers use the tool to help them visualize and obtain data for their work, with a particular focus on episode analysis related to biomass-burning impacts on air quality. The presentation will provide an overview of how RSIG works and how the code has been - and can be - adapted for other projects. One example is the Virtual Estuary, which focuses on automating the retrieval and pre-processing of a variety of data needed for estuarine research. RSIG is available to the community and can be accessed online at http://www.epa.gov/rsig. Once the Java policy file is configured on your computer, you can run the RSIG applet and connect to the RSIG server to visualize and retrieve available datasets. The applet allows the user to specify the temporal and spatial areas of interest and the types of data to retrieve. The applet then communicates with RSIG subsetter codes located on the data owners' remote servers; the subsetter codes assemble and transfer, via ordinary Internet protocols, only the specified data to the researcher's computer. This is much faster than the usual method of transferring large files via FTP and greatly reduces network traffic. The RSIG applet then visualizes the transferred data on a latitude-longitude map, automatically locating the data in the correct geographic position. Images, animations, and aggregated data can be saved or exported in a variety of formats: binary External Data Representation (XDR) format (simple, efficient), NetCDF-COARDS format, NetCDF-IOAPI format (regridding the data to a CMAQ grid), HDF (unsubsetted satellite files), ASCII tab-delimited spreadsheet, MCMC (used for input into the HB model), PNG images, MPG movies, KMZ movies (for display in Google Earth and similar applications), GeoTIFF RGB format, and 32-bit float format. RSIG's source codes are freely available to researchers with permission from the EPA principal investigator, Dr. Jim Szykman; contacts for obtaining RSIG code are available at the RSIG website.
NASA Astrophysics Data System (ADS)
Santhana Vannan, S. K.; Boyer, A.; Deb, D.; Beaty, T.; Wei, Y.; Wei, Z.
2017-12-01
The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) for biogeochemical dynamics is one of the NASA Earth Observing System Data and Information System (EOSDIS) data centers. The ORNL DAAC (https://daac.ornl.gov) is responsible for data archival, product development and distribution, and user support for biogeochemical and ecological data and models. In particular, the ORNL DAAC has provided data management support for NASA's terrestrial ecology field campaign programs for the last several decades. Field campaigns combine ground, aircraft, and satellite-based measurements in specific ecosystems over multi-year time periods, and the data collected during NASA field campaigns are archived at the ORNL DAAC (https://daac.ornl.gov/get_data/). This paper describes the ORNL DAAC team's effort to rescue a First ISLSCP Field Experiment (FIFE) dataset containing airborne and satellite observations from the 1980s. The data collected during the FIFE campaign include high-resolution aerial imagery collected over Kansas. A data rescue workflow was prepared to test for successful recovery of the data from a CD-ROM and to ensure that the data are usable and preserved for the future. The imagery contains spectral reflectance data that can be used as a historical benchmark to examine climatological and ecological changes in the Kansas region since the 1980s. The key steps taken to convert the files to modern standards were: (1) decompress the imagery using the custom compression software provided with the data (the compression algorithm, created for MS-DOS in the 1980s, had to be set up to run on modern computer systems); (2) georeference the decompressed files using metadata stored in separate compressed header files; (3) apply standardized file names (file names and details were described in separate readme documents); (4) convert the image files to GeoTIFF format with embedded georeferencing information; and (5) leverage Open Geospatial Consortium (OGC) Web services to provide dynamic data transformation and visualization. We will describe the steps in detail and share lessons learned during the AGU session.
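A hedged sketch of the GeoTIFF conversion in step (4): write a decompressed image array as GeoTIFF with embedded georeferencing, using rasterio. The corner coordinates, CRS, and file names are placeholders standing in for values read from the recovered header files.

```python
# Sketch: embed georeferencing in a GeoTIFF written from a raw array.
import numpy as np
import rasterio
from rasterio.transform import from_bounds

img = np.load("decompressed_scene.npy")   # 2-D array recovered from CD-ROM
west, south, east, north = -96.6, 39.0, -96.5, 39.1  # from header metadata

with rasterio.open(
    "scene.tif", "w", driver="GTiff",
    height=img.shape[0], width=img.shape[1], count=1, dtype=img.dtype,
    crs="EPSG:4326",
    transform=from_bounds(west, south, east, north,
                          img.shape[1], img.shape[0]),
) as dst:
    dst.write(img, 1)
```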
About a method for compressing x-ray computed microtomography data
NASA Astrophysics Data System (ADS)
Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš
2018-04-01
The management of scientific data is of high importance, especially for experimental techniques that produce large data volumes. One such technique is x-ray computed tomography (CT), and its community has introduced advanced data formats that allow for better management of experimental data. Rather than the organization of the data and the associated metadata, the main topic of this work is data compression and its applicability to experimental data collected at a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy); the study examines images acquired from various types of samples. This study covers parallel-beam geometry, but it could easily be extended to cone-beam geometry. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image-compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodological framework, this study presents and examines the use of JPEG-XR in combination with the HDF5 and TIFF formats, providing insights and strategies on data compression and image-quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection-data compression using JPEG-XR appears to be a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.
Paleozoic and mesozoic GIS data from the Geologic Atlas of the Rocky Mountain Region: Volume 1
Graeber, Aimee; Gunther, Gregory
2017-01-01
The Rocky Mountain Association of Geologists (RMAG) is, once again, publishing portions of the 1972 Geologic Atlas of the Rocky Mountain Region (Mallory, ed., 1972) as a geospatial map and data package. Georeferenced TIFF (GeoTIFF) images of map figures from this atlas have served as the basis for these data products. Shapefiles and file-geodatabase features have been generated and cartographically represented for select pages from the following chapters: • Phanerozoic Rocks (page 56) • Cambrian System (page 63) • Ordovician System (pages 78 and 79) • Silurian System (pages 87-89) • Devonian System (pages 93, 94, and 96-98) • Mississippian System (pages 102 and 103) • Pennsylvanian System (pages 114 and 115) • Permian System (pages 146 and 149-154) • Triassic System (pages 168 and 169) • Jurassic System (pages 179 and 180) • Cretaceous System (pages 197-201, 207-210, 215-218, 221, 222, 224, 225, and 227). The primary purpose of this publication is to provide regional-scale, as well as local-scale, geospatial data of the Rocky Mountain Region for use in geoscience studies. An important aspect of this interactive map product is that it does not require extensive GIS experience or highly specialized software.
Pettiette, M T; Metzger, Z; Phillips, C; Trope, M
1999-04-01
Straightening of curved canals is one of the most common procedural errors in endodontic instrumentation. This problem is commonly encountered when dental students perform molar endodontics. The purpose of this study was to compare the effect of the type of instrument used by these students on the extent of straightening and on the incidence of other endodontic procedural errors. Nickel-titanium 0.02-taper hand files were compared with traditional stainless-steel 0.02-taper K-files. Sixty molar teeth, comprising maxillary and mandibular first and second molars, were treated by senior dental students. Instrumentation was with either nickel-titanium hand files or stainless-steel K-files. Preoperative and postoperative radiographs of each tooth were taken using an XCP precision instrument with a customized bite block to ensure accurate reproduction of radiographic angulation. The radiographs were scanned and the images stored as TIFF files. By superimposing tracings of the preoperative over the postoperative radiographs, the degree of deviation of the apical third of the root canal filling from the original canal was measured. The presence of other errors, such as strip perforation and instrument breakage, was established by examining the radiographs. In curved canals instrumented with stainless-steel K-files, the average deviation of the apical third of the canals was 14.44 degrees (+/- 10.33 degrees). When nickel-titanium hand files were used, the deviation was significantly reduced, to an average of 4.39 degrees (+/- 4.53 degrees). The incidence of other procedural errors was also significantly reduced by the use of nickel-titanium hand files.
High-resolution seismic-reflection data offshore of Dana Point, southern California borderland
Sliter, Ray W.; Ryan, Holly F.; Triezenberg, Peter J.
2010-01-01
The U.S. Geological Survey collected high-resolution shallow seismic-reflection profiles in September 2006 in the offshore area between Dana Point and San Mateo Point in southern Orange and northern San Diego Counties, California. Reflection profiles were located to image folds and reverse faults associated with the San Mateo fault zone and high-angle strike-slip faults near the shelf break (the Newport-Inglewood fault zone) and at the base of the slope. Interpretations of these data were used to update the USGS Quaternary fault database and in shaking-hazard models for the State of California developed by the Working Group for California Earthquake Probabilities. This cruise was funded by the U.S. Geological Survey Coastal and Marine Catastrophic Hazards project. Seismic-reflection data were acquired aboard the R/V Sea Explorer, which is operated by the Ocean Institute at Dana Point. A SIG ELC820 minisparker seismic source and a SIG single-channel streamer were used. More than 420 km of seismic-reflection data were collected. This report includes maps of the seismic-survey sections, linked to Google Earth software, and digital data files showing images of each transect in SEG-Y, JPEG, and TIFF formats.
Psyplot: Visualizing rectangular and triangular Climate Model Data with Python
NASA Astrophysics Data System (ADS)
Sommer, Philipp
2016-04-01
The development and use of climate models often requires the visualization of geo-referenced data. Creating visualizations should be fast, attractive, flexible, easily applicable, and easily reproducible. There is a wide range of software tools available for visualizing raster data, but they often are inaccessible to many users (e.g., because they are difficult to use in a script or have low flexibility). In order to facilitate easy visualization of geo-referenced data, we developed a new framework called "psyplot," which can aid earth system scientists in their daily work. It is purely written in the programming language Python and primarily built upon the Python packages matplotlib, cartopy, and xray. The package can visualize data stored on the hard disk (e.g., NetCDF, GeoTIFF, or any other file format supported by the xray package), or directly from memory or Climate Data Operators (CDOs). Furthermore, data can be visualized on a rectangular grid (following or not following the CF Conventions) and on a triangular grid (following the CF or UGRID Conventions). Psyplot visualizes 2D scalar and vector fields, enabling the user to easily manage and format multiple plots at the same time, and to export the plots into all common picture formats and movies covered by the matplotlib package. The package can currently be used in an interactive Python session or in Python scripts, and will soon be developed for use with a graphical user interface (GUI). Finally, the psyplot framework enables flexible configuration, allows easy integration into other scripts that use matplotlib, and provides a flexible foundation for further development.
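A short usage sketch, assuming psyplot is installed together with its psy-maps plugin; the file name and variable name are illustrative.

```python
# Sketch: plot a 2-D scalar field from a NetCDF file onto a map and export it.
import psyplot.project as psy

maps = psy.plot.mapplot("model_output.nc", name="t2m", cmap="RdBu_r")
maps.export("t2m.png")  # export to any matplotlib-supported format
maps.close()            # free the underlying dataset and figures
```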
Training system for digital mammographic diagnoses of breast cancer
NASA Astrophysics Data System (ADS)
Thomaz, R. L.; Nirschl Crozara, M. G.; Patrocinio, A. C.
2013-03-01
As technology evolves, analog mammography systems are being replaced by digital systems. The digital system uses video monitors to display mammographic images instead of the screen-film and negatoscope previously used for analog images. The change in the way mammographic images are visualized may require a different approach for training health care professionals in diagnosing breast cancer with digital mammography. Thus, this paper presents a computational approach for training health care professionals that provides a smooth transition between analog and digital technology and also trains them to use the advantages of digital image-processing tools in diagnosing breast cancer. This computational approach consists of software in which it is possible to open, process, and diagnose a full mammogram case from a database holding the digital images of each of the mammographic views. The software communicates with a gold-standard digital mammogram case database. This database contains the digital images in Tagged Image File Format (TIFF) and the respective diagnoses according to BI-RADS™; these files are read by the software and shown to the user as needed. There are also digital image-processing tools that can be used to provide better visualization of each single image. The software was built on a minimalist, user-friendly interface concept that might help in the smooth transition. It also has an interface for inputting diagnoses from the professional being trained, providing result feedback. This system has already been completed but has not yet been applied to professional training.
Application of a digital technique in evaluating the reliability of shade guides.
Cal, E; Sonugelen, M; Guneri, P; Kesercioglu, A; Kose, T
2004-05-01
There appears to be a need for a reliable method for the quantification of tooth colour and the analysis of shade. Therefore, the primary objective of this study was to show the applicability of graphic software to colour analysis, and the secondary objective was to investigate the reliability of commercial shade guides produced by the same manufacturer using this digital technique. After confirming the reliability and reproducibility of the digital method using self-assessed coloured images, three shade guides from the same manufacturer were photographed in daylight and in studio environments with a digital camera and saved in Tagged Image File Format (TIFF). Colour analysis of each photograph was performed using the Adobe Photoshop 4.0 graphics program. Luminosity and red, green, blue (L and RGB) values of each shade tab of each shade guide were measured, and the data were subjected to statistical analysis using a repeated-measures ANOVA test. The L and RGB values of the images taken in daylight differed significantly from those of the images taken in the studio environment (P < 0.05). In both environments, the luminosity and red values of the shade tabs were significantly different from each other (P < 0.05). It was concluded that, when the environmental conditions are kept constant, the Adobe Photoshop 4.0 colour-analysis program can be used to analyse the colour of images. On the other hand, the results revealed that the accuracy of shade tabs widely used in colour matching should be readdressed.
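The L and RGB measurement step can also be scripted outside Photoshop; the sketch below averages R, G, and B over a shade-tab ROI and derives a luminosity value from the Rec. 601 luma weights, which only approximates Photoshop's Luminosity readout. The file name and ROI coordinates are illustrative.

```python
# Sketch: mean L and RGB values over a shade-tab region of interest.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("shade_tab_A2.tif").convert("RGB"), dtype=float)
roi = img[200:300, 150:250]                     # rows, cols of the tab face

r, g, b = roi[..., 0].mean(), roi[..., 1].mean(), roi[..., 2].mean()
luminosity = 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luma approximation
print(f"L={luminosity:.1f} R={r:.1f} G={g:.1f} B={b:.1f}")
```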
Beebook: light field mapping app
NASA Astrophysics Data System (ADS)
De Donatis, Mauro; Di Pietro, Gianfranco; Rinnone, Fabio
2014-05-01
In the last decade, mobile systems for digital field mapping have been developed (see the Wikipedia entry for "digital geologic mapping"), despite the skepticism of many traditional geologists. Until now, the hardware was often heavy (tablet PCs) and the software sometimes difficult even for expert GIS users. At present, the advent of light tablets and applications makes things easier, but a complete solution is still lacking for a survey as complex as a geological one, where information, hypotheses, data, and interpretations must be managed together. Beebook, a new app for Android devices, has been developed for fast and easy mapping work in the field to address this problem. The main features are: • off-line raster management (GeoTIFF and other raster formats); • on-line map visualisation (Google Maps, OSM, WMS, WFS); • spatial reference (SR) management and conversion using PROJ.4; • vector file mash-up (KML and SQLite formats); • editing of vector data on the map (lines, points, polygons); • augmented reality using the "Mixare" platform; • export of vector data in KML, CSV, and SQLite (Spatialite) formats; • notes: GPS-positioned or manually placed points linked to files from other applications (pictures, spreadsheets, etc.); • forms: creation, editing, and filling of customized forms; • GPS: status control, tracking, and positioning on the map; • sharing: synchronization and sharing of data, forms, positions, and other information among users. Input methods range from the digital keyboard and finger touch to voice recording and the stylus. The most efficient way of inserting information is the stylus (or pen): field geologists are familiar with annotations and sketches, so we suggest the use of devices with a stylus. The main point is that Beebook is the first "transparent" mobile GIS for tablets and smartphones, deriving from previous experience with traditional mapping and with the design and development of earlier digital mapping software (MapIT, BeeGIS, Geopaparazzi). Building on those experiences, we developed a tool that is easy to use and applicable not only to geology but to every field survey.
Mosaic of Digital Raster Soviet Topographic Maps of Afghanistan
Chirico, Peter G.; Warner, Michael B.
2005-01-01
EXPLANATION The data contained in this publication include scanned, geographically referenced digital raster graphics (DRGs) of Soviet 1:200,000-scale topographic map quadrangles. The original Afghanistan topographic map series at 1:200,000 scale, for the entire country, was published by the Soviet military between 1985 and 1991 (MTDGS, 85-91). Hard copies of these original paper maps were scanned using a large-format scanner, reprojected into Geographic Coordinate System (GCS) coordinates, and then clipped to remove the map collars to create a seamless topographic map base for the entire country. An index of all available topographic map sheets is displayed here: Index_Geo_DD.pdf. This publication also includes the original topographic map quadrangles projected in the Universal Transverse Mercator (UTM) projection. The country of Afghanistan spans three UTM zones: Zone 41, Zone 42, and Zone 43. Maps are stored as GeoTIFFs in their respective UTM zone projections. Indexes of all available topographic map sheets in their respective UTM zones are displayed here: Index_UTM_Z41.pdf, Index_UTM_Z42.pdf, Index_UTM_Z43.pdf. An Adobe Acrobat PDF file of the U.S. Department of the Army's Technical Manual 30-548 is available (U.S. Army, 1958). This document has been translated into English for assistance in reading Soviet topographic map symbols.
Sliter, Ray W.; Triezenberg, Peter J.; Hart, Patrick E.; Watt, Janet T.; Johnson, Samuel Y.; Scheirer, Daniel S.
2009-01-01
The U.S. Geological Survey (USGS) collected high-resolution shallow seismic-reflection and marine magnetic data in June 2008 in the offshore areas between the towns of Cayucos and Pismo Beach, Calif., from the nearshore (~6-m depth) to just west of the Hosgri Fault Zone (~200-m depth). These data are in support of the California State Waters Mapping Program and the Cooperative Research and Development Agreement (CRADA) between the Pacific Gas & Electric Co. and the U.S. Geological Survey. Seismic-reflection and marine magnetic data were acquired aboard the R/V Parke Snavely, using a SIG 2Mille minisparker seismic source and a Geometrics G882 cesium-vapor marine magnetometer. More than 550 km of seismic and marine magnetic data was collected simultaneously along shore-perpendicular transects spaced 800 m apart, with an additional 220 km of marine magnetometer data collected across the Hosgri Fault Zone, resulting in spacing locally as small as 400 m. This report includes maps of the seismic-survey sections, linked to Google Earth software, and digital data files showing images of each transect in SEG-Y, JPEG, and TIFF formats, as well as preliminary gridded marine-magnetic-anomaly and residual-magnetic-anomaly (shallow magnetic source) maps.
Terrain - Umbra Package v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oppel, Fred; Hart, Brian; Rigdon, James Brian
This library contains modules that read terrain files (e.g., OpenFlight, Open Scene Graph IVE, GeoTIFF image) and read and manage ESRI terrain datasets. All data are stored and managed in Open Scene Graph (OSG). The terrain system accesses OSG and provides elevation data, gives access to metadata such as soil types, and enables linears, areals, and buildings to be placed in a terrain. These geometry objects include box, point, path, polygon (region), and sector modules. Utilities are available for clamping objects to the terrain and accessing line-of-sight (LOS) information. The package includes managed C++ wrapper code (TerrainWrapper) to enable C# applications, such as OpShed and UTU, to incorporate this library.
USAID Expands eMODIS Coverage for Famine Early Warning
NASA Astrophysics Data System (ADS)
Jenkerson, C.; Meyer, D. J.; Evenson, K.; Merritt, M.
2011-12-01
Food security in countries at risk is monitored by the U.S. Agency for International Development (USAID) through its Famine Early Warning Systems Network (FEWS NET) using many methods, including Moderate Resolution Imaging Spectroradiometer (MODIS) data processed by the U.S. Geological Survey (USGS) into eMODIS Normalized Difference Vegetation Index (NDVI) products. Near-real-time production is used comparatively with trends derived from the eMODIS archive to operationally monitor vegetation anomalies indicating threatened cropland and rangeland conditions. eMODIS production over Central America and the Caribbean (CAMCAR) began in 2009 and processes 10-day NDVI composites every 5 days from surface-reflectance inputs produced using predicted spacecraft and climatology information at the Land and Atmosphere Near real-time Capability for EOS (LANCE). These expedited eMODIS composites are backed by a parallel archive of precision-based NDVI calculated from surface-reflectance data ordered through the Level 1 and Atmosphere Archive and Distribution System (LAADS). Success in the CAMCAR region led to the recent expansion of eMODIS production to include Africa in 2010 and Central Asia in 2011. Near-real-time 250-meter products are available for each region on the last day of an acquisition interval (generally before midnight) from an anonymous file transfer protocol (FTP) distribution site (ftp://emodisftp.cr.usgs.gov/eMODIS). The FTP site concurrently hosts the regional historical collections (2000 to present), which are also searchable using the USGS EarthExplorer (http://edcsns17.cr.usgs.gov/NewEarthExplorer). As eMODIS coverage continues to grow, these geographically gridded, georeferenced Tagged Image File Format (GeoTIFF) NDVI composites increase their utility as effective tools for operational monitoring of near-real-time vegetation data against historical trends.
eMODIS: A User-Friendly Data Source
Jenkerson, Calli B.; Maiersperger, Thomas; Schmidt, Gail
2010-01-01
The U.S. Geological Survey's (USGS) Earth Resources Observation and Science (EROS) Center is generating a suite of products called 'eMODIS' based on Moderate Resolution Imaging Spectroradiometer (MODIS) data acquired by the National Aeronautics and Space Administration's (NASA) Earth Observing System (EOS). With a more frequent repeat cycle than Landsat and higher spatial resolutions than the Advanced Very High Resolution Radiometer (AVHRR), MODIS is well suited for vegetation studies. For operational monitoring, however, the benefits of MODIS are counteracted by usability issues with the standard map projection, file format, composite interval, high-latitude 'bow-tie' effects, and production latency. eMODIS responds to a community-specific need for alternatively packaged MODIS data, addressing each of these factors for real-time monitoring and historical trend analysis. eMODIS processes calibrated radiance data (level-1B) acquired by the MODIS sensors on the EOS Terra and Aqua satellites by combining MODIS Land Science Collection 5 Atmospherically Corrected Surface Reflectance production code and USGS EROS MODIS Direct Broadcast System (DBS) software to create surface reflectance and Normalized Difference Vegetation Index (NDVI) products. eMODIS is produced over the continental United States and over Alaska extending into Canada to cover the Yukon River Basin. The 250-meter (m), 500-m, and 1,000-m products are delivered in georeferenced Tagged Image File Format (GeoTIFF) and composited in 7-day intervals. eMODIS composites are projected to non-Sinusoidal mapping grids that best suit the geography in their areas of application (see eMODIS Product Description below). For eMODIS products generated over the continental United States (eMODIS CONUS), the Terra (from 2000) and Aqua (from 2002) records are available and continue through present time. eMODIS CONUS also is generated in an expedited process that delivers a 7-day rolling composite, created daily with the most recent 7 days of acquisition, to users monitoring real-time vegetation conditions. eMODIS Alaska is not part of expedited processing, but does cover the Terra mission life (2000-present). A simple file transfer protocol (FTP) distribution site currently is enabled on the Internet for direct download of eMODIS products (ftp://emodisftp.cr.usgs.gov/eMODIS), with plans to expand into an interactive portal environment.
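A 7-day rolling composite of daily NDVI grids can be sketched as a per-pixel maximum-value composite, a common MODIS compositing approach (the array layout and the plain-maximum rule are assumptions, not the exact eMODIS algorithm):

```python
import numpy as np

def rolling_composite(daily_ndvi, window=7):
    """Maximum-value composite over the most recent `window` days.

    daily_ndvi: array of shape (days, rows, cols); NaN marks
    missing or cloud-masked cells. Returns one composite grid.
    """
    recent = daily_ndvi[-window:]
    # nanmax ignores masked (NaN) observations per pixel
    return np.nanmax(recent, axis=0)
```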
View Toward 'Vera Rubin Ridge' on Mount Sharp, Mars
2017-07-11
This look ahead from NASA's Curiosity Mars rover includes four geological layers to be examined by the mission, and higher reaches of Mount Sharp beyond the planned study area. The redder rocks of the foreground are part of the Murray formation. Pale gray rocks in the middle distance of the right half of the image are in the Clay Unit. A band between those terrains is "Vera Rubin Ridge." Rounded brown knobs beyond the Clay Unit are in the Sulfate Unit, beyond which lie higher portions of the mountain. The view combines six images taken with the rover's Mast Camera (Mastcam) on Jan. 24, 2017, during the 1,589th Martian day, or sol, of Curiosity's work on Mars, when the rover was still more than half a mile (about a kilometer) north of Vera Rubin Ridge. The panorama has been white-balanced so that the colors of the rock and sand materials resemble how they would appear under daytime lighting conditions on Earth. It spans from east-southeast on the left to south on the right. The Sol 1589 location was just north of the waypoint labeled "Ogunquit Beach" on a map of the area that also shows locations of the Murray formation, Vera Rubin Ridge, Clay Unit and Sulfate Unit. The ridge was informally named in early 2017 in memory of Vera Cooper Rubin (1928-2016), whose astronomical observations provided evidence for the existence of the universe's dark matter. Annotated and full resolution TIFF files are available at https://photojournal.jpl.nasa.gov/catalog/PIA21716
Cartographic Production for the FLaSH Map Study: Generation of Rugosity Grids, 2008
Robbins, Lisa L.; Knorr, Paul O.; Hansen, Mark
2010-01-01
Project Summary: This series of raster data is a U.S. Geological Survey (USGS) Data Series release from the Florida Shelf Habitat Project (FLaSH). This disc contains two raster images in Environmental Systems Research Institute, Inc. (ESRI) raster grid format, JPEG image format, and georeferenced Tagged Image File Format (GeoTIFF). Data are also provided in non-image ASCII format. Rugosity grids at two resolutions (250 m and 1000 m) were generated for West Florida shelf waters to 250-m depth using a custom algorithm that follows the methods of Valentine and others (2004). The Methods portion of this document describes the specific steps used to generate the raster images. Rugosity, also referred to as roughness, ruggedness, or the surface-area ratio (Riley and others, 1999; Wilson and others, 2007), is a visual and quantitative measurement of terrain complexity and a common variable in ecological habitat studies. The rugosity of an area can affect biota by influencing habitat, providing shelter from the elements, determining the quantity and type of living space, influencing the type and quantity of flora, affecting predator-prey relationships by providing cover and concealment, and, as an expression of vertical relief, influencing local environmental conditions such as temperature and moisture. In the marine environment, rugosity can furthermore influence current flow rate and direction, increase the residence time of water in an area through eddying and current deflection, influence local water conditions such as chemistry, turbidity, and temperature, and influence the rate and nature of sedimentary deposition. State-of-the-art computer-mapping techniques and data-processing tools were used to develop shelf-wide raster and vector data layers. The Florida Shelf Habitat (FLaSH) Mapping Project (http://coastal.er.usgs.gov/flash) endeavors to locate available data, identify data gaps, synthesize existing information, and expand our understanding of geologic processes in our dynamic coastal and marine systems.
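As a surface-area ratio, rugosity can be illustrated with a small numpy sketch; the slope-based facet-area estimate below is a simplification for illustration, not the custom FLaSH algorithm of Valentine and others (2004):

```python
import numpy as np

def rugosity(dem, cell=250.0):
    """Surface-area / planar-area ratio for a DEM window.

    dem: 2-D elevation array; cell: grid spacing in the same units.
    Each cell's surface area is estimated from its local slope
    (a planar-facet approximation, illustrative only).
    """
    dzdy, dzdx = np.gradient(dem, cell)
    # area of a tilted planar facet relative to its horizontal footprint
    surface = np.sqrt(1.0 + dzdx**2 + dzdy**2) * cell**2
    planar = cell**2 * dem.size
    return surface.sum() / planar
```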
SCIFIO: an extensible framework to support scientific image formats.
Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W
2016-12-07
No gold standard exists in the world of scientific image acquisition; a proliferation of instruments, each with its own proprietary data format, has made out-of-the-box sharing of that data nearly impossible. In the field of light microscopy, the Bio-Formats library was designed to translate such proprietary data formats to a common, open-source schema, enabling sharing and reproduction of scientific results. While Bio-Formats has proved successful for microscopy images, the greater scientific community has lacked a domain-independent framework for format translation. SCIFIO (SCientific Image Format Input and Output) is presented as a freely available, open-source library unifying the mechanisms of reading and writing image data. The core of SCIFIO is its modular definition of formats, the design of which clearly outlines the components of image I/O to encourage extensibility, facilitated by the dynamic discovery of the SciJava plugin framework. SCIFIO is structured to support the coexistence of multiple domain-specific open exchange formats, such as Bio-Formats' OME-TIFF, within a unified environment. SCIFIO is a freely available software library developed to standardize the process of reading and writing scientific image formats.
Vaughan, R. Greg; Heasler, Henry; Jaworowski, Cheryl; Lowenstern, Jacob B.; Keszthelyi, Laszlo P.
2014-01-01
Maps that define the current distribution of geothermally heated ground are useful for setting a baseline for thermal activity, against which future anomalous hydrothermal and (or) volcanic activity can be better detected and understood. Monitoring changes in these dynamic thermal areas also supports decisions regarding the development of Yellowstone National Park infrastructure, the preservation and protection of park resources, and visitor safety. Because of the challenges associated with field-based monitoring of a large, complex geothermal system that is spread out over a large and remote area, satellite-based thermal infrared images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) were used to map the location and spatial extent of active thermal areas, to generate thermal anomaly maps, and to quantify the radiative component of the total geothermal heat flux. ASTER thermal infrared data acquired during winter nights were used to minimize the contribution of solar heating of the surface. The ASTER thermal infrared mapping results were compared to maps of thermal areas based on field investigations and high-resolution aerial photos. Field validation of the ASTER thermal mapping is an ongoing task. The purpose of this report is to make available ASTER-based maps of Yellowstone's thermal areas. We include an appendix containing the names and characteristics of Yellowstone's thermal areas, georeferenced TIFF files containing ASTER thermal imagery, and several spatial data sets in Esri shapefile format.
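The radiative component of geothermal heat flux can be estimated from surface kinetic temperature with the Stefan-Boltzmann law. A minimal sketch (the emissivity value and the simple background subtraction are assumptions, not the report's exact method):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_heat_flux(t_surface_k, t_background_k, emissivity=0.95):
    """Radiative flux (W/m^2) above the ambient background."""
    return emissivity * SIGMA * (t_surface_k**4 - t_background_k**4)

# e.g. a 30 C thermal area against a -10 C winter-night background
flux = radiative_heat_flux(np.float64(303.15), np.float64(263.15))
```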
Sliter, Ray W.; Triezenberg, Peter J.; Hart, Patrick E.; Draut, Amy E.; Normark, William R.; Conrad, James E.
2008-01-01
The U.S. Geological Survey (USGS) collected high-resolution shallow seismic-reflection data in September, 2007, and June-July, 2008, from the continental shelf offshore of southern California between Gaviota and Mugu Canyon, in support of the California's State Waters Mapping Program. Data were acquired using SIG 2mille mini-sparker and Edgetech chirp 512 instruments aboard the R/V Zephyr (Sept. 2007) and R/V Parke Snavely (June-July 2008). The survey area spanned approximately 120 km of coastline, and included shore-perpendicular transects spaced 1.0-1.5 km apart that extended offshore to at least the 3-mile limit of State waters, in water depths ranging from 10 m near shore to 300 m near the offshore extent of Mugu and Hueneme submarine canyons. Subbottom acoustic penetration spanned tens to several hundred meters, variable by location. This report includes maps of the surveyed transects, linked to Google Earth software, as well as digital data files showing images of each transect in SEG-Y, JPEG, and TIFF formats. The images of sediment deposits, tectonic structure, and natural-gas seeps collected during this study provide geologic information that is essential to coastal zone and resource management at Federal, State and local levels, as well as to future research on the sedimentary, tectonic, and climatic record of southern California.
NASA Astrophysics Data System (ADS)
Baldwin, R.; Ansari, S.; Reid, G.; Lott, N.; Del Greco, S.
2007-12-01
The main goal in developing and deploying Geographic Information System (GIS) services at NOAA's National Climatic Data Center (NCDC) is to provide users with simple access to data archives while integrating new and informative climate products. Several systems at NCDC provide a variety of climatic data in GIS formats and/or map viewers. The Online GIS Map Services provide users with data discovery options which flow into detailed product selection maps, which may be queried using standard "region finder" tools or gazetteer (geographical dictionary search) functions. Each tabbed selection offers steps to help users progress through the systems. A series of additional base map layers or data types have been added to provide companion information. New map services include: Severe Weather Data Inventory, Local Climatological Data, Divisional Data, Global Summary of the Day, and Normals/Extremes products. THREDDS Data Server technology is utilized to provide access to gridded multidimensional datasets such as Model, Satellite and Radar. This access allows users to download data as a gridded NetCDF file, which is readable by ArcGIS. In addition, users may subset the data for a specific geographic region, time period, height range or variable prior to download. The NCDC Weather Radar Toolkit (WRT) is a client tool which accesses Weather Surveillance Radar 1988 Doppler (WSR-88D) data locally or remotely from the NCDC archive, NOAA FTP server or any URL or THREDDS Data Server. The WRT Viewer provides tools for custom data overlays, Web Map Service backgrounds, animations and basic filtering. The export of images and movies is provided in multiple formats. The WRT Data Exporter allows for data export in both vector polygon (Shapefile, Well-Known Text) and raster (GeoTIFF, ESRI Grid, VTK, NetCDF, GrADS) formats. As more users become accustomed to GIS, questions of better, cheaper, faster access soon follow. Expanding use and availability can best be accomplished through standards which promote interoperability. Our GIS related products provide Open Geospatial Consortium (OGC) compliant Web Map Services (WMS), Web Feature Services (WFS), Web Coverage Services (WCS) and Federal Geographic Data Committee (FGDC) metadata as a complement to the map viewers. KML/KMZ data files (soon to become OGC-compliant specifications) also provide access.
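Subsetting a gridded NetCDF file of the kind served through THREDDS can be sketched with the netCDF4 Python library (the file name, variable name, and index ranges are hypothetical):

```python
from netCDF4 import Dataset

# hypothetical file and variable names, for illustration only
with Dataset("precip_model.nc") as nc:
    precip = nc.variables["precip"]          # dims: (time, lat, lon)
    # subset: first 24 time steps over an index-defined region
    sub = precip[0:24, 100:200, 300:400]
    lat = nc.variables["lat"][100:200]
    lon = nc.variables["lon"][300:400]
```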
NASA Astrophysics Data System (ADS)
Barache, C.; Bouquillon, S.; Carlucci, T.; Taris, F.; Michel, L.; Altmann, M.
2013-10-01
The Ground Based Optical Tracking (GBOT) group is part of the Data Processing and Analysis Consortium (DPAC), the large consortium of over 400 scientists from many European countries charged with the scientific data processing of the Gaia mission for ESA. The GBOT group is in charge of the optical part of the tracking of the Gaia satellite. This optical tracking is necessary to allow the Gaia mission to fully reach its goal in terms of astrometric precision. These observations will be made daily, during the 5 years of the mission, using optical CCD frames taken by a small network of 1-2 m class telescopes located all over the world. The required accuracy of the satellite position determination, with respect to the stars in the field of view, is 20 mas. These optical satellite positions will be sent weekly by GBOT to the SOC at ESAC and used together with other kinds of observations (radio ranging and Doppler) by the MOC at ESOC to improve the Gaia ephemeris. For this purpose, we developed a set of accurate astrometric reduction programs specially adapted for tracking moving objects. The inputs of these programs for each tracked target are an ephemeris and a set of FITS images. The outputs for each image are: a file containing all information about the detected objects, a catalogue file used for calibration, a TIFF file for visual explanation of the reduction result, and an improved FITS image header. The final result is an overview file containing only the data related to the target, extracted from all the images. These programs are written in GNU Fortran 95 and provide results in VOTable format (supported by Virtual Observatory protocols). All these results are sent automatically to the GBOT database, which is built with the SAADA freeware. The user of this database can archive and query the data but also, thanks to the delegate option provided by SAADA, select a set of images and directly run the GBOT reduction programs through a dedicated web interface. For more information about SAADA (an automatic system for astronomy data archives, under GPL license and VO-compatible), see the related paper Michel et al. (2013).
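Reading a FITS frame and producing the kind of quick-look TIFF described above can be sketched with astropy and Pillow (the file names and the simple percentile stretch are assumptions; the GBOT pipeline itself is written in Fortran 95):

```python
import numpy as np
from astropy.io import fits
from PIL import Image

with fits.open("field.fits") as hdul:        # hypothetical input frame
    data = hdul[0].data.astype(np.float64)
    hdul[0].header["HISTORY"] = "astrometric reduction quick-look"

# simple percentile stretch to 8-bit for a visual-inspection TIFF
lo, hi = np.percentile(data, (1.0, 99.0))
scaled = np.clip((data - lo) / (hi - lo), 0, 1)
Image.fromarray((scaled * 255).astype(np.uint8)).save("field_overview.tiff")
```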
Access to Land Data Products Through the Land Processes DAAC
NASA Astrophysics Data System (ADS)
Klaassen, A. L.; Gacke, C. K.
2004-12-01
The Land Processes Distributed Active Archive Center (LP DAAC) was established as part of NASA's Earth Observing System (EOS) Data and Information System (EOSDIS) initiative to process, archive, and distribute land-related data collected by EOS sensors, thereby promoting the inter-disciplinary study and understanding of the integrated Earth system. The LP DAAC is responsible for archiving, product development, distribution, and user support for Moderate Resolution Imaging Spectroradiometer (MODIS) land products derived from data acquired by the Terra and Aqua satellites, and for processing and distribution of Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data products. These data are applied in scientific research, management of natural resources, emergency response to natural disasters, and Earth science education. There are several web interfaces by which the inventory may be searched and the products ordered. The LP DAAC web site (http://lpdaac.usgs.gov/) provides product-specific information and links to data access tools. The primary search and order tool is the EOS Data Gateway (EDG) (http://edcimswww.cr.usgs.gov/pub/imswelcome/), which allows users to search data holdings, retrieve descriptions of data sets, view browse images, and place orders. The EDG is the only tool that searches the entire inventory of ASTER and MODIS products available from the LP DAAC. The Data Pool (http://lpdaac.usgs.gov/datapool/datapool.asp) is an online archive that provides immediate FTP access to selected LP DAAC data products. The data can be downloaded by going directly to the FTP site, where users can navigate to the desired granule, metadata file, or browse image. It includes the ability to convert files from the standard HDF-EOS data format into GeoTIFF, to change the data projections, or to perform spatial subsetting, using the HDF-EOS to GeoTIFF Converter (HEG) for selected data types. The Browse Tool, also known as the USGS Global Visualization Viewer (http://lpdaac.usgs.gov/aster/glovis.asp), provides an easy online method to search, browse, and order the LP DAAC ASTER and MODIS land data by viewing browse images to define spatial and temporal queries. The LP DAAC User Services Office provides support for the ASTER and MODIS data products and services. The user services representatives are available to answer questions, assist with ordering data, provide technical support and referrals, and provide information on a variety of tools available to assist in data preparation. The LP DAAC User Services contact information is: LP DAAC User Services, U.S. Geological Survey EROS Data Center, 47914 252nd Street, Sioux Falls, SD 57198-0001; Voice: (605) 594-6116; Toll Free: 866-573-3222; Fax: 605-594-6963; E-mail: edc@eos.nasa.gov. "This abstract was prepared under Contract number 03CRCN0001 between SAIC and U.S. Geological Survey. Abstract has not been reviewed for conformity with USGS editorial standards and has been submitted for approval by the USGS Director."
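A conversion of the kind HEG performs, from an HDF-EOS granule to GeoTIFF, can be sketched with the GDAL Python bindings (the granule name is hypothetical; HDF-EOS layers are exposed as GDAL subdatasets):

```python
from osgeo import gdal

src = gdal.Open("MOD13Q1.hdf")              # hypothetical HDF-EOS granule
# HDF-EOS stores layers as subdatasets; pick the first one to convert
subdataset_name, _description = src.GetSubDatasets()[0]
gdal.Translate("layer.tif", subdataset_name, format="GTiff")
```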
Washington Geothermal Play Fairway Analysis Heat, Permeability, and Fracture Model Data
Steely, Alex; Forson, Corina; Cladouhos, Trenton; Swyer, Mike; Davatzes, Nicholas; Anderson, Megan; Ritzinger, Brent; Glen, Jonathan; Peacock, Jared; Schermerhorn, William
2017-12-07
This submission contains raster and vector data for the entire state of Washington, with specific emphasis on the three geothermal play fairway sites: the Mount St. Helens seismic zone (MSHSZ), Wind River valley (WRV), and Mount Baker (MB). Data are provided for 3 major geothermal models: heat, permeability, and fluid-filled fractures, plus an additional infrastructure model. Both the permeability and fluid-filled-fracture models are produced at 200 m and at 2 km depths; the heat model is produced only at the 200 m depth. Values are provided for both model favorability and model confidence. A combined model at 200 m and 2 km depths is provided for favorability, confidence, and exploration risk. Raster data are provided in GeoTIFF format and have statewide coverage. Cell size is 104.355 ft; the file type is unsigned 8-bit integer (0-255), where 0 represents no favorability or confidence and 255 represents maximum favorability or confidence. The NAD83(HARN)/Washington South (ftUS) projection is used (EPSG:2927). Vector data are provided in shapefile or comma-delimited text file formats. Geographic coordinates, where provided, are in WGS84. A readme file accompanies each folder and provides an overview and description of the enclosed data. The heat model combines 5 intermediate raster layers (which are included in the download package): temperature-gradient wells, young volcanic vents, hot springs, young intrusive volcanic rocks, and geothermometry. The permeability model combines 8 intermediate raster layers: density of mapped faults, 2D dilation tendency of mapped faults, 2D slip tendency of mapped faults, seismicity, 3D dilation tendency, 3D slip tendency, 3D maximum Coulomb shear stress, and 3D slip gradients. The fluid-filled-fracture model combines up to 4 intermediate rasters: resistivity from magnetotelluric 3D inversions, seismicity, Vp/Vs anomalies from passive seismic tomography, and Vs anomalies from ambient-noise tomography. A statewide infrastructure model is also provided that formalizes land-use constraints and restrictions relevant for geothermal prospecting and development. This model combines 10 intermediate rasters: areas off limits to drilling, existing or proposed geothermal leases, DNR-owned land, land-use restrictions along the Columbia River Gorge, areas inundated by water, availability of potential process water, proximity to existing roads, proximity to transmission lines, distance from urban areas, and snow-related elevation restrictions. Supporting vector data for the development of each raster layer are provided. For details on the areas of interest and the modeling process please see the 'WA_State_Play_Fairway_Phase_2_Technical_Report' in the download package.
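Combining normalized evidence layers into a 0-255 favorability raster can be sketched as a weighted sum (the equal weights and layer scaling are assumptions, not the study's actual weighting scheme):

```python
import numpy as np

def favorability(layers, weights=None):
    """Weighted sum of normalized (0-1) evidence rasters -> uint8 0-255.

    layers: list of 2-D float arrays scaled to [0, 1]; 0 = no favorability.
    """
    stack = np.stack(layers)
    if weights is None:
        weights = np.ones(len(layers)) / len(layers)
    combined = np.tensordot(np.asarray(weights), stack, axes=1)
    return np.clip(combined * 255, 0, 255).astype(np.uint8)
```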
Visualization of small scale structures on high resolution DEMs
NASA Astrophysics Data System (ADS)
Kokalj, Žiga; Zakšek, Klemen; Pehani, Peter; Čotar, Klemen; Oštir, Krištof
2015-04-01
Knowledge of terrain morphology is very important for the observation of numerous processes and events, and digital elevation models are therefore one of the most important datasets in geographic analyses. Furthermore, recognition of natural and anthropogenic microrelief structures, which can be observed on detailed terrain models derived from aerial laser scanning (lidar) or structure-from-motion photogrammetry, is of paramount importance in many applications. In this paper we therefore examine and evaluate methods of raster lidar data visualization for the recognition of microrelief features, and present a series of strategies to assist in selecting the preferred visualization for structures of various shapes and sizes, set in varied landscapes. Often the answer is not definite, and frequently a combination of techniques has to be used to map a very diverse landscape. Only recently have researchers been able to benefit from free software for the calculation of advanced visualization techniques. These tools are often difficult to understand, have numerous options that confuse the user, or require and produce non-standard data formats, because they were written for specific purposes. We therefore designed the Relief Visualization Toolbox (RVT) as a free, easy-to-use, standalone application to create visualizations from high-resolution digital elevation data. It is tailored for beginners in relief interpretation, but it can also be used by more advanced users in data processing and geographic information systems. It offers a range of techniques, such as simple hillshading and its derivatives, slope gradient, trend removal, positive and negative openness, sky-view factor, and anisotropic sky-view factor. All included methods have been proven effective for the detection of small-scale features, and the default settings are optimized to accomplish this task. However, the usability of the tool goes beyond computation for visualization purposes, as the sky-view factor, for example, is an essential variable in many fields, e.g. in meteorology. RVT produces two types of results: 1) the original files have a full range of values and are intended for further analyses in geographic information systems; 2) the simplified versions are histogram-stretched for visualization purposes and saved as 8-bit GeoTIFF files. This means that they can be explored in non-GIS software, e.g. with simple picture viewers, which is essential when a larger community of non-specialists needs to be considered, e.g. in public collaborative projects. The tool recognizes all frequently used single-band raster formats and supports elevation raster file data conversion.
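Simple hillshading of the kind RVT offers follows a standard Lambertian formula; a minimal numpy sketch (the sun azimuth and altitude defaults are common conventions, not necessarily RVT's settings):

```python
import numpy as np

def hillshade(dem, cell=1.0, azimuth=315.0, altitude=45.0):
    """Standard Lambertian hillshade of a DEM, values in [0, 1]."""
    az = np.radians(azimuth)
    alt = np.radians(altitude)
    dzdy, dzdx = np.gradient(dem, cell)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1)
```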
MATISSE a web-based tool to access, visualize and analyze high resolution minor bodies observation
NASA Astrophysics Data System (ADS)
Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo
2016-07-01
In recent years, planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids, and comets) at a level of detail never reached before. Since these objects often present very irregular shapes (as in the case of comet 67P Churyumov-Gerasimenko, target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to understand. With the aim of providing the scientific community a tool to access, visualize, and analyze data in a new way, the ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation over the shape model of the target body or the visualization of a higher-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI formats) and very high-resolution 3D files to be viewed with the free software ParaView. So far, the first and most frequent use of the tool has been the visualization of data acquired by the VIRTIS-M instrument onboard Rosetta observing comet 67P. The success of this task, well represented by the good number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future the datasets available to MATISSE are planned to be extended, starting from the addition of VIR-Dawn observations of both Vesta and Ceres, and also using standard protocols to access data stored in external repositories, such as NASA ODE and the Planetary VO.
Orbiter/External Tank Mate 3-D Solid Modeling
NASA Technical Reports Server (NTRS)
Godfrey, G. S.; Brandt, B.; Rorden, D.; Kapr, F.
2004-01-01
This research and development project presents an overview of the work completed while attending a summer 2004 American Society for Engineering Education/National Aeronautics and Space Administration (ASEE/NASA) Faculty Fellowship. This fellowship was completed at the Kennedy Space Center, Florida. The scope of the project was to complete parts, assemblies, and drawings that could be used by Ground Support Equipment (GSE) personnel to simulate situations and scenarios commonplace to the space shuttle Orbiter/External Tank (ET) Mate (50004). This mate takes place in the Vehicle Assembly Building (VAB). These simulations could then be used by NASA engineers as decision-making tools. During the summer of 2004, parts were created that defined the Orbiter/ET structural interfaces. Emphasis was placed upon assemblies that included the Orbiter/ET forward attachment (EO-1), aft left thrust strut (EO-2), aft right tripod support structure (EO-3), and crossbeam and aft feedline/umbilical supports. These assemblies are used to attach the Orbiter to the ET. The Orbiter/ET Mate assembly was then used to compare and analyze clearance distances using different Orbiter hang angles. It was found that a 30-minute arc change in Orbiter hang angle affected the distance between the bipod strut and the Orbiter yoke fitting by 8.11 inches. A 3-D solid model library was established as a result of this project. This library contains parts, assemblies, and drawings translated into several formats, including: stl for stereolithography, stp for neutral file work, shrinkwrap for compression, tiff for Photoshop work, jpeg for Internet use, and prt and asm for Pro/Engineer use. This library was made available to NASA engineers so that they could access its contents to make angle, load, and clearance analysis studies. These decision-making tools may be used by Pro/Engineer users and non-users.
Jackson, Charlene R; Fedorka-Cray, Paula J; Wineland, Nora; Tankson, Jeanetta D; Barrett, John B; Douris, Aphrodite; Gresham, Cheryl P; Jackson-Hall, Carolina; McGlinchey, Beth M; Price, Maria Victoria
2007-01-01
In 2003 the United States Department of Agriculture established USDA VetNet. It was modeled after PulseNet USA, the national molecular subtyping network for foodborne disease surveillance. The objectives of USDA VetNet are: to use pulsed-field gel electrophoresis (PFGE) to subtype zoonotic pathogens submitted to the animal arm of the National Antimicrobial Resistance Monitoring System (NARMS); to examine VetNet and PulseNet PFGE patterns; and to use the data for surveillance and investigation of suspected foodborne illness outbreaks. Whereas PulseNet subtypes 7 foodborne disease-causing bacteria (Escherichia coli O157:H7, Salmonella, Shigella, Listeria monocytogenes, Campylobacter, Yersinia pestis, and Vibrio cholerae), VetNet at present subtypes nontyphoidal Salmonella serotypes and Campylobacter from animals, including diagnostic specimens, healthy farm animals, and carcasses of food-producing animals at slaughter. By the end of 2005, VetNet had two functioning databases: the NARMS Salmonella and the NARMS Campylobacter databases. The Salmonella database contained 6763 Salmonella isolates and 2514 unique XbaI patterns, while the Campylobacter database contained 58 Campylobacter isolates and 53 unique SmaI patterns. Both databases contain the PFGE tagged image file format (TIFF) images, demographic information, and the antimicrobial resistance profiles assigned by NARMS. In the future, veterinary diagnostic laboratories will be invited to participate in VetNet. The establishment of USDA VetNet enhances the mission of the agriculture and public health communities in the surveillance and investigation of foodborne illness outbreaks.
Khushi, Matloob; Edwards, Georgina; de Marcos, Diego Alonso; Carpenter, Jane E; Graham, J Dinny; Clarke, Christine L
2013-02-12
Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934.
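Splitting a large slide image into smaller TIFF tiles and skipping empty ones, in the spirit of NDPI-Splitter, can be sketched with Pillow (the tile size and the blank-tile test are assumptions; NDPI-Splitter itself is a Java tool):

```python
from PIL import Image, ImageStat

Image.MAX_IMAGE_PIXELS = None  # allow very large slide images

def split_tiff(path, tile=2048, min_stddev=2.0):
    """Crop `path` into tile x tile TIFFs, skipping near-uniform tiles."""
    img = Image.open(path)
    w, h = img.size
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            box = img.crop((x, y, min(x + tile, w), min(y + tile, h)))
            # near-constant tiles are treated as empty background
            if max(ImageStat.Stat(box.convert("L")).stddev) < min_stddev:
                continue
            box.save(f"tile_{x}_{y}.tiff")
```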
NASA Astrophysics Data System (ADS)
Reilly, B. T.; Stoner, J. S.; Wiest, J.
2017-08-01
Computed tomography (CT) of sediment cores allows for high-resolution images, three-dimensional volumes, and down core profiles. These quantitative data are generated through the attenuation of X-rays, which are sensitive to sediment density and atomic number, and are stored in pixels as relative gray scale values or Hounsfield units (HU). We present a suite of MATLAB™ tools specifically designed for routine sediment core analysis as a means to standardize and better quantify the products of CT data collected on medical CT scanners. SedCT uses a graphical interface to process Digital Imaging and Communications in Medicine (DICOM) files, stitch overlapping scanned intervals, and create down core HU profiles in a manner robust to normal coring imperfections. Utilizing a random sampling technique, SedCT reduces data size and allows for quick processing on typical laptop computers. SedCTimage uses a graphical interface to create quality tiff files of CT slices that are scaled to a user-defined HU range, preserving the quantitative nature of CT images and easily allowing for comparison between sediment cores with different HU means and variance. These tools are presented along with examples from lacustrine and marine sediment cores to highlight the robustness and quantitative nature of this method.
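Scaling CT values to a fixed Hounsfield window before writing an 8-bit TIFF, as SedCTimage does, can be sketched in a few lines (the window bounds are assumptions; SedCT itself is a MATLAB tool):

```python
import numpy as np
from PIL import Image

def hu_to_tiff(hu_slice, out_path, hu_min=900.0, hu_max=2200.0):
    """Linearly map a HU slice into [0, 255] and save as 8-bit TIFF."""
    scaled = (hu_slice - hu_min) / (hu_max - hu_min)
    img = (np.clip(scaled, 0, 1) * 255).astype(np.uint8)
    Image.fromarray(img).save(out_path)
```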
The survey on data format of Earth observation satellite data at JAXA.
NASA Astrophysics Data System (ADS)
Matsunaga, M.; Ikehata, Y.
2017-12-01
JAXA's Earth observation satellite data are distributed through a portal web site for search and delivery called "G-Portal". Users can download satellite data from GPM, TRMM, Aqua, ADEOS-II, ALOS (search only), ALOS-2 (search only), MOS-1, MOS-1b, ERS-1 and JERS-1 via G-Portal. However, the data formats differ from satellite to satellite (HDF4, HDF5, NetCDF4, CEOS, etc.), and these formats are not familiar to new data users. Although HDF-type self-describing formats are very convenient and useful for large datasets, the older-type format products are not readable by open GIS tools and do not follow OGC standards. Recently, satellite data have been widely applied to various needs such as disaster response, earth resources, global environmental monitoring, Geographic Information Systems (GIS), and so on. In order to remove barriers to using Earth observation satellite data for new community users, JAXA has been providing format-converted products such as GeoTIFF and KMZ. In addition, JAXA provides the format conversion tool itself. We investigate trends in data formats for data archiving, data dissemination and data utilization, then study how to improve the current product formats for users in various application fields and make recommendations for new products.
76 FR 43679 - Filing via the Internet; Notice of Additional File Formats for efiling
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-21
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RM07-16-000] Filing via the Internet; Notice of Additional File Formats for efiling Take notice that the Commission has added to its list of acceptable file formats the four-character file extensions for Microsoft Office 2007/2010...
Lava and Snow on Klyuchevskaya Volcano [high res
2013-09-20
This false-color (shortwave infrared, near infrared, green) satellite image reveals an active lava flow on the western slopes of Klyuchevskaya Volcano. Klyuchevskaya is one of several active volcanoes on the Kamchatka Peninsula in far eastern Russia. The lava flow itself is bright red. Snow on Klyuchevskaya and nearby mountains is cyan, while bare ground and volcanic debris is gray or brown. Vegetation is green. The image was collected by the Operational Land Imager (OLI) on Landsat 8 on September 9, 2013. NASA Earth Observatory image by Jesse Allen and Robert Simmon. More info: 1.usa.gov/1evspH7
Enhancements to TauDEM to support Rapid Watershed Delineation Services
NASA Astrophysics Data System (ADS)
Sazib, N. S.; Tarboton, D. G.
2015-12-01
Watersheds are widely recognized as the basic functional unit for water resources management studies and are important for a variety of problems in hydrology, ecology, and geomorphology. Nevertheless, delineating a watershed spread across a large region is still cumbersome due to the processing burden of working with large Digital Elevation Models. Terrain Analysis Using Digital Elevation Models (TauDEM) software supports the delineation of watersheds and stream networks from within desktop Geographic Information Systems, and a rich set of watershed and stream network attributes are computed. However, limitations of the TauDEM desktop tools are that (1) they support only one raster data format (TIFF), (2) they require installation of software for parallel processing, and (3) data have to be in a projected coordinate system. This paper presents enhancements to TauDEM that have been developed to extend its generality and to support web-based watershed delineation services. The enhancements include (1) reading and writing raster data with the open-source Geospatial Data Abstraction Library (GDAL), no longer limited to the TIFF format, and (2) support for both geographic and projected coordinates. To support web services for rapid watershed delineation, a procedure has been developed for subsetting the domain based on sub-catchments, with preprocessed data prepared and stored for each catchment. This allows the watershed delineation to function locally, while extending to the full extent of watersheds using preprocessed information. Additional capabilities of this program include computation of average watershed properties and geomorphic and channel network variables such as drainage density, shape factor, relief ratio, and stream order. The updated version of TauDEM increases its practical applicability in terms of raster data type, size, and coordinate system. The watershed delineation web service functionality is useful for web-based software-as-a-service deployments that alleviate the need for users to install and work with desktop GIS software.
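Reading a raster through GDAL rather than a single hard-coded format, as in the enhancement described above, can be sketched as follows (the file name is hypothetical; GDAL selects the appropriate driver automatically):

```python
from osgeo import gdal

ds = gdal.Open("dem.img")                   # any GDAL-supported format
band = ds.GetRasterBand(1)
elev = band.ReadAsArray()                   # numpy array of elevations
geotransform = ds.GetGeoTransform()         # origin and cell size
print(ds.GetDriver().ShortName, elev.shape, geotransform)
```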
Chao, Tian-Jy; Kim, Younghun
2015-02-03
Automatically translating a building architecture file format (Industry Foundation Class) to a simulation file, in one aspect, may extract data and metadata used by a target simulation tool from a building architecture file. Interoperability data objects may be created and the extracted data is stored in the interoperability data objects. A model translation procedure may be prepared to identify a mapping from a Model View Definition to a translation and transformation function. The extracted data may be transformed using the data stored in the interoperability data objects, an input Model View Definition template, and the translation and transformation function to convert the extracted data to correct geometric values needed for a target simulation file format used by the target simulation tool. The simulation file in the target simulation file format may be generated.
Datacube Interoperability, Encoding Independence, and Analytics
NASA Astrophysics Data System (ADS)
Baumann, Peter; Hirschorn, Eric; Maso, Joan
2017-04-01
Datacubes are commonly accepted as an enabling paradigm which provides a handy abstraction for accessing and analyzing the zillions of image files delivered by the manifold satellite instruments and climate simulations, among others. Additionally, the datacube is the classic model for statistical and OLAP applications, so a further information category can be integrated. From a standards perspective, spatio-temporal datacubes naturally fall under the concept of coverages, which encompasses regular and irregular grids, point clouds, and general meshes - or, more abstractly, digital representations of spatio-temporally varying phenomena. ISO 19123, which is identical to OGC Abstract Topic 6, gives a high-level abstract definition which is complemented by the OGC Coverage Implementation Schema (CIS), an interoperable, yet format-independent, concretization of the abstract model. Currently, ISO is working on adopting OGC CIS as ISO 19123-2; the existing ISO 19123 standard is under revision by one of the authors and will become ISO 19123-1. The roadmap agreed by ISO further foresees adoption of the OGC Web Coverage Service (WCS) as an ISO standard so that a complete data and service model will exist. In 2016, INSPIRE adopted WCS as its Coverage Download Service, including the datacube analytics language Web Coverage Processing Service (WCPS). The rasdaman technology (www.rasdaman.org) is both the OGC and the INSPIRE Reference Implementation. In the global EarthServer initiative, rasdaman database sizes are exceeding 250 TB today, heading for the Petabyte frontier in 2017. Technically, CIS defines a compact, efficient model for representing multi-dimensional datacubes in several ways. The classical coverage cube defines a domain set (where are the values?), a range set (what are these values?), and a range type (what do the values mean?), as well as a "bag" for arbitrary metadata. With CIS 1.1, coordinate/value pair sequences have been added, as well as tiled representations. Further, CIS 1.1 offers a unified model for any kind of regular and irregular grids, also allowing sensor models as per SensorML. Encodings include ASCII formats like GML, JSON, and RDF as well as binary formats like GeoTIFF, NetCDF, JPEG2000, and GRIB2; further, a container concept allows mixed representations within one coverage file utilizing zip or other convenient package formats. Through the tight integration with Sensor Web Enablement (SWE), a lossless "transport" from the sensor world into the coverage world is ensured. The corresponding service model of WCS supports datacube operations ranging from simple data extraction to complex ad-hoc analytics with WCPS. Notably, W3C has set out on a coverage model as well; it has been designed relatively independently from the abovementioned standards, but there is informal agreement to link it into the CIS universe (which allows for different, yet interchangeable, representations). Particularly interesting in the W3C proposal is the detailed semantic modeling of metadata; as CIS 1.1 supports RDF, a tight coupling seems feasible.
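A WCPS request of the kind described above can be issued over HTTP; a minimal sketch (the endpoint URL and coverage name are hypothetical, and the KVP parameter names follow common rasdaman usage):

```python
import requests

# hypothetical rasdaman/WCPS endpoint and coverage name
endpoint = "https://example.org/rasdaman/ows"
query = """
for c in (AvgLandTemp)
return encode(c[ansi("2014-07")], "image/tiff")
"""
resp = requests.post(endpoint, data={"service": "WCS", "version": "2.0.1",
                                     "request": "ProcessCoverages",
                                     "query": query})
with open("july2014.tiff", "wb") as f:
    f.write(resp.content)
```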
The Snow Data System at NASA JPL
NASA Astrophysics Data System (ADS)
Horn, J.; Painter, T. H.; Bormann, K. J.; Rittger, K.; Brodzik, M. J.; Skiles, M.; Burgess, A. B.; Mattmann, C. A.; Ramirez, P.; Joyce, M.; Goodale, C. E.; McGibbney, L. J.; Zimdars, P.; Yaghoobi, R.
2017-12-01
The Snow Data System at NASA JPL includes data processing pipelines built with open source software, Apache 'Object Oriented Data Technology' (OODT). Processing is carried out in parallel across a high-powered computing cluster. The pipelines use input data from satellites such as MODIS, VIIRS and Landsat. They apply algorithms to the input data to produce a variety of outputs in GeoTIFF format. These outputs include daily data for SCAG (Snow Cover And Grain size) and DRFS (Dust Radiative Forcing in Snow), along with 8-day composites and MODICE annual minimum snow and ice calculations. This poster will describe the Snow Data System, its outputs and their uses and applications. It will also highlight recent advancements to the system and plans for the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thoreson, Gregory G
PCF files are binary files designed to contain gamma spectra and neutron count rates from radiation sensors. PCF is the native format for the GAmma Detector Response and Analysis Software (GADRAS) package [1]. A file can contain multiple spectra along with information about each spectrum, such as its energy calibration. This document outlines the file format in sufficient detail to allow one to write a computer program to parse and write such files.
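Parsing a binary spectral file of this general kind typically follows a fixed record layout. A purely hypothetical sketch with the struct module; the field layout below is invented for illustration and is not the actual PCF layout, which is defined in the report:

```python
import struct

def read_spectrum(f):
    """Read one hypothetical fixed-layout spectrum record.

    Invented layout: 4-byte int channel count, two 4-byte floats for
    energy calibration (offset, gain), then count x 4-byte float counts.
    """
    header = f.read(12)
    if len(header) < 12:
        return None                       # end of file
    n_channels, cal_offset, cal_gain = struct.unpack("<iff", header)
    counts = struct.unpack(f"<{n_channels}f", f.read(4 * n_channels))
    return {"calibration": (cal_offset, cal_gain), "counts": counts}
```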
Data files from the Grays Harbor Sediment Transport Experiment Spring 2001
Landerman, Laura A.; Sherwood, Christopher R.; Gelfenbaum, Guy; Lacy, Jessica; Ruggiero, Peter; Wilson, Douglas; Chisholm, Tom; Kurrus, Keith
2005-01-01
This publication consists of two DVD-ROMs, both of which are presented here. This report describes data collected during the Spring 2001 Grays Harbor Sediment Transport Experiment, and provides additional information needed to interpret the data. Two DVDs accompany this report; both contain documentation in html format that assist the user in navigating through the data. DVD-ROM-1 contains a digital version of this report in .pdf format, raw Aquatec acoustic backscatter (ABS) data in .zip format, Sonar data files in .avi format, and coastal processes and morphology data in ASCII format. ASCII data files are provided in .zip format; bundled coastal processes ASCII files are separated by deployment and instrument; bundled morphology ASCII files are separated into monthly data collection efforts containing the beach profiles collected (or extracted from the surface map) at that time; weekly surface maps are also bundled together. DVD-ROM-2 contains a digital version of this report in .pdf format, the binary data files collected by the SonTek instrumentation, calibration files for the pressure sensors, and Matlab m-files for loading the ABS data into Matlab and cleaning-up the optical backscatter (OBS) burst time-series data.
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
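The IQ metrics named above are available in scikit-image; a minimal sketch measuring decompressed-image quality at one JPEG setting (the file names and quality value are assumptions):

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = np.asarray(Image.open("scene.tiff").convert("L"))
Image.fromarray(original).save("scene_q50.jpg", quality=50)
decoded = np.asarray(Image.open("scene_q50.jpg"))

psnr = peak_signal_noise_ratio(original, decoded)
ssim = structural_similarity(original, decoded)
print(f"PSNR={psnr:.1f} dB  SSIM={ssim:.3f}")
```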
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, Terrence D.
2017-04-01
This report specifies the electronic file format agreed upon for normalized radiological data produced by the software tool developed under this TI project. The NA-84 Technology Integration (TI) Program project (SNL17-CM-635, Normalizing Radiological Data for Analysis and Integration into Models) investigators held a teleconference on December 7, 2017, to discuss the tasks to be completed under the TI program project. During this teleconference, the TI project investigators determined that the comma-separated values (CSV) file format is the most suitable file format for the normalized radiological data that will be output from the normalizing tool developed under this TI project. The CSV file format was selected because it provides the requisite flexibility to manage different types of radiological data (i.e., activity concentration, exposure rate, dose rate) from other sources [e.g., Radiological Assessment and Monitoring System (RAMS), Aerial Measuring System (AMS), monitoring and sampling]. The CSV file format also is suitable as the format of the normalized radiological data because the normalized data can then be ingested by other software [e.g., RAMS, Visual Sampling Plan (VSP)] used by NA-84's Consequence Management Program.
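Writing heterogeneous radiological measurements to CSV is straightforward with the Python standard library; a minimal sketch (the column names are illustrative, not the project's agreed schema):

```python
import csv

# hypothetical normalized records: one row per measurement
rows = [
    {"timestamp": "2017-12-07T14:00:00Z", "lat": 35.05, "lon": -106.54,
     "quantity": "exposure_rate", "value": 1.2e-2, "units": "mR/h"},
    {"timestamp": "2017-12-07T14:05:00Z", "lat": 35.06, "lon": -106.55,
     "quantity": "dose_rate", "value": 9.8e-5, "units": "Sv/h"},
]
with open("normalized.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```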
77 FR 59692 - 2014 Diversity Immigrant Visa Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-28
... the E-DV system. The entry will not be accepted and must be resubmitted. Group or family photographs... must be in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum file size...). Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image File...
An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files
Chan, Anthony; Gropp, William; Lusk, Ewing
2008-01-01
A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the trace files at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
Carle, S.F.; Glen, J.M.; Langenheim, V.E.; Smith, R.B.; Oliver, H.W.
1990-01-01
The report presents the principal facts for gravity stations compiled for Yellowstone National Park and vicinity. The gravity data were compiled from three sources: Defense Mapping Agency, University of Utah, and U.S. Geological Survey. Part A of the report is a paper copy describing how the compilation was done and presenting the data in tabular format as well as a map; part B is a 5-1/4 inch floppy diskette containing only the data files in ASCII format. Requirements for part B: IBM PC or compatible, DOS v. 2.0 or higher. Files contained on this diskette: DOD.ISO -- File containing the principal facts of the 514 gravity stations obtained from the Defense Mapping Agency. The data are in Plouff format (see file PFTAB.TXT). UTAH.ISO -- File containing the principal facts of 153 gravity stations obtained from the University of Utah. Data are in Plouff format. USGS.ISO -- File containing the principal facts of 27 gravity stations collected by the U.S. Geological Survey in July 1987. Data are in Plouff format. PFTAB.TXT -- File containing explanation of principal fact format. ACC.TXT -- File containing explanation of accuracy codes.
NASA Astrophysics Data System (ADS)
Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.
2016-06-01
High-resolution satellite imagery brings with it a fundamental problem: the large volume of telemetry data that must be stored after the downlink operation. Moreover, the post-processing and image enhancement steps applied after an image is acquired increase file sizes even further, making the data harder to store and more time-consuming to transmit from one place to another; hence, compressing the raw data and the various levels of processed data to save additional space is a necessity for archiving stations. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. To this end, well-known open source programs supporting the relevant compression algorithms were applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 & 7 satellites, with 1.5 m GSD, which were acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and Burrows-Wheeler Transform (BWT), with the goal of observing how much each algorithm can compress the sample datasets while ensuring lossless compression.
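A minimal sketch of one experiment in this spirit, assuming the rasterio library (the study itself used standalone open-source compressors): rewrite a GeoTIFF with LZW compression and compare file sizes.

```python
# Rewrite a GeoTIFF with LZW (lossless) and report the size ratio.
# File names are hypothetical; rasterio is an illustrative choice.
import os
import rasterio

src_path, dst_path = "spot_scene.tif", "spot_scene_lzw.tif"

with rasterio.open(src_path) as src:
    profile = src.profile
    data = src.read()                 # all bands, pixels unchanged

profile.update(compress="lzw")        # lossless, spectral data preserved

with rasterio.open(dst_path, "w", **profile) as dst:
    dst.write(data)

ratio = os.path.getsize(src_path) / os.path.getsize(dst_path)
print(f"compression ratio: {ratio:.2f}")
```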
Mapping DICOM to OpenDocument format
NASA Astrophysics Data System (ADS)
Yu, Cong; Yao, Zhihong
2009-02-01
In order to enhance the readability, extensibility, and sharing of DICOM files, we previously introduced XML into the DICOM file system (SPIE Volume 5748) [1] and a multilayer tree structure into DICOM (SPIE Volume 6145) [2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), since ODF is also based on XML. As a result, the new format realizes the separation of content (including text and images) from display style. Meanwhile, since OpenDocument files take the form of a ZIP compressed archive, the new kind of DICOM file benefits from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.
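The structural pattern (XML content inside a ZIP container) can be sketched as follows, assuming pydicom; the element selection and file layout are illustrative, not the authors' actual mapping.

```python
# Toy sketch: dump DICOM metadata to XML and pack it, ODF-style, into
# a ZIP archive. Element selection and layout are invented here.
import zipfile
import xml.etree.ElementTree as ET
import pydicom

ds = pydicom.dcmread("image.dcm")            # hypothetical input file

root = ET.Element("dicom-document")
for elem in ds:
    if elem.VR not in ("OB", "OW"):          # skip bulk binary data
        child = ET.SubElement(root, "element",
                              tag=str(elem.tag), name=str(elem.name))
        child.text = str(elem.value)

with zipfile.ZipFile("image.odf-like.zip", "w",
                     compression=zipfile.ZIP_DEFLATED) as z:
    z.writestr("content.xml", ET.tostring(root, encoding="unicode"))
    z.writestr("Pictures/pixeldata.bin", ds.PixelData)  # if present
```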
18 CFR 50.3 - Applications/pre-filing; rules and format.
Code of Federal Regulations, 2010 CFR
2010-04-01
... filings must be signed in compliance with § 385.2005 of this chapter. (e) The Commission will conduct a... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Applications/pre-filing... INTERSTATE ELECTRIC TRANSMISSION FACILITIES § 50.3 Applications/pre-filing; rules and format. (a) Filings are...
Manual for Getdata Version 3.1: a FORTRAN Utility Program for Time History Data
NASA Technical Reports Server (NTRS)
Maine, Richard E.
1987-01-01
This report documents version 3.1 of the GetData computer program. GetData is a utility program for manipulating files of time history data, i.e., data giving the values of parameters as functions of time. The most fundamental capability of GetData is extracting selected signals and time segments from an input file and writing the selected data to an output file. Other capabilities include converting file formats, merging data from several input files, time skewing, interpolating to common output times, and generating calculated output signals as functions of the input signals. This report also documents the interface standards for the subroutines used by GetData to read and write the time history files. All interface to the data files is through these subroutines, keeping the main body of GetData independent of the precise details of the file formats. Different file formats can be supported by changes restricted to these subroutines. Other computer programs conforming to the interface standards can call the same subroutines to read and write files in compatible formats.
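The interface-standard idea translates naturally into other languages; below is a minimal Python sketch, with invented names, of a format-agnostic reader abstraction in the same spirit as GetData's subroutine interface.

```python
# Sketch of the interface-standard idea: the main program depends only
# on an abstract reader; each file format supplies its own class.
from abc import ABC, abstractmethod

class TimeHistoryReader(ABC):
    @abstractmethod
    def signals(self) -> list[str]:
        """Names of the signals stored in the file."""

    @abstractmethod
    def read(self, signal: str) -> list[tuple[float, float]]:
        """(time, value) samples for one signal."""

class AsciiColumnReader(TimeHistoryReader):
    """One concrete format: whitespace-separated 'time value' lines."""
    def __init__(self, path: str, name: str = "chan0"):
        self.path, self.name = path, name

    def signals(self):
        return [self.name]

    def read(self, signal):
        with open(self.path) as f:
            return [tuple(map(float, line.split()[:2])) for line in f]

def extract_segment(reader: TimeHistoryReader, signal, t0, t1):
    """Core GetData-style operation, independent of file format."""
    return [(t, v) for t, v in reader.read(signal) if t0 <= t <= t1]
```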
Arkansas and Louisiana Aeromagnetic and Gravity Maps and Data - A Website for Distribution of Data
Bankey, Viki; Daniels, David L.
2008-01-01
This report contains digital data, image files, and text files describing data formats for aeromagnetic and gravity data used to compile the State aeromagnetic and gravity maps of Arkansas and Louisiana. The digital files include grids, images, ArcInfo, and Geosoft compatible files. In some of the data folders, ASCII files with the extension 'txt' describe the format and contents of the data files. Read the 'txt' files before using the data files.
A mass spectrometry proteomics data management platform.
Sharma, Vagisha; Eng, Jimmy K; Maccoss, Michael J; Riffle, Michael
2012-09-01
Mass spectrometry-based proteomics is increasingly being used in biomedical research. These experiments typically generate a large volume of highly complex data, and the volume and complexity are only increasing with time. There exist many software pipelines for analyzing these data (each typically with its own file formats), and as technology improves, these file formats change and new formats are developed. Files produced from these myriad software programs may accumulate on hard disks or tape drives over time, with older files being rendered progressively more obsolete and unusable with each successive technical advancement and data format change. Although initiatives exist to standardize the file formats used in proteomics, they do not address the core failings of a file-based data management system: (1) files are typically poorly annotated experimentally, (2) files are "organically" distributed across laboratory file systems in an ad hoc manner, (3) file formats become obsolete, and (4) searching the data and comparing and contrasting results across separate experiments is very inefficient (if possible at all). Here we present a relational database architecture and accompanying web application dubbed Mass Spectrometry Data Platform that is designed to address the failings of the file-based mass spectrometry data management approach. The database is designed such that the output of disparate software pipelines may be imported into a core set of unified tables, with these core tables being extended to support data generated by specific pipelines. Because the data are unified, they may be queried, viewed, and compared across multiple experiments using a common web interface. Mass Spectrometry Data Platform is open source and freely available at http://code.google.com/p/msdapl/.
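A toy sketch of the "unified core tables, extended per pipeline" design, using SQLite; all table and column names are invented for illustration, and MSDaPl's actual schema is richer.

```python
# Core tables hold pipeline-agnostic records; extension tables add
# pipeline-specific scores. Names here are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE experiment (          -- core table, pipeline-agnostic
    id INTEGER PRIMARY KEY,
    description TEXT,
    uploaded DATE
);
CREATE TABLE search_result (       -- core peptide-spectrum match
    id INTEGER PRIMARY KEY,
    experiment_id INTEGER REFERENCES experiment(id),
    peptide TEXT,
    charge INTEGER
);
CREATE TABLE sequest_result (      -- pipeline-specific extension
    result_id INTEGER PRIMARY KEY REFERENCES search_result(id),
    xcorr REAL,
    delta_cn REAL
);
""")
# Cross-pipeline queries hit the core tables; pipeline-specific scores
# join in only when needed.
rows = conn.execute("""
    SELECT e.description, s.peptide, q.xcorr
    FROM experiment e
    JOIN search_result s ON s.experiment_id = e.id
    LEFT JOIN sequest_result q ON q.result_id = s.id
""").fetchall()
```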
Mass spectrometer output file format mzML.
Deutsch, Eric W
2010-01-01
Mass spectrometry is an important technique for analyzing proteins and other biomolecular compounds in biological samples. Each of the vendors of these mass spectrometers uses a different proprietary binary output file format, which has hindered data sharing and the development of open source software for downstream analysis. The solution has been to develop, with the full participation of academic researchers as well as software and hardware vendors, an open XML-based format for encoding mass spectrometer output files, and then to write software to use this format for archiving, sharing, and processing. This chapter presents the various components and information available for this format, mzML. In addition to the XML schema that defines the file structure, a controlled vocabulary provides clear terms and definitions for the spectral metadata, and a semantic validation rules mapping file allows the mzML semantic validator to ensure that an mzML document complies with one of several levels of requirements. Complete documentation and example files ensure that the format may be uniformly implemented. At the time of release, there already existed several implementations of the format and vendors have committed to supporting the format in their products.
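A minimal standard-library sketch of streaming over an mzML file follows; the namespace URI and element layout follow the published schema, but treat them as assumptions and verify against real files.

```python
# Stream an mzML file and count spectra without loading it into memory.
# Namespace/element layout per the published schema (verify locally).
import xml.etree.ElementTree as ET

NS = "{http://psi.hupo.org/ms/mzml}"

def count_spectra(path):
    n = 0
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == NS + "spectrum":
            n += 1
            elem.clear()          # free memory as we stream
    return n

print(count_spectra("run01.mzML"))  # hypothetical file
```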
Water and Streambed-Sediment Quality in the Upper Elk River Basin, Missouri and Arkansas, 2004-06
Smith, Brenda J.; Richards, Joseph M.; Schumacher, John G.
2007-01-01
The U.S. Geological Survey, in cooperation with the Missouri Department of Natural Resources, collected water and streambed-sediment samples in the Upper Elk River Basin in southwestern Missouri and northwestern Arkansas from October 2004 through December 2006. The samples were collected to determine the stream-water quality and streambed-sediment quality. In 1998, the Missouri Department of Natural Resources included a 21.5-mile river reach of the Elk River on the 303(d) list of impaired waters in Missouri as required by Section 303(d) of the Federal Clean Water Act. The Elk River is on the 303(d) list for excess nutrient loading. The total phosphorus distribution by decade indicates that concentrations since 2000 have increased significantly from those in the 1960s, 1980s, and 1990s. The nitrate as nitrogen (nitrate) concentrations in samples collected at the Elk River near Tiff City also have increased significantly, with post-1985 samples higher than pre-1985 samples. Concentrations have increased significantly since the 1960s: concentrations in the 1970s and 1980s, though similar to each other, were higher than those in the 1960s, and concentrations from the 1990s and 2000s increased still more. Nitrate concentrations increased significantly in samples that were collected during large discharges (greater than 355 cubic feet per second) from the Elk River near Tiff City. Nitrate concentrations were largest in Indian Creek. Several sources of nitrate are present in the basin, including poultry facilities in the upper part of the basin, effluent inflow from the communities of Anderson and Lanagan, land-applied animal waste, chemical fertilizer, and possibly leaking septic systems. Total phosphorus concentrations were largest in Little Sugar Creek. The median concentration of total phosphorus in samples from Little Sugar Creek near Pineville was almost four times the median concentration in samples from the Elk River near Tiff City. Median concentrations of nutrient species were greater in the stormwater samples than in the ambient samples. Nitrate concentrations in stormwater samples ranged from 133 to 179 percent of the concentrations in the ambient samples. The total phosphorus concentrations in the stormwater samples ranged from about 200 to more than 600 percent of the concentrations in the ambient samples. Base-flow conditions as reflected by the seepage run of the summer of 2006 indicate that 52 percent of the discharge at the Elk River near Tiff City is contributed by Indian Creek. Little Sugar Creek contributes 32 percent and Big Sugar Creek 9 percent of the discharge in the Elk River near Tiff City. Only about 7 percent of the discharge at Tiff City comes from the mainstem of the Elk River. Concentrations of dissolved ammonia plus organic nitrogen as nitrogen, dissolved ammonia as nitrogen, dissolved phosphorus, and dissolved orthophosphorus were detected in all streambed-sediment leachate samples. Concentrations of leachable nutrients in streambed-sediment samples generally tended to be slightly larger along the major forks of the Elk River than at tributary sites, with sites in the upper reaches of the major forks having among the largest concentrations. Concentrations of leachable nutrients in the major forks generally decreased with increasing distance downstream.
Qian, Li Jun; Zhou, Mi; Xu, Jian Rong
2008-07-01
The objective of this article is to explain an easy and effective approach for managing radiologic files in portable document format (PDF) using iTunes. PDF files are widely used as a standard file format for electronic publications as well as for medical online documents. Unfortunately, there is a lack of powerful software to manage numerous PDF documents. In this article, we explain how to use a hidden function of iTunes (Apple Computer) to manage PDF documents as easily as managing music files.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-23
... recommends not more than 32 characters). DO NOT convert Word files or Excel files into PDF format. Converting... not allow HUD to enter data from the Excel files into a database. DO NOT save your logic model in .xlsm format. If necessary save as an Excel 97-2003 .xls format. Using the .xlsm format can result in a...
Boleneus, David E.; Appelgate, Larry M.; Joseph, Nancy L.; Brandt, Theodore R.
2001-01-01
Geologic maps of the western part of the Belt Basin of western Montana and northern Idaho were converted into digital raster (TIFF image) format to facilitate their manipulation in geographic information systems. The 85-mile x 100-mile map area mostly contains rocks belonging to the lower and middle Belt Supergroup. The area is of interest as these Middle Proterozoic strata contain vein-type lead-zinc-silver deposits in the Coeur d'Alene Mining District in the St. Regis and Revett formations and strata-bound copper-silver deposits, such as the Troy mine, within the Revett Formation. The Prichard Formation is also prospective for strata-bound lead-zinc deposits because equivalent Belt strata in southern British Columbia, Canada host the Sullivan lead-zinc deposit. Map data converted to digital images include 13 geological maps at scales ranging from 1:48,000 to 1:12,000. Geologic map images produced from these maps by color scanning were registered to grid tick coverages in a Universal Transverse Mercator (North American Datum of 1927, zone 11) projection using ArcView Image Analysis. Geo-registering errors vary from 10 ft to 114 ft.
Rothkirch, André; Gatta, G Diego; Meyer, Mathias; Merkel, Sébastien; Merlini, Marco; Liermann, Hanns Peter
2013-09-01
Fast detectors employed at third-generation synchrotrons have reduced collection times significantly and require the optimization of commercial as well as customized software packages for data reduction and analysis. In this paper a procedure to collect, process and analyze single-crystal data sets collected at high pressure at the Extreme Conditions beamline (P02.2) at PETRA III, DESY, is presented. A new data image format called `Esperanto' is introduced that is supported by the commercial software package CrysAlis(Pro) (Agilent Technologies UK Ltd). The new format acts as a vehicle to transform the most common area-detector data formats via a translator software. Such a conversion tool has been developed and converts tiff data collected on a Perkin Elmer detector, as well as data collected on a MAR345/555, to be imported into the CrysAlis(Pro) software. In order to demonstrate the validity of the new approach, a complete structure refinement of boron-mullite (Al5BO9) collected at a pressure of 19.4 (2) GPa is presented. Details pertaining to the data collections and refinements of B-mullite are presented.
12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...
12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...
12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...
12 CFR 335.801 - Inapplicable SEC regulations; FDIC substituted regulations; additional information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... a continuing hardship exemption under these rules may file the forms with the FDIC in paper format... these rules may file the appropriate forms with the FDIC in paper format. Instructions for continuing...) Previously filed exhibits, whether in paper or electronic format, may be incorporated by reference into an...
NASA Astrophysics Data System (ADS)
Eberle, J.; Gerlach, R.; Hese, S.; Schmullius, C.
2012-04-01
To provide earth observation products in the area of Siberia, the Siberian Earth System Science Cluster (SIB-ESS-C) was established as a spatial data infrastructure at the University of Jena (Germany), Department for Earth Observation. This spatial data infrastructure implements standards published by the Open Geospatial Consortium (OGC) and the International Organization for Standardization (ISO) for data discovery, data access, data processing, and data analysis. The objective of SIB-ESS-C is to facilitate environmental research and Earth system science in Siberia. The project region covers the entire Asian part of the Russian Federation, approximately between 58°E - 170°W and 48°N - 80°N. To provide discovery, access, and analysis services, a web portal was published for searching and visualizing the available data. This web portal is based on current web technologies such as AJAX, with the Drupal content management system as backend software and a user-friendly interface supporting drag-and-drop and further mouse events. To offer a wide range of regularly updated earth observation products, products from the MODIS sensor aboard the Aqua and Terra satellites were processed. A direct connection to NASA archive servers makes it possible to download MODIS Level 3 and 4 products and integrate them into the SIB-ESS-C infrastructure. These data are downloaded in the Hierarchical Data Format (HDF). For visualization and further analysis, the data are reprojected, converted to GeoTIFF, and, in the case of global products, clipped to the project area. All of these steps are implemented as an automatic process chain that is executed whenever new MODIS data become available within the infrastructure; through the link to a MODIS catalogue system, the system receives new data daily. With the implemented analysis processes, time-series data can be analyzed, for example to plot a trend or to compare different time series against one another. Scientists working in this area with MODIS data can make use of this service through the web portal: instead of manually searching the NASA archive, processing the data themselves, and downloading the results, they obtain automatically processed, regularly updated products for further use.
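A sketch of the reproject/convert/clip step of such a process chain using the GDAL Python bindings follows; the granule name and subdataset choice are hypothetical, and the clip rectangle covers only the eastern-hemisphere portion of the project area, since the full extent crosses the antimeridian.

```python
# Reproject a MODIS HDF subdataset, convert it to GeoTIFF, and clip it.
# Granule name and subdataset index are hypothetical.
from osgeo import gdal

hdf = gdal.Open("MOD13Q1.A2012001.hdf")          # hypothetical granule
subdataset = hdf.GetSubDatasets()[0][0]           # first layer, e.g. NDVI

gdal.Warp(
    "mod13q1_siberia.tif",
    subdataset,
    dstSRS="EPSG:4326",                           # reproject
    format="GTiff",                               # convert to GeoTIFF
    outputBounds=(58.0, 48.0, 180.0, 80.0),       # clip (eastern part only)
)
```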
Transferable Output ASCII Data (TOAD) gateway: Version 1.0 user's guide
NASA Technical Reports Server (NTRS)
Bingel, Bradford D.
1991-01-01
The Transferable Output ASCII Data (TOAD) Gateway, release 1.0, is described. This is a software tool for converting tabular data from one format into another via the TOAD format. This initial release of the Gateway allows free data interchange among the following file formats: TOAD; Standard Interface File (SIF); Program to Optimize Simulated Trajectories (POST) input; Comma Separated Value (CSV); and a general free-form file format. As required, additional formats can be accommodated quickly and easily.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.
2016-04-01
The format of the TSUNAMI-A sensitivity data file produced by SAMS for cases with deterministic transport solutions is given in Table 6.3.A.1. The occurrence of each entry in the data file is followed by an identification of the data contained on each line of the file and the FORTRAN edit descriptor denoting the format of each line. A brief description of each line is also presented. A sample of the TSUNAMI-A data file for the Flattop-25 sample problem is provided in Figure 6.3.A.1. Here, only two profiles out of the 130 computed are shown.
NASA Technical Reports Server (NTRS)
Bingle, Bradford D.; Shea, Anne L.; Hofler, Alicia S.
1993-01-01
The Transferable Output ASCII Data (TOAD) computer program (LAR-13755) implements a format designed to facilitate the transfer of data across communication networks and dissimilar host computer systems. Any data file conforming to the TOAD format standard is called a TOAD file. The TOAD Editor is an interactive software tool for manipulating the contents of TOAD files, commonly used to extract filtered subsets of data for visualization of computation results. It also offers such user-oriented features as on-line help, clear English error messages, a startup file, user-defined macroinstructions, command history, user variables, UNDO features, and a full complement of mathematical, statistical, and conversion functions. A companion program, the TOAD Gateway (LAR-14484), converts data files from a variety of other file formats to that of TOAD. The TOAD Editor is written in FORTRAN 77.
Sanford, Jordan M.; Harrison, Arnell S.; Wiese, Dana S.; Flocks, James G.
2008-01-01
In June and August of 1992, the U.S. Geological Survey (USGS) conducted geophysical surveys to investigate the shallow geologic framework from Lake Pontchartrain, Louisiana, to Mobile Bay, Alabama. This work was conducted onboard the Argonne National Laboratory's R/V ERDA-1 as part of the Mississippi/Alabama Pollution Project. This report is part of a series to digitally archive the legacy analog data collected from the Mississippi-Alabama SHelf (MASH). The MASH data rescue project is a cooperative effort by the USGS and the Minerals Management Service (MMS). A standardized naming convention was established to allow for better management of scanned trackline images within the MASH data rescue project. Each cruise received a unique field activity ID based on the year the data were collected, the first two digits of the survey vessel name, and the number of cruises made (to date) by that vessel that year (i.e. 92ER2 represents the second cruise made by the R/V ERDA-1 in 1992.) The new field activity IDs 92ER2 and 92ER4 presented in this report were originally referred to as ERDA 92-2 and ERDA 92-4 at the USGS in St. Petersburg, FL, and 92010 and 92037 at the USGS in Woods Hole, MA. A table showing the naming convention lineage for cruise IDs in the MASH data rescue series is included as a PDF. This report serves as an archive of high resolution scanned Tagged Image File Format (TIFF) and Graphics Interchange Format (GIF) images of the original boomer paper records, navigation files, trackline maps, Geographic Information System (GIS) files, cruise logs, and formal Federal Geographic Data Committee (FGDC) metadata for cruises 92ER2 and 92ER4. The boomer system uses an acoustic energy source called a plate, which consists of capacitors charged to a high voltage and discharged through a transducer in the water. The source is towed on a sled, at sea level, and when discharged emits a short acoustic pulse, or shot, which propagates through the water and sediment column. The acoustic energy is reflected at density boundaries (such as the seafloor or sediment layers beneath the seafloor), detected by the hydrophone receiver, and the amplitude of the reflected energy is recorded by an Edward P. Curley Lab (EPC) thermal plotter. This process is repeated at timed intervals (for example, 0.5 s) and recorded for specific intervals of time (for example, 100 ms). The timed intervals are also referred to as the shot interval or fire rate. On analog records, the recorded interval is referred to as the sweep, which is the amount of time the recorder stylus takes to sweep from the top of the record to the bottom of the record, thereby recording the amplitude of the reflected energy of one shot. In this way, consecutive recorded shots produce a two-dimensional (2-D) vertical image of the shallow geologic structure beneath the ship track. Many of the geophysical data collected by the USGS prior to the late 1990s were recorded in analog format and stored as paper copies. Scientists onboard made hand-written annotations onto these records to note latitude and longitude, time, line number, course heading, and geographic points of reference. Each paper roll typically contained numerous survey lines and could reach more than 90 ft in length. All rolls are stored at the USGS FISC-St. Petersburg, FL. To preserve the integrity of these records and improve accessibility, analog holdings were converted to digital files.
78 FR 17233 - Notice of Opportunity To File Amicus Briefs
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-20
.... Any commonly-used word processing format or PDF format is acceptable; text formats are preferable to image formats. Briefs may also be filed with the Office of the Clerk of the Board, Merit Systems...
Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System
NASA Technical Reports Server (NTRS)
Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.
2008-01-01
In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
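For comparison, a sounding-like NetCDF file can also be produced directly with the netCDF4 Python library rather than via CDL and ncgen; the dimension and variable names below are invented and do not reproduce the AWIPS template.

```python
# Toy sketch: write a small sounding-like NetCDF file directly.
# Variable names are invented; AWIPS expects its own template.
from netCDF4 import Dataset
import numpy as np

nc = Dataset("composite_sounding.nc", "w")
nc.createDimension("level", 5)

pressure = nc.createVariable("pressure", "f4", ("level",))
temperature = nc.createVariable("temperature", "f4", ("level",))
pressure.units, temperature.units = "hPa", "K"

pressure[:] = np.array([1000, 850, 700, 500, 300], dtype="f4")
temperature[:] = np.array([298.0, 290.5, 280.1, 262.3, 238.7], dtype="f4")

nc.siteId = "XMRA"   # global attribute, e.g. a four-character site ID
nc.close()

# Equivalent round-trip with the utilities described above:
#   ncdump composite_sounding.nc > composite_sounding.cdl
#   ncgen -o composite_sounding.nc composite_sounding.cdl
```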
SEGY to ASCII Conversion and Plotting Program 2.0
Goldman, Mark R.
2005-01-01
INTRODUCTION SEGY has long been a standard format for storing seismic data and header information. Almost every seismic processing package can read and write seismic data in SEGY format. In the data processing world, however, ASCII format is the 'universal' standard format. Very few general-purpose plotting or computation programs will accept data in SEGY format. The software presented in this report, referred to as SEGY to ASCII (SAC), converts seismic data written in SEGY format (Barry et al., 1975) to an ASCII data file, and then creates a postscript file of the seismic data using a general plotting package (GMT, Wessel and Smith, 1995). The resulting postscript file may be plotted by any standard postscript plotting program. There are two versions of SAC: one version for plotting a SEGY file that contains a single gather, such as a stacked CDP or migrated section, and a second version for plotting multiple gathers from a SEGY file containing more than one gather, such as a collection of shot gathers. Note that if a SEGY file has multiple gathers, then each gather must have the same number of traces per gather, and each trace must have the same sample interval and number of samples per trace. SAC will read several common standards of SEGY data, including SEGY files with sample values written in either IBM or IEEE floating-point format. In addition, utility programs are present to convert non-standard Seismic Unix (.sux) SEGY files and PASSCAL (.rsy) SEGY files to standard SEGY files. SAC allows complete user control over all plotting parameters including label size and font, tick mark intervals, trace scaling, and the inclusion of a title and descriptive text. SAC shell scripts create a postscript image of the seismic data in vector rather than bitmap format, using GMT's pswiggle command. Although this can produce a very large postscript file, the image quality is generally superior to that of a bitmap image, and commercial programs such as Adobe Illustrator® can manipulate the image more efficiently.
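A minimal sketch of inspecting a SEG-Y binary reel header with the Python standard library follows; the byte offsets follow the published SEG-Y layout (a 3200-byte EBCDIC text header followed by a 400-byte big-endian binary header) but should be verified against the data, and this is not the SAC program itself.

```python
# Peek at key fields of a SEG-Y binary reel header.
# Offsets per the SEG-Y standard; verify against your data.
import struct

with open("line1.sgy", "rb") as f:       # hypothetical file
    f.seek(3200)                          # skip EBCDIC text header
    binary_header = f.read(400)

sample_interval_us, = struct.unpack(">h", binary_header[16:18])
samples_per_trace, = struct.unpack(">h", binary_header[20:22])
format_code, = struct.unpack(">h", binary_header[24:26])

print(f"{samples_per_trace} samples/trace at {sample_interval_us} us; "
      f"format code {format_code} (1 = IBM float, 5 = IEEE float)")
```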
Tools for Requirements Management: A Comparison of Telelogic DOORS and the HiVe
2006-07-01
types DOORS deals with are text files, spreadsheets, FrameMaker , rich text, Microsoft Word and Microsoft Project. 2.5.1 Predefined file formats DOORS...during the export. DOORS exports FrameMaker files in an incomplete format, meaning DOORS exported files will have to be opened in FrameMaker and saved
76 FR 10405 - Federal Copyright Protection of Sound Recordings Fixed Before February 15, 1972
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-24
... file in either the Adobe Portable Document File (PDF) format that contains searchable, accessible text (not an image); Microsoft Word; WordPerfect; Rich Text Format (RTF); or ASCII text file format (not a..., comments may be delivered in hard copy. If hand delivered by a private party, an original [[Page 10406...
The prevalence of encoded digital trace evidence in the nonfile space of computer media.
Garfinkel, Simson L
2014-09-01
Forensically significant digital trace evidence is frequently present in sectors of digital media not associated with allocated or deleted files. Modern digital forensic tools generally do not decompress such data unless a specific file with a recognized file type is first identified, potentially resulting in missed evidence. Email addresses are encoded differently for different file formats. As a result, trace evidence can be categorized as Plain in File (PF), Encoded in File (EF), Plain Not in File (PNF), or Encoded Not in File (ENF). The tool bulk_extractor finds all of these formats, but other forensic tools do not. A study of 961 storage devices purchased on the secondary market shows that 474 contained encoded email addresses that were not in files (ENF). Different encoding formats are the result of different application programs that processed different kinds of digital trace evidence. Specific encoding formats explored include BASE64, GZIP, PDF, HIBER, and ZIP. Published 2014. This article is a U.S. Government work and is in the public domain in the USA. Journal of Forensic Sciences published by Wiley Periodicals, Inc. on behalf of American Academy of Forensic Sciences.
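A toy illustration of the ENF idea: scan raw media bytes for gzip magic numbers, decompress candidate streams, and search the output for email addresses. Real tools such as bulk_extractor are far more robust; the magic bytes, window size, and regex here are simplifying assumptions.

```python
# Scan raw bytes for gzip streams and grep the decompressed output
# for email addresses (ENF detection, greatly simplified).
import re
import zlib

EMAIL = re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+")
GZIP_MAGIC = b"\x1f\x8b\x08"

def emails_in_gzip_streams(raw: bytes):
    found = set()
    pos = raw.find(GZIP_MAGIC)
    while pos != -1:
        d = zlib.decompressobj(wbits=31)      # gzip framing
        try:
            text = d.decompress(raw[pos:pos + 1_000_000])
            found.update(EMAIL.findall(text))
        except zlib.error:
            pass                              # false-positive magic bytes
        pos = raw.find(GZIP_MAGIC, pos + 1)
    return found

with open("disk.img", "rb") as f:             # hypothetical image
    print(emails_in_gzip_streams(f.read()))
```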
DOT National Transportation Integrated Search
2001-02-01
The Minnesota data system includes the following basic files: Accident data (Accident File, Vehicle File, Occupant File); Roadlog File; Reference Post File; Traffic File; Intersection File; Bridge (Structures) File; and RR Grade Crossing File. For ea...
PDB explorer -- a web based algorithm for protein annotation viewer and 3D visualization.
Nayarisseri, Anuraj; Shardiwal, Rakesh Kumar; Yadav, Mukesh; Kanungo, Neha; Singh, Pooja; Shah, Pratik; Ahmed, Sheaza
2014-12-01
The PDB file format is a text format characterizing the three-dimensional structures of macromolecules held in the Protein Data Bank (PDB). Determined protein structures are often found in association with other molecules or ions, such as nucleic acids, water, ions, and drug molecules, which can likewise be described in the PDB format and deposited in the PDB database. A PDB file is machine-generated and not in a human-readable format; computational tools are needed to interpret it. The objective of our present study is to develop free online software for retrieving, visualizing, and reading the annotation of a protein 3D structure available in the PDB database. The main aim is to render the PDB file in human-readable form, i.e., the information in the PDB file is converted into readable sentences. It displays all possible information from a PDB file, including the 3D structure. Programming and scripting languages such as Perl, CSS, JavaScript, Ajax, and HTML have been used for the development of PDB Explorer. The PDB Explorer directly parses the PDB file, calling methods for each parsed element: secondary structure elements, atoms, coordinates, etc. PDB Explorer is freely available at http://www.pdbexplorer.eminentbio.com/home, with no log-in required.
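The parsing step such a tool performs can be sketched against the fixed-column ATOM record layout of the PDB format specification; the column slices follow the spec, and the input file name is hypothetical.

```python
# Parse ATOM/HETATM records using the PDB spec's fixed columns.
def atoms(path):
    with open(path) as f:
        for line in f:
            if line.startswith(("ATOM", "HETATM")):
                yield {
                    "name": line[12:16].strip(),      # atom name
                    "res": line[17:20].strip(),       # residue name
                    "chain": line[21],                # chain identifier
                    "seq": int(line[22:26]),          # residue number
                    "xyz": (float(line[30:38]),
                            float(line[38:46]),
                            float(line[46:54])),
                }

for atom in atoms("1abc.pdb"):                        # hypothetical file
    print(atom["chain"], atom["seq"], atom["name"], atom["xyz"])
```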
NoSQL: collection document and cloud by using a dynamic web query form
NASA Astrophysics Data System (ADS)
Abdalla, Hemn B.; Lin, Jinzhao; Li, Guoquan
2015-07-01
MongoDB (from "humongous") is an open-source document database and the leading NoSQL database. NoSQL ("Not Only SQL") refers to a class of next-generation databases that are non-relational, distributed, open-source, and horizontally scalable, providing a mechanism for the storage and retrieval of documents. Previously, we stored and retrieved data using SQL queries. Here, we use MongoDB instead, so MySQL and SQL queries are not involved: documents are imported directly into our drives and retrieved from them without SQL, using Java's I/O BufferedReader and BufferedWriter — the BufferedReader for importing document files into a folder (drive) and the BufferedWriter for retrieving them from that folder or drive. In this sense, we provide security for the stored files: if documents are stored in a plain local folder, anyone can view or modify them, so we protect the files. The original document files are converted to another format — in this paper, a binary format. After conversion, the documents are stored directly in a folder, and the storage space provides a private key for accessing each file. Whenever a user tries to open a document file, its data appear only in binary format; the file's owner alone can view the original format by using a personal key, after receiving the secret key from the cloud.
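A sketch of the storage idea, assuming MongoDB's GridFS accessed via pymongo; the "access key" check is a toy placeholder for illustration, not a real security mechanism.

```python
# Store a document as binary in MongoDB via GridFS, gated by a toy key.
# Server address, database name, and key scheme are assumptions.
from pymongo import MongoClient
import gridfs

client = MongoClient("mongodb://localhost:27017")     # assumed server
fs = gridfs.GridFS(client["docstore"])

with open("report.docx", "rb") as f:                  # hypothetical file
    file_id = fs.put(f.read(), filename="report.docx",
                     access_key="s3cret")             # custom metadata

def fetch(fid, key):
    blob = fs.get(fid)                                # binary content
    if blob.access_key != key:                        # toy check only
        raise PermissionError("wrong access key")
    return blob.read()

data = fetch(file_id, "s3cret")
```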
The Design and Usage of the New Data Management Features in NASTRAN
NASA Technical Reports Server (NTRS)
Pamidi, P. R.; Brown, W. K.
1984-01-01
Two new data management features are installed in the April 1984 release of NASTRAN. These two features are the Rigid Format Data Base and the READFILE capability. The Rigid Format Data Base is stored on external files in card image format and can be easily maintained and expanded by the use of standard text editors. This data base provides the user and the NASTRAN maintenance contractor with an easy means for making changes to a Rigid Format or for generating new Rigid Formats without unnecessary compilations and link editing of NASTRAN. Each Rigid Format entry in the data base contains the Direct Matrix Abstraction Program (DMAP), along with the associated restart, DMAP sequence subset and substructure control flags. The READFILE capability allows a user to reference an external secondary file from the NASTRAN primary input file and to read data from this secondary file. There is no limit to the number of external secondary files that may be referenced and read.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sublet, J.-Ch.; Koning, A.J.; Forrest, R.A.
The reasons for the conversion of the European Activation File (EAF) into ENDF-6 format are threefold. First, it significantly enhances the JEFF-3.0 release by the addition of an activation file. Second, it considerably increases its usage through a recognized, official file format, allowing existing plug-in processes to be effective. Third, it moves towards a universal nuclear data file, in contrast to the current separate general- and special-purpose files. The format chosen for the JEFF-3.0/A file uses reaction cross sections (MF-3), cross sections (MF-10), and multiplicities (MF-9). Having the data in ENDF-6 format allows the ENDF suite of utilities and checker codes to be used alongside many other utility, visualization, and processing codes. It is based on the EAF activation file used for many applications from fission to fusion, including dosimetry, inventories, depletion-transmutation, and geophysics. JEFF-3.0/A takes advantage of four generations of EAF files. Extensive benchmarking activities on these files provide feedback and validation with integral measurements. These, in parallel with a detailed graphical analysis based on EXFOR, have been applied, stimulating new measurements and significantly increasing the quality of this activation file. The next step is to include the EAF uncertainty data for all channels in JEFF-3.0/A.
FRS Geospatial Return File Format
The Geospatial Return File Format describes the format that must be used to submit latitude and longitude coordinates for use in Envirofacts mapping applications. These coordinates are stored in the Geospatial Reference Tables.
SEDIMENT DATA - COMMENCEMENT BAY HYLEBOS WATERWAY - TACOMA, WA - PRE-REMEDIAL DESIGN PROGRAM
Event 1A/1B Data Files URL address: http://www.epa.gov/r10earth/datalib/superfund/hybos1ab.htm. Sediment Chemistry Data (Database Format): HYBOS1AB.EXE is a self-extracting file which expands to the single-value per record .DBF format database file HYBOS1AB.DBF. This file contai...
76 FR 5431 - Released Rates of Motor Common Carriers of Household Goods
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-31
... may be submitted either via the Board's e-filing format or in traditional paper format. Any person using e-filing should attach a document and otherwise comply with the instructions at the E- FILING link on the Board's website at http://www.stb.dot.gov . Any person submitting a filing in the traditional...
75 FR 52054 - Assessment of Mediation and Arbitration Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-24
...: Comments may be submitted either via the Board's e-filing format or in the traditional paper format. Any person using e-filing should attach a document and otherwise comply with the instructions at the E-FILING link on the Board's Web site, at http://www.stb.dot.gov . Any person submitting a filing in the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-01
... need to submit a photo for a child who is already a U.S. citizen or a Legal Permanent Resident. Group... Joint Photographic Experts Group (JPEG) format; it must have a maximum image file size of two hundred... (dpi); the image file format in Joint Photographic Experts Group (JPEG) format; the maximum image file...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-27
... already a U.S. citizen or a Lawful Permanent Resident, but you will not be penalized if you do. Group... specifications: Image File Format: The image must be in the Joint Photographic Experts Group (JPEG) format. Image... in the Joint Photographic Experts Group (JPEG) format. Image File Size: The maximum image file size...
Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments.
Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier
2016-01-05
We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
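A minimal sketch of reading the core arrays from a Photon-HDF5 file with h5py follows; the group and dataset paths follow the published specification but should be verified against actual files.

```python
# Read the core photon arrays from a Photon-HDF5 file.
# Dataset paths per the published spec; file name is hypothetical.
import h5py

with h5py.File("measurement.hdf5", "r") as f:
    timestamps = f["photon_data/timestamps"][:]      # raw photon clocks
    detectors = f["photon_data/detectors"][:]        # channel numbers
    unit = f["photon_data/timestamps_specs/timestamps_unit"][()]

print(f"{timestamps.size} photons, first arrival at "
      f"{timestamps[0] * unit:.3e} s")
```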
OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale
NASA Astrophysics Data System (ADS)
Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason
2015-03-01
The Open Microscopy Environment (OME) has built and released Bio-Formats, a Java-based tool for reading and converting proprietary file formats, and OMERO, an enterprise data management platform, under open source licenses. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support large, multi-gigabyte or terabyte scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprised of many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities of OMERO and Bio-Formats make them especially useful in imaging applications such as digital pathology, high content screening, and light sheet microscopy, which routinely create large datasets that must be managed and analyzed.
Lossless Data Embedding—New Paradigm in Digital Watermarking
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav; Du, Rui
2002-12-01
One common drawback of virtually all current data embedding methods is the fact that the original image is inevitably distorted due to data embedding itself. This distortion typically cannot be removed completely due to quantization, bit-replacement, or truncation at the grayscales 0 and 255. Although the distortion is often quite small and perceptual models are used to minimize its visibility, the distortion may not be acceptable for medical imagery (for legal reasons) or for military images inspected under nonstandard viewing conditions (after enhancement or extreme zoom). In this paper, we introduce a new paradigm for data embedding in images (lossless data embedding) that has the property that the distortion due to embedding can be completely removed from the watermarked image after the embedded data has been extracted. We present lossless embedding methods for the uncompressed formats (BMP, TIFF) and for the JPEG format. We also show how the concept of lossless data embedding can be used as a powerful tool to achieve a variety of nontrivial tasks, including lossless authentication using fragile watermarks, steganalysis of LSB embedding, and distortion-free robust watermarking.
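A toy demonstration of the principle, assuming the LSB plane compresses well: the original LSB plane is compressed and embedded alongside the payload, so extraction can restore the image bit-exactly. This is a simplified stand-in for the paper's methods, not a reproduction of them.

```python
# Lossless embedding demo: hide (compressed original LSB plane + payload)
# in the LSBs, then restore the cover image exactly on extraction.
import zlib
import numpy as np

def embed(pixels, payload):
    """Replace LSBs with (compressed original LSB plane + payload)."""
    flat = pixels.flatten()
    blob = zlib.compress(np.packbits(flat & 1).tobytes(), 9)
    stream = len(blob).to_bytes(4, "big") + blob + payload
    bits = np.unpackbits(np.frombuffer(stream, np.uint8))
    if bits.size > flat.size:
        raise ValueError("LSB plane not compressible enough")
    out = flat.copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(pixels.shape)

def extract(pixels, payload_len):
    """Recover the payload and restore the image bit-exactly."""
    flat = pixels.flatten()
    stream = np.packbits(flat & 1).tobytes()
    n = int.from_bytes(stream[:4], "big")
    original_lsbs = np.unpackbits(
        np.frombuffer(zlib.decompress(stream[4:4 + n]), np.uint8))
    payload = stream[4 + n:4 + n + payload_len]
    restored = (flat & 0xFE) | original_lsbs[:flat.size]
    return payload, restored.reshape(pixels.shape)

# Smooth test image whose LSB plane compresses well (a real scheme
# would also embed the payload length; we pass it explicitly here).
img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
marked = embed(img, b"hidden")
msg, restored = extract(marked, len(b"hidden"))
assert msg == b"hidden" and np.array_equal(restored, img)
```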
Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.
Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory
2016-06-13
Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are first used with a bioinformatics system. Simplifying the validation of essential tabular data files, such as sample metadata, will reduce common errors and thereby improve the quality and reliability of research outcomes.
A Python library for FAIRer access and deposition to the Metabolomics Workbench Data Repository.
Smelter, Andrey; Moseley, Hunter N B
2018-01-01
The Metabolomics Workbench Data Repository is a public repository of mass spectrometry and nuclear magnetic resonance data and metadata derived from a wide variety of metabolomics studies. The data and metadata for each study are deposited, stored, and accessed via files in the domain-specific 'mwTab' flat file format. In order to improve the accessibility, reusability, and interoperability of the data and metadata stored in 'mwTab' formatted files, we implemented a Python library and package. This Python package, named 'mwtab', is a parser for the domain-specific 'mwTab' flat file format, which provides facilities for reading, accessing, and writing 'mwTab' formatted files. Furthermore, the package provides facilities to validate both the format and required metadata elements of a given 'mwTab' formatted file. In order to develop the 'mwtab' package we used the official 'mwTab' format specification. We used Git version control along with the Python unit-testing framework, as well as a continuous integration service, to run those tests on multiple versions of Python. Package documentation was developed using the Sphinx documentation generator. The 'mwtab' package provides both Python programmatic library interfaces and command-line interfaces for reading, writing, and validating 'mwTab' formatted files. Data and associated metadata are stored within Python dictionary- and list-based data structures, enabling straightforward, 'pythonic' access and manipulation of data and metadata. Also, the package provides facilities to convert 'mwTab' files into a JSON formatted equivalent, enabling easy reusability of the data by all modern programming languages that implement JSON parsers. The 'mwtab' package implements its metadata validation functionality based on a pre-defined JSON schema that can be easily specialized for specific types of metabolomics studies. The library also provides a command-line interface for interconversion between 'mwTab' and JSONized formats in raw text and a variety of compressed binary file formats. The 'mwtab' package is an easy-to-use Python package that provides FAIRer utilization of the Metabolomics Workbench Data Repository. The source code is freely available on GitHub and via the Python Package Index. Documentation includes a 'User Guide', 'Tutorial', and 'API Reference'. The GitHub repository also provides 'mwtab' package unit-tests via a continuous integration service.
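A short usage sketch based on the package's documented interface follows; the API names (read_files, study_id, analysis_id, write with file_format="json") are quoted from the documentation as remembered and should be checked against the current 'mwtab' release.

```python
# Read an mwTab file, inspect it, and write its JSONized equivalent.
# API details are assumptions based on the package docs.
import mwtab

# read_files accepts local paths; the docs also describe fetching by
# accession number.
for mwfile in mwtab.read_files("ST000001_AN000001.txt"):
    print(mwfile.study_id, mwfile.analysis_id)      # basic identifiers
    print(list(mwfile.keys()))                      # top-level sections

    with open("ST000001_AN000001.json", "w") as out:
        mwfile.write(out, file_format="json")       # JSONized mwTab
```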
Accelerating Malware Detection via a Graphics Processing Unit
2010-09-01
…operating systems for the future [Szo05]. The PE format is an updated version of the Common Object File Format (COFF) [Mic06]. Microsoft released a new… [NAs02]. These alerts can be costly in terms of time and resources for individuals and organizations to investigate each misidentified file [YWL07] [Vak10]…
Smelter, Andrey; Astra, Morgan; Moseley, Hunter N B
2017-03-17
The Biological Magnetic Resonance Data Bank (BMRB) is a public repository of Nuclear Magnetic Resonance (NMR) spectroscopic data of biological macromolecules. It is an important resource for many researchers using NMR to study structural, biophysical, and biochemical properties of biological macromolecules. It is primarily maintained and accessed in a flat file ASCII format known as NMR-STAR. While the format is human readable, the size of most BMRB entries makes computer readability and explicit representation a practical requirement for almost any rigorous systematic analysis. To aid in the use of this public resource, we have developed a package called nmrstarlib in the popular open-source programming language Python. The nmrstarlib's implementation is very efficient, both in design and execution. The library has facilities for reading and writing both NMR-STAR version 2.1 and 3.1 formatted files, parsing them into usable Python dictionary- and list-based data structures, making access and manipulation of the experimental data very natural within Python programs (i.e. "saveframe" and "loop" records represented as individual Python dictionary data structures). Another major advantage of this design is that data stored in original NMR-STAR can be easily converted into its equivalent JavaScript Object Notation (JSON) format, a lightweight data interchange format, facilitating data access and manipulation using Python and any other programming language that implements a JSON parser/generator (i.e., all popular programming languages). We have also developed tools to visualize assigned chemical shift values and to convert between NMR-STAR and JSONized NMR-STAR formatted files. Full API Reference Documentation, User Guide and Tutorial with code examples are also available. We have tested this new library on all current BMRB entries: 100% of all entries are parsed without any errors for both NMR-STAR version 2.1 and version 3.1 formatted files. We also compared our software to three currently available Python libraries for parsing NMR-STAR formatted files: PyStarLib, NMRPyStar, and PyNMRSTAR. The nmrstarlib package is a simple, fast, and efficient library for accessing data from the BMRB. The library provides an intuitive dictionary-based interface with which Python programs can read, edit, and write NMR-STAR formatted files and their equivalent JSONized NMR-STAR files. The nmrstarlib package can be used as a library for accessing and manipulating data stored in NMR-STAR files and as a command-line tool to convert from NMR-STAR file format into its equivalent JSON file format and vice versa, and to visualize chemical shift values. Furthermore, the nmrstarlib implementation provides a guide for effectively JSONizing other older scientific formats, improving the FAIRness of data in these formats.
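For illustration, a minimal sketch of the dictionary-based interface; the read_files generator and write method are assumed from the nmrstarlib documentation, and the file names are hypothetical.

    from nmrstarlib import nmrstarlib

    # read_files yields dict-like StarFile objects; saveframes map to dictionaries
    for starfile in nmrstarlib.read_files("bmr18569.str"):
        print(list(starfile.keys()))  # saveframe names
        with open("bmr18569.json", "w") as outfile:
            starfile.write(outfile, file_format="json")  # JSONized NMR-STAR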
10 CFR 9.35 - Duplication fees.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) copying of ADAMS documents to paper (applies to images, OCR TIFF, and PDF text) is $0.30 per page. (B) EFT... is $0.30 per page. (vi) Priority rates (rush processing) are as follows: (A) The priority rate...
10 CFR 9.35 - Duplication fees.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) copying of ADAMS documents to paper (applies to images, OCR TIFF, and PDF text) is $0.30 per page. (B) EFT... is $0.30 per page. (vi) Priority rates (rush processing) are as follows: (A) The priority rate...
10 CFR 9.35 - Duplication fees.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) copying of ADAMS documents to paper (applies to images, OCR TIFF, and PDF text) is $0.30 per page. (B) EFT... is $0.30 per page. (vi) Priority rates (rush processing) are as follows: (A) The priority rate...
Digital geologic map of the Butler Peak 7.5' quadrangle, San Bernardino County, California
Miller, Fred K.; Matti, Jonathan C.; Brown, Howard J.; digital preparation by Cossette, P. M.
2000-01-01
Open-File Report 00-145, is a digital geologic map database of the Butler Peak 7.5' quadrangle that includes (1) ARC/INFO (Environmental Systems Research Institute) version 7.2.1 Patch 1 coverages, and associated tables, (2) a Portable Document Format (.pdf) file of the Description of Map Units, Correlation of Map Units chart, and an explanation of symbols used on the map, btlrpk_dcmu.pdf, (3) a Portable Document Format file of this Readme, btlrpk_rme.pdf (the Readme is also included as an ascii file in the data package), and (4) a PostScript plot file of the map, Correlation of Map Units, and Description of Map Units on a single sheet, btlrpk.ps. No paper map is included in the Open-File report, but the PostScript plot file (number 4 above) can be used to produce one. The PostScript plot file generates a map, peripheral text, and diagrams in the editorial format of USGS Geologic Investigation Series (I-series) maps.
MXA: a customizable HDF5-based data format for multi-dimensional data sets
NASA Astrophysics Data System (ADS)
Jackson, M.; Simmons, J. P.; De Graef, M.
2010-09-01
A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model and its description by means of an eXtensible Markup Language (XML) file with an associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.
The National Map - Orthoimagery
Mauck, James; Brown, Kim; Carswell, William J.
2009-01-01
Orthorectified digital aerial photographs and satellite images of 1-meter (m) pixel resolution or finer make up the orthoimagery component of The National Map. The process of orthorectification removes feature displacements and scale variations caused by terrain relief and sensor geometry. The result is a combination of the image characteristics of an aerial photograph or satellite image and the geometric qualities of a map. These attributes allow users to measure distance, calculate areas, determine shapes of features, calculate directions, determine accurate coordinates, determine land cover and use, perform change detection, and update maps. The standard digital orthoimage is a 1-m or finer resolution, natural color or color infrared product. Most are now produced as GeoTIFFs and accompanied by a Federal Geographic Data Committee (FGDC)-compliant metadata file. The primary source for 1-m data is the National Agriculture Imagery Program (NAIP) leaf-on imagery. The U.S. Geological Survey (USGS) utilizes NAIP imagery as the image layer on its 'Digital-Map' - a new generation of USGS topographic maps (http://nationalmap.gov/digital_map). However, many Federal, State, and local governments and organizations require finer resolutions to meet a myriad of needs. Most of these images are leaf-off, natural-color products at resolutions of 1-foot (ft) or finer.
NASA Astrophysics Data System (ADS)
Northup, E. A.; Kusterer, J.; Quam, B.; Chen, G.; Early, A. B.; Beach, A. L., III
2015-12-01
The current ICARTT file format standards were developed to fulfill the data management needs of the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) campaign in 2004. The goal of the ICARTT file format was to establish a common and simple-to-use data file format to promote data exchange and collaboration among science teams with similar science objectives. ICARTT has been the NASA standard since 2010, and is widely used by NOAA, NSF, and international partners (DLR, FAAM). Despite its level of acceptance, there are a number of issues with the current ICARTT format, especially concerning machine readability. To enhance usability, the ICARTT Refresh Earth Science Data Systems Working Group (ESDSWG) was established as a platform for atmospheric science data producers, users (e.g., modelers), and data managers to collaborate on developing criteria for this file format. Ultimately, this is a cross-agency effort to improve and aggregate the metadata records being produced. After conducting a survey to identify deficiencies in the current format, we determined which deficiencies are considered most important by the various communities. Numerous recommendations were made to improve upon the file format while maintaining backward compatibility. The recommendations made to date, and their advantages and limitations, will be discussed.
NASA Standard for Airborne Data: ICARTT Format ESDS-RFC-019
NASA Astrophysics Data System (ADS)
Thornhill, A.; Brown, C.; Aknan, A.; Crawford, J. H.; Chen, G.; Williams, E. J.
2011-12-01
Airborne field studies generate a plethora of data products in the effort to study atmospheric composition and processes. Data file formats for airborne field campaigns are designed to present data in an understandable and organized way, to support collaboration, and to document relevant and important metadata. The ICARTT file format was created to facilitate data management during the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) campaign in 2004, which involved government-agency and university participants from five countries. Since that mission, the ICARTT format has been used in subsequent field campaigns such as the Polar Study Using Aircraft Remote Sensing, Surface Measurements and Models of Climate, Chemistry, Aerosols, and Transport (POLARCAT) and the first phase of Deriving Information on Surface Conditions from COlumn and VERtically Resolved Observations Relevant to Air Quality (DISCOVER-AQ). The ICARTT file format was endorsed as a standard format for airborne data by the Standards Process Group (SPG), one of the Earth Science Data Systems Working Groups (ESDSWG), in 2010. A detailed description of the ICARTT format can be found at http://www-air.larc.nasa.gov/missions/etc/ESDS-RFC-019-v1.00.pdf. The ICARTT data format is an ASCII, comma-delimited format based on the NASA Ames and GTE file formats. The file header is detailed enough to fully describe the data for users outside of the instrument group and includes a description of the metadata. The ICARTT scanning tools, format structure, implementations, and examples will be presented.
Dependency Tree Annotation Software
2015-11-01
…formats, and it provides numerous options for customizing how dependency trees are displayed. Built entirely in Java, it can run on a wide range of… tree can be saved as an image, .mxe (an mxGraph editing file), a .conll file, and several other file formats. DTE uses the open source Java version…
Representation of thermal infrared imaging data in the DICOM using XML configuration files.
Ruminski, Jacek
2007-01-01
The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported; however, there is no dedicated solution for thermal infrared imaging in medicine. In this article we propose new ideas and improvements to the final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented, and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works quickly and requires a minimal number of user interactions. The XML configuration file makes it possible to compose a set of attributes for any source file format of a thermal imaging camera.
Transported Geothermal Energy Technoeconomic Screening Tool - Calculation Engine
Liu, Xiaobing
2016-09-21
This calculation engine estimates the technoeconomic feasibility of transported geothermal energy projects. The TGE screening tool (geotool.exe) takes input from an input file (input.txt) and lists results in an output file (output.txt). Both the input and output files reside in the same folder as geotool.exe. To use the tool, prepare an input file containing adequate information about the case in the format explained below, and place it in the same folder as geotool.exe. Then execute geotool.exe, which will generate an output.txt file in the same folder containing all key calculation results. The format and content of the output file are explained below as well.
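For illustration, a minimal sketch of driving this file-in/file-out workflow from Python; the folder path and the contents of input.txt are placeholders, since the actual input format is defined in the tool's documentation rather than reproduced here.

    import subprocess
    from pathlib import Path

    workdir = Path(r"C:\tge")  # hypothetical folder holding geotool.exe
    # Placeholder contents: the real input.txt format is defined by the tool's docs
    (workdir / "input.txt").write_text("...case parameters go here...\n")

    # Run the engine in its own folder so it finds input.txt and writes output.txt there
    subprocess.run([str(workdir / "geotool.exe")], cwd=workdir, check=True)

    print((workdir / "output.txt").read_text())  # key calculation results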
Tularosa Basin Play Fairway Analysis: Strain Analysis
Adam Brandt
2015-11-15
A DEM of the Tularosa Basin was divided into twelve zones, for each of which a ZR ratio was calculated. This submission has a TIFF image of the zoning designations, along with a table of the respective ZR ratio calculations in the metadata.
77 FR 23382 - Airworthiness Directives; Sikorsky Aircraft Corporation Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-19
... prior to that time. (e) Required Actions Within 90 days: (1) By making pen and ink changes, insert into... depicted in the circled area of Figure 1 of this AD. [GRAPHIC] [TIFF OMITTED] TR19AP12.000 (f) Alternative...
77 FR 42971 - Airworthiness Directives; Various Restricted Category Helicopters
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-23
... specifies replacing any mast with a crack, pitting, or corrosion beyond surface rust that is removed with a... rust with a wire brush or steel wool. [GRAPHIC] [TIFF OMITTED] TR23JY12.003 (2) If there is a crack...
Devale, Madhuri R; Mahesh, M C; Bhandary, Shreetha
2017-01-01
Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture, when the tooth is subjected to repeated stresses from endodontic or restorative procedures. This study evaluated occurrence of apical cracks with stainless steel hand files, rotary NiTi RaCe and K3 files at two different instrumentation lengths. In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. Apical 3 mm of the root surfaces were exposed and stained using India ink. Preoperative images of root apices were obtained at 100x using stereomicroscope. The teeth were divided into six groups of 10 each. First two groups were instrumented with stainless steel files, next two groups with rotary NiTi RaCe files and the last two groups with rotary NiTi K3 files. The instrumentation was carried out till the apical foramen (Working Length-WL) and 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistical significance among three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For groups instrumented with hand files there was no statistical significance in number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files significantly more number of cracks were seen at WL than WL-1 (p=0.057 for RaCe files and p=0.087 for K3 files). There was no statistical significance between stainless steel hand files and rotary files in terms of crack formation. Instrumentation length had a significant effect on the formation of cracks when rotary files were used. Using rotary instruments 1 mm short of apical foramen caused lesser crack formation. But, there was no statistically significant difference in number of cracks formed with hand files at two instrumentation levels.
15 CFR 995.26 - Conversion of NOAA ENC ® files to other formats.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Conversion of NOAA ENC files to other formats—(1) Content. CEVAD may provide NOAA ENC data in forms other... data files without degradation to positional accuracy or informational content. (2) Software certification. Conversion of NOAA ENC data to other formats must be accomplished within the constraints of IHO...
Image Size Variation Influence on Corrupted and Non-viewable BMP Image
NASA Astrophysics Data System (ADS)
Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah
2017-08-01
Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG compression is lossy and produces small files, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Bitmap (BMP) images are now preferred in image processing over other formats because a BMP image contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file must first be converted into BMP format. Nevertheless, many things can influence the corruption of a BMP image, such as changes to the image size that make the file non-viewable. In this paper, the experiment indicates that the size of a BMP file influences changes in the image itself under three conditions: deletion, replacement, and insertion. From the experiment, we learned that correcting the file size can produce a viewable, if partial, file, which can then be investigated further to identify the corruption point.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingargiola, A.; Laurence, T. A.; Boutelle, R.
We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diodes (SPAD), photomultiplier tubes (PMT), or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, as well as setup and sample descriptions, information on provenance, authorship, and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert) to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.
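As an illustration, a minimal sketch of reading photon data with h5py; the group and field names follow the published Photon-HDF5 layout, and the file name is hypothetical.

    import h5py

    # Photon-HDF5 stores raw photon records under /photon_data
    with h5py.File("measurement.hdf5", "r") as f:
        timestamps = f["photon_data/timestamps"][:]  # integer clock ticks
        unit = f["photon_data/timestamps_specs/timestamps_unit"][()]  # seconds per tick
        print(f"{timestamps.size} photons spanning {timestamps[-1] * unit:.3f} s")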
UFO (UnFold Operator) default data format
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kissel, L.; Biggs, F.; Marking, T.R.
The default format for the storage of x,y data for use with the UFO code is described. The format assumes that the data stored in a file is a matrix of values; two columns of this matrix are selected to define a function of the form y = f(x). This format is specifically designed to allow easy importation of data obtained from other sources, or easy entry of data using a text editor, with a minimum of reformatting. The format is flexible and extensible through the use of inline directives stored in the optional header of the file. A special extension of the format implements encoded data, which significantly reduces the storage required compared with the unencoded form. UFO supports several extensions to the file specification that implement execute-time operations, such as transformation of the x and/or y values, selection of specific columns of the matrix for association with the x and y values, input of data directly from other formats (e.g., DAMP and PFF), and a simple type of library-structured file format. Several examples of the use of the format are given.
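Since only the column-matrix convention is described here, the sketch below merely illustrates that convention with NumPy; the file name, the use of '#' for header lines, and the column choice are all assumptions rather than part of the UFO specification.

    import numpy as np

    # Read a whitespace-delimited numeric matrix, skipping assumed '#' header lines
    matrix = np.loadtxt("data.ufo", comments="#")

    # Select two columns of the matrix to define y = f(x), per the convention above
    x, y = matrix[:, 0], matrix[:, 1]  # column indices are an assumption
    print(np.interp(0.5, x, y))        # e.g., evaluate f at an arbitrary point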
McDonald, Daniel; Clemente, Jose C; Kuczynski, Justin; Rideout, Jai Ram; Stombaugh, Jesse; Wendel, Doug; Wilke, Andreas; Huse, Susan; Hufnagle, John; Meyer, Folker; Knight, Rob; Caporaso, J Gregory
2012-07-12
We present the Biological Observation Matrix (BIOM, pronounced "biome") format: a JSON-based file format for representing arbitrary observation by sample contingency tables with associated sample and observation metadata. As the number of categories of comparative omics data types (collectively, the "ome-ome") grows rapidly, a general format to represent and archive this data will facilitate the interoperability of existing bioinformatics tools and future meta-analyses. The BIOM file format is supported by an independent open-source software project (the biom-format project), which initially contains Python objects that support the use and manipulation of BIOM data in Python programs, and is intended to be an open development effort where developers can submit implementations of these objects in other programming languages. The BIOM file format and the biom-format project are steps toward reducing the "bioinformatics bottleneck" that is currently being experienced in diverse areas of biological sciences, and will help us move toward the next phase of comparative omics where basic science is translated into clinical and environmental applications. The BIOM file format is currently recognized as an Earth Microbiome Project Standard, and as a Candidate Standard by the Genomic Standards Consortium.
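For illustration, a minimal sketch using the biom-format project's present-day Python package; the file name is hypothetical, and the load_table helper postdates the original publication.

    from biom import load_table

    table = load_table("otu_table.biom")      # parse a BIOM file
    print(table.shape)                        # (observations, samples)
    print(table.ids(axis="sample")[:5])       # first few sample identifiers
    print(table.ids(axis="observation")[:5])  # first few observation (e.g., OTU) ids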
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-26
... applications or print-to-PDF format, and not in a scanned format, at http://www.ferc.gov/docs-filing/efiling....3d 1342 (DC Cir. 2009). \\5\\ Mandatory Reliability Standards for the Bulk-Power System, Order No. 693... applications or print-to-PDF format and not in a scanned format. Commenters filing electronically do not need...
NetpathXL - An Excel Interface to the Program NETPATH
Parkhurst, David L.; Charlton, Scott R.
2008-01-01
NetpathXL is a revised version of NETPATH that runs under Windows operating systems. NETPATH is a computer program that uses inverse geochemical modeling techniques to calculate net geochemical reactions that can account for changes in water composition between initial and final evolutionary waters in hydrologic systems. The inverse models also can account for the isotopic composition of waters and can be used to estimate radiocarbon ages of dissolved carbon in ground water. NETPATH relies on an auxiliary database program, DB, to enter the chemical analyses and to perform speciation calculations that define total concentrations of elements, charge balance, and redox state of aqueous solutions that are then used in inverse modeling. Instead of DB, NetpathXL relies on Microsoft Excel to enter the chemical analyses. The speciation calculation formerly included in DB is implemented within the program NetpathXL. A program DBXL can be used to translate files from the old DB format (.lon files) to NetpathXL spreadsheets, or to create new NetpathXL spreadsheets. Once users have a NetpathXL spreadsheet with the proper format, new spreadsheets can be generated by copying or saving NetpathXL spreadsheets. In addition, DBXL can convert NetpathXL spreadsheets to PHREEQC input files. New capabilities in PHREEQC (version 2.15) allow solution compositions to be written to a .lon file, and inverse models developed in PHREEQC to be written as NetpathXL .pat and model files. NetpathXL can open NetpathXL spreadsheets, NETPATH-format path files (.pat files), and NetpathXL-format path files (.pat files). Once the speciation calculations have been performed on a spreadsheet file or a .pat file has been opened, the NetpathXL calculation engine is identical to the original NETPATH. Development of models and viewing of results in NetpathXL rely on keyboard entry as in NETPATH.
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2010 CFR
2010-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2013 CFR
2013-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2012 CFR
2012-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2014 CFR
2014-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
17 CFR 232.202 - Continuing hardship exemption.
Code of Federal Regulations, 2011 CFR
2011-04-01
... electronic format or post the Interactive Data File on its corporate Web site, as applicable, on the required... Interactive Data File, the electronic filer need not post on its Web site any statement with regard to the... submitted in electronic format or, in the case of an Interactive Data File (§ 232.11), to be posted on the...
Code of Federal Regulations, 2014 CFR
2014-04-01
... submit a public version of a database in pdf format. The public version of the database must be publicly... interested party that files with the Department a request for an expedited antidumping review, an..., whichever is later. If the interested party that files the request is unable to locate a particular exporter...
47 CFR 1.10008 - What are IBFS file numbers?
Code of Federal Regulations, 2010 CFR
2010-10-01
... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2010-10-01 2010-10-01 false What are IBFS file numbers? 1.10008 Section 1...
47 CFR 1.10008 - What are IBFS file numbers?
Code of Federal Regulations, 2011 CFR
2011-10-01
... Bureau Filing System § 1.10008 What are IBFS file numbers? (a) We assign file numbers to electronic... information, see The International Bureau Filing System File Number Format Public Notice, DA-04-568 (released... 47 Telecommunication 1 2011-10-01 2011-10-01 false What are IBFS file numbers? 1.10008 Section 1...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-22
... print-to-PDF format and not in a scanned format. Mail/Hand Delivery: Commenters unable to file comments.... FERC, 564 F.3d 1342 (DC Cir. 2009). 3. In March 2007, the Commission issued Order No. 693, evaluating... should be filed in native applications or print-to-PDF format and not in a scanned format. Commenters...
Code of Federal Regulations, 2010 CFR
2010-10-01
... recording under § 67.200 may be submitted in portable document format (.pdf) as an attachment to electronic... submitted for filing in .pdf format pertains to a vessel that is not a currently documented vessel, a... with the National Vessel Documentation Center or must be submitted in .pdf format with the instrument...
'Lyell' Panorama inside Victoria Crater (False Color)
NASA Technical Reports Server (NTRS)
2008-01-01
During four months prior to the fourth anniversary of its landing on Mars, NASA's Mars Exploration Rover Opportunity examined rocks inside an alcove called 'Duck Bay' in the western portion of Victoria Crater. The main body of the crater appears in the upper right of this stereo panorama, with the far side of the crater lying about 800 meters (half a mile) away. Bracketing that part of the view are two promontories on the crater's rim at either side of Duck Bay. They are 'Cape Verde,' about 6 meters (20 feet) tall, on the left, and 'Cabo Frio,' about 15 meters (50 feet) tall, on the right. The rest of the image, other than sky and portions of the rover, is ground within Duck Bay. Opportunity's targets of study during the last quarter of 2007 were rock layers within a band exposed around the interior of the crater, about 6 meters (20 feet) from the rim. Bright rocks within the band are visible in the foreground of the panorama. The rover science team assigned informal names to three subdivisions of the band: 'Steno,' 'Smith,' and 'Lyell.' This view combines many images taken by Opportunity's panoramic camera (Pancam) from the 1,332nd through 1,379th Martian days, or sols, of the mission (Oct. 23 to Dec. 11, 2007). Images taken through Pancam filters centered on wavelengths of 753 nanometers, 535 nanometers and 432 nanometers were mixed to produce this view, which is presented in a false-color stretch to bring out subtle color differences in the scene. Some visible patterns in dark and light tones are the result of combining frames that were affected by dust on the front sapphire window of the rover's camera. Opportunity landed on Jan. 25, 2004, Universal Time (Jan. 24, Pacific Time) inside a much smaller crater about 6 kilometers (4 miles) north of Victoria Crater, to begin a surface mission designed to last 3 months and drive about 600 meters (0.4 mile).
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-23
... result of management measures designed to meet the Pacific Coast Groundfish FMP objective of achieving..., subpart G, are revised to read as follows: BILLING CODE 3510-22-P [GRAPHIC] [TIFF OMITTED] TR23AU10.046 [[Page 51687
Improving Access to Precipitation Data for GIS Users: Designing for Ease of Use
NASA Technical Reports Server (NTRS)
Stocker, Erich F.; Kelley, Owen A.
2007-01-01
The Global Precipitation Measurement Mission (GPM) is a NASA/JAXA-led international mission to configure a constellation of space-based radiometers to monitor precipitation over the globe. The GPM goal of making global 3-hour precipitation products available in near real-time will make such global products more useful to a broader community of modelers and Geographic Information Systems (GIS) users than is currently the case with remotely sensed precipitation products. Based on the existing interest in making Tropical Rainfall Measuring Mission (TRMM) data available to a growing community of GIS users, as well as what will certainly be an expanded community during the GPM era, it is clear that data systems must make a greater effort to provide data in formats easily used by GIS. We describe precipitation GIS products being developed for TRMM data. These products will serve as prototypes for production efforts during the GPM era. We describe efforts to convert TRMM precipitation data to GeoTIFF, Shapefile, and ASCII grid. Clearly, our goal is to format GPM data so that it can be easily used within GIS applications. We desire feedback on these efforts and any additions or direction changes that should be undertaken by the data system.
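As an illustration of the GeoTIFF conversion step, a minimal sketch that writes a gridded precipitation array to a georeferenced TIFF with the rasterio library; the grid shape, cell size, and file name are placeholders, and rasterio is our choice here rather than anything specified by the TRMM/GPM data system.

    import numpy as np
    import rasterio
    from rasterio.transform import from_origin

    rain = np.zeros((400, 1440), dtype="float32")      # placeholder 0.25-degree rain grid (mm/h)
    transform = from_origin(-180.0, 50.0, 0.25, 0.25)  # upper-left corner and cell size

    with rasterio.open("precip.tif", "w", driver="GTiff",
                       height=rain.shape[0], width=rain.shape[1],
                       count=1, dtype="float32",
                       crs="EPSG:4326", transform=transform) as dst:
        dst.write(rain, 1)  # band 1 holds the rain-rate grid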
NASA Earth Observations (NEO): Data Imagery for Education and Visualization
NASA Astrophysics Data System (ADS)
Ward, K.
2008-12-01
NASA Earth Observations (NEO) has dramatically simplified public access to georeferenced imagery of NASA remote sensing data. NEO targets the non-traditional data users who are currently underserved by functionality and formats available from the existing data ordering systems. These users include formal and informal educators, museum and science center personnel, professional communicators, and citizen scientists. NEO currently serves imagery from 45 different datasets with daily, weekly, and/or monthly temporal resolutions, with more datasets currently under development. The imagery from these datasets is produced in coordination with several data partners who are affiliated either with the instrument science teams or with the respective data processing center. NEO is a system of three components -- website, WMS (Web Mapping Service), and ftp archive -- which together are able to meet the wide-ranging needs of our users. Some of these needs include the ability to: view and manipulate imagery using the NEO website -- e.g., applying color palettes, resizing, exporting to a variety of formats including PNG, JPEG, KMZ (Google Earth), GeoTIFF; access the NEO collection via a standards-based API (WMS); and create customized exports for select users (ftp archive) such as Science on a Sphere, NASA's Earth Observatory, and others.
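To show what standards-based access looks like, a minimal sketch of a WMS GetMap request; the endpoint URL and layer name are hypothetical, while the query parameters follow the OGC WMS 1.1.1 standard.

    import requests

    params = {
        "service": "WMS", "version": "1.1.1", "request": "GetMap",
        "layers": "MOD_LSTD_M",     # hypothetical NEO dataset identifier
        "styles": "", "srs": "EPSG:4326",
        "bbox": "-180,-90,180,90",  # whole-globe extent
        "width": 720, "height": 360, "format": "image/png",
    }
    # Hypothetical endpoint; consult the NEO site for the actual WMS address
    resp = requests.get("https://neo.gsfc.nasa.gov/wms/wms", params=params, timeout=60)
    resp.raise_for_status()
    with open("neo_layer.png", "wb") as f:
        f.write(resp.content)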
Reprocessing of multi-channel seismic-reflection data collected in the Beaufort Sea
Agena, W.F.; Lee, Myung W.; Hart, P.E.
2000-01-01
Contained on this set of two CD-ROMs are stacked and migrated multi-channel seismic-reflection data for 65 lines recorded in the Beaufort Sea by the United States Geological Survey in 1977. All data were reprocessed by the USGS using updated processing methods, resulting in improved interpretability. Each of the two CD-ROMs contains the following files: 1) 65 files containing the digital seismic data in standard SEG-Y format; 2) 1 file containing navigation data for the 65 lines in standard SEG-P1 format; 3) an ASCII text file with cross-reference information for relating the sequential trace numbers on each line to cdp numbers and shotpoint numbers; 4) 2 small-scale graphic images (stacked and migrated) of a segment of line 722 in Adobe Acrobat® PDF format; 5) a graphic image of the location map, generated from the navigation file; 6) PlotSeis, an MS-DOS application that allows PC users to interactively view the SEG-Y files; 7) a PlotSeis documentation file; and 8) an explanation of the processing used to create the final seismic sections (this document).
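For orientation, a minimal sketch of inspecting one of the SEG-Y files; byte positions follow the standard SEG-Y layout (a 3200-byte EBCDIC textual header followed by a 400-byte binary header), and the file name is hypothetical.

    import struct

    with open("line722.sgy", "rb") as f:
        textual = f.read(3200)  # EBCDIC card-image header
        binary = f.read(400)    # binary reel header

    # Sample interval (microseconds) and samples per trace sit at binary-header
    # offsets 16 and 20 (file bytes 3217-3218 and 3221-3222), big-endian int16
    dt, ns = struct.unpack(">hh", binary[16:18] + binary[20:22])
    print(textual.decode("cp037")[:80])  # first 80-character card of the header
    print(f"sample interval {dt} us, {ns} samples per trace")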
Manoukis, Nicholas C
2007-07-01
There has been a great increase in both the number of population genetic analysis programs and the size of the data sets being studied with them. Since the file formats required by the most popular and useful programs vary, automated reformatting or conversion between them is desirable. formatomatic is an easy-to-use program that can read allelic data files in genepop, raw (csv), or convert formats and create data files in nine formats: raw (csv), arlequin, genepop, immanc/bayesass+, migrate, newhybrids, msvar, baps, and structure. Use of formatomatic should greatly reduce time spent reformatting data sets and avoid unnecessary errors.
File formats commonly used in mass spectrometry proteomics.
Deutsch, Eric W
2012-12-01
The application of mass spectrometry (MS) to the analysis of proteomes has enabled the high-throughput identification and abundance measurement of hundreds to thousands of proteins per experiment. However, the formidable informatics challenge associated with analyzing MS data has required a wide variety of data file formats to encode the complex data types associated with MS workflows. These formats encompass the encoding of input instruction for instruments, output products of the instruments, and several levels of information and results used by and produced by the informatics analysis tools. A brief overview of the most common file formats in use today is presented here, along with a discussion of related topics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorokine, Alexandre
2011-10-01
The Simple Ontology Format (SOFT) library and file format specification provide a set of simple tools for developing and maintaining ontologies. The library, implemented as a Perl module, supports parsing and verification of files in SOFT format, operations on ontologies (adding, removing, or filtering entities), and conversion of ontologies into other formats. SOFT allows users to quickly create ontologies using only a basic text editor, verify them, and portray them in a graph layout system using customized styles.
76 FR 47606 - Sport Fishing and Boating Partnership Council
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-05
... the following formats: One hard copy with original signature, and one electronic copy via e-mail (acceptable file formats are Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or rich text file...
78 FR 19152 - Revisions to Modeling, Data, and Analysis Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-29
... processing software should be filed in native applications or print-to-PDF format and not in a scanned format...,126 (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D.C. Cir. 2009). 3. In March 2007, the... print-to-PDF format and not in a scanned format. Commenters filing electronically do not need to make a...
76 FR 75898 - Sport Fishing and Boating Partnership Council
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-05
... following formats: One hard copy with original signature, and one electronic copy via email (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Please submit your statement to Douglas Hobbs, Council Coordinator (see FOR FURTHER...
14 CFR 221.195 - Requirement for filing printed material.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Electronically Filed Tariffs § 221.195 Requirement for filing printed material. (a) Any tariff, or revision thereto, filed in paper format which accompanies....190(b). Further, such paper tariff, or revision thereto, shall be filed in accordance with the...
18 CFR 35.7 - Electronic filing requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...
18 CFR 35.7 - Electronic filing requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...
18 CFR 35.7 - Electronic filing requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...
18 CFR 35.7 - Electronic filing requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Electronic filing... § 35.7 Electronic filing requirements. (a) General rule. All filings made in proceedings initiated... declarations or statements and electronic signatures. (c) Format requirements for electronic filing. The...
NIMBUS 7 Earth Radiation Budget (ERB) Matrix User's Guide. Volume 2: Tape Specifications
NASA Technical Reports Server (NTRS)
Ray, S. N.; Vasanth, K. L.
1984-01-01
The ERB MATRIX tape is generated by an IBM 3081 computer program and is a 9-track, 1600-BPI tape. The gross format of the tape, given on Page 1, shows an initial standard header file followed by data files. The standard header file contains two standard header records. A trailing documentation file (TDF) is the last file on the tape. Pages 9 through 17 describe, in detail, the standard header file and the TDF. The data files contain data for 37 different ERB parameters. Each file has data based on either a daily, 6-day cyclic, or monthly time interval. There are three types of physical records in the data files: the world grid physical record, the documentation mercator/polar map projection physical record, and the monthly calibration physical record. The manner in which the data for the 37 ERB parameters are stored in the physical records comprising the data files is given in the gross format section.
Extracting the Data From the LCM vk4 Formatted Output File
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
These are slides about extracting the data from the LCM vk4 formatted output file. The following is covered: the vk4 file produced by Keyence VK software; custom analysis (there is no off-the-shelf way to read the file); reading the binary data in a vk4 file; various offsets in decimal lines; finding the height image data directly in MATLAB; binary output at the beginning of the height image data; color image information; color image binary data; color image decimal and binary data; MATLAB code to read a vk4 file (choose a file, read the file, compute offsets, read the optical image and laser optical image, read and compute the laser intensity image, read the height image, timing, display the height image, display the laser intensity image, display the RGB laser optical images, display the RGB optical images, display beginning data and save images to the workspace, gamma correction subroutine); reading intensity from the vk4 file (linear in the low range, linear in the high range); gamma correction for vk4 files; computing the gamma intensity correction; and observations.
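Although the slides use MATLAB, the offset-based reading they describe looks like the following Python sketch; the offset-table position, field order, and file name are assumptions for illustration, not the actual vk4 layout.

    import struct

    with open("scan.vk4", "rb") as f:
        data = f.read()

    # Hypothetical: suppose the header stores a little-endian uint32 pointer to
    # the height-image block at byte 42, and that block starts with width/height
    (height_block,) = struct.unpack_from("<I", data, 42)
    width, height = struct.unpack_from("<II", data, height_block)

    # Height pixels as 32-bit unsigned integers immediately after the dimensions
    pixels = struct.unpack_from(f"<{width * height}I", data, height_block + 8)
    print(f"height image: {width} x {height}, first pixel {pixels[0]}")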
NASA Astrophysics Data System (ADS)
Gordov, Evgeny; Okladnikov, Igor; Titov, Alexander
2017-04-01
For comprehensive usage of large geospatial meteorological and climate datasets it is necessary to create a distributed software infrastructure based on the spatial data infrastructure (SDI) approach. Currently, it is generally accepted that the development of client applications as integrated elements of such infrastructure should be based on modern web and GIS technologies. The paper describes the Web GIS for complex processing and visualization of geospatial (mainly NetCDF and PostGIS) datasets as an integral part of the dedicated Virtual Research Environment for comprehensive study of ongoing and possible future climate change and analysis of its implications, providing full information and computing support for the study of economic, political, and social consequences of global climate change at the global and regional levels. The Web GIS consists of two basic software parts: 1. A server-side part comprising PHP applications of the SDI geoportal, realizing the functionality of interaction with the computational core backend and the WMS/WFS/WPS cartographical services, and implementing an open API for browser-based client software. Being the secondary one, this part provides a limited set of procedures accessible via a standard HTTP interface. 2. A front-end part representing the Web GIS client, developed according to the "single page application" approach on the basis of the JavaScript libraries OpenLayers (http://openlayers.org/), ExtJS (https://www.sencha.com/products/extjs), and GeoExt (http://geoext.org/). It implements the application business logic and provides an intuitive user interface similar to that of such popular desktop GIS applications as uDig and QuantumGIS. The Boundless/OpenGeo architecture was used as a basis for the Web GIS client development. In line with general INSPIRE requirements for data visualization, the Web GIS provides such standard functionality as data overview, image navigation, scrolling, scaling and graphical overlay, and display of map legends and corresponding metadata. The specialized Web GIS client contains three basic tiers: a tier of NetCDF metadata in JSON format; a middleware tier of JavaScript objects implementing methods to work with NetCDF metadata, the XML file of the selected calculation configuration (XML task), and the WMS/WFS/WPS cartographical services; and a graphical user interface tier of JavaScript objects realizing the general application business logic. The Web GIS supports launching computational processing services for tasks in the area of environmental monitoring, and presents calculation results in the form of WMS/WFS cartographical layers in raster (PNG, JPG, GeoTIFF), vector (KML, GML, Shape), and binary (NetCDF) formats. It has shown its effectiveness in solving real climate change research problems and disseminating investigation results in cartographical formats. The work is supported by the Russian Science Foundation grant No 16-19-10257.
Dragly, Svenn-Arne; Hobbi Mobarhan, Milad; Lepperød, Mikkel E.; Tennøe, Simen; Fyhn, Marianne; Hafting, Torkel; Malthe-Sørenssen, Anders
2018-01-01
Natural sciences generate an increasing amount of data in a wide range of formats developed by different research groups and commercial companies. At the same time there is a growing desire to share data along with publications in order to enable reproducible research. Open formats have publicly available specifications which facilitate data sharing and reproducible research. Hierarchical Data Format 5 (HDF5) is a popular open format widely used in neuroscience, often as a foundation for other, more specialized formats. However, drawbacks related to HDF5's complex specification have initiated a discussion for an improved replacement. We propose a novel alternative, the Experimental Directory Structure (Exdir), an open specification for data storage in experimental pipelines which amends drawbacks associated with HDF5 while retaining its advantages. HDF5 stores data and metadata in a hierarchy within a complex binary file which, among other things, is not human-readable, not optimal for version control systems, and lacks support for easy access to raw data from external applications. Exdir, on the other hand, uses file system directories to represent the hierarchy, with metadata stored in human-readable YAML files, datasets stored in binary NumPy files, and raw data stored directly in subdirectories. Furthermore, storing data in multiple files makes it easier to track for version control systems. Exdir is not a file format in itself, but a specification for organizing files in a directory structure. Exdir uses the same abstractions as HDF5 and is compatible with the HDF5 Abstract Data Model. Several research groups are already using data stored in a directory hierarchy as an alternative to HDF5, but no common standard exists. This complicates and limits the opportunity for data sharing and development of common tools for reading, writing, and analyzing data. Exdir facilitates improved data storage, data sharing, reproducible research, and novel insight from interdisciplinary collaboration. With the publication of Exdir, we invite the scientific community to join the development to create an open specification that will serve as many needs as possible and as a foundation for open access to and exchange of data. PMID:29706879
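For illustration, a minimal sketch using the exdir Python package's h5py-like interface described above; the mode argument, file name, and dataset names are assumptions.

    import exdir
    import numpy as np

    f = exdir.File("session.exdir", "w")     # creates a directory, not a single binary file
    grp = f.require_group("recording1")
    grp.attrs["experimenter"] = "A. Author"  # stored in a human-readable YAML file
    grp.create_dataset("lfp", data=np.zeros((1000, 32)))  # stored as a binary NumPy file
    f.close()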
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-28
.... Since the publication of that rule, it has come to DOE's attention that, due to a technical oversight, a...-percent confidence limit of the true mean (X_L) divided by 0.97, i.e., [GRAPHIC] [TIFF OMITTED] TR28AP10...
ISA-TAB-Nano: a specification for sharing nanomaterial research data in spreadsheet-based format.
Thomas, Dennis G; Gaheen, Sharon; Harper, Stacey L; Fritts, Martin; Klaessig, Fred; Hahn-Dantona, Elizabeth; Paik, David; Pan, Sue; Stafford, Grace A; Freund, Elaine T; Klemm, Juli D; Baker, Nathan A
2013-01-14
Background and motivation: The high-throughput genomics communities have been successfully using standardized spreadsheet-based formats to capture and share data within labs and among public repositories. The nanomedicine community has yet to adopt similar standards to share the diverse and multi-dimensional types of data (including metadata) pertaining to the description and characterization of nanomaterials. Owing to the lack of standardization in representing and sharing nanomaterial data, most of the data currently shared via publications and data resources are incomplete, poorly-integrated, and not suitable for meaningful interpretation and re-use of the data. Specifically, in its current state, data cannot be effectively utilized for the development of predictive models that will inform the rational design of nanomaterials. Results: We have developed a specification called ISA-TAB-Nano, which comprises four spreadsheet-based file formats for representing and integrating various types of nanomaterial data. Three file formats (Investigation, Study, and Assay files) have been adapted from the established ISA-TAB specification, while the Material file format was developed de novo to more readily describe the complexity of nanomaterials and associated small molecules. In this paper, we have discussed the main features of each file format and how to use them for sharing nanomaterial descriptions and assay metadata. Conclusion: The ISA-TAB-Nano file formats provide a general and flexible framework to record and integrate nanomaterial descriptions, assay data (metadata and endpoint measurements) and protocol information. Like ISA-TAB, ISA-TAB-Nano supports the use of ontology terms to promote standardized descriptions and to facilitate search and integration of the data. The ISA-TAB-Nano specification has been submitted as an ASTM work item to obtain community feedback and to provide a nanotechnology data-sharing standard for public development and adoption. PMID:23311978
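Because the four file types are plain tab-delimited spreadsheets, they can be inspected with standard tooling; a minimal sketch in Python (the file name and column headers below are hypothetical placeholders, not taken from the specification):

    # Read a hypothetical ISA-TAB-Nano Material file as tab-separated values.
    import csv

    with open("m_nanoparticle.txt", newline="", encoding="utf-8") as fh:
        reader = csv.DictReader(fh, delimiter="\t")
        for row in reader:
            # Column names here are illustrative placeholders only.
            print(row.get("Material Name"), row.get("Term Source REF"))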
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... via the Board's e-filing format or in the traditional paper format. Any person using e-filing should attach a document and otherwise comply with the instructions at the E-FILING link on the Board's Web site... U.S.C. 554(e). DRGHF requests that the Board issue an order declaring that municipal zoning law is...
Johnsen Lind, Andreas; Helge Johnsen, Bjorn; Hill, Labarron K; Sollers III, John J; Thayer, Julian F
2011-01-01
The aim of the present manuscript is to present a user-friendly and flexible platform for transforming Kubios HRV output files to the .xls file format used by MS Excel. The program utilizes either native or bundled Java and is platform-independent and mobile. This means that it can run without being installed on a computer. It also has an option for continuous transfer of data, meaning that it can run in the background while Kubios produces output files. The program checks for changes in the file structure and automatically updates the .xls output file.
5 CFR 1201.14 - Electronic filing procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal..., or by uploading the supporting documents in the form of one or more PDF files in which each...
C2x: A tool for visualisation and input preparation for CASTEP and other electronic structure codes
NASA Astrophysics Data System (ADS)
Rutter, M. J.
2018-04-01
The c2x code fills two distinct roles. Its first role is in acting as a converter between the binary format .check files from the widely-used CASTEP [1] electronic structure code and various visualisation programs. Its second role is to manipulate and analyse the input and output files from a variety of electronic structure codes, including CASTEP, ONETEP and VASP, as well as the widely-used 'Gaussian cube' file format. Analysis includes symmetry analysis, and manipulation includes arbitrary cell transformations. It continues to be under development, with growing functionality, and is written in a form which would make it easy to extend it to work directly with files from other electronic structure codes. Data which c2x is capable of extracting from CASTEP's binary checkpoint files include charge densities, spin densities, wavefunctions, relaxed atomic positions, forces, the Fermi level, the total energy, and symmetry operations. It can recreate .cell input files from checkpoint files. Volumetric data can be output in formats usable by many common visualisation programs, and c2x will itself calculate integrals, expand data into supercells, and interpolate data via combinations of Fourier and trilinear interpolation. It can extract data along arbitrary lines (such as lines between atoms) as 1D output. C2x is able to convert between several common formats for describing molecules and crystals, including the .cell format of CASTEP. It can construct supercells, reduce cells to their primitive form, and add specified k-point meshes. It uses the spglib library [2] to report symmetry information, which it can add to .cell files. C2x is a command-line utility, so is readily included in scripts. It is available under the GPL and can be obtained from http://www.c2x.org.uk. It is believed to be the only open-source code which can read CASTEP's .check files, so it will have utility in other projects.
Development of web-GIS system for analysis of georeferenced geophysical data
NASA Astrophysics Data System (ADS)
Okladnikov, I.; Gordov, E. P.; Titov, A. G.; Bogomolov, V. Y.; Genina, E.; Martynova, Y.; Shulgina, T. M.
2012-12-01
Georeferenced datasets (meteorological databases, modeling and reanalysis results, remote sensing products, etc.) are currently actively used in numerous applications, including modeling, interpretation, and forecasting of climatic and ecosystem changes at various spatial and temporal scales. Due to the inherent heterogeneity of environmental datasets, as well as their huge size, which can reach tens of terabytes for a single dataset, present-day studies in the area of climate and environmental change require special software support. A dedicated web-GIS information-computational system for analysis of georeferenced climatological and meteorological data has been created. The information-computational system consists of four basic parts: a computational kernel developed using GNU Data Language (GDL), a set of PHP controllers run within a specialized web portal, JavaScript class libraries for development of typical components of a web mapping application graphical user interface (GUI) based on AJAX technology, and an archive of geophysical datasets. The computational kernel comprises a number of dedicated modules for querying and extraction of data, mathematical and statistical data analysis, visualization, and preparation of output files in GeoTIFF and netCDF formats containing processing results. The specialized web portal consists of the Apache web server, the OGC-standards-compliant GeoServer software, which is used as a base for presenting cartographical information over the Web, and a set of PHP controllers implementing the web-mapping application logic and governing the computational kernel. The JavaScript libraries aimed at graphical user interface development are based on the GeoExt library, combining the ExtJS framework and OpenLayers software. The archive of geophysical data consists of a number of structured environmental datasets represented by data files in netCDF, HDF, GRIB, and ESRI Shapefile formats. Available for processing by the system are: two editions of the NCEP/NCAR Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, the ECMWF ERA-40 Reanalysis, the ECMWF ERA Interim Reanalysis, the MRI/JMA APHRODITE's Water Resources Project Reanalysis, the DWD Global Precipitation Climatology Centre's data, the GMAO Modern Era-Retrospective analysis for Research and Applications, meteorological observational data for the territory of the former USSR for the 20th century, results of modeling by global and regional climatological models, and others. The system is already involved in the scientific research process. In particular, the system was recently used successfully for analysis of Siberian climate changes and their impact in the region. The web-GIS information-computational system for geophysical data analysis provides specialists involved in multidisciplinary research projects with reliable and practical instruments for complex analysis of climate and ecosystem changes on global and regional scales. Using it, even an unskilled user without specific knowledge can perform computational processing and visualization of large meteorological, climatological, and satellite monitoring datasets through a unified web interface in a common graphical web browser. This work is partially supported by the Ministry of Education and Science of the Russian Federation (contract #07.514.114044), projects IV.31.1.5, IV.31.2.7, RFBR grants #10-07-00547a and #11-05-01190a, and integrated project SB RAS #131.
File Formats Commonly Used in Mass Spectrometry Proteomics*
Deutsch, Eric W.
2012-01-01
The application of mass spectrometry (MS) to the analysis of proteomes has enabled the high-throughput identification and abundance measurement of hundreds to thousands of proteins per experiment. However, the formidable informatics challenge associated with analyzing MS data has required a wide variety of data file formats to encode the complex data types associated with MS workflows. These formats encompass the encoding of input instruction for instruments, output products of the instruments, and several levels of information and results used by and produced by the informatics analysis tools. A brief overview of the most common file formats in use today is presented here, along with a discussion of related topics. PMID:22956731
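As one concrete example of reading such a format, the sketch below iterates over spectra in an mzML file using the third-party pyteomics library (the library choice and file name are assumptions of this example, not the review's):

    # Iterate over spectra in an mzML file, assuming the pyteomics package.
    from pyteomics import mzml

    with mzml.read("run01.mzML") as reader:
        for spectrum in reader:
            # Each spectrum is a dict-like record holding numpy arrays.
            mz = spectrum["m/z array"]
            intensity = spectrum["intensity array"]
            print(spectrum["id"], len(mz), intensity.max())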
75 FR 47624 - Sport Fishing and Boating Partnership Council
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-06
... Coordinator in both of the following formats: One hard copy with original signature, and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). In order to attend this meeting, you must register by...
Performance regression manager for large scale systems
Faraj, Daniel A.
2017-10-17
System and computer program product to perform an operation comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
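The comparison operation reads naturally as a short script; the sketch below is a generic illustration (the metric name, file names, and the "key=value" output format are all hypothetical, as the abstract does not specify them):

    # Compare one performance metric between two runs of the same command,
    # assuming each output file holds "metric=value" lines in a fixed format.
    def read_metric(path: str, metric: str) -> float:
        with open(path) as fh:
            for line in fh:
                key, _, value = line.partition("=")
                if key.strip() == metric:
                    return float(value)
        raise KeyError(f"{metric} not found in {path}")

    baseline = read_metric("run1.out", "throughput")
    current = read_metric("run2.out", "throughput")
    print(f"throughput: {baseline} -> {current} ({current - baseline:+.2f})")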
Performance regression manager for large scale systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
Methods comprising generating, based on a first output generated by a first execution instance of a command, a first output file specifying a value of at least one performance metric, wherein the first output file is formatted according to a predefined format, comparing the value of the at least one performance metric in the first output file to a value of the performance metric in a second output file, the second output file having been generated based on a second output generated by a second execution instance of the command, and outputting for display an indication of a result of the comparison of the value of the at least one performance metric of the first output file to the value of the at least one performance metric of the second output file.
Efficient stereoscopic contents file format on the basis of ISO base media file format
NASA Astrophysics Data System (ADS)
Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon
2009-02-01
A lot of 3D contents have been widely used for multimedia services; however, real 3D video contents have been adopted only for limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video contents and the limitation of display devices available in the market. However, diverse types of display devices for stereoscopic video contents have recently been released in the market. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses, and also, as a content creator, to take stereoscopic images or record stereoscopic video contents. However, a user can only store and display these acquired stereoscopic contents on his/her own devices due to the non-existence of a common file format for these contents. This limitation prevents users from sharing their contents with other users, which makes it difficult for the market for stereoscopic contents to expand. Therefore, this paper proposes a common file format on the basis of the ISO base media file format for stereoscopic contents, which enables users to store and exchange pure stereoscopic contents. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
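Because the proposal builds on the ISO base media file format, the container is the familiar sequence of boxes, each prefixed by a 32-bit big-endian size and a four-character type; a minimal, illustrative box walker (not the authors' implementation) follows:

    # Walk the top-level boxes of an ISO base media file (e.g., an .mp4).
    import struct

    with open("stereo_clip.mp4", "rb") as fh:
        while True:
            header = fh.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size < 8:
                # Sizes 0 and 1 signal "to end of file" and a 64-bit size;
                # a full parser would handle them, this sketch just stops.
                break
            print(box_type.decode("ascii", "replace"), size)
            fh.seek(size - 8, 1)  # size includes the 8-byte header; skip payload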
75 FR 5066 - Commission Information Collection Activities (FERC Form 60,1
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... corresponding dockets and collection numbers.) Comments may be filed either electronically or in paper format. Those persons filing electronically do not need to make a paper filing. Documents filed electronically... acknowledgement to the sender's e-mail address upon receipt of comments. For paper filings, the comments should ...
NASA Astrophysics Data System (ADS)
Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.
2017-12-01
Hyperparameterization of statistical models, i.e., automated model scoring and selection using methods such as evolutionary algorithms, grid searches, and randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS). There Elm uses the NSGA-2 multiobjective optimization algorithm to optimize statistical preprocessing of the forcing data to improve goodness-of-fit for statistical models (i.e., feature engineering). This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-2 is being used to automate the selection of soil moisture forecast statistical models for North America.
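As a generic illustration of one of the search strategies named above (not Elm's own API, which the abstract does not detail), a randomized hyperparameter search with scikit-learn looks roughly like this:

    # Randomized hyperparameter search, the generic technique named in the abstract.
    from scipy.stats import randint
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import RandomizedSearchCV

    X, y = make_regression(n_samples=200, n_features=10, random_state=0)
    search = RandomizedSearchCV(
        RandomForestRegressor(random_state=0),
        param_distributions={"n_estimators": randint(10, 200),
                             "max_depth": randint(2, 12)},
        n_iter=20, cv=3, random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)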
Leveraging Metadata to Create Interactive Images... Today!
NASA Astrophysics Data System (ADS)
Hurt, Robert L.; Squires, G. K.; Llamas, J.; Rosenthal, C.; Brinkworth, C.; Fay, J.
2011-01-01
The image gallery for NASA's Spitzer Space Telescope has been newly rebuilt to fully support the Astronomy Visualization Metadata (AVM) standard to create a new user experience both on the website and in other applications. We encapsulate all the key descriptive information for a public image, including color representations and astronomical and sky coordinates and make it accessible in a user-friendly form on the website, but also embed the same metadata within the image files themselves. Thus, images downloaded from the site will carry with them all their descriptive information. Real-world benefits include display of general metadata when such images are imported into image editing software (e.g. Photoshop) or image catalog software (e.g. iPhoto). More advanced support in Microsoft's WorldWide Telescope can open a tagged image after it has been downloaded and display it in its correct sky position, allowing comparison with observations from other observatories. An increasing number of software developers are implementing AVM support in applications and an online image archive for tagged images is under development at the Spitzer Science Center. Tagging images following the AVM offers ever-increasing benefits to public-friendly imagery in all its standard forms (JPEG, TIFF, PNG). The AVM standard is one part of the Virtual Astronomy Multimedia Project (VAMP); http://www.communicatingastronomy.org
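AVM tags travel inside the image file as an embedded XMP packet, an XML block delimited by x:xmpmeta tags; a crude but self-contained way to inspect it is sketched below (a simplification; a production tool would use a proper XMP library):

    # Extract the embedded XMP packet (which carries AVM tags) from an image file.
    def read_xmp(path):
        data = open(path, "rb").read()
        start = data.find(b"<x:xmpmeta")
        end = data.find(b"</x:xmpmeta>")
        if start == -1 or end == -1:
            return None  # no XMP packet found
        return data[start:end + len(b"</x:xmpmeta>")].decode("utf-8", "replace")

    xmp = read_xmp("spitzer_image.jpg")  # file name is illustrative
    if xmp:
        print(xmp[:200])  # AVM fields appear as avm:-prefixed XML properties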
Predicted seafloor facies of Central Santa Monica Bay, California
Dartnell, Peter; Gardner, James V.
2004-01-01
Summary -- Mapping surficial seafloor facies (sand, silt, muddy sand, rock, etc.) should be the first step in marine geological studies and is crucial when modeling sediment processes, pollution transport, deciphering tectonics, and defining benthic habitats. This report outlines an empirical technique that predicts the distribution of seafloor facies for a large area offshore Los Angeles, CA using high-resolution bathymetry and co-registered, calibrated backscatter from multibeam echosounders (MBES) correlated to ground-truth sediment samples. The technique uses a series of procedures that involve supervised classification and a hierarchical decision tree classification that are now available in advanced image-analysis software packages. Derivative variance images of both bathymetry and acoustic backscatter are calculated from the MBES data and then used in a hierarchical decision-tree framework to classify the MBES data into areas of rock, gravelly muddy sand, muddy sand, and mud. A quantitative accuracy assessment on the classification results is performed using ground-truth sediment samples. The predicted facies map is also ground-truthed using seafloor photographs and high-resolution sub-bottom seismic-reflection profiles. This Open-File Report contains the predicted seafloor facies map as a georeferenced TIFF image along with the multibeam bathymetry and acoustic backscatter data used in the study as well as an explanation of the empirical classification process.
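A schematic version of the classification step, with scikit-learn and SciPy standing in for the advanced image-analysis packages mentioned above (the arrays, training labels, and tree depth are illustrative stand-ins):

    # Hierarchical-classification sketch: derive variance images from bathymetry
    # and backscatter, then classify pixels with a decision tree.
    import numpy as np
    from scipy.ndimage import generic_filter
    from sklearn.tree import DecisionTreeClassifier

    bathy = np.random.rand(100, 100)        # stand-ins for the MBES grids
    backscatter = np.random.rand(100, 100)
    bathy_var = generic_filter(bathy, np.var, size=3)
    bs_var = generic_filter(backscatter, np.var, size=3)

    features = np.column_stack(
        [a.ravel() for a in (bathy, backscatter, bathy_var, bs_var)])
    # Ground-truth sediment samples would supply labels at known pixels;
    # here they are random placeholders.
    train_idx = np.random.choice(features.shape[0], 200, replace=False)
    labels = np.random.randint(0, 4, 200)   # 0=rock, 1=gravelly muddy sand, 2=muddy sand, 3=mud
    clf = DecisionTreeClassifier(max_depth=5).fit(features[train_idx], labels)
    facies_map = clf.predict(features).reshape(bathy.shape)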
Visualization of the Construction of Ancient Roman Buildings in Ostia Using Point Cloud Data
NASA Astrophysics Data System (ADS)
Hori, Y.; Ogawa, T.
2017-02-01
The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what is referred to as "point clouds". In addition, the visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, the re-examination of older data and new surveys of Roman construction, applying remote-sensing technology for precise and detailed measurements, yield new information that may lead to revising drawings of ancient buildings that had been adduced as evidence without any consideration of their degree of accuracy, and can ultimately enable new research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. We therefore "skipped" much of the post-processing and focused on the images created from the meta-data, simply aligned using a tool which extended an automatic feature-matching algorithm, and a popular renderer that can provide graphic results.
NASA-IGES Translator and Viewer
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Logan, Michael A.
1995-01-01
NASA-IGES Translator (NIGEStranslator) is a batch program that translates a general IGES (Initial Graphics Exchange Specification) file to a NASA-IGES-Nurbs-Only (NINO) file. IGES is the most popular geometry exchange standard among Computer Aided Geometric Design (CAD) systems. The NINO format is a subset of IGES, implementing the simple yet most popular NURBS (Non-Uniform Rational B-Splines) representation. NIGEStranslator converts a complex IGES file to the simpler NINO file to simplify the tasks of CFD grid generation for models in CAD format. The NASA-IGES Viewer (NIGESview) is an Open-Inventor-based, highly interactive viewer/editor for NINO files. Geometry in the IGES files can be viewed, copied, transformed, deleted, and inquired. Users can use NIGEStranslator to translate IGES files from CAD systems to NINO files. The geometry then can be examined with NIGESview. Extraneous geometries can be interactively removed, and the cleaned model can be written to an IGES file, ready to be used in grid generation.
Nakamura, R; Sasaki, M; Oikawa, H; Harada, S; Tamakawa, Y
2000-03-01
To use an intranet technique to develop an information system that simultaneously supports both diagnostic reports and radiotherapy planning images. Using a file server as the gateway, a radiation oncology LAN was connected to an already operative RIS LAN. Dose-distribution images were saved in Tagged Image File Format by way of a screen dump to the file server. X-ray simulator images and portal images were saved in Encapsulated PostScript format on the file server and automatically converted to Portable Document Format. The files on the file server were automatically registered to the Web server by the search engine and were available for searching and browsing using the Web browser. It took less than a minute to register planning images. For clients, searching and browsing a file took less than 3 seconds. Over 150,000 reports and 4,000 images from a six-month period were accessible. Because the intranet technique was used, construction and maintenance were completed without specialist expertise. Prompt access to essential information about radiotherapy has been made possible by this system. It promotes public access to radiotherapy planning, which may improve the quality of treatment.
77 FR 60138 - Trinity Adaptive Management Working Group; Public Teleconference/Web-Based Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-02
... statements must be supplied to Elizabeth Hadley in one of the following formats: One hard copy with original... file formats are Adobe Acrobat PDF, MS Word, PowerPoint, or rich text file). Registered speakers who...
E-submission chronic toxicology study supplemental files
The formats and instructions in these documents are designed to be used as an example or guide for registrants to format electronic files for submission of animal toxicology data to OPP for review in support of registration and reevaluation of pesticides.
Bradley, Anthony R; Rose, Alexander S; Pavelka, Antonín; Valasatava, Yana; Duarte, Jose M; Prlić, Andreas; Rose, Peter W
2017-06-01
Recent advances in experimental techniques have led to a rapid growth in complexity, size, and number of macromolecular structures that are made available through the Protein Data Bank. This creates a challenge for macromolecular visualization and analysis. Macromolecular structure files, such as PDB or PDBx/mmCIF files, can be slow to transfer and parse, and hard to incorporate into third-party software tools. Here, we present a new binary and compressed data representation, the MacroMolecular Transmission Format, MMTF, as well as software implementations in several languages that have been developed around it, which address these issues. We describe the new format and its APIs and demonstrate that it is several times faster to parse, and about a quarter of the file size of the current standard format, PDBx/mmCIF. As a consequence of the new data representation, it is now possible to visualize structures with millions of atoms in a web browser, keep the whole PDB archive in memory, or parse it within a few minutes on average computers, which opens up a new way of thinking about how to design and implement efficient algorithms in structural bioinformatics. The PDB archive is available in the MMTF file format through web services and data downloads that are updated on a weekly basis.
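For orientation, a usage sketch with the mmtf-python reference client (the import path and the snake_case attribute names are assumptions based on the published MMTF specification; check the package documentation):

    # Fetch and decode one PDB entry in MMTF format, assuming the mmtf-python
    # package ("pip install mmtf-python"); the attribute names below are
    # assumptions mirroring the spec's structureId/numModels/numAtoms fields.
    from mmtf import fetch

    decoded = fetch("4HHB")  # downloads and decodes the compressed binary record
    print(decoded.structure_id, decoded.num_models, decoded.num_atoms)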
ChemEngine: harvesting 3D chemical structures of supplementary data from PDF files.
Karthikeyan, Muthukumarasamy; Vyas, Renu
2016-01-01
Digital access to chemical journals has resulted in a vast array of molecular information that is now available in supplementary material files in PDF format. However, extracting this molecular information, generally from a PDF document format, is a daunting task. Here we present an approach to harvest 3D molecular data from the supporting information of scientific research articles that are normally available from publishers' resources. In order to demonstrate the feasibility of extracting truly computable molecules from PDF file formats in a fast and efficient manner, we have developed a Java-based application, namely ChemEngine. This program recognizes textual patterns from the supplementary data and generates standard molecular structure data (bond matrix, atomic coordinates) that can be subjected to a multitude of computational processes automatically. The methodology has been demonstrated via several case studies on different formats of coordinate data stored in supplementary information files, wherein ChemEngine selectively harvested the atomic coordinates and interpreted them as molecules with high accuracy. The reusability of the extracted molecular coordinate data was demonstrated by computing single point energies that were in close agreement with the original computed data provided with the articles. It is envisaged that the methodology will enable large-scale conversion of molecular information from supplementary files available in PDF format into a collection of ready-to-compute molecular data to create an automated workflow for advanced computational processes. Software along with source code and instructions is available at https://sourceforge.net/projects/chemengine/files/?source=navbar.
Network Configuration Analysis for Formation Flying Satellites
NASA Technical Reports Server (NTRS)
Knoblock, Eric J.; Wallett, Thomas M.; Konangi, Vijay K.; Bhasin, Kul B.
2001-01-01
The performance of two networks to support autonomous multi-spacecraft formation flying systems is presented. Both systems are comprised of a ten-satellite formation, with one of the satellites designated as the central or 'mother ship.' All data is routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation, and the second system uses the IEEE 802.11 protocol architecture within the formation. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) Protocol. The results compare the IP queuing delay, IP queue size, and IP processing delay at the mother ship as well as end-to-end delay for both systems. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than that of FTP.
NASA Astrophysics Data System (ADS)
Foster, K.
1994-09-01
This document is a description of a computer program called Format( )MEDIC( )Input. The purpose of this program is to allow the user to quickly reformat wind velocity data in the Model Evaluation Database (MEDb) into a reasonable 'first cut' set of MEDIC input files (MEDIC.nml, StnLoc.Met, and Observ.Met). The user is cautioned that these resulting input files must be reviewed for correctness and completeness. This program will not format MEDb data into a Problem Station Library or Problem Metdata File. A description of how the program reformats the data is provided, along with a description of the required and optional user input and a description of the resulting output files. A description of the MEDb is not provided here but can be found in the RAS Division Model Evaluation Database Description document.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-07
... can file your comments electronically using the eFiling feature located on the Commission's Web site ( www.ferc.gov ) under the Documents & Filings link. With eFiling, you can provide comments in a variety of formats by attaching them as a file with your submission. New eFiling users must first create an...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-03
... meeting. Written statements should be supplied to the DFO in the following formats: One hard copy with original signature and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, MS Word, WordPerfect, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Submitters are...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... their consideration. Written statements should be supplied to the DFO in the following formats: one hard copy with original signature, and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-29
... statements should be supplied to the DFO in the following formats: one hard copy with original signature, and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Submitters are asked to provide...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-30
... should be supplied to the DFO in the following formats: One hard copy with original signature and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, MS Word, WordPerfect, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Submitters are asked to provide electronic...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-21
... be supplied to the DFO in the following formats: One hard copy with original signature, and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Submitters are requested to provide two...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... supplied to the DFO in the following formats: One hard copy with original signature, and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Submitters are requested to provide two versions...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-11
... supplied to the DFO in the following formats: one hard copy with original signature and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, MS Word, WordPerfect, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Submitters are asked to provide versions of each...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-25
... statements should be supplied to the DFO in the following formats: One hard copy with original signature and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Submitters are requested to provide...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-08
.... Written statements should be supplied to the DFO in the following formats: one hard copy with original signature, and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format). Submitters are asked to...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-01
... their consideration. Written statements should be supplied to the DFO in the following formats: one hard copy with original signature, and one electronic copy via e-mail (acceptable file format: Adobe Acrobat PDF, WordPerfect, MS Word, MS PowerPoint, or Rich Text files in IBM-PC/Windows 98/2000/XP format...
Miller, John J.; Agena, W.F.; Lee, M.W.; Zihlman, F.N.; Grow, J.A.; Taylor, D.J.; Killgore, Michele; Oliver, H.L.
2000-01-01
This CD-ROM contains stacked, migrated, 2-Dimensional seismic reflection data and associated support information for 22 regional seismic lines (3,470 line-miles) recorded in the National Petroleum Reserve - Alaska (NPRA) from 1974 through 1981. Together, these lines constitute about one-quarter of the seismic data collected as part of the Federal Government's program to evaluate the petroleum potential of the Reserve. The regional lines, which form a grid covering the entire NPRA, were created by combining various individual lines recorded in different years using different recording parameters. These data were reprocessed by the USGS using modern, post-stack processing techniques, to create a data set suitable for interpretation on interactive seismic interpretation computer workstations. Reprocessing was done in support of ongoing petroleum resource studies by the USGS Energy Program. The CD-ROM contains the following files: 1) 22 files containing the digital seismic data in standard SEG-Y format; 2) 1 file containing navigation data for the 22 lines in standard SEG-P1 format; 3) 22 small-scale graphic images of each seismic line in Adobe Acrobat PDF format; 4) a graphic image of the location map, generated from the navigation file, with hyperlinks to the graphic images of the seismic lines; 5) an ASCII text file with cross-reference information for relating the sequential trace numbers on each regional line to the line number and shotpoint number of the original component lines; and 6) an explanation of the processing used to create the final seismic sections (this document). The SEG-Y format seismic files and SEG-P1 format navigation file contain all the information necessary for loading the data onto a seismic interpretation workstation.
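The SEG-Y files described above follow the standard layout of a 3,200-byte EBCDIC textual header followed by a 400-byte binary header; a stdlib-only sketch for peeking at one of them (the file name is hypothetical):

    # Peek at a standard SEG-Y file: EBCDIC textual header + binary header fields.
    import struct

    with open("npra_line_01.sgy", "rb") as fh:
        textual = fh.read(3200).decode("cp037", "replace")    # EBCDIC -> text
        binary = fh.read(400)
    print(textual[:80])                                       # first "card image"
    sample_interval_us, = struct.unpack(">H", binary[16:18])  # file bytes 3217-3218
    n_samples, = struct.unpack(">H", binary[20:22])           # file bytes 3221-3222
    print(sample_interval_us, n_samples)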
17 CFR 232.14 - Paper filings not accepted without exemption.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 17 Commodity and Securities Exchanges 2 2011-04-01 2011-04-01 false Paper filings not accepted... COMMISSION REGULATION S-T-GENERAL RULES AND REGULATIONS FOR ELECTRONIC FILINGS General § 232.14 Paper filings not accepted without exemption. The Commission will not accept in paper format any filing required to...
Statistical analysis of low-voltage EDS spectrum images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, I.M.
1998-03-01
The benefits of using low (≤5 kV) operating voltages for energy-dispersive X-ray spectrometry (EDS) of bulk specimens have been explored only during the last few years. This paper couples low-voltage EDS with two other emerging areas of characterization: spectrum imaging and multivariate statistical analysis (MSA), applied here to a computer chip manufactured by a major semiconductor company. Data acquisition was performed with a Philips XL30-FEG SEM operated at 4 kV and equipped with an Oxford super-ATW detector and XP3 pulse processor. The specimen was normal to the electron beam and the take-off angle for acquisition was 35°. The microscope was operated with a 150 µm diameter final aperture at spot size 3, which yielded an X-ray count rate of ~2,000 s^-1. EDS spectrum images were acquired as Adobe Photoshop files with the 4pi plug-in module. (The spectrum images could also be stored as NIH Image files, but the raw data are automatically rescaled as maximum-contrast (0--255) 8-bit TIFF images -- even at 16-bit resolution -- which poses an inconvenience for quantitative analysis.) The 4pi plug-in module is designed for EDS X-ray mapping and allows simultaneous acquisition of maps from 48 elements plus an SEM image. The spectrum image was acquired by re-defining the energy intervals of the 48 elements to form a series of contiguous 20 eV windows from 1.25 kV to 2.19 kV. A spectrum image of 450 x 344 pixels was acquired from the specimen with a sampling density of 50 nm/pixel and a dwell time of 0.25 live seconds per pixel, for a total acquisition time of ~14 h. The binary data files were imported into Mathematica for analysis with software developed by the author at Oak Ridge National Laboratory. A 400 x 300 pixel section of the original image was analyzed. MSA required ~185 Mbytes of memory and ~18 h of CPU time on a 300 MHz Power Macintosh 9600.
Atmospheric Science Data Center
2013-12-19
UAEMIAAE Aerosol product. ( File version details ) File version F07_0015 has better ... properties. File version F08_0016 has improved cloud screening procedure resulting in better aerosol optical depth. ... Coverage: August - October 2004 File Format: HDF-EOS Tools: FTP Access: Data Pool ...
NASA Astrophysics Data System (ADS)
Yamagishi, Y.; Yanaka, H.; Tsuboi, S.
2009-12-01
We have developed a conversion tool for the data of seismic tomography into KML, called the KML generator, and made it available on the web site (http://www.jamstec.go.jp/pacific21/google_earth). The KML generator enables us to display vertical and horizontal cross sections of the model on Google Earth in a three-dimensional manner, which is useful for understanding the Earth's interior. The previous generator accepts text files of grid-point data having longitude, latitude, and seismic velocity anomaly. Each data file contains the data for one depth. Metadata, such as bibliographic reference, grid-point interval, and depth, are described in a separate information file. We did not allow users to upload their own tomographic models to the web application, because there was no standard format to represent a tomographic model. Recently the European seismology research project NERIES (Network of Research Infrastructures for European Seismology) has advocated that the data of seismic tomography should be standardized. It proposes a new format based on JSON (JavaScript Object Notation), one of the data-interchange formats, as a standard for tomography. This format consists of two parts: metadata and grid-point data values. The JSON format is powerful for handling and analyzing tomographic models, because the structure of the format is fully defined by JavaScript objects, and thus the elements are directly accessible by a script. In addition, JSON libraries exist for several programming languages. The International Federation of Digital Seismograph Networks (FDSN) adopted this format as an FDSN standard format for seismic tomographic models. This format might not only be accepted by European seismologists but could also become a world standard. Therefore we have improved our KML generator for seismic tomography to accept data files in JSON format as well. We have also improved the web application of the generator so that JSON-formatted data files can be uploaded. Users can convert any tomographic model data to KML. The KML obtained through the new generator should provide an arena to compare various tomographic models and other geophysical observations on Google Earth, which may act as a common platform for geoscience browsers.
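A sketch of what consuming such a two-part JSON file and emitting KML could look like (the field names are invented for illustration; the abstract does not reproduce the proposed schema):

    # Convert a (hypothetical) two-part tomography JSON file into a trivial KML
    # document with one placemark per grid point.
    import json

    with open("tomo_model.json") as fh:
        model = json.load(fh)                 # {"metadata": {...}, "grid": [...]}

    placemarks = []
    for pt in model["grid"]:                  # e.g. {"lon":..., "lat":..., "dvs":...}
        placemarks.append(
            f'<Placemark><name>dVs={pt["dvs"]}%</name>'
            f'<Point><coordinates>{pt["lon"]},{pt["lat"]},0</coordinates></Point>'
            f"</Placemark>"
        )
    kml = ('<?xml version="1.0" encoding="UTF-8"?>'
           '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
           + "".join(placemarks) + "</Document></kml>")
    open("tomo_model.kml", "w").write(kml)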
Smith, Steven M.
1997-01-01
The National Uranium Resource Evaluation (NURE) Hydrogeochemical and Stream Sediment Reconnaissance (HSSR) program produced a large amount of geochemical data. To fully understand how these data were generated, it is recommended that you read the History of the NURE HSSR Program for a summary of the entire program. By the time the NURE program had ended, the HSSR data consisted of 894 separate data files stored in 47 different formats. Many files contained duplication of data found in other files. The University of Oklahoma's Information Systems Programs of the Energy Resources Institute (ISP) was contracted by the Department of Energy to enhance the accessibility and usefulness of the NURE HSSR data. ISP created a single standard-format master file to replace the 894 original files. ISP converted 817 of the 894 original files before its funding apparently ran out. The ISP-reformatted NURE data files have been released by the USGS on CD-ROM (Lower 48 States, Hoffman and Buttleman, 1994; Alaska, Hoffman and Buttleman, 1996). A description of each NURE database field, derived from a draft NURE HSSR data format manual (unpubl. commun., Stan Moll, ISP, Oct 7, 1988), was included in a readme file on each CD-ROM. That original manual was incomplete and assumed that the reformatting process had gone to completion. A lot of vital information was not included. Efforts to correct that manual and the NURE data revealed a large number of problems and missing data. As a result of the frustrating process of cleaning and re-cleaning data from the ISP-reformatted NURE files, a new NURE HSSR data format was developed. This work represents a totally new attempt to reformat the original NURE files into two consistent database structures, one for water samples and a second for sediment samples, on a quadrangle-by-quadrangle basis. Although this USGS-reformatted NURE HSSR data format is different from that created by the ISP, many of their ideas were incorporated and expanded in this effort. All of the data from each quadrangle are being examined thoroughly in an attempt to eliminate problems, to combine partial or duplicate records, to convert all coding to a common scheme, and to identify problems even if they cannot be solved at this time.
SnopViz, an interactive snow profile visualization tool
NASA Astrophysics Data System (ADS)
Fierz, Charles; Egger, Thomas; Gerber, Matthias; Bavay, Mathias; Techel, Frank
2016-04-01
SnopViz is a visualization tool for both simulation outputs of the snow-cover model SNOWPACK and observed snow profiles. It has been designed to fulfil the needs of operational services (Swiss Avalanche Warning Service, Avalanche Canada) as well as offer the flexibility required to satisfy the specific needs of researchers. This JavaScript application runs in any modern browser and does not require an active Internet connection. The open source code is available for download from models.slf.ch, where examples can also be run. Both the SnopViz library and the SnopViz User Interface will become a full replacement of the current research visualization tool SN_GUI for SNOWPACK. The SnopViz library is a stand-alone application that parses the provided input files, for example, a single snow profile (CAAML file format) or multiple snow profiles as output by SNOWPACK (PRO file format). A plugin architecture allows for handling JSON objects (JavaScript Object Notation) as well, and plugins for other file formats may be added easily. The outputs are provided either as vector graphics (SVG) or as JSON objects. The SnopViz User Interface (UI) is a browser-based stand-alone interface. It runs in every modern browser, including IE, and allows user interaction with the graphs. SVG, the XML-based standard for vector graphics, was chosen because of its easy interaction with JS and good software support (Adobe Illustrator, Inkscape) for manipulating graphs outside SnopViz for publication purposes. SnopViz provides new visualizations for SNOWPACK timeline output as well as time series input and output. The existing output format for SNOWPACK timelines was retained, while time series are read from SMET files, a file format used in conjunction with the open source data handling code MeteoIO. Finally, SnopViz is able to render single snow profiles, either observed or modelled, that are provided as CAAML files. This file format (caaml.org/Schemas/V5.0/Profiles/SnowProfileIACS) is an international standard for exchanging snow profile data. It is supported by the International Association of Cryospheric Sciences (IACS) and was developed in collaboration with practitioners (Avalanche Canada).
Effect of reciprocating file motion on microcrack formation in root canals: an SEM study.
Ashwinkumar, V; Krithikadatta, J; Surendran, S; Velmurugan, N
2014-07-01
To compare dentinal microcrack formation whilst using Ni-Ti hand K-files, ProTaper hand and rotary files and the WaveOne reciprocating file. One hundred and fifty mandibular first molars were selected. Thirty teeth were left unprepared and served as controls, and the remaining 120 teeth were divided into four groups. Ni-Ti hand K-files, ProTaper hand files, ProTaper rotary files and WaveOne Primary reciprocating files were used to prepare the mesial canals. Roots were then sectioned 3, 6 and 9 mm from the apex, and the cut surface was observed under scanning electron microscope (SEM) and checked for the presence of dentinal microcracks. The control and Ni-Ti hand K-files groups were not associated with microcracks. In roots prepared with ProTaper hand files, ProTaper rotary files and WaveOne Primary reciprocating files, dentinal microcracks were present. There was a significant difference between control/Ni-Ti hand K-files group and ProTaper hand files/ProTaper rotary files/WaveOne Primary reciprocating file group (P < 0.001) with ProTaper rotary files producing the most microcracks. No significant difference was observed between teeth prepared with ProTaper hand files and WaveOne Primary reciprocating files. ProTaper rotary files were associated with significantly more microcracks than ProTaper hand files and WaveOne Primary reciprocating files. Ni-Ti hand K-files did not produce microcracks at any levels inside the root canals. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Digitized Database of Old Seismograms Recorder in Romania
NASA Astrophysics Data System (ADS)
Paulescu, Daniel; Rogozea, Maria; Popa, Mihaela; Radulian, Mircea
2016-08-01
The aim of this paper is to describe a management system for a unique Romanian database of historical seismograms and complementary documentation (metadata) and its dissemination and analysis procedure. For this study, 5188 historical seismograms recorded between 1903 and 1957 by the Romanian seismological observatories (Bucharest-Filaret, Focşani, Bacău, Vrincioaia, Câmpulung-Muscel, Iaşi) were used. In order to reconsider the historical instrumental data, the analog seismograms are converted to digital images and digital waveforms (digitization/vectorization). First, we applied a careful scanning procedure to the seismograms and related material (seismic bulletins, station books, etc.). In the next step, the high-resolution scanned seismograms will be processed to obtain the digital/numeric waveforms. We used a Colortrac SmartLF Cx40 scanner, which provides images in TIFF or JPG format. For digitization, the Teseo2 algorithm, developed by the National Institute of Geophysics and Volcanology in Rome (Italy) within the framework of the SISMOS Project, will be used.
Iurov, Iu B; Khazatskiĭ, I A; Akindinov, V A; Dovgilov, L V; Kobrinskiĭ, B A; Vorsanova, S G
2000-08-01
Original software FISHMet has been developed and tested for improving the efficiency of diagnosis of hereditary diseases caused by chromosome aberrations and for chromosome mapping by the fluorescence in situ hybridization (FISH) method. The program allows creation and analysis of pseudocolor chromosome images and hybridization signals under Windows 95, allows computer analysis and editing of the results of pseudocolor hybridization in situ, including successive superimposition of initial black-and-white images created using fluorescent filters (blue, green, and red), and allows editing of each image individually or of a summary pseudocolor image in BMP, TIFF, and JPEG formats. Components of an image computer analysis system (LOMO, Leitz Ortoplan, and Axioplan fluorescent microscopes, COHU 4910 and Sanyo VCB-3512P CCD cameras, Miro-Video, Scion LG-3 and VG-5 image capture boards, and Pentium 100 and Pentium 200 computers) and specialized software for image capture and visualization (Scion Image PC and Video-Cup) have been used with good results in the study.
DICOM to print, 35-mm slides, web, and video projector: tutorial using Adobe Photoshop.
Gurney, Jud W
2002-10-01
Preparing images for publication has traditionally dealt with film and the photographic process. With picture archiving and communication systems, many departments will no longer produce film. This will change how images are produced for publication. DICOM, the file format for radiographic images, has to be converted and then prepared for traditional publication, 35-mm slides, the newest techniques of video projection, and the World Wide Web. Tagged Image File Format is the common format for traditional print publication, whereas Joint Photographic Experts Group is the current file format for the World Wide Web. Each medium has specific requirements that can be met with a common image-editing program such as Adobe Photoshop (Adobe Systems, San Jose, CA). High-resolution images are required for print, a process that requires interpolation. However, the Internet requires images with a small file size for rapid transmission. The resolution of each output differs, and the image resolution must be optimized to match the output of the publishing medium.
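The same conversion is easy to script today; a sketch using the pydicom and Pillow packages in place of the article's Photoshop workflow (these libraries are a substitution of this example, the file names are illustrative, and an uncompressed single-frame DICOM file is assumed):

    # Convert a DICOM image to 8-bit TIFF (print) and JPEG (web).
    import numpy as np
    import pydicom
    from PIL import Image

    ds = pydicom.dcmread("chest_ct.dcm")
    arr = ds.pixel_array.astype(np.float32)
    lo, hi = float(arr.min()), float(arr.max())
    arr8 = ((arr - lo) / ((hi - lo) or 1.0) * 255).astype(np.uint8)  # window to 8-bit
    img = Image.fromarray(arr8)
    img.save("figure1.tif")               # lossless, for print publication
    img.save("figure1.jpg", quality=90)   # smaller, for the Web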
TADPLOT program, version 2.0: User's guide
NASA Technical Reports Server (NTRS)
Hammond, Dana P.
1991-01-01
The TADPLOT program, Version 2.0, is described. The TADPLOT program is a software package coordinated by a single, easy-to-use interface, enabling the researcher to access several standard file formats, selectively collect specific subsets of data, and create full-featured publication- and viewgraph-quality plots. The user interface was designed to be independent of any file format, yet provide capabilities to accommodate highly specialized data queries. Integrated with an applications software network, data can be accessed, collected, and viewed quickly and easily. Since the commands are data-independent, subsequent modifications to the file format will be transparent, while additional file formats can be integrated with minimal impact on the user interface. The graphical capabilities are independent of the method of data collection; thus, the data specification and subsequent plotting can be modified and upgraded as separate functional components. The graphics kernel selected adheres to the full functional specifications of the CORE standard. Both interface and postprocessing capabilities are fully integrated into TADPLOT.
A New Archive of UKIRT Legacy Data at CADC
NASA Astrophysics Data System (ADS)
Bell, G. S.; Currie, M. J.; Redman, R. O.; Purves, M.; Jenness, T.
2014-05-01
We describe a new archive of legacy data from the United Kingdom Infrared Telescope (UKIRT) at the Canadian Astronomy Data Centre (CADC) containing all available data from the Cassegrain instruments. The desire was to archive the raw data in as close to the original format as possible, so where the data followed our current convention of having a single data file per observation, they were archived without alteration, except for minor fixes to headers of data in FITS format to allow them to pass fitsverify and be accepted by CADC. Some of the older data comprised multiple integrations in separate files per observation, stored in either Starlink NDF or Figaro DST format. These were placed inside HDS container files, and DST files were rearranged into NDF format. The metadata describing the observations are ingested into the CAOM-2 repository via an intermediate MongoDB header database, which will also be used to guide the ORAC-DR pipeline in generating reduced data products.
Converting CSV Files to RKSML Files
NASA Technical Reports Server (NTRS)
Trebi-Ollennu, Ashitey; Liebersbach, Robert
2009-01-01
A computer program converts, into a format suitable for processing on Earth, files of downlinked telemetric data pertaining to the operation of the Instrument Deployment Device (IDD), the robot arm on each of the Mars Exploration Rovers (MERs). The raw downlinked data files are in comma-separated-value (CSV) format. The present program converts the files into Rover Kinematics State Markup Language (RKSML), an Extensible Markup Language (XML) format that facilitates representation of operations of the IDD and enables analysis of the operations by means of the Rover Sequencing Validation Program (RSVP), which is used to build sequences of commanded operations for the MERs. After conversion by means of the present program, the downlinked data can be processed by RSVP, enabling the MER downlink operations team to play back the actual IDD activity represented by the telemetric data against the planned IDD activity. Thus, the present program enhances the diagnosis of anomalies that manifest themselves as differences between actual and planned IDD activities.
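The general CSV-to-XML conversion step can be illustrated with a short sketch; the RKSML schema is not reproduced here, so the element and column names below are hypothetical stand-ins:

```python
# Read rows of telemetry from a CSV file and emit them as an XML tree.
import csv
import xml.etree.ElementTree as ET

root = ET.Element("ArmStates")                      # hypothetical root tag
with open("idd_telemetry.csv", newline="") as f:    # hypothetical input
    for row in csv.DictReader(f):
        state = ET.SubElement(root, "State", time=row["time"])
        for joint in ("joint1", "joint2", "joint3"):
            ET.SubElement(state, "Angle", name=joint).text = row[joint]

ET.ElementTree(root).write("idd_telemetry.rksml",
                           encoding="utf-8", xml_declaration=True)
```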
Fortran Program for X-Ray Photoelectron Spectroscopy Data Reformatting
NASA Technical Reports Server (NTRS)
Abel, Phillip B.
1989-01-01
A FORTRAN program has been written for use on an IBM PC/XT or AT or compatible microcomputer (personal computer, PC) that converts a column of ASCII-format numbers into a binary-format file suitable for interactive analysis on a Digital Equipment Corporation (DEC) computer running the VGS-5000 Enhanced Data Processing (EDP) software package. The incompatible floating-point number representations of the two computers were compared, and a subroutine was created to store floating-point numbers on the IBM PC in a form that can be directly read by the DEC computer. Any file transfer protocol having provision for binary data can be used to transmit the resulting file from the PC to the DEC machine. The data file header required by the EDP programs for an x-ray photoelectron spectrum is also written to the file. The user is prompted for the relevant experimental parameters, which are then properly coded into the format used internally by all of the VGS-5000 series EDP packages.
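A simplified analogue of the text-to-binary reformatting task, as a sketch only: the original program emitted DEC floating-point words, which differ from the IEEE 754 values packed here, so this illustrates the conversion step rather than the VAX encoding:

```python
# Read a column of ASCII numbers and write them as binary 32-bit floats.
import struct

with open("spectrum.txt") as src, open("spectrum.bin", "wb") as dst:
    for line in src:
        line = line.strip()
        if line:                              # skip blank lines
            dst.write(struct.pack("<f", float(line)))
```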
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-30
... file your comments electronically using the eFiling feature on the Commission's Web site ( www.ferc.gov ) under the link to Documents and Filings. With eFiling, you can provide comments in a variety of formats by attaching them as a file with your submission. New eFiling users must first create an account by...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-04
... on a project; (2) You can file your comments electronically using the eFiling feature located on the Commission's Web site ( www.ferc.gov ) under the Documents & Filings link. With eFiling, you can provide comments in a variety of formats by attaching them as a file with your submission. New eFiling users must...
A convertor and user interface to import CAD files into worldtoolkit virtual reality systems
NASA Technical Reports Server (NTRS)
Wang, Peter Hor-Ching
1996-01-01
Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered a three-dimensional, computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into it. The NASA/MSFC Computer Application Virtual Environments (CAVE) lab has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, a Polhemus Isotrak sensor, two Polhemus Fastrak sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for the user to add new sensors or a C-language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, the AutoCAD DXF and 3D Studio file formats, the Wavefront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithography (STL) file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide an easy C-language interface. Using a CAD or modelling program to build a VW for WTK VR applications, we typically construct the stationary universe with all the geometric objects except the dynamic objects, and create each dynamic object in an individual file.
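To make the file-format point concrete, here is a minimal sketch of reading one of the CAD formats WTK accepts, the Wavefront OBJ text format; materials, normals, and texture coordinates are ignored for brevity, and the file name is hypothetical:

```python
# Collect vertex positions ('v' lines) and face index lists ('f' lines)
# from a Wavefront OBJ file.
def read_obj(path):
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":                    # vertex: v x y z
                vertices.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "f":                  # face: f v1 v2 v3 ...
                # OBJ indices are 1-based and may carry /vt/vn suffixes.
                faces.append([int(p.split("/")[0]) - 1 for p in parts[1:]])
    return vertices, faces

verts, faces = read_obj("model.obj")               # hypothetical file
print(len(verts), "vertices,", len(faces), "faces")
```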
Code of Federal Regulations, 2010 CFR
2010-07-01
... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...
Code of Federal Regulations, 2013 CFR
2013-07-01
... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...
Code of Federal Regulations, 2011 CFR
2011-07-01
... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...
Code of Federal Regulations, 2014 CFR
2014-07-01
... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...
Code of Federal Regulations, 2012 CFR
2012-07-01
... that time, you must file your travel claim in the format prescribed by your agency. If the prescribed... travel claim in a specific format and must the claim be signed? 301-52.3 Section 301-52.3 Public Contracts and Property Management Federal Travel Regulation System TEMPORARY DUTY (TDY) TRAVEL ALLOWANCES...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-16
... to Docket No. IC10-542-001. Comments may be filed either electronically or in paper format. Those persons filing electronically do not need to make a paper filing. Documents filed electronically via the... sender's e-mail address upon receipt of comments. For paper filings, the comments should be submitted to...
Data File Standard for Flow Cytometry, version FCS 3.1.
Spidlen, Josef; Moore, Wayne; Parks, David; Goldberg, Michael; Bray, Chris; Bierre, Pierre; Gorombey, Peter; Hyun, Bill; Hubbard, Mark; Lange, Simon; Lefebvre, Ray; Leif, Robert; Novo, David; Ostruszka, Leo; Treister, Adam; Wood, James; Murphy, Robert F; Roederer, Mario; Sudar, Damir; Zigon, Robert; Brinkman, Ryan R
2010-01-01
The flow cytometry data file standard provides the specifications needed to completely describe flow cytometry data sets within the confines of the file containing the experimental data. In 1984, the first Flow Cytometry Standard format for data files was adopted as FCS 1.0. This standard was modified in 1990 as FCS 2.0 and again in 1997 as FCS 3.0. We report here on the next generation flow cytometry standard data file format. FCS 3.1 is a minor revision based on suggested improvements from the community. The unchanged goal of the standard is to provide a uniform file format that allows files created by one type of acquisition hardware and software to be analyzed by any other type. The FCS 3.1 standard retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. The major changes include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about originality of the data file, and support for plate and well identification in high-throughput, plate-based experiments. Please see the normative version of the FCS 3.1 specification in Supporting Information for this manuscript (or at http://www.isac-net.org/ in the Current standards section) for a complete list of changes.
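A hedged sketch of reading the fixed-layout FCS HEADER segment, based on the publicly documented layout (a 6-byte version string followed by ASCII-encoded byte offsets to the TEXT, DATA, and ANALYSIS segments); consult the normative FCS 3.1 specification before relying on these offsets:

```python
# Parse the 58-byte FCS HEADER into a version string and segment offsets.
def read_fcs_header(path):
    with open(path, "rb") as f:
        header = f.read(58)
    version = header[0:6].decode("ascii")
    fields = ("text_begin", "text_end", "data_begin",
              "data_end", "analysis_begin", "analysis_end")
    offsets = {}
    for i, name in enumerate(fields):
        raw = header[10 + 8 * i : 18 + 8 * i].strip()
        offsets[name] = int(raw) if raw else 0
    return version, offsets

version, offsets = read_fcs_header("sample.fcs")   # hypothetical file
print(version, offsets)
```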
Data File Standard for Flow Cytometry, Version FCS 3.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spidlen, Josef; Moore, Wayne; Parks, David
2009-11-10
The flow cytometry data file standard provides the specifications needed to completely describe flow cytometry data sets within the confines of the file containing the experimental data. In 1984, the first Flow Cytometry Standard format for data files was adopted as FCS 1.0. This standard was modified in 1990 as FCS 2.0 and again in 1997 as FCS 3.0. We report here on the next generation flow cytometry standard data file format. FCS 3.1 is a minor revision based on suggested improvements from the community. The unchanged goal of the standard is to provide a uniform file format that allows files created by one type of acquisition hardware and software to be analyzed by any other type. The FCS 3.1 standard retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. The major changes include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about originality of the data file, and support for plate and well identification in high-throughput, plate-based experiments. Please see the normative version of the FCS 3.1 specification in Supporting Information for this manuscript (or at http://www.isac-net.org/ in the Current standards section) for a complete list of changes.
77 FR 12843 - Fees for Sanitation Inspections of Cruise Ships
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-02
... used to determine the fees: [GRAPHIC] [TIFF OMITTED] TN02MR12.006 The average cost per inspection is multiplied by size and cost factors to determine the fee for vessels in each size category. The size and cost... exists rodent, insect, or other vermin infestations, contaminated food or water, or other sanitary...
Adding Data Management Services to Parallel File Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Scott
2015-03-04
The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit, and the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage, via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file-based ecosystem; (3) common optimizations, e.g., indexing and caching, are readily supported across several file formats, avoiding effort duplication; and (4) performance improves significantly, as data processing is integrated more tightly with data storage. Our key contributions are: SciHadoop, which explores changes to MapReduce assumptions by taking advantage of the semantics of structured data while preserving MapReduce’s failure and resource management; DataMods, which extends common abstractions of parallel file systems so they become programmable, such that they can be extended to natively support a variety of data models and can be hooked into emerging distributed runtimes such as Stanford’s Legion; and Miso, which combines Hadoop and relational data warehousing to minimize time to insight, taking into account the overhead of ingesting data into the data warehouse.
... of running) so you don't breathe as hard. Avoid busy roads and highways where PM is usually worse because of emissions from cars and trucks. For more tools to help you learn about air quality, visit Tracking Air Quality.
Deep PDF parsing to extract features for detecting embedded malware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munson, Miles Arthur; Cross, Jesse S.
2011-09-01
The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout the rest of this report. The features are extracted using an instrumented PDF viewer, and are the inputs to a prediction model that scores the likelihood of a PDF file containing malware. The prediction model is constructed from a sample of labeled data by a machine learning algorithm (specifically, decision tree ensemble learning). Preliminary experiments show that the model is able to detect half of the PDF malware in the corpus with zero false alarms. We conclude the report with suggestions for extending this work to detect a greater variety of PDF malware.
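An illustrative sketch in the spirit of the report: count a few suspicious PDF keywords in the raw bytes and train a decision tree ensemble on labeled samples. These keyword counts are simple stand-ins for the instrumented-viewer features the authors describe, and the file paths are placeholders:

```python
# Extract crude keyword-count features from PDFs and fit an ensemble.
from sklearn.ensemble import RandomForestClassifier

KEYWORDS = [b"/JavaScript", b"/JS", b"/OpenAction",
            b"/Launch", b"/EmbeddedFile", b"/AA"]

def pdf_features(path):
    data = open(path, "rb").read()
    return [data.count(k) for k in KEYWORDS]

# Labels: 1 = malicious, 0 = benign (placeholder training set).
X = [pdf_features(p) for p in ["benign1.pdf", "evil1.pdf"]]
y = [0, 1]

model = RandomForestClassifier(n_estimators=100).fit(X, y)
print(model.predict([pdf_features("unknown.pdf")]))
```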
2014-12-01
format for the orientation of a body. It further recommends supporting data be stored in a text PCK. These formats are used by the SPICE system...INTRODUCTION These file formats were developed for and are used by the SPICE system, developed by the Navigation and Ancillary Information Facility (NAIF...of NASA’s Jet Propulsion Laboratory (JPL). Most users will want to use either the SPICE libraries or CALCEPH, developed by the Institut de mécanique
Personalization of structural PDB files.
Woźniak, Tomasz; Adamiak, Ryszard W
2013-01-01
The PDB format is most commonly applied by various programs to define the three-dimensional structure of biomolecules. However, the programs often use different versions of the format. Thus far, no comprehensive solution for unifying the PDB formats has been developed. Here we present an open-source, Python-based tool called PDBinout for processing and conversion of various versions of the PDB file format for biostructural applications. Moreover, PDBinout allows users to create their own PDB versions. PDBinout is freely available under the LGPL licence at http://pdbinout.ibch.poznan.pl.
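A minimal sketch of why PDB version differences matter: the format is fixed-column, so readers slice each ATOM record at byte offsets. The column positions below follow the common PDB convention (1-based columns 31-38, 39-46, 47-54 for x, y, z); this is an illustration, not PDBinout itself:

```python
# Pull atom names and coordinates out of ATOM/HETATM records.
def atom_coordinates(path):
    coords = []
    with open(path) as f:
        for line in f:
            if line.startswith(("ATOM", "HETATM")):
                name = line[12:16].strip()
                x = float(line[30:38])
                y = float(line[38:46])
                z = float(line[46:54])
                coords.append((name, x, y, z))
    return coords

for atom in atom_coordinates("structure.pdb")[:5]:  # hypothetical file
    print(atom)
```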
14 CFR 221.30 - Passenger fares and charges.
Code of Federal Regulations, 2010 CFR
2010-01-01
... PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Manner of Filing Tariffs § 221.30 Passenger fares and charges. (a... necessary to carry out the purposes of this part, the applicant carrier to file fare tariffs in a paper format. Such waivers shall only be considered where electronic filing, compared to paper filing, is...
GEWEX-RFA Data File Format and File Naming Convention
Atmospheric Science Data Center
2016-05-20
... documentation, will be stored for each data product. Each time data is added to, removed from, or modified in the file set for a product, ... including 29 days in leap-year Februaries. Time series files containing 15-minute data should start at the top of an hour to ...
TOLNet Data Format for Lidar Ozone Profile & Surface Observations
NASA Astrophysics Data System (ADS)
Chen, G.; Aknan, A. A.; Newchurch, M.; Leblanc, T.
2015-12-01
The Tropospheric Ozone Lidar Network (TOLNet) is an interagency initiative started by NASA, NOAA, and EPA in 2011. TOLNet currently has six lidars and one ozonesonde station. TOLNet provides high-resolution spatio-temporal measurements of tropospheric (surface to tropopause) ozone and aerosol vertical profiles to address fundamental air-quality science questions. The TOLNet data format was developed by TOLNet members as a community standard for reporting ozone profile observations. The development of this new format was primarily based on the existing NDACC (Network for the Detection of Atmospheric Composition Change) format and the ICARTT (International Consortium for Atmospheric Research on Transport and Transformation) format. The main goal is to present the lidar observations in self-describing and easy-to-use data files. The TOLNet format is an ASCII format containing a general file header, individual profile headers, and the profile data; the last two components repeat for all profiles recorded in the file. The TOLNet format is both human- and machine-readable, as it adopts standard metadata entries and fixed variable names. In addition, software has been developed to check for format compliance. We present a detailed description of the TOLNet format protocol and the format-scanning software.
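A speculative reader sketch for this kind of layout: a general file header, then repeated blocks of a profile header followed by profile data. The exact TOLNet field names, header lengths, and block markers are defined by the format protocol and are not reproduced here; everything below is illustrative:

```python
# Split an ASCII file into profiles, each with its own header and data.
def read_profiles(path, file_header_lines=10, profile_header_lines=3):
    profiles, current = [], None
    with open(path) as f:
        lines = f.read().splitlines()[file_header_lines:]
    for line in lines:
        if line.startswith("#PROFILE"):            # hypothetical marker
            current = {"header": [], "data": []}
            profiles.append(current)
        elif current is not None:
            if len(current["header"]) < profile_header_lines:
                current["header"].append(line)
            else:
                current["data"].append([float(v) for v in line.split()])
    return profiles
```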
46 CFR 535.701 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., Washington, DC 20573-0001. A copy of the Monitoring Report form in Microsoft Word and Excel format may be... Monitoring Reports in the Commission's prescribed electronic format, either on diskette or CD-ROM. (e)(1) The... filed by this subpart may be filed by direct electronic transmission in lieu of hard copy. Detailed...
Standard Electronic Format Specification for Tank Characterization Data Loader Version 3.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
ADAMS, M.R.
2001-01-31
The purpose of this document is to describe the standard electronic format for data files that will be sent for entry into the Tank Characterization Database (TCD). Two different file types are needed for each data load: (1) analytical results and (2) sample descriptions.
47 CFR 1.913 - Application and notification forms; electronic and manual filing.
Code of Federal Regulations, 2011 CFR
2011-10-01
... notifications whenever possible. The files, other than the ASCII table of contents, should be in Adobe Acrobat... possible. The attachment should be uploaded via ULS in Adobe Acrobat Portable Document Format (PDF... the table of contents, should be in Adobe Acrobat Portable Document Format (PDF) whenever possible...
9 CFR 124.30 - Filing, format, and content of petitions.
Code of Federal Regulations, 2010 CFR
2010-01-01
... RESTORATION Due Diligence Petitions § 124.30 Filing, format, and content of petitions. (a) Any interested... diligence in seeking APHIS approval of the product during the regulatory review period. (b) The petition... subpart. (c) The petition must allege that the applicant failed to act with due diligence sometime during...
Viewing Files — EDRN Public Portal
In addition to standard HTML Web pages, our web site contains files in other formats. You may need additional software or browser plug-ins to view some of the information available on our site. This document lists each format, along with links to the corresponding freely available plug-ins or viewers.
Painless File Extraction: The A(rc)--Z(oo) of Internet Archive Formats.
ERIC Educational Resources Information Center
Simmonds, Curtis
1993-01-01
Discusses extraction programs needed to postprocess software downloaded from the Internet that has been archived and compressed for the purposes of storage and file transfer. Archiving formats for DOS, Macintosh, and UNIX operating systems are described; and cross-platform compression utilities are explained. (LRW)
SiLK: A Tool Suite for Unsampled Network Flow Analysis at Scale
2014-06-01
file format,” [Accessed: Feb 9, 2014]. [Online]. Available: https: //tools.netsa.cert.org/silk/faq.html#file-formats [12] “2012 data breach investigations...report (DBIR),” Verizon, Tech. Rep., 2012. [Online]. Available: http://www.verizonenterprise.com/DBIR/2012/ [13] “2013 data breach investigations
Özyürek, Taha; Tek, Vildan; Yılmaz, Koray; Uslu, Gülşah
2017-11-01
To determine the incidence of crack formation and propagation in apical root dentin after retreatment procedures performed using ProTaper Universal Retreatment (PTR), Mtwo-R, ProTaper Next (PTN), and Twisted File Adaptive (TFA) systems. The study consisted of 120 extracted mandibular premolars. One millimeter from the apex of each tooth was ground perpendicular to the long axis of the tooth, and the apical surface was polished. Twenty teeth served as the negative control group. One hundred teeth were prepared, obturated, and then divided into 5 retreatment groups. The retreatment procedures were performed using the following files: PTR, Mtwo-R, PTN, TFA, and hand files. After filling material removal, apical enlargement was done using apical size 0.50 mm ProTaper Universal (PTU), Mtwo, PTN, TFA, and hand files. Digital images of the apical root surfaces were recorded before preparation, after preparation, after obturation, after filling removal, and after apical enlargement using a stereomicroscope. The images were then inspected for the presence of new apical cracks and crack propagation. Data were analyzed with χ² tests using SPSS 21.0 software. New cracks and crack propagation occurred in all the experimental groups during the retreatment process. Nickel-titanium rotary file systems caused significantly more apical crack formation and propagation than the hand files. The PTU system caused significantly more apical cracks than the other groups after the apical enlargement stage. This study showed that retreatment procedures and apical enlargement after the use of retreatment files can cause crack formation and propagation in apical dentin.
NASA Technical Reports Server (NTRS)
Pototzky, Anthony S.
2010-01-01
A methodology is described for generating first-order plant equations of motion for aeroelastic and aeroservoelastic applications. The description begins with the process of generating data files representing specialized mode shapes, such as rigid-body and control-surface modes, using both PATRAN and NASTRAN analysis. NASTRAN executes solution sequence 146, using numerous Direct Matrix Abstraction Program (DMAP) calls to import the mode-shape files and perform the aeroelastic response analysis. The aeroelastic response analysis calculates and extracts structural frequencies, generalized masses, frequency-dependent generalized aerodynamic force (GAF) coefficients, sensor deflections, and load coefficients as text-formatted data files. The data files are then re-sequenced and re-formatted by a custom-written FORTRAN program. The text-formatted data files are stored, and coefficients for s-plane equations are fitted to the frequency-dependent GAF coefficients using two Interaction of Structures, Aerodynamics and Controls (ISAC) programs. With tabular files created by ISAC from the stored data, MATLAB generates the first-order aeroservoelastic plant equations of motion. These equations include control-surface actuator, turbulence, sensor, and load modeling. Altitude-varying root-locus and PSD plot results for a model of the F-18 aircraft are presented to demonstrate the capability.
Extract and visualize geolocation from any text file
NASA Astrophysics Data System (ADS)
Boustani, M.
2015-12-01
There is a variety of text file formats, such as PDF and HTML, that contain references to locations (countries, cities, regions, and more). GeoParser was developed as a sub-project under DARPA Memex to help find geolocation information in crawled website data. It is a web application that uses Apache Tika to extract locations from any text file format and visualizes the geolocations on a map.
https://github.com/MBoustani/GeoParser
https://github.com/chrismattmann/tika-python
http://www.darpa.mil/program/memex
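A sketch of the extraction step GeoParser builds on: use Apache Tika (via the tika-python bindings linked above) to pull plain text out of a supported file format, then scan it against a small gazetteer. The gazetteer lookup here is a toy stand-in for real geoparsing, and the input file name is hypothetical:

```python
# Extract text with Tika, then match it against a tiny gazetteer.
from tika import parser   # requires a reachable Tika server/jar

GAZETTEER = {"Paris": (48.86, 2.35), "Cairo": (30.04, 31.24)}

parsed = parser.from_file("report.pdf")        # hypothetical input
text = parsed.get("content") or ""

found = {name: coords for name, coords in GAZETTEER.items()
         if name in text}
print(found)
```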
1999-12-01
addition, the data files saved in the POINT format can include an optional header which is compatible with Amtec Engineering’s 2-D and 3-D visualization...".DAT" file so that the file can be used directly by Amtec Engineering’s 2-D and 3-D visualization package Tecplot©. The ARRAY and POINT formats are
Can ASCII data files be standardized for Earth Science?
NASA Astrophysics Data System (ADS)
Evans, K. D.; Chen, G.; Wilson, A.; Law, E.; Olding, S. W.; Krotkov, N. A.; Conover, H.
2015-12-01
NASA's Earth Science Data Systems Working Groups (ESDSWG) were created over 10 years ago. The role of the ESDSWG is to make recommendations relevant to NASA's Earth science data systems from user experiences. Each group works independently, focusing on a unique topic. Participation in ESDSWG groups comes from a variety of NASA-funded science and technology projects, such as MEaSUREs, as well as NASA information technology experts, affiliated contractors and staff, and other interested community members from academia and industry. Recommendations from the ESDSWG groups will enhance NASA's efforts to develop long-term data products. Each year, the ESDSWG has a face-to-face meeting to discuss recommendations and future efforts. Last year's (2014) ASCII for Science Data Working Group (ASCII WG) completed its goals and made recommendations on a minimum set of information that is needed to make ASCII files at least human-readable and usable for the foreseeable future. The 2014 ASCII WG created a table of ASCII files and their components as a means of understanding what kinds of ASCII formats exist and what components they have in common. Using this table and adding information from other ASCII file formats, we will discuss the advantages and disadvantages of a standardized format. For instance, space geodesy scientists have been using the same RINEX/SINEX ASCII formats for decades, and astronomers mostly archive their data in the FITS format, yet Earth scientists seem to have a slew of ASCII formats, such as ICARTT, netCDF (as an ASCII dump), and the IceBridge ASCII format. The 2015 working group is focusing on promoting extendibility and machine readability of ASCII data. Questions have been posed, including: Can we have a standardized ASCII file format? Can it be machine-readable and simultaneously human-readable? We will present a summary of the currently used ASCII formats in terms of advantages and shortcomings, as well as potential improvements.
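A toy example of the working group's "minimum information" idea: an ASCII data file whose header makes it self-describing (variable names, units, fill value) while the body stays human-readable. The header keywords are illustrative, not a proposed standard:

```python
# Write a small self-describing ASCII data file.
header = """# title: Example surface ozone time series
# variables: time_utc, ozone_ppbv
# units: hours, ppbv
# fill_value: -999.0
"""

rows = [(0.0, 31.2), (1.0, 32.8), (2.0, -999.0)]

with open("ozone_example.txt", "w") as f:
    f.write(header)
    for t, o3 in rows:
        f.write(f"{t:.1f}, {o3:.1f}\n")
```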
As-built design specification for PARCLS
NASA Technical Reports Server (NTRS)
Tompkins, M. A. (Principal Investigator)
1981-01-01
The PARCLS program, part of the CLASFYG package, reads a parameter file created by the CLASFYG program and a pure-pixel ground truth file in order to create a classification file of three separate crop categories in universal format.
Analytic Patch Configuration (APC) gateway version 1.0 user's guide
NASA Technical Reports Server (NTRS)
Bingel, Bradford D.
1990-01-01
The Analytic Patch Configuration (APC) Gateway is an interactive software tool which translates aircraft configuration geometry files from one format into another. This initial release of the APC Gateway accommodates six formats: the four accepted APC formats (89f, 89fd, 89u, and 89ud), the PATRAN 2.x Phase 1 neutral file format, and the Integrated Aerodynamic Analysis System (IAAS) General Geometry (GG) format. Written in ANSI FORTRAN 77 and completely self-contained, the APC Gateway is very portable and has already been installed on CDC/NOS, VAX/VMS, SUN, SGI/IRIS, CONVEX, and CRAY hosts.
Integration of DICOM and openEHR standards
NASA Astrophysics Data System (ADS)
Wang, Ying; Yao, Zhihong; Liu, Lei
2011-03-01
The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval, and exchange of health data in electronic health records. Considering that the integration of DICOM and openEHR is beneficial to information sharing, we developed, on the basis of the XML-based DICOM format, a method of creating a DICOM imaging archetype in openEHR to enable the integration of the two standards. Each DICOM file contains abundant imaging information. However, because reading a DICOM file involves looking up the DICOM Data Dictionary, the readability of a DICOM file has been limited. openEHR has innovatively adopted a two-level modeling method, dividing clinical information into a lower level, the information model, and an upper level, archetypes and templates. One critical challenge posed to the development of openEHR is the information sharing problem, especially in imaging: for example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of a DICOM file and the semantic interoperability of an openEHR file, we developed a method of mapping a DICOM file to an openEHR file by adopting the archetype form defined in openEHR. Because an archetype has a tree structure, after mapping a DICOM file to an openEHR file, the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange between the two standards without losing imaging information.
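A hedged sketch of the mapping direction described: walk a DICOM data set with pydicom and emit its elements as a tree-structured XML document. The tag names below are illustrative; a real openEHR archetype follows the openEHR Archetype Definition Language, which is not reproduced here:

```python
# Emit DICOM data elements as a simple XML tree.
import xml.etree.ElementTree as ET
import pydicom

ds = pydicom.dcmread("image.dcm")                 # hypothetical file
root = ET.Element("ImagingArchetype")             # illustrative root tag

for elem in ds:
    if elem.VR == "SQ":                           # skip nested sequences
        continue
    item = ET.SubElement(root, "element",
                         tag=str(elem.tag), name=elem.name)
    item.text = str(elem.value)

ET.ElementTree(root).write("image_archetype.xml", encoding="utf-8")
```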
78 FR 13933 - Railroad Cost of Capital-2012
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-01
... by May 31, 2013. ADDRESSES: Comments may be submitted either via the Board's e-filing system or in the traditional paper format. Any person using e-filing should comply with the instructions at the E-FILING link on the Board's Web site, at http://www.stb.dot.gov . Any person submitting a filing in the...
76 FR 10430 - Railroad Cost of Capital-2010
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-24
... by June 8, 2011. ADDRESSES: Comments may be submitted either via the Board's e-filing system or in the traditional paper format. Any person using e-filing should comply with the instructions at the E-FILING link on the Board's Web site, at http://www.stb.dot.gov . Any person submitting a filing in the...
75 FR 16894 - Railroad Cost of Capital-2009
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
... 15, 2010. ADDRESSES: Comments may be submitted either via the Board's e-filing system or in the traditional paper format. Any person using e-filing should comply with the instructions at the E-FILING link on the Board's Web site, at http://www.stb.dot.gov . Any person submitting a filing in the...
5 CFR 1201.14 - Electronic filing procedures.
Code of Federal Regulations, 2014 CFR
2014-01-01
...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...
5 CFR 1201.14 - Electronic filing procedures.
Code of Federal Regulations, 2013 CFR
2013-01-01
...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...
5 CFR 1201.14 - Electronic filing procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...
5 CFR 1201.14 - Electronic filing procedures.
Code of Federal Regulations, 2012 CFR
2012-01-01
...-Appeal Online, in which case service is governed by paragraph (j) of this section, or by non-electronic... (PDF), and image files (files created by scanning). A list of formats allowed can be found at e-Appeal... representatives of the appeals in which they were filed. (j) Service of electronic pleadings and MSPB documents...
Converting Inhouse Subject Card Files to Electronic Keyword Files.
ERIC Educational Resources Information Center
Culmer, Carita M.
The library at Phoenix College developed the Controversial Issues Files (CIF), a "home made" card file containing references pertinent to specific ongoing assignments. Although the CIF had proven itself to be an excellent resource tool for beginning researchers, it was cumbersome to maintain in the card format, and was limited to very…
Networks for Autonomous Formation Flying Satellite Systems
NASA Technical Reports Server (NTRS)
Knoblock, Eric J.; Konangi, Vijay K.; Wallett, Thomas M.; Bhasin, Kul B.
2001-01-01
The performance of three communications networks to support autonomous multi-spacecraft formation flying systems is presented. All systems comprise a ten-satellite formation arranged in a star topology, with one of the satellites designated as the central or "mother ship." All data are routed through the mother ship to the terrestrial network. The first system uses a TCP/IP over ATM protocol architecture within the formation; the second system uses the IEEE 802.11 protocol architecture within the formation; and the last system uses both of the previous architectures with a constellation of geosynchronous satellites serving as an intermediate point of contact between the formation and the terrestrial network. The simulations consist of file transfers using either the File Transfer Protocol (FTP) or the Simple Automatic File Exchange (SAFE) protocol. The results compare the IP queuing delay and IP processing delay at the mother ship, as well as the application-level round-trip time, for each system. In all cases, using IEEE 802.11 within the formation yields less delay. Also, the throughput exhibited by SAFE is better than that of FTP.
37 CFR 1.615 - Format of papers filed in a supplemental examination proceeding.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 37 Patents, Trademarks, and Copyrights 1 2013-07-01 2013-07-01 false Format of papers filed in a supplemental examination proceeding. 1.615 Section 1.615 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES...
75 FR 14386 - Interpretation of Transmission Planning Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-25
... created electronically using word processing software should be filed in native applications or print-to.... FERC, 564 F.3d 1342 (DC Cir. 2009). \\6\\ Mandatory Reliability Standards for the Bulk-Power System... print-to-PDF format and not in a scanned format. Commenters filing electronically do not need to make a...
PROPOSED STANDARD TO GREATLY EXPAND PUBLIC ACCESS AND EXPLORATION OF TOXICITY DATA: EVALUATION OF STRUCTURE DATA FILE FORMAT
The ability to assess the potential toxicity of environmental, pharmaceutical, or industrial chemicals based on chemical structure in...
37 CFR 1.615 - Format of papers filed in a supplemental examination proceeding.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 37 Patents, Trademarks, and Copyrights 1 2014-07-01 2014-07-01 false Format of papers filed in a supplemental examination proceeding. 1.615 Section 1.615 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE GENERAL RULES OF PRACTICE IN PATENT CASES...
VizieR Online Data Catalog: Metal enrichment in semi-analytical model (Cousin+, 2016)
NASA Astrophysics Data System (ADS)
Cousin, M.; Buat, V.; Boissier, S.; Bethermin, M.; Roehlly, Y.; Genois, M.
2016-04-01
The repository contains outputs from the different models:
- m1: classical (only hot gas) isotropic accretion scenario + standard Schmidt-Kennicutt law
- m2: bimodal accretion (cold streams) + standard Schmidt-Kennicutt law
- m3: classical (only hot gas) isotropic accretion scenario + ad-hoc non-star-forming gas reservoir
- m4: bimodal accretion (cold streams) + ad-hoc non-star-forming gas reservoir
For each of these models, data are saved in an eGalICS_m*.fits file. All these FITS-formatted files are compatible with the TOPCAT software, available at http://www.star.bris.ac.uk/~mbt/topcat/. We also provide, for each Initial Mass Function available, a set of two FITS-formatted files associated with the chemodynamical library presented in the paper. For these two files, data are available for all metallicity bins used.
- masslossrates_IMF.fits: the instantaneous total ejecta rate associated with an SSP for the six different main-ISM elements.
- SNratesIMF.fits: the total SN rate (SNII+SNIa [nb/Gyr]) associated with an SSP; the individual contributions of SNII and SNIa are also given.
These files are available for four different IMFs: Salpeter+55 (1955ApJ...121..161S), Chabrier+03 (2003PASP..115..763C), Kroupa+93 (2001MNRAS.322..231K), and Scalo+98 (1998ASPC..142..201S). Both ejecta rates and SN rates are computed for the complete list of stellar ages provided in the BC03 spectra library. They are saved in FITS-formatted files and structured with different extensions corresponding to the different initial stellar metallicity bins. We finally provide the median star formation history, the median gas accretion history, and the metal enrichment histories associated with our MW-sisters sample: MWsistershistories.dat. If you use data associated with the eGalICS semi-analytic model, please cite the following paper: Cousin et al., 2015A&A...575A..33C, "Toward a new modelling of gas flows in a semi-analytical model of galaxy formation and evolution" (3 data files).
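A minimal sketch of reading one of the FITS-formatted model outputs with astropy (TOPCAT-compatible tables are equally readable this way); the column inspection below assumes nothing about the actual column names:

```python
# Load a FITS table and list its columns.
from astropy.table import Table

table = Table.read("eGalICS_m1.fits")     # file name from the catalog
print(table.colnames)                      # inspect available columns
```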
IVS Working Group 4: VLBI Data Structures
NASA Astrophysics Data System (ADS)
Gipson, J.
2012-12-01
I present an overview of the "openDB format" for storing, archiving, and processing VLBI data. In this scheme, most VLBI data are stored in NetCDF files. NetCDF has the advantage that there are interfaces for most common computer languages, including Fortran, Fortran 90, C, C++, and Perl, and for the most common operating systems, including Linux, Windows, and Mac. The data files for a particular session are organized by special ASCII "wrapper" files which contain pointers to the data files. This allows great flexibility in the processing and analysis of VLBI data. For example, it allows you to easily change subsets of the data used in the analysis, such as troposphere modeling, ionospheric calibration, editing, and ambiguity resolution. It also allows for extending the types of data used, e.g., source maps. I present a roadmap for the transition to this new format. The new format can already be used by VieVS and by the global mode of Solve. Plans are in work for other software packages to be able to use the new format.
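A sketch of the storage pattern described: one NetCDF file per data type, referenced from an ASCII wrapper. Variable and file names are hypothetical, and the netCDF4 Python package is assumed:

```python
# Write a small NetCDF data file and an ASCII wrapper pointing at it.
import netCDF4
import numpy as np

with netCDF4.Dataset("delay.nc", "w") as nc:
    nc.createDimension("obs", 4)
    v = nc.createVariable("group_delay", "f8", ("obs",))
    v.units = "seconds"
    v[:] = np.array([1.2e-9, 3.4e-9, 2.2e-9, 0.9e-9])

# A wrapper is just an ASCII file listing the data files to use.
with open("session.wrp", "w") as wrapper:
    wrapper.write("Observables/delay.nc\n")
```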
CONNJUR spectrum translator: an open source application for reformatting NMR spectral data.
Nowling, Ronald J; Vyas, Jay; Weatherby, Gerard; Fenwick, Matthew W; Ellis, Heidi J C; Gryk, Michael R
2011-05-01
NMR spectroscopists are hindered by the lack of standardization for spectral data among the file formats of various NMR data processing tools. This lack of standardization is cumbersome, as researchers must perform their own file conversion in order to switch between processing tools, and it restricts the combination of tools employed if no conversion option is available. The CONNJUR Spectrum Translator introduces a new, extensible architecture for spectrum translation and introduces two key algorithmic improvements. The first is translation of NMR spectral data (time and frequency domain) to a single in-memory data model, so that adding a new file format requires only two converter modules, a reader and a writer, instead of a separate converter to each existing format. Second, the use of layout descriptors allows a single FID data translation engine to be used for all formats. For the end user, sophisticated metadata readers allow conversion of the majority of files with minimal user configuration. The open source code is freely available at http://connjur.sourceforge.net for inspection and extension.
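A sketch of the architectural point: with a shared in-memory model, each format needs only a reader and a writer (2N modules) instead of a converter for every format pair (N²). The class and format names are illustrative, not CONNJUR's actual API:

```python
# Hub-and-spoke conversion through a shared in-memory model.
class Spectrum:
    """Shared in-memory model for time/frequency-domain NMR data."""
    def __init__(self, points, metadata):
        self.points, self.metadata = points, metadata

class FormatA:
    @staticmethod
    def read(path):
        # A real reader would use a layout descriptor to decode the fid.
        return Spectrum(points=[0.0, 1.0], metadata={"source": path})

class FormatB:
    @staticmethod
    def write(spectrum, path):
        with open(path, "w") as f:
            f.write(f"{spectrum.metadata}\n{spectrum.points}\n")

# Convert format A -> format B through the shared model.
FormatB.write(FormatA.read("in.fidA"), "out.fidB")
```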
Kodak's New Photo CD Portfolio: Multimedia for the Rest of Us.
ERIC Educational Resources Information Center
Bonime, Andrew
1994-01-01
Describes Photo CD Portfolio, an Eastman Kodak product that provides interactive multimedia CD-ROM production capability. The article focuses on the capabilities of the tool's simplest authoring system, Create It, which allows users to work with Photo CD, PICT, or TIFF images, add graphics, text and audio, and create menus with branching. (KRN)
Segy-change: The swiss army knife for the SEG-Y files
NASA Astrophysics Data System (ADS)
Stanghellini, Giuseppe; Carrara, Gabriela
Data collected during active and passive seismic surveys can be stored in many different, more or less standard, formats. One of the most popular is the SEG-Y format, developed since 1975 to store single-line seismic digital data on tapes, and since evolved to store them on hard disks and other media as well. Unfortunately, files that are claimed to be recorded in the SEG-Y format sometimes cannot be processed with the available free or commercial packages. Aiming to solve this impasse, we present segy-change, a pre-processing software program to view, analyze, change, and fix errors present in SEG-Y data files. It is written in C, can also be used as a software library, and is compatible with most operating systems. Segy-change allows the user to display and optionally change the values inside all parts of a SEG-Y file: the file header, the trace headers, and the data blocks. In addition, it allows the user to perform a quality check on the data by plotting the traces. We provide instructions and examples on how to use the software.
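A hedged sketch of the kind of inspection segy-change performs: pull a few fields out of the 400-byte binary header that follows the 3200-byte textual header. The offsets follow the published SEG-Y convention (sample interval at bytes 3217-3218, samples per trace at 3221-3222, data format code at 3225-3226, 1-based); verify against the standard before trusting them:

```python
# Read a few SEG-Y binary-header fields (big-endian 16-bit integers).
import struct

with open("line1.sgy", "rb") as f:        # hypothetical file
    f.seek(3200)                           # skip the EBCDIC text header
    binary_header = f.read(400)

interval, = struct.unpack(">h", binary_header[16:18])
samples, = struct.unpack(">h", binary_header[20:22])
fmt_code, = struct.unpack(">h", binary_header[24:26])
print(f"dt={interval} us, ns={samples}, format code={fmt_code}")
```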
VizieR Online Data Catalog: Opacities from the Opacity Project (Seaton+, 1995)
NASA Astrophysics Data System (ADS)
Seaton, M. J.; Yan, Y.; Mihalas, D.; Pradhan, A. K.
1997-08-01
1 CODES

1.1 Code rop.for
This code reads opacity files written in standard OP format. Its main purpose is to provide documentation on the contents of the files. This code, like the other codes provided, prompts for the name of the file (or files) to be read. The file names read in response to the prompt may have up to 128 characters.

1.2 Code opfit.for
This code reads opacity files in standard OP format and provides for interpolation of opacities to any required values of temperature and mass-density. The method used is described in OPF. The code prompts for the name of a file giving all required control parameters. As an example, the file opfit.dat is provided (users will need to change directory names and file names). The use of opfit.for is illustrated using opfit.dat. Most users will probably want to adapt opfit.for for use as a subroutine in other codes. Timings for DEC 7000 ALPHA: 0.3 sec for data read and initialisations; then 0.0007 sec for each temperature-density point. Users who like OPAL formats should note that opfit.for has a facility to produce files of OP data in OPAL-type formats.

1.3 Code ixz.for
This code provides for interpolations to any required values of X and Z. See IXZ. It prompts for the name of a file giving all required control parameters. An example of such a file is provided, ixz.dat (the user will need to change directory and file names). The output files have names s92INT.'nnn'. The user specifies the first value of nnn and the number of files to be produced.

2 DATA FILES

2.1 Data files for solar metal-mix
Data for solar metal-mix s92 as defined in SYMP. These files are from Version 2 runs of December 1994 (see IXZ for details on Version 2). There are 213 files with names s92.'nnn', 'nnn'=201 to 413. Each file occupies 83762 bytes. The file s92.version2 gives values of X (hydrogen mass-fraction) and Z (metals mass-fraction) for each value of 'nnn'. The user can get s92.version2, select the values of 'nnn' required, then get the required files s92.'nnn'. The user can see the file in ftp, displayed on the screen, by typing "get s92.version2 -". The files s92.'nnn' can be used with opfit.for to obtain opacities for any required value of temperature and mass density. Files for other metal-mixtures will be added in due course. Send requests to mjs@star.ucl.ac.uk.

2.2 Files for interpolation in X and Z
The data files have names s92xz.'mmm', where 'mmm'=001 to 096. They differ from the standard OP files (such as s92.'nnn'; section 2.1 above) in that they contain information giving derivatives of opacities with respect to X and Z. Each file s92xz.'mmm' occupies 148241 bytes. The interpolations to any required values of X and Z are made using ixz.for. Timings: on DEC 7000 ALPHA, 2.16 sec for each new-mixture file. For interpolations to some specified values of X and Z, one requires just 4 files s92xz.'mmm'. Most users will not require the complete set of files s92xz.'mmm'. The file s92xz.index includes a table (starting on line 3) giving values, for each 'mmm' file, of x,y,z (abundances by number-fractions) and X,Y,Z (abundances by mass-fractions). Users are advised to get the file s92xz.index, select the values of 'mmm' for the files required, then get those files. The files produced by ixz.for are in standard OP format and can be used with opfit.for to obtain opacities for any required values of temperature and mass density.
3 RECOMMENDED PROCEDURE FOR USE OF OPACITY FILES
(1) Get the file s92.version2. (2) If the values of X and Z you require are available in the files s92.'nnn', then get those files. (3) If not, get the file s92xz.index. (4) Select from s92xz.index the values of 'mmm' which cover the range of X and Z in which you are interested. Get those files and use ixz.for to generate files for your exact required values of X and Z. (5) Note that the exact abundance mixtures used are specified in each file (see rop.for). Also, each run of opfit.for produces a table of abundances. (6) If you want a metal-mix different from that of s92, contact mjs@star.ucl.ac.uk.

4 FUTURE DEVELOPMENTS
(1) Data for the calculation of radiative forces are provided as the CDS catalog
Web Standard: PDF - When to Use, Document Metadata, PDF Sections
PDF files provide some benefits when used appropriately. PDF files should not be used for short documents (fewer than 5 pages) unless retaining the format for printing is important. PDFs should have internal file metadata and meet Section 508 standards.
Guide to GFS History File Change on May 1, 2007
Guide to GFS History File Change on May 1, 2007 On May 1, 2007 12Z, the GFS had a major change. The change caused the internal binary GFS history file to change formats. The file is still in spectral space but now pressure is calculated in a different way. Sometime in the future, the GFS history file may be
FGGE/ERBZ tape specification and shipping letter description
NASA Technical Reports Server (NTRS)
Han, D.; Lo, H.
1983-01-01
The FGGE/ERBZ tape contains five parameters which are extracted and reformatted from the Nimbus-7 ERB Zonal Means Tape. There are three types of files on a FGGE/ERBZ tape, including a tape header file and data files. Physical characteristics, gross format, and file specifications are given. A sample tape check/document printout (shipping letter) is included.
. These tables may be defined within a separate ASCII text file (see Description and Format of BUFR Tables time, the BUFR tables are usually read from an external ASCII text file (although it is also possible reports. The ASCII text file (called /nwprod/fix/bufrtab.002 on the NCEP CCS machines
75 FR 45609 - Commission Information Collection Activities (FERC-542); Comment Request; Extension
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-03
... electronically (eFiled) or in paper format, and should refer to Docket No. IC10-542-000. Documents must be.... Commenters making an eFiling should not make a paper filing. Commenters that are not able to file electronically must send an original and two (2) paper copies of their comments to: Federal Energy Regulatory...
Global Paleoclimatic Data for 6000 Yr B.P. (1985) (NDP-011)
Webb, III, T. [Department of Geological Sciences, Brown University, Providence, Rhode Island (USA)
2012-01-01
To determine regional and global climatic variations during the past 6000 years, pollen, lake-level, and marine plankton data from 797 stations were compiled to form a global data set. Radiocarbon dating and dated tephras were used to determine the ages of the specimens. The fields available for the pollen data are site number, site name, latitude, longitude, elevation, and percentages of various taxa. For the lake-level data, the fields are site number, site name, latitude, longitude, and lake-level status. For the marine plankton data, the fields are site number, site name, latitude, longitude, water depth, date, dating control code, depth of sample, interpolated age of sample, estimated winter and summer sea-surface temperatures, and percentages of various taxa. The data are in 55 files: 5 files for each of 9 geographic regions and 10 supplemental files. The files for each region include (1) a FORMAT file describing the format and contents of the data for that region, (2) an INDEX file containing descriptive information about each site and its data, (3) a DATA file containing the data and available climatic estimates, (4) a PUBINDEX file indexing the bibliographic references associated with each site, and (5) a REFERENCE file containing the bibliographic references. The files range in size from 2 to 66 kB.
PATSTAGS - PATRAN-STAGSC-1 TRANSLATOR
NASA Technical Reports Server (NTRS)
Otte, N. E.
1994-01-01
PATSTAGS translates PATRAN finite element model data into STAGS (Structural Analysis of General Shells) input records to be used for engineering analysis. The program reads data from a PATRAN neutral file and writes STAGS input records into a STAGS input file and a UPRESS data file. It is able to support translations of nodal constraints and nodal, element, force, and pressure data. PATSTAGS uses three files: the PATRAN neutral file to be translated, a STAGS input file, and a STAGS pressure data file. The user provides the name of the neutral file and the desired names of the STAGS files to be created. The pressure data file contains the element live pressure data used in the STAGS subroutine UPRESS. PATSTAGS is written in FORTRAN 77 for DEC VAX series computers running VMS. The main memory requirement for execution is approximately 790K of virtual memory. Output blocks can be modified to output the data in any format desired, allowing the program to be used to translate model data to analysis codes other than STAGSC-1 (HQN-10967). This program is available in DEC VAX BACKUP format on a 9-track magnetic tape or TK50 tape cartridge. Documentation is included in the price of the program. PATSTAGS was developed in 1990. DEC, VAX, TK50 and VMS are trademarks of Digital Equipment Corporation.
Tsuru, Satoko; Okamine, Eiko; Takada, Aya; Watanabe, Chitose; Uchiyama, Makiko; Dannoue, Hideo; Aoyagi, Hisae; Endo, Akira
2009-01-01
The Nursing Action Master and Nursing Observation Master were released from 2002 to 2008. Two file formats, Excel and CSV, are prepared for maintaining them. The following were decided as the basic rules of maintenance: addition of new entries, revision, deletion, management numbering, and coding conventions. The masters were developed based on these rules, and we perform quality assurance for the masters using them.
2006-01-01
This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on the CD-ROM. Computers and software may import the data without the reader having to transcribe them from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).
75 FR 71625 - System Restoration Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-24
... processing software should be filed in native applications or print-to-PDF format, and not in a scanned... (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (D.C. Cir. 2009). 6. On March 16, 2007, the... electronically using word processing software should be filed in native applications or print-to-PDF format, and...
75 FR 81152 - Interpretation of Protection System Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-27
... created electronically using word processing software should be filed in native applications or print-to... reh'g & compliance, 117 FERC ] 61,126 (2006), aff'd sub nom. Alcoa, Inc. v. FERC, 564 F.3d 1342 (DC... print-to-PDF format and not in a scanned format, at http://www.ferc.gov/docs-filing/efiling.asp . Mail...
78 FR 4766 - Adoption of Updated EDGAR Filer Manual
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-23
... primarily to introduce the new EDGARLink Online submission type IRANNOTICE; and support PDF as an official... Portable Document Format (PDF) as an official filing format. EDGAR will continue to accept ASCII and HTML...) and 101 (17 CFR 232.101) of Regulation S-T and the EDGAR Filer Manual relating to the use of PDF files...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-22
... Systems in 1993 for document exchange. PDF captures formatting information from a variety of desktop publishing applications, making it possible to send formatted documents and have them appear on the recipient... Administrative Procedure Act generally requires that an agency publish an adopted rule in the Federal Register 30...
View From Within 'Perseverance Valley' on Mars
2017-12-06
This view from within "Perseverance Valley," on the inner slope of the western rim of Endeavour Crater on Mars, includes wheel tracks from the Opportunity rover's descent of the valley. The Panoramic Camera (Pancam) on Opportunity's mast took the component images of the scene during the period Sept. 4 through Oct. 6, 2017, corresponding to sols (Martian days) 4840 through 4871 of the rover's work on Mars. Perseverance Valley is a system of shallow troughs descending eastward about the length of two football fields from the crest of the crater rim to the floor of the crater. This panorama spans from northeast on the left to northwest on the right, including portions of the crater floor (eastward) in the left half and of the rim (westward) in the right half. Opportunity began descending Perseverance Valley in mid-2017 (see map) as part of an investigation into how the valley formed. Rover wheel tracks are darker brown, between two patches of bright bedrock, receding toward the horizon in the right half of the scene. This view combines multiple images taken through three different Pancam filters. The selected filters admit light centered on wavelengths of 753 nanometers (near-infrared), 535 nanometers (green) and 432 nanometers (violet). The three color bands are combined here to show approximately true color. A map and a high-resolution TIFF file are available at https://photojournal.jpl.nasa.gov/catalog/PIA22074
COMBINE archive and OMEX format: one file to share all information to reproduce a modeling project.
Bergmann, Frank T; Adams, Richard; Moodie, Stuart; Cooper, Jonathan; Glont, Mihai; Golebiewski, Martin; Hucka, Michael; Laibe, Camille; Miller, Andrew K; Nickerson, David P; Olivier, Brett G; Rodriguez, Nicolas; Sauro, Herbert M; Scharm, Martin; Soiland-Reyes, Stian; Waltemath, Dagmar; Yvon, Florent; Le Novère, Nicolas
2014-12-14
With the ever increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. We describe the Open Modeling EXchange format (OMEX). Together with the use of other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive, a single file that supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, listing the content of the archive, an optional metadata file adding information about the archive and its content, and the files describing the model. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. Several tools that support the COMBINE Archive are available, either as independent libraries or embedded in modeling software. The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails. We anticipate that the COMBINE Archive will become a significant help for modellers, as the domain moves to larger, more complex experiments such as multi-scale models of organs, digital organisms, and bioengineering.
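Because an OMEX file is an ordinary ZIP container, a minimal archive can be sketched with Python's standard zipfile module. The sketch below is illustrative only: the archive and model file names are placeholders, and the manifest entries follow the published OMEX manifest conventions (treat the exact format URIs as assumptions, not normative values).

```python
import zipfile

# A minimal COMBINE Archive: a ZIP holding a manifest plus the model file.
MANIFEST = """<?xml version="1.0" encoding="UTF-8"?>
<omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">
  <content location="." format="http://identifiers.org/combine.specifications/omex"/>
  <content location="./manifest.xml" format="http://identifiers.org/combine.specifications/omex-manifest"/>
  <content location="./model.xml" format="http://identifiers.org/combine.specifications/sbml"/>
</omexManifest>
"""

with zipfile.ZipFile("experiment.omex", "w") as archive:
    archive.writestr("manifest.xml", MANIFEST)   # required content listing
    archive.writestr("model.xml", "<sbml/>")     # placeholder model file
```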
NMReDATA, a standard to report the NMR assignment and parameters of organic compounds.
Pupier, Marion; Nuzillard, Jean-Marc; Wist, Julien; Schlörer, Nils E; Kuhn, Stefan; Erdelyi, Mate; Steinbeck, Christoph; Williams, Antony J; Butts, Craig; Claridge, Tim D W; Mikhova, Bozhana; Robien, Wolfgang; Dashti, Hesam; Eghbalnia, Hamid R; Farès, Christophe; Adam, Christian; Kessler, Pavel; Moriaud, Fabrice; Elyashberg, Mikhail; Argyropoulos, Dimitris; Pérez, Manuel; Giraudeau, Patrick; Gil, Roberto R; Trevorrow, Paul; Jeannerat, Damien
2018-04-14
Even though NMR has found countless applications in the field of small molecule characterization, there is no standard file format available for the NMR data relevant to structure characterization of small molecules. A new format is therefore introduced to associate the NMR parameters extracted from 1D and 2D spectra of organic compounds to the proposed chemical structure. These NMR parameters, which we shall call NMReDATA (for nuclear magnetic resonance extracted data), include chemical shift values, signal integrals, intensities, multiplicities, scalar coupling constants, lists of 2D correlations, relaxation times, and diffusion rates. The file format is an extension of the existing Structure Data Format, which is compatible with the commonly used MOL format. The association of an NMReDATA file with the raw and spectral data from which it originates constitutes an NMR record. This format is easily readable by humans and computers and provides a simple and efficient way for disseminating results of structural chemistry investigations, allowing automatic verification of published results, and for assisting the constitution of highly needed open-source structural databases. Copyright © 2018 John Wiley & Sons, Ltd.
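Because an NMReDATA file is an SD file with additional data items, the tags can be pulled out with ordinary text processing. The following Python sketch is illustrative only; the file name is hypothetical, and the tag names shown (NMREDATA_VERSION, NMREDATA_ASSIGNMENT) are taken from the published specification.

```python
def read_sdf_tags(path):
    """Collect data items ('> <TAG>' blocks) from the first record of an SD file."""
    tags, current = {}, None
    with open(path) as fh:
        for line in fh:
            line = line.rstrip("\n")
            if line.startswith("> <") and line.endswith(">"):
                current = line[3:-1]          # tag name between '> <' and '>'
                tags[current] = []
            elif line == "$$$$":              # end of the first SDF record
                break
            elif current is not None and line:
                tags[current].append(line)
    return tags

nmr = read_sdf_tags("compound.nmredata.sdf")  # hypothetical file name
print(nmr.get("NMREDATA_VERSION"), nmr.get("NMREDATA_ASSIGNMENT"))
```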
Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier
2017-01-01
Archival of experimental data in public databases has increasingly become a requirement for most funding agencies and journals. These data-sharing policies have the potential to maximize data reuse and to enable confirmatory as well as novel studies. However, the lack of standard data formats can severely hinder data reuse. In photon-counting-based single-molecule fluorescence experiments, data is stored in a variety of vendor-specific or even setup-specific (custom) file formats, making data interchange prohibitively laborious unless the same hardware-software combination is used. Moreover, the number of available techniques and setup configurations makes it difficult to find a common standard. To address this problem, we developed Photon-HDF5 (www.photon-hdf5.org), an open data format for timestamp-based single-molecule fluorescence experiments. Building on the solid foundation of HDF5, Photon-HDF5 provides a platform- and language-independent, easy-to-use file format that is self-describing and supports rich metadata. Photon-HDF5 supports different types of measurements by separating raw data (e.g., photon timestamps, detectors, etc.) from measurement metadata. This approach allows representing several measurement types and setup configurations within the same core structure and makes it possible to extend the format in a backward-compatible way. Complementing the format specifications, we provide open-source software to create and convert Photon-HDF5 files, together with code examples in multiple languages showing how to read Photon-HDF5 files. Photon-HDF5 allows sharing data in a format suitable for long-term archival, avoiding the effort to document custom binary formats and increasing interoperability with different analysis software. We encourage participation of the single-molecule community to extend interoperability and to help define future versions of Photon-HDF5. PMID:28649160
Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier
2016-02-13
Archival of experimental data in public databases has increasingly become a requirement for most funding agencies and journals. These data-sharing policies have the potential to maximize data reuse and to enable confirmatory as well as novel studies. However, the lack of standard data formats can severely hinder data reuse. In photon-counting-based single-molecule fluorescence experiments, data is stored in a variety of vendor-specific or even setup-specific (custom) file formats, making data interchange prohibitively laborious unless the same hardware-software combination is used. Moreover, the number of available techniques and setup configurations makes it difficult to find a common standard. To address this problem, we developed Photon-HDF5 (www.photon-hdf5.org), an open data format for timestamp-based single-molecule fluorescence experiments. Building on the solid foundation of HDF5, Photon-HDF5 provides a platform- and language-independent, easy-to-use file format that is self-describing and supports rich metadata. Photon-HDF5 supports different types of measurements by separating raw data (e.g., photon timestamps, detectors, etc.) from measurement metadata. This approach allows representing several measurement types and setup configurations within the same core structure and makes it possible to extend the format in a backward-compatible way. Complementing the format specifications, we provide open-source software to create and convert Photon-HDF5 files, together with code examples in multiple languages showing how to read Photon-HDF5 files. Photon-HDF5 allows sharing data in a format suitable for long-term archival, avoiding the effort to document custom binary formats and increasing interoperability with different analysis software. We encourage participation of the single-molecule community to extend interoperability and to help define future versions of Photon-HDF5.
NASA Astrophysics Data System (ADS)
Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier
2016-02-01
Archival of experimental data in public databases has increasingly become a requirement for most funding agencies and journals. These data-sharing policies have the potential to maximize data reuse and to enable confirmatory as well as novel studies. However, the lack of standard data formats can severely hinder data reuse. In photon-counting-based single-molecule fluorescence experiments, data is stored in a variety of vendor-specific or even setup-specific (custom) file formats, making data interchange prohibitively laborious unless the same hardware-software combination is used. Moreover, the number of available techniques and setup configurations makes it difficult to find a common standard. To address this problem, we developed Photon-HDF5 (www.photon-hdf5.org), an open data format for timestamp-based single-molecule fluorescence experiments. Building on the solid foundation of HDF5, Photon-HDF5 provides a platform- and language-independent, easy-to-use file format that is self-describing and supports rich metadata. Photon-HDF5 supports different types of measurements by separating raw data (e.g., photon timestamps, detectors, etc.) from measurement metadata. This approach allows representing several measurement types and setup configurations within the same core structure and makes it possible to extend the format in a backward-compatible way. Complementing the format specifications, we provide open-source software to create and convert Photon-HDF5 files, together with code examples in multiple languages showing how to read Photon-HDF5 files. Photon-HDF5 allows sharing data in a format suitable for long-term archival, avoiding the effort to document custom binary formats and increasing interoperability with different analysis software. We encourage participation of the single-molecule community to extend interoperability and to help define future versions of Photon-HDF5.
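Since Photon-HDF5 builds on HDF5, any generic HDF5 library can read it. The sketch below assumes the Python h5py package and uses group names from the published Photon-HDF5 specification; the file name is hypothetical.

```python
import h5py  # assumes the h5py package

with h5py.File("smfret_measurement.h5", "r") as f:
    # Raw data live under /photon_data, separate from setup metadata.
    timestamps = f["photon_data/timestamps"][:]
    detectors = f["photon_data/detectors"][:]
    unit = f["photon_data/timestamps_specs/timestamps_unit"][()]
    print("first photon at", timestamps[0] * unit, "s on detector", detectors[0])
```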
Petroleum system modeling of the western Canada sedimentary basin - isopach grid files
Higley, Debra K.; Henry, Mitchell E.; Roberts, Laura N.R.
2005-01-01
This publication contains zmap-format grid files of isopach intervals that represent strata associated with Devonian to Holocene petroleum systems of the Western Canada Sedimentary Basin (WCSB) of Alberta, British Columbia, and Saskatchewan, Canada. Also included is one grid file that represents elevations relative to sea level of the top of the Lower Cretaceous Mannville Group. Vertical and lateral scales are in meters. The age range represented by the stratigraphic intervals comprising the grid files is 373 million years ago (Ma) to present day. File names, age ranges, formation intervals, and primary petroleum system elements are listed in table 1. Metadata associated with this publication includes information on the study area and the zmap-format files. The digital files listed in table 1 were compiled as part of the Petroleum Processes Research Project being conducted by the Central Energy Resources Team of the U.S. Geological Survey, which focuses on modeling petroleum generation, migration, and accumulation through time for petroleum systems of the WCSB. Primary purposes of the WCSB study are to (1) construct 1-D/2-D/3-D petroleum system models of the WCSB (actual boundaries of the study area are documented within the metadata; excluded are northern Alberta and eastern Saskatchewan, but fringing areas of the United States are included); (2) publish results of the research and the grid files generated for use in the 3-D model of the WCSB; and (3) evaluate the use of petroleum system modeling in assessing undiscovered oil and gas resources for geologic provinces across the world.
SEDIMENT DATA - ST. PAUL WATERWAY - TACOMA, WA - 1996 MONITORING DATA
Benthic Infauna Monitoring Data Files are Excel-format spreadsheet files which contain data presented in the St. Paul Waterway Area Remedial Action and Habitat Restoration Project, 1996 Monitoring Report. The files can be viewed directly or readily downloaded and read into most ...
Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations
Buscheck, Thomas A.
2012-01-01
Active Management of Integrated Geothermal–CO2 Storage Reservoirs in Sedimentary Formations: An Approach to Improve Energy Recovery and Mitigate Risk: FY1 Final Report. The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure, reducing the risks associated with overpressure, such as induced seismicity and CO2 leakage into overlying aquifers. This submittal contains the input and output files of the reservoir-model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to the sections of the FY1 Final Report to which they pertain. The recipient should save the file Reservoir-models-inputs-outputs-index.html in the same directory in which the Section2.1.*.tar.gz files are saved.
Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations
Buscheck, Thomas A.
2000-01-01
Active Management of Integrated Geothermal–CO2 Storage Reservoirs in Sedimentary Formations: An Approach to Improve Energy Recovery and Mitigate Risk: FY1 Final Report. The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure, reducing the risks associated with overpressure, such as induced seismicity and CO2 leakage into overlying aquifers. This submittal contains the input and output files of the reservoir-model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to the sections of the FY1 Final Report to which they pertain. The recipient should save the file Reservoir-models-inputs-outputs-index.html in the same directory in which the Section2.1.*.tar.gz files are saved.
Developing a radiology-based teaching approach for gross anatomy in the digital era.
Marker, David R; Bansal, Anshuman K; Juluru, Krishna; Magid, Donna
2010-08-01
The purpose of this study was to assess the implementation of a digital anatomy lecture series based largely on annotated, radiographic images and the utility of the Radiological Society of North America-developed Medical Imaging Resource Center (MIRC) for providing an online educational resource. A series of digital teaching images were collected and organized to correspond to lecture and dissection topics. MIRC was used to provide the images in a Web-based educational format for incorporation into anatomy lectures and as a review resource. A survey assessed the impressions of the medical students regarding this educational format. MIRC teaching files were successfully used in our teaching approach. The lectures were interactive with questions to and from the medical student audience regarding the labeled images used in the presentation. Eighty-five of 120 students completed the survey. The majority of students (87%) indicated that the MIRC teaching files were "somewhat useful" to "very useful" when incorporated into the lecture. The students who used the MIRC files were most likely to access the material from home (82%) on an occasional basis (76%). With regard to areas for improvement, 63% of the students reported that they would have benefited from more teaching files, and only 9% of the students indicated that the online files were not user friendly. The combination of electronic radiology resources available in lecture format and on the Internet can provide multiple opportunities for medical students to learn and revisit first-year anatomy. MIRC provides a user-friendly format for presenting radiology education files for medical students. 2010 AUR. Published by Elsevier Inc. All rights reserved.
Parser Combinators: a Practical Application for Generating Parsers for NMR Data
Fenwick, Matthew; Weatherby, Gerard; Ellis, Heidi JC; Gryk, Michael R.
2013-01-01
Nuclear Magnetic Resonance (NMR) spectroscopy is a technique for acquiring protein data at atomic resolution and determining the three-dimensional structure of large protein molecules. A typical structure determination process results in the deposition of large data sets to the BMRB (Bio-Magnetic Resonance Data Bank). This data is stored and shared in a file format called NMR-Star. This format is syntactically and semantically complex, making it challenging to parse. Nevertheless, parsing these files is crucial to applying the vast amounts of biological information stored in NMR-Star files, allowing researchers to harness the results of previous studies to direct and validate future work. One powerful approach for parsing files is to apply a Backus-Naur Form (BNF) grammar, which is a high-level model of a file format. Translation of the grammatical model to an executable parser may be automatically accomplished. This paper will show how we applied a model BNF grammar of the NMR-Star format to create a free, open-source parser, using a method that originated in the functional programming world known as “parser combinators”. This paper demonstrates the effectiveness of a principled approach to file specification and parsing. This paper also builds upon our previous work [1], in that 1) it applies concepts from Functional Programming (which is relevant even though the implementation language, Java, is more mainstream than Functional Programming), and 2) all work and accomplishments from this project will be made available under standard open source licenses to provide the community with the opportunity to learn from our techniques and methods. PMID:24352525
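The paper's parser is written in Java; purely as an illustration of the parser-combinator idea, the following Python sketch builds small parsing functions and combines them to read simplified, NMR-Star-like tag/value pairs. The grammar here is a toy, not the real NMR-Star grammar.

```python
import re

def token(pattern):
    """A primitive parser: match a regex at the current position."""
    rx = re.compile(pattern)
    def parse(text, pos):
        m = rx.match(text, pos)
        return (m.group().strip(), m.end()) if m else None
    return parse

def seq(*parsers):
    """Combinator: run parsers one after another, collecting their results."""
    def parse(text, pos):
        out = []
        for p in parsers:
            r = p(text, pos)
            if r is None:
                return None
            value, pos = r
            out.append(value)
        return out, pos
    return parse

def many(p):
    """Combinator: apply a parser zero or more times."""
    def parse(text, pos):
        out = []
        while (r := p(text, pos)) is not None:
            value, pos = r
            out.append(value)
        return out, pos
    return parse

pair = seq(token(r"\s*_[\w.]+"), token(r"\s*\S+"))   # "_tag value"
print(many(pair)("_Entry.ID 15000 _Entry.Title example", 0))
```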
VizieR Online Data Catalog: Infrared Arcturus Atlas (Hinkle+ 1995)
NASA Astrophysics Data System (ADS)
Hinkle, K.; Wallace, L.; Livingston, W.
1996-01-01
The atlas is contained in 310 spectral files and a list of line identifications, plus a file containing a list of the files and unobserved spectral regions. The spectral file names are of the form 'abnnnnn', where 'nnnnn' denotes the spectral region; e.g., file 'ab4300' contains spectra for the 4300-4325 cm-1 range. The atomic and molecular line identifications are in files 'appendix.a' and 'appendix.b', and are repeated with a uniform format in file 'lines'. The file 'appendix.c' is a book-keeping device used to correlate the plot pages and spectral files with frequency. See the author-supplied description in 'readme.dat' for more information. (311 data files).
Strategies for Sharing Seismic Data Among Multiple Computer Platforms
NASA Astrophysics Data System (ADS)
Baker, L. M.; Fletcher, J. B.
2001-12-01
Seismic waveform data is readily available from a variety of sources, but it often comes in a distinct, instrument-specific data format. For example, data may be from portable seismographs, such as those made by Refraction Technology or Kinemetrics, from permanent seismograph arrays, such as the USGS Parkfield Dense Array, from public data centers, such as the IRIS Data Center, or from personal communication with other researchers through e-mail or ftp. A computer must be selected to import the data - usually whichever is the most suitable for reading the originating format. However, the computer best suited for a specific analysis may not be the same. When copies of the data are then made for analysis, a proliferation of copies of the same data results, in possibly incompatible, computer-specific formats. In addition, if an error is detected and corrected in one copy, or some other change is made, all the other copies must be updated to preserve their validity. Keeping track of what data is available, where it is located, and which copy is authoritative requires an effort that is easy to neglect. We solve this problem by importing waveform data to a shared network file server that is accessible to all our computers on our campus LAN. We use a Network Appliance file server running Sun's Network File System (NFS) software. Using an NFS client software package on each analysis computer, waveform data can then be read by our MatLab or Fortran applications without first copying the data. Since there is a single copy of the waveform data in a single location, the NFS file system hierarchy provides an implicit complete waveform data catalog and the single copy is inherently authoritative. Another part of our solution is to convert the original data into a blocked-binary format (known historically as USGS DR100 or VFBB format) that is interpreted by MatLab or Fortran library routines available on each computer so that the idiosyncrasies of each machine are not visible to the user. Commercial software packages, such as MatLab, also have the ability to share data in their own formats across multiple computer platforms. Our Fortran applications can create plot files in Adobe PostScript, Illustrator, and Portable Document Format (PDF) formats. Vendor support for reading these files is readily available on multiple computer platforms. We will illustrate by example our strategies for sharing seismic data among our multiple computer platforms, and we will discuss our positive and negative experiences. We will include our solutions for handling the different byte ordering, floating-point formats, and text file "end-of-line" conventions on the various computer platforms we use (6 different operating systems on 5 processor architectures).
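As an illustration of the portability issues mentioned above, the following Python sketch (not the authors' code; file names and record layout are hypothetical) reads a big-endian binary waveform correctly on any platform and opens a text file with universal newline handling.

```python
import numpy as np

# Binary portability: declare the writer's byte order explicitly, then
# convert to the native order for analysis.
samples_be = np.fromfile("waveform.bin", dtype=">f4")  # big-endian 32-bit floats
samples = samples_be.astype("=f4")                     # native byte order

# Text portability: universal-newline mode hides CR/LF/CRLF conventions.
with open("station_log.txt", newline=None) as fh:
    header = fh.readline()

print(len(samples), "samples;", header.strip())
```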
Enabling Discoveries in Earth Sciences Through the Geosciences Network (GEON)
NASA Astrophysics Data System (ADS)
Seber, D.; Baru, C.; Memon, A.; Lin, K.; Youn, C.
2005-12-01
Taking advantage of state-of-the-art information technology resources, GEON researchers are building a cyberinfrastructure designed to enable data sharing, semantic data integration, high-end computation, and 4D visualization in easy-to-use web-based environments. The GEON network currently allows users to search for and register Earth science resources such as data sets (GIS layers, GMT files, GeoTIFF images, ASCII files, relational databases, etc.), software applications, and ontologies. Portal-based access mechanisms enable developers to build dynamic user interfaces to conduct advanced processing and modeling efforts across distributed computers and supercomputers. Researchers and educators can access the networked resources through the GEON portal and its portlets, which were developed to support better and more comprehensive science and educational studies. For example, the SYNSEIS portlet in GEON enables users to access near-real-time seismic waveforms from the IRIS Data Management Center, easily build a 3D geologic model within the area of the seismic station(s) and the epicenter, and perform a 3D synthetic seismogram analysis to understand the lithospheric structure and earthquake source parameters for any given earthquake in the US. Similarly, GEON's workbench area enables users to create their own work environment; copy, visualize, and analyze any data sets within the network; and create subsets of the data sets for their own purposes. Since all these resources are built as part of a Service-Oriented Architecture (SOA), they can also be used in other development platforms. One such platform is the Kepler workflow system, which can access web-service-based resources and provides users with graphical programming interfaces to build a model to conduct computations and/or visualization efforts using the networked resources. Developments in the area of semantic integration of the networked datasets continue to advance, and prototype studies can be accessed via the GEON portal at www.geongrid.org
Natural-Color Image Mosaics of Afghanistan: Digital Databases and Maps
Davis, Philip A.; Hare, Trent M.
2007-01-01
Explanation: The 50 tiled images in this dataset are natural-color renditions of the calibrated six-band Landsat mosaics created from Landsat Enhanced Thematic Mapper Plus (ETM+) data. Natural-color images depict the surface as seen by the human eye. The calibration of the Landsat ETM+ maps produced by Davis (2006) is relative reflectance and needs to be grounded with ground-reflectance data, but the difficulties in performing fieldwork in Afghanistan precluded ground-reflectance surveys. For natural color calibration, which involves only the blue, green, and red color bands of Landsat, we could use ground photographs, Munsell color readings of ground surfaces, or another image base that accurately depicts the surface color. Each map quadrangle is 1° of latitude by 1° of longitude. The numbers assigned to each map quadrangle refer to the latitude and longitude coordinates of the lower left corner of the quadrangle. For example, quadrangle Q2960 has its lower left corner at lat 29° N., long 60° E. Each quadrangle overlaps adjacent quadrangles by 100 pixels (2.85 km). Only the 14.25-m-spatial-resolution UTM and 28.5-m-spatial-resolution WGS84 geographic GeoTIFF datasets are available in this report, to decrease the amount of space needed. The images are three-band, eight-bit GeoTIFFs with embedded georeferencing; as such, most software will not require the associated world files. An index of all available images in geographic coordinates is displayed here: Index_Geo_DD.pdf. The country of Afghanistan spans three UTM zones (41-43). Maps are stored as GeoTIFFs in their respective UTM zone projection. Indexes of all available topographic map sheets in their respective UTM zones are displayed here: Index_UTM_Z41.pdf, Index_UTM_Z42.pdf, Index_UTM_Z43.pdf. You will need Adobe Reader to view the PDF files. Download a copy of the latest version of Adobe Reader for free.
Geologic map of the Valjean Hills 7.5' quadrangle, San Bernardino County, California
Calzia, J.P.; Troxel, Bennie W.; digital database by Raumann, Christian G.
2003-01-01
FGDC-compliant metadata for the ARC/INFO coverages. The Correlation of Map Units and Description of Map Units is in the editorial format of USGS Geologic Investigations Series (I-series) maps but has not been edited to comply with I-map standards. Within the geologic map data package, map units are identified by standard geologic map criteria such as formation-name, age, and lithology. Even though this is an Open-File Report and includes the standard USGS Open-File disclaimer, the report closely adheres to the stratigraphic nomenclature of the U.S. Geological Survey. Descriptions of units can be obtained by viewing or plotting the .pdf file (3 above) or plotting the postscript file (2 above).
PDB Editor: a user-friendly Java-based Protein Data Bank file editor with a GUI.
Lee, Jonas; Kim, Sung Hou
2009-04-01
The Protein Data Bank file format is the format most widely used by protein crystallographers and biologists to disseminate and manipulate protein structures. Despite this, there are few user-friendly software packages available to efficiently edit and extract raw information from PDB files. This limitation often leads to many protein crystallographers wasting significant time manually editing PDB files. PDB Editor, written in Java Swing GUI, allows the user to selectively search, select, extract and edit information in parallel. Furthermore, the program is a stand-alone application written in Java which frees users from the hassles associated with platform/operating system-dependent installation and usage. PDB Editor can be downloaded from http://sourceforge.net/projects/pdbeditorjl/.
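For comparison, extracting raw information from the fixed-column ATOM/HETATM records of a PDB file takes only a few lines in a scripting language. The Python sketch below is illustrative only, not the PDB Editor itself, and the file name is hypothetical.

```python
def atom_records(path):
    """Parse the fixed-column ATOM/HETATM records of a PDB file."""
    atoms = []
    with open(path) as fh:
        for line in fh:
            if line.startswith(("ATOM", "HETATM")):
                atoms.append({
                    "name": line[12:16].strip(),   # atom name, columns 13-16
                    "res": line[17:20].strip(),    # residue name, columns 18-20
                    "chain": line[21],             # chain identifier, column 22
                    "xyz": (float(line[30:38]),    # orthogonal coordinates
                            float(line[38:46]),
                            float(line[46:54])),
                })
    return atoms

print(len(atom_records("1abc.pdb")), "atoms")  # hypothetical file
```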
Cánovas, Rodrigo; Moffat, Alistair; Turpin, Andrew
2016-12-15
Next generation sequencing machines produce vast amounts of genomic data. For the data to be useful, it is essential that they can be stored and manipulated efficiently. This work responds to the combined challenge of compressing genomic data while providing fast access to regions of interest, without necessitating decompression of whole files. We describe CSAM (Compressed SAM format), a compression approach offering lossless and lossy compression for SAM files. The structures and techniques proposed are suitable for representing SAM files, as well as supporting fast access to the compressed information. They generate more compact lossless representations than BAM, which is currently the preferred lossless compressed SAM-equivalent format, and are self-contained, that is, they do not depend on any external resources to compress or decompress SAM files. An implementation is available at https://github.com/rcanovas/libCSAM. Contact: canovas-ba@lirmm.fr. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Software for Automated Reading of STEP Files by I-DEAS(trademark)
NASA Technical Reports Server (NTRS)
Pinedo, John
2003-01-01
A program called "readstep" enables the I-DEAS(tm) computer-aided-design (CAD) software to automatically read Standard for the Exchange of Product Model Data (STEP) files. (The STEP format is one of several used to transfer data between dissimilar CAD programs.) Prior to the development of "readstep," it was necessary to read STEP files into I-DEAS(tm) one at a time in a slow process that required repeated intervention by the user. In operation, "readstep" prompts the user for the location of the desired STEP files and the names of the I-DEAS(tm) project and model file, then generates an I-DEAS(tm) program file called "readstep.prg" and two Unix shell programs called "runner" and "controller." The program "runner" runs I-DEAS(tm) sessions that execute readstep.prg, while "controller" controls the execution of "runner" and edits readstep.prg if necessary. The user sets "runner" and "controller" into execution simultaneously, and then no further intervention by the user is required. When "runner" has finished, the user should see only parts from successfully read STEP files present in the model file. STEP files that could not be read successfully (e.g., because of format errors) should be regenerated before attempting to read them again.
Shuttle Data Center File-Processing Tool in Java
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Miller, Walter H.
2006-01-01
A Java-language computer program has been written to facilitate mining of data in files in the Shuttle Data Center (SDC) archives. This program can be executed on a variety of workstations or via Web-browser programs. This program is partly similar to prior C-language programs used for the same purpose, while differing from those programs in that it exploits the platform-neutrality of Java in implementing several features that are important for analysis of large sets of time-series data. The program supports regular-expression queries of SDC archive files, reads the files, interleaves the time-stamped samples, and then transforms the results into a chosen output format. A user can choose among a variety of output file formats that are useful for diverse purposes, including plotting, Markov modeling, multivariate density estimation, and wavelet multiresolution analysis, as well as for playback of data in support of simulation and testing.
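The interleaving step can be illustrated with a short sketch. The following Python fragment is illustrative only, not the Java program's code; it merges several time-sorted sample streams into one time-ordered stream.

```python
import heapq

def interleave(*sample_streams):
    """Merge streams of (timestamp, parameter, value) tuples, each already
    sorted by timestamp, into one time-ordered stream."""
    return heapq.merge(*sample_streams, key=lambda rec: rec[0])

a = [(0.0, "P1", 1), (2.0, "P1", 3)]
b = [(1.0, "P2", 7), (3.0, "P2", 9)]
for record in interleave(a, b):
    print(record)
```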
Users' Manual and Installation Guide for the EverVIEW Slice and Dice Tool (Version 1.0 Beta)
Roszell, Dustin; Conzelmann, Craig; Chimmula, Sumani; Chandrasekaran, Anuradha; Hunnicut, Christina
2009-01-01
Network Common Data Form (NetCDF) is a self-describing, machine-independent file format for storing array-oriented scientific data. Over the past few years, there has been a growing movement within the community of natural resource managers in The Everglades, Fla., to use NetCDF as the standard data container for datasets based on multidimensional arrays. As a consequence, a need arose for additional tools to view and manipulate NetCDF datasets, specifically to create subsets of large NetCDF files. To address this need, we created the EverVIEW Slice and Dice Tool to allow users to create subsets of grid-based NetCDF files. The major functions of this tool are (1) to subset NetCDF files both spatially and temporally; (2) to view the NetCDF data in table form; and (3) to export filtered data to a comma-separated value file format.
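The core subsetting operation can be sketched in a few lines. The Python fragment below is illustrative only, not EverVIEW code; it assumes the netCDF4-python package, and the file, variable, and coordinate names are hypothetical.

```python
from netCDF4 import Dataset  # assumes the netCDF4-python package

# Spatial and temporal subset of a grid-based NetCDF file.
with Dataset("everglades_stage.nc") as src:
    lat, lon, time = src["lat"][:], src["lon"][:], src["time"][:]
    rows = (lat >= 25.0) & (lat <= 26.0)        # latitude window
    cols = (lon >= -81.5) & (lon <= -80.5)      # longitude window
    steps = time <= time[0] + 30                # first month only
    subset = src["stage"][steps, rows, cols]    # orthogonal boolean indexing

print(subset.shape)
```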
HDF4 Maps: For Now and For the Future
NASA Astrophysics Data System (ADS)
Plutchak, J.; Aydt, R.; Folk, M. J.
2013-12-01
Data formats and access tools necessarily change as technology improves to address emerging requirements with new capabilities. This on-going process inevitably leaves behind significant data collections in legacy formats that are difficult to support and sustain. NASA ESDIS and The HDF Group currently face this problem with large and growing archives of data in HDF4, an older version of the HDF format. Indefinitely guaranteeing the ability to read these data with multi-platform libraries in many languages is very difficult. As an alternative, HDF and NASA worked together to create maps of the files that contain metadata and information about data types, locations, and sizes of data objects in the files. These maps are written in XML and have successfully been used to access and understand data in HDF4 files without the HDF libraries. While originally developed to support sustainable access to these data, these maps can also be used to provide access to HDF4 metadata, facilitate user understanding of files prior to download, and validate the files for compliance with particular conventions. These capabilities are now available as a service for HDF4 archives and users.
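The following Python sketch illustrates the map-driven idea: given the offset, length, type, and shape that a map records for one data object, the object can be read without any HDF4 library. All numeric values and the file name here are invented for illustration.

```python
import numpy as np

# Values of this kind would come from the XML map for one data object
# (hypothetical numbers; a real map also records compression and chunking).
offset, nbytes, dtype, shape = 2048, 720000, ">i2", (600, 600)

with open("legacy_granule.hdf", "rb") as fh:
    fh.seek(offset)          # jump straight to the object's bytes
    raw = fh.read(nbytes)

sds = np.frombuffer(raw, dtype=dtype).reshape(shape)  # no HDF4 library needed
print(sds.min(), sds.max())
```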
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-20
...). Those NITUs permitted railbanking/interim trail use negotiations under the Trails Act, 16 U.S.C. 1247(d... November 19, 2010. ADDRESSES: Comments may be submitted either via the Board's e-filing format or in the traditional paper format. Any person using e-filing should attach a document and otherwise comply with the...
HepML, an XML-based format for describing simulated data in high energy physics
NASA Astrophysics Data System (ADS)
Belov, S.; Dudko, L.; Kekelidze, D.; Sherstnev, A.
2010-10-01
In this paper we describe the HepML format and a corresponding C++ library developed for keeping a complete description of parton-level events in a unified and flexible form. HepML tags contain enough information to understand what kind of physics the simulated events describe and how the events have been prepared. A HepML block can be included into event files in the LHEF format. The structure of the HepML block is described by means of several XML Schemas. The Schemas define the necessary information for the HepML block and how this information should be located within the block. The library libhepml is a C++ library intended for parsing and serialization of HepML tags, and for representing the HepML block in computer memory. The library is an API for external software. For example, Matrix Element Monte Carlo event generators can use the library for preparing and writing a header of an LHEF file in the form of HepML tags. In turn, Showering and Hadronization event generators can parse the HepML header and get the information in the form of C++ classes. libhepml can be used in C++, C, and Fortran programs. All necessary parts of HepML have been prepared and we present the project to the HEP community.
Program summary
Program title: libhepml
Catalogue identifier: AEGL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPLv3
No. of lines in distributed program, including test data, etc.: 138 866
No. of bytes in distributed program, including test data, etc.: 613 122
Distribution format: tar.gz
Programming language: C++, C
Computer: PCs and workstations
Operating system: Scientific Linux CERN 4/5, Ubuntu 9.10
RAM: 1 073 741 824 bytes (1 GB)
Classification: 6.2, 11.1, 11.2
External routines: Xerces XML library (http://xerces.apache.org/xerces-c/), Expat XML Parser (http://expat.sourceforge.net/)
Nature of problem: Monte Carlo simulation in high energy physics is divided into several stages, and various programs exist for these stages. In this article we are interested in interfacing different Monte Carlo event generators via data files, in particular, Matrix Element (ME) generators and Showering and Hadronization (SH) generators. There is a widely accepted format for data files for such interfaces - the Les Houches Event Format (LHEF). Although the information kept in an LHEF file is enough for the proper working of SH generators, it is insufficient for understanding how the events in the LHEF file have been prepared and which physical model has been applied. In this paper we propose an extension of the format for keeping additional information available in generators. We propose to add a new information block, marked up with XML tags, to the LHEF file. This block describes the events in the file in more detail. In particular, it stores information about the physical model, kinematical cuts, generator, etc. This helps to make LHEF files self-documented. Certainly, HepML can be applied in a more general context, not in LHEF files only.
Solution method: In order to overcome the drawbacks of the original LHEF accord we propose to add a new information block of HepML tags. HepML is an XML-based markup language. We designed several XML Schemas for all tags in the language, and any HepML document should follow the rules of the Schemas. The language is equipped with a library for operating with HepML tags and documents. This C++ library, called libhepml, consists of classes for HepML objects, which represent a HepML document in computer memory, parsing classes, serializing classes, and some auxiliary classes.
Restrictions: The software is adapted for solving the problems described in the article. There are no additional restrictions.
Running time: Tests have been done on a computer with an Intel(R) Core(TM)2 Solo, 1.4 GHz. Parsing of a HepML file: 6 ms (HepML file size 12.5 KB). Writing of a HepML block to file: 14 ms (file size 12.5 KB). Merging of two HepML blocks and writing to file: 18 ms (file size 25.0 KB).
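As a small illustration (not libhepml), the following Python sketch locates the header block of an LHEF file, which is where a HepML block would be stored. The file name is hypothetical, and real LHEF files may require more forgiving parsing than strict XML.

```python
import xml.etree.ElementTree as ET

# LHEF files are XML-like: <LesHouchesEvents> wraps <header>, <init>, and
# <event> elements; HepML tags would sit inside <header>.
tree = ET.parse("events.lhe")
header = tree.getroot().find("header")
if header is not None:
    for child in header:
        print("header block:", child.tag)
```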
The Joy of Playing with Oceanographic Data
NASA Astrophysics Data System (ADS)
Smith, A. T.; Xing, Z.; Armstrong, E. M.; Thompson, C. K.; Huang, T.
2013-12-01
The web is no longer just an afterthought. It is no longer just a presentation layer filled with HTML, CSS, JavaScript, frameworks, 3D, and more. It has become the medium of our communication. It is the database of all databases. It is the computing platform of all platforms. It has transformed the way we do science. Web services are the de facto method for communication between machines over the web, and Representational State Transfer (REST) has standardized the way we architect services and their interfaces. In the Earth Science domain, we are familiar with tools and services such as the Open-Source Project for a Network Data Access Protocol (OPeNDAP), Thematic Realtime Environmental Distributed Data Services (THREDDS), and the Live Access Server (LAS). We are also familiar with various data formats such as NetCDF3/4, HDF4/5, GRIB, TIFF, etc. One of the challenges for the Earth Science community is accessing information within these data. There are community-accepted readers that our users can download and install. However, the Application Programming Interface (API) between these readers is not standardized, which leads to non-portable applications. Webification (w10n) is an emerging technology, developed at the Jet Propulsion Laboratory, which exploits the hierarchical nature of a science data artifact to assign a URL to each element within the artifact (e.g., a granule file). By embracing standards such as JSON, XML, and HTML5, and predictable URLs, w10n provides a simple interface that enables tool builders and researchers to develop portable tools/applications to interact with artifacts of various formats. The NASA Physical Oceanographic Distributed Active Archive Center (PO.DAAC) is the designated data center for observational products relevant to the physical state of the ocean. Over the past year PO.DAAC has been evaluating w10n technology by webifying its archive holdings, both to provide simplified access to oceanographic science artifacts and as a service to enable future tools and services development. In this talk, we will focus on a w10n-based system called the Distributed Oceanographic Webification Service (DOWS) being developed at PO.DAAC to provide a newer and simpler method for working with observational data artifacts. As a continued effort at PO.DAAC to provide better tools and services to visualize our data, the talk will discuss the latest in web-based data visualization tools/frameworks (such as d3.js, Three.js, Leaflet.js, and more) and techniques for working with webified oceanographic science data in both a 2D and 3D web approach.
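The flavor of w10n access can be sketched as plain HTTP requests. Everything in the following Python fragment (the host, the path, and the query syntax) is illustrative only and is not a documented PO.DAAC endpoint.

```python
import requests  # assumes the requests package

# w10n assigns a URL to each element inside a granule; here we ask for a
# listing of the file's contents and then one small slice of a variable.
base = "https://example.org/w10n/sst_granule.nc"          # hypothetical URL
listing = requests.get(base + "/?output=json").json()     # what is inside?
slab = requests.get(base + "/sst[0:9]?output=json").json()  # a small slice
print(listing, slab)
```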
BOREAS Forest Cover Data Layers over the SSA-MSA in Raster Format
NASA Technical Reports Server (NTRS)
Nickeson, Jaime; Gruszka, F; Hall, F.
2000-01-01
This data set, originally provided as vector polygons with attributes, has been processed by BORIS staff to provide raster files that can be used for modeling or for comparison purposes. The original data were received as ARC/INFO coverages or as export files from SERM. The data include information on forest parameters for the BOREAS SSA-MSA. Most of the data used for this product were acquired by BORIS in 1993; the maps were produced from aerial photography taken as recently as 1988. The data are stored in binary image-format files.
Development of Software to Model AXAF-I Image Quality
NASA Technical Reports Server (NTRS)
Geary, Joseph; Hawkins, Lamar; Ahmad, Anees; Gong, Qian
1997-01-01
This report describes work conducted on Delivery Order 181 between October 1996 and June 1997. During this period software was written to: compute axial PSDs from RDOS AXAF-I mirror surface maps; plot axial surface errors and compute PSDs from HDOS "Big 8" axial scans; plot PSDs from FITS-format PSD files; plot band-limited RMS vs. axial and azimuthal position for multiple PSD files; combine and organize PSDs from multiple mirror surface measurements formatted as input to GRAZTRACE; modify GRAZTRACE to read FITS-formatted PSD files; evaluate AXAF-I test results; and improve and expand the capabilities of the GT x-ray mirror analysis package. During this period work also began on a more user-friendly manual for the GT program, and improvements were made to the on-line help manual.
UNICON: A Powerful and Easy-to-Use Compound Library Converter.
Sommer, Kai; Friedrich, Nils-Ole; Bietz, Stefan; Hilbig, Matthias; Inhester, Therese; Rarey, Matthias
2016-06-27
The accurate handling of different chemical file formats and the consistent conversion between them play important roles for calculations in complex cheminformatics workflows. Working with different cheminformatic tools often makes the conversion between file formats a mandatory step. Such a conversion might become a difficult task in cases where the information content substantially differs. This paper describes UNICON, an easy-to-use software tool for this task. The functionality of UNICON ranges from file conversion between standard formats SDF, MOL2, SMILES, PDB, and PDBx/mmCIF via the generation of 2D structure coordinates and 3D structures to the enumeration of tautomeric forms, protonation states, and conformer ensembles. For this purpose, UNICON bundles the key elements of the previously described NAOMI library in a single, easy-to-use command line tool.
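As a point of comparison, a single conversion step of this kind can be sketched with the open-source RDKit toolkit (explicitly not UNICON itself); the input file name is hypothetical.

```python
from rdkit import Chem  # the open-source RDKit toolkit, not UNICON

# Read the first record of a (hypothetical) SD file and emit canonical SMILES.
supplier = Chem.SDMolSupplier("ligand.sdf")
mol = next(iter(supplier), None)
if mol is not None:
    print(Chem.MolToSmiles(mol))
```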
Java Library for Input and Output of Image Data and Metadata
NASA Technical Reports Server (NTRS)
Deen, Robert; Levoe, Steven
2003-01-01
A Java-language library supports input and output (I/O) of image data and metadata (label data) in the format of the Video Image Communication and Retrieval (VICAR) image-processing software and in several similar formats, including a subset of the Planetary Data System (PDS) image file format. The library does the following: It provides a low-level direct-access layer, enabling an application subprogram to read and write specific image files, lines, or pixels, and manipulate metadata directly. Two coding/decoding subprograms ("codecs" for short) based on the Java Advanced Imaging (JAI) software provide access to VICAR and PDS images in a file-format-independent manner. The VICAR and PDS codecs enable any program that conforms to the specification of the JAI codec to use VICAR or PDS images automatically, without specific knowledge of the VICAR or PDS format. The library also includes Image I/O plug-in subprograms for the VICAR and PDS formats. Application programs that conform to the Image I/O specification of Java version 1.4 can utilize any image format for which such a plug-in subprogram exists, without specific knowledge of the format itself. Like the aforementioned codecs, the VICAR and PDS Image I/O plug-in subprograms support reading and writing of metadata.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lasche, George P.
2009-10-01
Cambio is an application intended to automatically read and display any spectrum file of any format in the world that the nuclear emergency response community might encounter. Cambio also provides an analysis capability suitable for HPGe spectra when detector response and scattering environment are not well known. Why is Cambio needed: (1) Cambio solves the following problem - With over 50 types of formats from instruments used in the field and new format variations appearing frequently, it is impractical for every responder to have current versions of the manufacturer's software from every instrument used in the field; (2) Cambio converts field spectra to any one of several common formats that are used for analysis, saving valuable time in an emergency situation; (3) Cambio provides basic tools for comparing spectra, calibrating spectra, and isotope identification with analysis suited especially for HPGe spectra; and (4) Cambio has a batch processing capability to automatically translate a large number of archival spectral files of any format to one of several common formats, such as the IAEA SPE or the DHS N42. Currently over 540 analysts and members of the nuclear emergency response community worldwide are on the distribution list for updates to Cambio. Cambio users come from all levels of government, university, and commercial partners around the world that support efforts to counter terrorist nuclear activities. Cambio is Unclassified Unlimited Release (UUR) and distributed by internet downloads with email notifications whenever a new build of Cambio provides new formats, bug fixes, or new or improved capabilities. Cambio is also provided as a DLL to the Karlsruhe Institute for Transuranium Elements so that Cambio's automatic file-reading capability can be included at the Nucleonica web site.
75 FR 41093 - FM Table of Allotments, Maupin, Oregon
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-15
.... SUMMARY: The Audio Division grants the Petition for Reconsideration filed on behalf of Maupin Broadcasting... materials in accessible formats for people with disabilities (Braille, large print, electronic files, audio.... John A. Karousos, Assistant Chief, Audio Division, Media Bureau. [FR Doc. 2010-17226 Filed 7-14-10; 8...
A SARA Timeseries Utility supports analysis and management of time-varying environmental data including listing, graphing, computing statistics, computing meteorological data and saving in a WDM or text file. File formats supported include WDM, HSPF Binary (.hbn), USGS RDB, and T...
IDG - INTERACTIVE DIF GENERATOR
NASA Technical Reports Server (NTRS)
Preheim, L. E.
1994-01-01
The Interactive DIF Generator (IDG) utility is a tool used to generate and manipulate Directory Interchange Format files (DIF). Its purpose as a specialized text editor is to create and update DIF files which can be sent to NASA's Master Directory, also referred to as the International Global Change Directory at Goddard. Many government and university data systems use the Master Directory to advertise the availability of research data. The IDG interface consists of a set of four windows: (1) the IDG main window; (2) a text editing window; (3) a text formatting and validation window; and (4) a file viewing window. The IDG main window starts up the other windows and contains a list of valid keywords. The keywords are loaded from a user-designated file and selected keywords can be copied into any active editing window. Once activated, the editing window designates the file to be edited. Upon switching from the editing window to the formatting and validation window, the user has options for making simple changes to one or more files such as inserting tabs, aligning fields, and indenting groups. The viewing window is a scrollable read-only window that allows fast viewing of any text file. IDG is an interactive tool and requires a mouse or a trackball to operate. IDG uses the X Window System to build and manage its interactive forms, and also uses the Motif widget set and runs under Sun UNIX. IDG is written in C-language for Sun computers running SunOS. This package requires the X Window System, Version 11 Revision 4, with OSF/Motif 1.1. IDG requires 1.8Mb of hard disk space. The standard distribution medium for IDG is a .25 inch streaming magnetic tape cartridge in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format. The program was developed in 1991 and is a copyrighted work with all copyright vested in NASA. SunOS is a trademark of Sun Microsystems, Inc. X Window System is a trademark of Massachusetts Institute of Technology. OSF/Motif is a trademark of the Open Software Foundation, Inc. UNIX is a trademark of Bell Laboratories.
Digital soils survey map of the Patagonia Mountains, Arizona
Norman, Laura; Wissler, Craig; Guertin, D. Phillip; Gray, Floyd
2002-01-01
The ‘Soil Survey of Santa Cruz and Parts of Cochise and Pima Counties, Arizona,' a product of the USDA’s Soil Conservation Service and the Forest Service in cooperation with the Arizona Agricultural Experiment Station, released in 1979, was created according to the site conditions in 1971, when soil scientists identified soil types on aerial photographs. The scale at which these maps were published is 1:20,000. These soil maps were automated for incorporation into hydrologic modeling within a GIS. The aerial photos onto which the soil units were drawn had not been orthorectified and contained distortion. A total of 15 maps composed the study area. These maps were scanned into TIFF format using an 8-bit black-and-white drum scanner at 100 dpi. The images were imported into ERDAS IMAGINE, and the white borders were removed through subset decollaring processes. Five CD-ROMs containing Digital Orthophoto Quarter Quads (DOQQs) were used to register and rectify the scanned soil maps. Polygonal data were then attributed according to the datasets.
75 FR 35700 - Revisions to Forms, Statements, and Reporting Requirements for Natural Gas Pipelines
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-23
... filed in native applications or print-to-PDF format and not in a scanned format. Mail/Hand Delivery... also propose to revise page 520 accordingly. \\1\\ American Gas Association v. FERC, 593 F.3d 14 (D.C....\\14\\ \\14\\ 593 F.3d at 21. 8. Following the court's remand, AGA filed a motion requesting that the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, William
2015-10-19
Cambio opens data files from common gamma radiation detectors, displays a visual representation of it, and allows the user to edit the meta-data, as well as convert the data to a different file format.
Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California
Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.
2006-01-01
The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes, such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory-determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data, as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data; a database 'readme' file, which describes the database contents; and FGDC metadata for the spatial map information. Spatial data are distributed as an Arc/Info coverage in ESRI interchange (e00) format or as tabular data in DBF3 (.dbf) files. Map graphics files are distributed as PostScript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.
Rosetta: Ensuring the Preservation and Usability of ASCII-based Data into the Future
NASA Astrophysics Data System (ADS)
Ramamurthy, M. K.; Arms, S. C.
2015-12-01
Field data obtained from dataloggers often take the form of comma separated value (CSV) ASCII text files. While ASCII based data formats have positive aspects, such as the ease of accessing the data from disk and the wide variety of tools available for data analysis, there are some drawbacks, especially when viewing the situation through the lens of data interoperability and stewardship. The Unidata data translation tool, Rosetta, is a web-based service that provides an easy, wizard-based interface for data collectors to transform their datalogger generated ASCII output into Climate and Forecast (CF) compliant netCDF files following the CF-1.6 discrete sampling geometries. These files are complete with metadata describing what data are contained in the file, the instruments used to collect the data, and other critical information that otherwise may be lost in one of many README files. The choice of the machine readable netCDF data format and data model, coupled with the CF conventions, ensures long-term preservation and interoperability, and that future users will have enough information to responsibly use the data. However, with the understanding that the observational community appreciates the ease of use of ASCII files, methods for transforming the netCDF back into a CSV or spreadsheet format are also built-in. One benefit of translating ASCII data into a machine readable format that follows open community-driven standards is that they are instantly able to take advantage of data services provided by the many open-source data server tools, such as the THREDDS Data Server (TDS). While Rosetta is currently a stand-alone service, this talk will also highlight efforts to couple Rosetta with the TDS, thus allowing self-publishing of thoroughly documented datasets by the data producers themselves.
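A minimal sketch of the kind of translation Rosetta automates is given below, assuming a hypothetical datalogger file station.csv with time and temperature columns; the netCDF4 Python library and the variable, attribute, and file names are our illustrative choices, not Rosetta's internals.

```python
# Sketch: CSV datalogger output -> CF-1.6 compliant netCDF.
# "station.csv" and its column names are hypothetical.
import csv
from datetime import datetime
from netCDF4 import Dataset, date2num

times, temps = [], []
with open("station.csv") as f:
    for row in csv.DictReader(f):
        times.append(datetime.fromisoformat(row["time"]))
        temps.append(float(row["temperature"]))

nc = Dataset("station.nc", "w")
nc.Conventions = "CF-1.6"        # convention named in the abstract
nc.featureType = "timeSeries"    # CF discrete sampling geometry
nc.createDimension("time", len(times))

t = nc.createVariable("time", "f8", ("time",))
t.units = "seconds since 1970-01-01 00:00:00"
t.standard_name = "time"
t[:] = date2num(times, t.units)

temp = nc.createVariable("air_temperature", "f4", ("time",))
temp.units = "degree_Celsius"
temp.standard_name = "air_temperature"
temp[:] = temps
nc.close()
```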
Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard
2011-01-01
Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided into two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data, and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools: one is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
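The offset-based linkage between the XML and binary files can be exercised with the community pyimzml parser; the sketch below assumes a hypothetical example.imzML/.ibd pair (pyimzml is a third-party reader, not part of the imzML specification itself).

```python
# Sketch: reading imzML spectra through the XML-recorded offsets,
# using the third-party pyimzml package (pip install pyimzml).
from pyimzml.ImzMLParser import ImzMLParser

parser = ImzMLParser("example.imzML")          # locates the paired .ibd binary file
for idx, coord in enumerate(parser.coordinates):
    # Each spectrum is fetched via the offset values stored in the XML file.
    mzs, intensities = parser.getspectrum(idx)
    print(f"pixel {coord}: {len(mzs)} m/z values")
```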
Genotype harmonizer: automatic strand alignment and format conversion for genotype data integration.
Deelen, Patrick; Bonder, Marc Jan; van der Velde, K Joeri; Westra, Harm-Jan; Winder, Erwin; Hendriksen, Dennis; Franke, Lude; Swertz, Morris A
2014-12-11
To gain statistical power or to allow fine mapping, researchers typically want to pool data before meta-analyses or genotype imputation. However, the necessary harmonization of genetic datasets is currently error-prone because of many different file formats and lack of clarity about which genomic strand is used as reference. Genotype Harmonizer (GH) is a command-line tool to harmonize genetic datasets by automatically solving issues concerning genomic strand and file format. GH solves the unknown strand issue by aligning ambiguous A/T and G/C SNPs to a specified reference, using linkage disequilibrium patterns without prior knowledge of the used strands. GH supports many common GWAS/NGS genotype formats including PLINK, binary PLINK, VCF, SHAPEIT2 & Oxford GEN. GH is implemented in Java and a large part of the functionality can also be used as Java 'Genotype-IO' API. All software is open source under license LGPLv3 and available from http://www.molgenis.org/systemsgenetics. GH can be used to harmonize genetic datasets across different file formats and can be easily integrated as a step in routine meta-analysis and imputation pipelines.
Using component technologies for web based wavelet enhanced mammographic image visualization.
Sakellaropoulos, P; Costaridou, L; Panayiotakis, G
2000-01-01
The poor contrast detectability of mammography can be dealt with by domain specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access the tool was redesigned by exploring component technologies, enabling the integration of stand alone domain specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real time wavelet based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
BOREAS Elevation Contours over the NSA and SSA in ARC/INFO Generate Format
NASA Technical Reports Server (NTRS)
Knapp, David; Nickeson, Jaime; Hall, Forrest G. (Editor)
2000-01-01
This data set was prepared by BORIS staff by reformatting the original data into the ARC/INFO Generate format. The original data were received in SIF at a scale of 1:50,000. BORIS staff could not find a format document or commercial software for reading SIF; the BOREAS HYD-08 team provided some C source code that could read some of the SIF files. The data cover the BOREAS NSA and SSA. The original data were compiled from information available in the 1970s and 1980s. The data are available in ARC/INFO Generate format files.
NASA Technical Reports Server (NTRS)
Guenther, Bruce W.; Godden, Gerald D.; Xiong, Xiao-Xiong; Knight, Edward J.; Qiu, Shi-Yue; Montgomery, Harry; Hopkins, M. M.; Khayat, Mohammad G.; Hao, Zhi-Dong; Smith, David E. (Technical Monitor)
2000-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) radiometric calibration product is described for the thermal emissive and reflective solar bands. Specific sensor design characteristics are identified to assist in understanding how the calibration algorithm software product is designed. The reflected solar band software products, radiance and reflectance factor, are both described. The product file format is summarized, and the MODIS Characterization Support Team (MCST) Homepage location for the current file format is provided.
VizieR Online Data Catalog: Sgr B2(N) and Sgr B2(M) IRAM 30m line survey (Belloche+, 2013)
NASA Astrophysics Data System (ADS)
Belloche, A.; Mueller, H. S. P.; Menten, K. M.; Schilke, P.; Comito, C.
2013-08-01
The list of line identifications corresponding to the blue labels in Figs. 2 to 7 where the labels are often too crowded to be easily readable are available in ASCII format. The lists are split into six files, three for Sgr B2(N) and three for Sgr B2(M). For each source, there is one file per atmospheric window (3, 2, and 1mm). Each file is ordered by increasing frequency. The observed and synthetic spectra of Sgr B2(N) and Sgr B2(M) between 80 and 116GHz are available both in ASCII and FITS formats. The synthetic spectra were resampled to the same frequency channels as the observed spectra. The blanking value is -1000K for the ASCII files. There is one ASCII file per source. There are two FITS files per source, one for the observed spectrum and one for the synthetic spectrum. The intensities are in main-beam temperature scale in K. The blanking value is 42.75234K for the observed spectrum of SgrB2(N) and 53.96533K for the observed spectrum of SgrB2(M). (9 data files).
OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Greiner, Annette; Cholia, Shreyas
Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
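The chunking and compression ideas can be illustrated directly with h5py; the array shape, chunk shape, and dataset name below are hypothetical and do not reproduce the actual OpenMSI schema.

```python
# Sketch: HDF5 storage layout tuned for MSI-style access patterns.
import h5py
import numpy as np

# Toy MSI cube: 20 x 20 pixels, 5000 m/z bins (hypothetical dimensions).
data = np.random.rand(20, 20, 5000).astype("f4")

with h5py.File("msi_example.h5", "w") as f:
    # Chunks sized so that both a full spectrum (one pixel, all m/z) and
    # an ion-image slice (all pixels, narrow m/z range) touch few chunks.
    f.create_dataset(
        "msi/data",
        data=data,
        chunks=(5, 5, 1024),
        compression="gzip",
        compression_opts=4,
    )
```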
Use of Schema on Read in Earth Science Data Archives
NASA Technical Reports Server (NTRS)
Hegde, Mahabaleshwara; Smit, Christine; Pilone, Paul; Petrenko, Maksym; Pham, Long
2017-01-01
Traditionally, NASA Earth Science data archives have used file-based storage with proprietary data file formats, such as HDF and HDF-EOS, which are optimized to support fast and efficient storage of spaceborne and model data as they are generated. The use of file-based storage essentially imposes an indexing strategy based on data dimensions. In most cases, NASA Earth Science data use time as the primary index, leading to poor performance in accessing data in the spatial dimensions. For example, producing a time series for a single spatial grid cell involves accessing a large number of data files. With exponential growth in data volume due to the ever-increasing spatial and temporal resolution of the data, using file-based archives poses significant performance and cost barriers to data discovery and access. Storing and disseminating data in proprietary data formats imposes an additional access barrier for users outside the mainstream research community. At the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), we have evaluated applying the schema-on-read principle to data access and distribution. We used Apache Parquet to store geospatial data, and have exposed the data through Amazon Web Services (AWS) Athena, AWS Simple Storage Service (S3), and Apache Spark. The schema-on-read approach allows indexing to be customized spatially or temporally to suit the data access pattern. Storage of data in open formats such as Apache Parquet has widespread support in popular programming languages, and the wide range of solutions for handling big data lowers the access barrier for all users. This presentation will discuss the formats used for data storage, frameworks with support for schema-on-read used for data access, and common use cases covering data usage patterns seen in a geospatial data archive.
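A small sketch of the schema-on-read pattern with Apache Parquet via pyarrow follows; the column names and values are illustrative, not the GES DISC schema.

```python
# Sketch: store grid cells as Parquet rows, then push a spatial predicate
# down at read time instead of scanning whole time-indexed files.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "time":   ["2017-01-01"] * 4,
    "lat":    [10.5, 10.5, 20.5, 20.5],
    "lon":    [100.5, 101.5, 100.5, 101.5],
    "precip": [1.2, 0.0, 3.4, 0.7],
})
pq.write_table(table, "granule.parquet")

# Time series for one grid cell: a column/row subset, not a file-per-time scan.
subset = pq.read_table(
    "granule.parquet",
    columns=["time", "precip"],
    filters=[("lat", "==", 10.5), ("lon", "==", 100.5)],
)
print(subset.to_pydict())
```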
A malware detection scheme based on mining format information.
Bai, Jinrong; Wang, Junfeng; Zou, Guozhong
2014-01-01
Malware has become one of the most serious threats to computer information systems, and current malware detection technology still has very significant limitations. In this paper, we propose a malware detection approach that mines format information of PE (portable executable) files. Based on in-depth analysis of the static format information of PE files, we extracted 197 features from the format information and applied feature selection methods to reduce the dimensionality of the features and achieve acceptably high performance. When the selected features were trained using classification algorithms, the results of our experiments indicate that the accuracy of the top classification algorithm is 99.1% and the value of the AUC is 0.998. We designed three experiments to evaluate the performance of our detection scheme and its ability to detect unknown and new malware. Although the experimental results of identifying new malware are not perfect, our method is still able to identify 97.6% of new malware with a 1.3% false positive rate.
Flores, Romeo M.; Spear, Brianne D.; Purchase, Peter A.; Gallagher, Craig M.
2010-01-01
Described in this report is an updated subsurface stratigraphic framework of the Paleocene Fort Union Formation and Eocene Wasatch Formation in the Powder River Basin (PRB) in Wyoming and Montana. This framework is graphically presented in 17 intersecting west-east and north-south cross sections across the basin. Also included are: (1) the dataset and all associated digital files and (2) digital files for all figures and table 1 suitable for large-format printing. The purpose of this U.S. Geological Survey (USGS) Open-File Report is to provide rapid dissemination and accessibility of the stratigraphic cross sections and related digital data to USGS customers, especially the U.S. Bureau of Land Management (BLM), to facilitate their modeling of the hydrostratigraphy of the PRB. This report contains a brief summary of the coal-bed correlations and database, and is part of a larger ongoing study that will be available in the near future.
Development of an e-VLBI Data Transport Software Suite with VDIF
NASA Technical Reports Server (NTRS)
Sekido, Mamoru; Takefuji, Kazuhiro; Kimura, Moritaka; Hobiger, Thomas; Kokado, Kensuke; Nozawa, Kentarou; Kurihara, Shinobu; Shinno, Takuya; Takahashi, Fujinobu
2010-01-01
We have developed a software library (KVTP-lib) for VLBI data transmission over the network with the VDIF (VLBI Data Interchange Format), the newly proposed standard VLBI data format designed for electronic data transfer over the network. The software package keeps the application layer (VDIF frame) and the transmission layer separate, so that each layer can be developed efficiently. The real-time VLBI data transmission tool sudp-send is an application tool based on the KVTP-lib library. sudp-send captures the VLBI data stream from the VSI-H interface with the K5/VSI PC board and writes the data to file in standard Linux file format or transmits it to the network using the simple-UDP (SUDP) protocol. Another tool, sudp-recv, receives the data stream from the network and writes the data to file in a specific VLBI format (K5/VSSP, VDIF, or Mark 5B). This software system has been implemented on the Wettzell-Tsukuba baseline; evaluation before operational employment is under way.
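For context, the fixed part of a VDIF frame header is four little-endian 32-bit words. A minimal parser following the published VDIF specification might look like the sketch below; it is illustrative only and is not part of the KVTP-lib package.

```python
# Sketch: decode the 16-byte fixed portion of a VDIF frame header,
# per the published VDIF specification (little-endian 32-bit words).
import struct

def parse_vdif_header(buf: bytes) -> dict:
    w0, w1, w2, w3 = struct.unpack("<4I", buf[:16])
    return {
        "invalid":         bool(w0 >> 31),
        "legacy":          bool((w0 >> 30) & 1),
        "seconds":         w0 & 0x3FFFFFFF,       # seconds from reference epoch
        "ref_epoch":       (w1 >> 24) & 0x3F,     # 6-month epochs since 2000
        "frame_number":    w1 & 0xFFFFFF,
        "vdif_version":    (w2 >> 29) & 0x7,
        "log2_channels":   (w2 >> 24) & 0x1F,
        "frame_len_8B":    w2 & 0xFFFFFF,         # frame length in 8-byte units
        "complex_data":    bool(w3 >> 31),
        "bits_per_sample": ((w3 >> 26) & 0x1F) + 1,
        "thread_id":       (w3 >> 16) & 0x3FF,
        "station_id":      w3 & 0xFFFF,
    }
```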
75 FR 19339 - FM Table of Allotments, Amboy, California
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-14
.... SUMMARY: The Audio Division seeks comments on a petition filed by Sunnylands Broadcasting, LLC, proposing... disabilities (Braille, large print, electronic files, audio format), send an e-mail to [email protected] or call... Chief, Audio Division, Media Bureau. [FR Doc. 2010-8449 Filed 4-13-10; 8:45 am] BILLING CODE 6712-01-S ...
14 CFR 221.121 - How to prepare and file applications for Special Tariff Permission.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Special Tariff Permission To... notice shall conform to the requirements of § 221.212 if filed electronically. (b) Number of paper copies and place of filing. For paper format applications, the original and one copy of each such application...
Biological Investigations of Adaptive Networks: Neuronal Control of Conditioned Responses
1989-07-01
The program also controls A/D sampling of the voltage trace from the NMR transducer and disk files for NMR, neural spikes, and synchronization. * HSAD: Basic...format which ANALYZE (by John Desmond) can read. * FIG.HIRES: Reads C-64 HSAD files and EVENT NMR files and generates oscilloscope-like figures showing
77 FR 6625 - Railroad Cost of Capital-2011
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-08
... railroads are due by May 9, 2012. ADDRESSES: Comments may be submitted either via the Board's e-filing system or in the traditional paper format. Any person using e-filing should comply with the instructions at the E-FILING link on the Board's Web site, at http://www.stb.dot.gov . Any person submitting a...
Students' Attitudes to and Usage of Academic Feedback Provided via Audio Files
ERIC Educational Resources Information Center
Merry, Stephen; Orsmond, Paul
2008-01-01
This study explores students' attitudes to the provision of formative feedback on academic work using audio files together with the ways in which students implement such feedback within their learning. Fifteen students received audio file feedback on written work and were subsequently interviewed regarding their utilisation of that feedback within…
NASA IMAGESEER: NASA IMAGEs for Science, Education, Experimentation and Research
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline; Grubb, Thomas G.; Milner, Barbara C.
2012-01-01
A number of web-accessible databases, including medical, military or other image data, offer universities and other users the ability to teach or research new Image Processing techniques on relevant and well-documented data. However, NASA images have traditionally been difficult for researchers to find, are often only available in hard-to-use formats, and do not always provide sufficient context and background for a non-NASA Scientist user to understand their content. The new IMAGESEER (IMAGEs for Science, Education, Experimentation and Research) database seeks to address these issues. Through a graphically-rich web site for browsing and downloading all of the selected datasets, benchmarks, and tutorials, IMAGESEER provides a widely accessible database of NASA-centric, easy to read, image data for teaching or validating new Image Processing algorithms. As such, IMAGESEER fosters collaboration between NASA and research organizations while simultaneously encouraging development of new and enhanced Image Processing algorithms. The first prototype includes a representative sampling of NASA multispectral and hyperspectral images from several Earth Science instruments, along with a few small tutorials. Image processing techniques are currently represented with cloud detection, image registration, and map cover/classification. For each technique, corresponding data are selected from four different geographic regions, i.e., mountains, urban, water coastal, and agriculture areas. Satellite images have been collected from several instruments - Landsat-5 and -7 Thematic Mappers, Earth Observing-1 (EO-1) Advanced Land Imager (ALI) and Hyperion, and the Moderate Resolution Imaging Spectroradiometer (MODIS). After geo-registration, these images are available in simple common formats such as GeoTIFF and raw formats, along with associated benchmark data.
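Reading one of the geo-registered GeoTIFF products is straightforward with a library such as rasterio; the file name below is hypothetical, and rasterio is our choice of reader, not something mandated by IMAGESEER.

```python
# Sketch: open a geo-registered GeoTIFF; the georeferencing travels with
# the pixels, which is what makes the "simple common format" directly usable.
import rasterio

with rasterio.open("landsat_scene.tif") as src:
    band1 = src.read(1)              # first band as a NumPy array
    print(src.crs, src.transform)    # coordinate system and geotransform
    print(band1.shape, band1.dtype)
```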
Kabekkodu, Soorya N; Faber, John; Fawcett, Tim
2002-06-01
The International Centre for Diffraction Data (ICDD) is responding to the changing needs in powder diffraction and materials analysis by developing the Powder Diffraction File (PDF) in a very flexible relational database (RDB) format. The PDF now contains 136,895 powder diffraction patterns. In this paper, an attempt is made to give an overview of the PDF-4, search/match methods and the advantages of having the PDF-4 in RDB format. Some case studies have been carried out to search for crystallization trends, properties, frequencies of space groups and prototype structures. These studies give a good understanding of the basic structural aspects of classes of compounds present in the database. The present paper also reports data-mining techniques and demonstrates the power of a relational database over the traditional (flat-file) database structures.
Is HDF5 a Good Format to Replace UVFITS?
NASA Astrophysics Data System (ADS)
Price, D. C.; Barsdell, B. R.; Greenhill, L. J.
2015-09-01
The FITS (Flexible Image Transport System) data format was developed in the late 1970s for storage and exchange of astronomy-related image data. Since then, it has become a standard file format not only for images, but also for radio interferometer data (e.g., UVFITS, FITS-IDI). But is FITS the right format for next-generation telescopes to adopt? The newer Hierarchical Data Format (HDF5) file format offers considerable advantages over FITS, but has yet to gain widespread adoption within radio astronomy. One of the major holdbacks is that HDF5 is not well supported by data reduction software packages. Here, we present a comparison of FITS, HDF5, and the MeasurementSet (MS) format for storage of interferometric data. In addition, we present a tool for converting between formats. We show that the underlying data model of FITS can be ported to HDF5, a first step toward achieving wider HDF5 support.
BOREAS TE-20 Soils Data Over the NSA-MSA and Tower Sites in Raster Format
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Veldhuis, Hugo; Knapp, David
2000-01-01
The BOREAS TE-20 team collected several data sets for use in developing and testing models of forest ecosystem dynamics. This data set was gridded from vector layers of soil maps that were received from Dr. Hugo Veldhuis, who did the original mapping in the field during 1994. The vector layers were gridded into raster files that cover the NSA-MSA and tower sites. The data are stored in binary, image format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
Covariance Data File Formats for Whisper-1.0 & Whisper-1.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan
2017-01-09
Whisper is a statistical analysis package developed in 2014 to support nuclear criticality safety (NCS) validation. It uses the sensitivity profile data for an application as computed by MCNP6 along with covariance files for the nuclear data to determine a baseline upper-subcritical-limit (USL) for the application. Whisper version 1.0 was first developed and used at LANL in 2014. During 2015-2016, Whisper was updated to version 1.1 and is to be included with the upcoming release of MCNP6.2. This report describes the file formats used for the covariance data in both Whisper-1.0 and Whisper-1.1.
Tool for Merging Proposals Into DSN Schedules
NASA Technical Reports Server (NTRS)
Khanampornpan, Teerapat; Kwok, John; Call, Jared
2008-01-01
A Practical Extraction and Reporting Language (Perl) script called merge7da has been developed to facilitate determination, by a project scheduler in NASA's Deep Space Network, of whether a proposal for use of the DSN could create a conflict with the current DSN schedule. Prior to the development of merge7da, there was no way to quickly identify potential schedule conflicts: it was necessary to submit a proposal and wait a day or two for a response from a DSN scheduling facility. By using merge7da to detect and eliminate potential schedule conflicts before submitting a proposal, a project scheduler saves time and gains assurance that the proposal will probably be accepted. merge7da accepts two input files, one of which contains the current DSN schedule and is in a DSN-standard format called '7da'. The other input file contains the proposal and is in another DSN-standard format called 'C1/C2'. merge7da processes the two input files to produce a merged 7da-format output file that represents the DSN schedule as it would be if the proposal were to be adopted. This 7da output file can be loaded into various DSN scheduling software tools now in use.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-29
... submissions by the parties may be submitted via the Board's e-filing format or in the traditional paper format. Any person using e-filing should attach a document and otherwise comply with the instructions at the E... proceeding under 49 U.S.C. 721 and 5 U.S.C. 554(e). Petitioners request that the Board declare that specific...
Occupational Survey Report. Visual Information, AFSC 3V0X1
2000-04-01
of the career ladder include: scan artwork using flatbed scanners; convert graphic file formats; design layouts; letter certificates using laser...; produce artwork using mouse or digitizing tablets; design and produce imagery for web pages. Representative DAFSC 3V031 personnel tasks: A0034 Scan artwork using flatbed scanners; C0065 Design layouts; A0004 Convert graphic file formats; A0006 Create
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Da; Zhang, Qibin; Gao, Xiaoli
2014-04-30
We have developed a tool for automated, high-throughput analysis of LC-MS/MS data files, which greatly simplifies LC-MS based lipidomics analysis. Our results showed that LipidMiner is accurate and comprehensive in identification and quantification of lipid molecular species. In addition, the workflow implemented in LipidMiner is not limited to identification and quantification of lipids. If a suitable metabolite library is implemented in the library matching module, LipidMiner could be reconfigured as a tool for general metabolomics data analysis. It is of note that LipidMiner currently is limited to singly charged ions, although this is adequate for the purpose of lipidomics since lipids are rarely multiply charged,[14] even for the polyphosphoinositides. LipidMiner also only processes file formats generated by Thermo mass spectrometers, i.e., the .RAW format. In the future, we are planning to accommodate file formats generated by mass spectrometers from other predominant instrument vendors to make this tool more universal.
Transforming Dermatologic Imaging for the Digital Era: Metadata and Standards.
Caffery, Liam J; Clunie, David; Curiel-Lewandrowski, Clara; Malvehy, Josep; Soyer, H Peter; Halpern, Allan C
2018-01-17
Imaging is increasingly being used in dermatology for documentation, diagnosis, and management of cutaneous disease. The lack of standards for dermatologic imaging is an impediment to clinical uptake. Standardization can occur in image acquisition, terminology, interoperability, and metadata. This paper presents the International Skin Imaging Collaboration position on standardization of metadata for dermatologic imaging. Metadata is essential to ensure that dermatologic images are properly managed and interpreted. There are two standards-based approaches to recording and storing metadata in dermatologic imaging. The first uses standard consumer image file formats; the second is the file format and metadata model developed for the Digital Imaging and Communications in Medicine (DICOM) standard. DICOM would appear to provide an advantage over consumer image file formats for metadata, as it includes all the patient, study, and technical metadata necessary to use images clinically, whereas consumer image file formats include only technical metadata and need to be used in conjunction with another actor (for example, an electronic medical record) to supply the patient and study metadata. The use of DICOM may have some ancillary benefits in dermatologic imaging, including leveraging DICOM network and workflow services, interoperability of images and metadata, leveraging existing enterprise imaging infrastructure, greater patient safety, and better compliance with legislative requirements for image retention.
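The metadata difference is easy to see in code: with the pydicom library, a DICOM object exposes patient and study attributes alongside the technical ones. The file name below is hypothetical.

```python
# Sketch: patient, study, and technical metadata all travel inside one
# DICOM object; a consumer format like JPEG carries only EXIF-style
# technical tags. "skin_lesion.dcm" is a hypothetical file.
import pydicom

ds = pydicom.dcmread("skin_lesion.dcm")
print(ds.PatientName, ds.PatientID)   # patient-level metadata
print(ds.StudyDate, ds.Modality)      # study-level metadata
print(ds.Rows, ds.Columns)            # technical metadata
pixels = ds.pixel_array               # the image data itself (needs NumPy)
```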
Robichaud, Guillaume; Garrard, Kenneth P; Barry, Jeremy A; Muddiman, David C
2013-05-01
During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature, and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, there is an important variety of data file formats used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.
ArrayInitiative - a tool that simplifies creating custom Affymetrix CDFs
2011-01-01
Background Probes on a microarray represent a frozen view of a genome and are quickly outdated when new sequencing studies extend our knowledge, resulting in significant measurement error when analyzing any microarray experiment. There are several bioinformatics approaches to improve probe assignments, but without in-house programming expertise, standardizing these custom array specifications as a usable file (e.g. as Affymetrix CDFs) is difficult, owing mostly to the complexity of the specification file format. However, without correctly standardized files there is a significant barrier for testing competing analysis approaches since this file is one of the required inputs for many commonly used algorithms. The need to test combinations of probe assignments and analysis algorithms led us to develop ArrayInitiative, a tool for creating and managing custom array specifications. Results ArrayInitiative is a standalone, cross-platform, rich client desktop application for creating correctly formatted, custom versions of manufacturer-provided (default) array specifications, requiring only minimal knowledge of the array specification rules and file formats. Users can import default array specifications, import probe sequences for a default array specification, design and import a custom array specification, export any array specification to multiple output formats, export the probe sequences for any array specification and browse high-level information about the microarray, such as version and number of probes. The initial release of ArrayInitiative supports the Affymetrix 3' IVT expression arrays we currently analyze, but as an open source application, we hope that others will contribute modules for other platforms. Conclusions ArrayInitiative allows researchers to create new array specifications, in a standard format, based upon their own requirements. This makes it easier to test competing design and analysis strategies that depend on probe definitions. Since the custom array specifications are easily exported to the manufacturer's standard format, researchers can analyze these customized microarray experiments using established software tools, such as those available in Bioconductor. PMID:21548938
Toolsets for Airborne Data (TAD): Improving Machine Readability for ICARTT Data Files
NASA Astrophysics Data System (ADS)
Northup, E. A.; Early, A. B.; Beach, A. L., III; Kusterer, J.; Quam, B.; Wang, D.; Chen, G.
2015-12-01
NASA has conducted airborne tropospheric chemistry studies for about three decades. These field campaigns have generated a great wealth of observations, including a wide range of trace gases and aerosol properties. The ASDC Toolsets for Airborne Data (TAD) is designed to meet the user community's needs for manipulating aircraft data for scientific research on climate change and air quality relevant issues. TAD makes use of aircraft data stored in the International Consortium for Atmospheric Research on Transport and Transformation (ICARTT) file format. ICARTT has been the NASA standard since 2010, and is widely used by NOAA, NSF, and international partners (DLR, FAAM). Its level of acceptance is due in part to it being generally self-describing for researchers, i.e., it provides the data descriptions necessary for proper research use. Despite this, there are a number of issues with the current ICARTT format, especially concerning machine readability. In order to overcome these issues, the TAD team has developed an "idealized" file format. This format is ASCII and is sufficiently machine readable to sustain the TAD system; however, it is not fully compatible with the current ICARTT format. The process of mapping ICARTT metadata to the idealized format, the format specifics, and the actual conversion process will be discussed. The goal of this presentation is to demonstrate an example of how to improve the machine readability of ASCII data format protocols.
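A minimal sketch of ICARTT's partial machine readability follows, assuming format index (FFI) 1001: the first header line gives the header length, and the last header line lists the variable names, but everything in between is free-form text that tools like TAD must map by hand. The function and file layout are our reading of the convention, not TAD code.

```python
# Sketch: read the data records from a hypothetical FFI-1001 ICARTT file.
import csv

def read_icartt(path):
    with open(path) as f:
        first = f.readline()                  # e.g. "37, 1001"
        n_header = int(first.split(",")[0])   # number of header lines
        header = [first] + [f.readline() for _ in range(n_header - 1)]
        data_lines = f.readlines()
    names = [c.strip() for c in header[-1].split(",")]   # variable names
    rows = [[float(v) for v in line] for line in csv.reader(data_lines) if line]
    return names, rows
```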
SW New Mexico Oil Well Formation Tops
Shari Kelley
2015-10-21
Rock formation top picks from oil wells from southwestern New Mexico from scout cards and other sources. There are differing formation tops interpretations for some wells, so for those wells duplicate formation top data are presented in this file.
Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Torres, Aaron
2015-02-03
Techniques are provided for storing files in a parallel computing system using sub-files with semantically meaningful boundaries. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a plurality of sub-files. The method comprises the steps of obtaining a user specification of semantic information related to the file; providing the semantic information as a data structure description to a data formatting library write function; and storing the semantic information related to the file with one or more of the sub-files in one or more storage nodes of the parallel computing system. The semantic information provides a description of data in the file. The sub-files can be replicated based on semantically meaningful boundaries.
Standard interface files and procedures for reactor physics codes, version III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, B.M.
Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized. (15 references) (auth)
75 FR 19338 - FM TABLE OF ALLOTMENTS, Milford, Utah
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-14
.... SUMMARY: The Audio Division seeks comments on a petition filed by Canyon Media Group, LLC, authorized..., large print, electronic files, audio format), send an e-mail to [email protected] or call the Consumer... Chief, Audio Division, Media Bureau. [FR Doc. 2010-8448 Filed 4-13-10; 8:45 am] BILLING CODE 6712-01-S ...
Snake River Plain Geothermal Play Fairway Analysis - Phase 1 KMZ files
John Shervais
2015-10-10
This dataset contains raw data in KMZ format (Google Earth georeferenced files). The files include volcanic vent locations and ages, the distribution of fine-grained lacustrine sediments (which act as both a seal and an insulating layer for hydrothermal fluids), and post-Miocene faults compiled from the Idaho Geological Survey, the USGS Quaternary Fault database, and unpublished mapping. The dataset also contains the Composite Common Risk Segment Map created during Phase 1 studies, as well as a file with locations of select deep wells used to interrogate the subsurface.
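A KMZ file is simply a ZIP archive whose main entry is a KML (XML) document, so its placemarks can be inspected with the Python standard library alone; the file name below is hypothetical.

```python
# Sketch: list placemark names and coordinates from a KMZ file.
import zipfile
import xml.etree.ElementTree as ET

with zipfile.ZipFile("volcanic_vents.kmz") as kmz:
    kml_name = next(n for n in kmz.namelist() if n.endswith(".kml"))
    root = ET.fromstring(kmz.read(kml_name))

ns = {"kml": "http://www.opengis.net/kml/2.2"}
for placemark in root.iterfind(".//kml:Placemark", ns):
    name = placemark.findtext("kml:name", default="(unnamed)", namespaces=ns)
    coords = placemark.findtext(".//kml:coordinates", default="", namespaces=ns)
    print(name, coords.strip())
```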
Cytoscape file of chemical networks
The maximum connectivity scores of pairwise chemical conditions, summarized from CMap results, in a file in Cytoscape format (http://www.cytoscape.org/). The figures in the publication were generated from this file. The Cytoscape file is formed by importing the eight text files therein. This dataset is associated with the following publication: Wang, R., A. Biales, N. Garcia-Reyero, E. Perkins, D. Villeneuve, G. Ankley, and D. Bencic. Fish Connectivity Mapping: Linking Chemical Stressors by Their MOA-Driven Transcriptomic Profiles. BMC Genomics. BioMed Central Ltd, London, UK, 17(84): 1-20, (2016).
Index files for Belle II - very small skim containers
NASA Astrophysics Data System (ADS)
Sevior, Martin; Bloomfield, Tristan; Kuhr, Thomas; Ueda, I.; Miyake, H.; Hara, T.
2017-10-01
The Belle II experiment[1] employs the root file format[2] for recording data and is investigating the use of “index-files” to reduce the size of data skims. These files contain pointers to the location of interesting events within the total Belle II data set and reduce the size of data skims by 2 orders of magnitude. We implement this scheme on the Belle II grid by recording the parent file metadata and the event location within the parent file. While the scheme works, it is substantially slower than a normal sequential read of standard skim files using default root file parameters. We investigate the performance of the scheme by adjusting the “splitLevel” and “autoflushsize” parameters of the root files in the parent data files.
18 CFR 270.304 - Tight formation gas.
Code of Federal Regulations, 2011 CFR
2011-04-01
... determination that natural gas is tight formation gas must file with the jurisdictional agency an application... formation; (d) A complete copy of the well log, including the log heading identifying the designated tight...
Proposal for a Standard Format for Neurophysiology Data Recording and Exchange.
Stead, Matt; Halford, Jonathan J
2016-10-01
The lack of interoperability between information networks is a significant source of cost in health care. Standardized data formats decrease health care cost, improve quality of care, and facilitate biomedical research. There is no common standard digital format for storing clinical neurophysiologic data. This review proposes a new standard file format for neurophysiology data (the bulk of which is video-electroencephalographic data), entitled the Multiscale Electrophysiology Format, version 3 (MEF3), which is designed to address many of the shortcomings of existing formats. MEF3 provides functionality that addresses many of the limitations of current formats. The proposed improvements include (1) hierarchical file structure with improved organization; (2) greater extensibility for big data applications requiring a large number of channels, signal types, and parallel processing; (3) efficient and flexible lossy or lossless data compression; (4) industry standard multilayered data encryption and time obfuscation that permits sharing of human data without the need for deidentification procedures; (5) resistance to file corruption; (6) facilitation of online and offline review and analysis; and (7) provision of full open source documentation. At this time, there is no other neurophysiology format that supports all of these features. MEF3 is currently gaining industry and academic community support. The authors propose the use of the MEF3 as a standard format for neurophysiology recording and data exchange. Collaboration between industry, professional organizations, research communities, and independent standards organizations is needed to move the project forward.
HDFITS: Porting the FITS data model to HDF5
NASA Astrophysics Data System (ADS)
Price, D. C.; Barsdell, B. R.; Greenhill, L. J.
2015-09-01
The FITS (Flexible Image Transport System) data format has been the de facto data format for astronomy-related data products since its inception in the late 1970s. While the FITS file format is widely supported, it lacks many of the features of more modern data serialization, such as the Hierarchical Data Format (HDF5). The HDF5 file format offers considerable advantages over FITS, such as improved I/O speed and compression, but has yet to gain widespread adoption within astronomy. One of the major holdbacks is that HDF5 is not well supported by data reduction software packages and image viewers. Here, we present a comparison of FITS and HDF5 as a format for storage of astronomy datasets. We show that the underlying data model of FITS can be ported to HDF5 in a straightforward manner, and that by doing so the advantages of the HDF5 file format can be leveraged immediately. In addition, we present a software tool, fits2hdf, for converting between FITS and a new 'HDFITS' format, where data are stored in HDF5 in a FITS-like manner. We show that HDFITS allows faster reading of data (up to 100x faster than FITS in some use cases) and improved compression (higher compression ratios and higher throughput). Finally, we show that by only changing the import lines in Python-based FITS utilities, HDFITS-formatted data can be presented transparently as an in-memory FITS equivalent.
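The core idea, copying a FITS HDU's header cards and data into HDF5 attributes and a dataset, can be sketched with astropy and h5py as below. This mirrors the HDFITS approach in spirit only; the real fits2hdf tool additionally handles tables, units, and round-tripping, and the file names here are hypothetical.

```python
# Sketch: port FITS HDUs into an HDF5 file, FITS keywords as attributes.
from astropy.io import fits
import h5py

with fits.open("image.fits") as hdul, h5py.File("image.h5", "w") as h5:
    for i, hdu in enumerate(hdul):
        if hdu.data is None:
            continue
        dset = h5.create_dataset(f"HDU{i}/DATA", data=hdu.data,
                                 compression="gzip")
        for card in hdu.header.cards:   # keep FITS keywords as attributes
            if card.keyword not in ("", "COMMENT", "HISTORY"):
                dset.attrs[card.keyword] = str(card.value)
```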
What is meant by Format Version? Product Version? Collection?
Atmospheric Science Data Center
2017-10-12
The Format Version is used to distinguish between software deliveries to ASDC that result in a product format change. The Format Version is given in the MISR data file name using the designator _Fnn_, where nn is the version number. ...
ListingAnalyst: A program for analyzing the main output file from MODFLOW
Winston, Richard B.; Paulinski, Scott
2014-01-01
ListingAnalyst is a Windows® program for viewing the main output file from MODFLOW-2005, MODFLOW-NWT, or MODFLOW-LGR. It organizes and displays large files quickly without using excessive memory. The sections and subsections of the file are displayed in a tree-view control, which allows the user to navigate quickly to desired locations in the files. ListingAnalyst gathers error and warning messages scattered throughout the main output file and displays them all together in an error and a warning tab. A grid view displays tables in a readable format and allows the user to copy the table into a spreadsheet. The user can also search the file for terms of interest.
Vector Topographic Map Data over the BOREAS NSA and SSA in SIF Format
NASA Technical Reports Server (NTRS)
Knapp, David; Nickeson, Jaime; Hall, Forrest G. (Editor)
2000-01-01
This data set contains vector contours and other features of individual topographic map sheets from the National Topographic Series (NTS). The map sheet files were received in Standard Interchange Format (SIF) and cover the BOReal Ecosystem-Atmosphere Study (BOREAS) Northern Study Area (NSA) and Southern Study Area (SSA) at scales of 1:50,000 and 1:250,000. The individual files are stored in compressed Unix tar archives.
Workflow opportunities using JPEG 2000
NASA Astrophysics Data System (ADS)
Foshee, Scott
2002-11-01
JPEG 2000 is a new image compression standard from ISO/IEC JTC1 SC29 WG1, the Joint Photographic Experts Group (JPEG) committee. Better thought of as a sibling to JPEG rather than a descendant, the JPEG 2000 standard offers wavelet-based compression as well as companion file formats and related standardized technology. This paper examines the JPEG 2000 standard for features in four specific areas (compression, file formats, client-server, and conformance/compliance) that enable image workflows.
GIF Animation of Mode Shapes and Other Data on the Internet
NASA Technical Reports Server (NTRS)
Pappa, Richard S.
1998-01-01
The World Wide Web abounds with animated cartoons and advertisements competing for our attention. Most of these figures are animated Graphics Interchange Format (GIF) files. These files contain a series of ordinary GIF images plus control information, and they provide an exceptionally simple, effective way to animate on the Internet. To date, however, this format has rarely been used for technical data, although there is no inherent reason not to do so. This paper describes a procedure for creating high-resolution animated GIFs of mode shapes and other types of structural dynamics data with readily available software. The paper shows three example applications using recent modal test data and video footage of a high-speed sled run. A fairly detailed summary of the GIF file format is provided in the appendix. All of the animations discussed in the paper are posted on the Internet available through the following address: http://sdb-www.larc.nasa.gov/.
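The same recipe works today with off-the-shelf tools: stack ordinary frames into one animated GIF with the control information (frame duration, loop count) the paper describes. The sketch below uses the Pillow library; mode_shape_frame() is a hypothetical placeholder renderer, not the paper's software.

```python
# Sketch: build an animated GIF from a series of ordinary frames (Pillow).
from PIL import Image
import math

def mode_shape_frame(phase: float) -> Image.Image:
    # Hypothetical renderer: a bar whose height oscillates with phase,
    # standing in for a rendered mode-shape frame.
    img = Image.new("L", (200, 100), color=255)
    h = int(40 * (1 + math.sin(phase)))
    for x in range(90, 110):
        for y in range(100 - h, 100):
            img.putpixel((x, y), 0)
    return img

frames = [mode_shape_frame(2 * math.pi * k / 24) for k in range(24)]
frames[0].save("mode_shape.gif", save_all=True,
               append_images=frames[1:], duration=50, loop=0)
```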
Preparing PNNL Reports with LaTeX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waichler, Scott R.
2005-06-01
LaTeX is a mature document preparation system that is the standard in many scientific and academic workplaces. It has been used extensively by scattered individuals and research groups within PNNL for years, but until now there have been no centralized or lab-focused resources to help authors and editors. PNNL authors and editors can produce correctly formatted PNNL or PNWD reports using the LaTeX document preparation system and the available template files. Please visit the PNNL-LaTeX Project (http://stidev.pnl.gov/resources/latex/, inside the PNNL firewall) for additional information and files. In LaTeX, document content is maintained separately from document structure for the most part. This means that the author can easily produce the same content in different formats and, more importantly, can focus on the content and write it in a plain text file that doesn't go awry, is easily transferable, and won't become obsolete due to software changes. LaTeX produces the finest print quality output; its typesetting is noticeably better than that of MS Word. This is particularly true for mathematics, tables, and other types of special text. Other benefits of LaTeX: easy handling of large numbers of figures and tables; automatic and error-free captioning, citation, cross-referencing, hyperlinking, and indexing; excellent published and online documentation; free or low-cost distributions for Windows/Linux/Unix/Mac OS X. This document serves two purposes: (1) it provides instructions to produce reports formatted to PNNL requirements using LaTeX, and (2) the document itself is in the form of a PNNL report, providing examples of many solved formatting challenges. Authors can use this document or its skeleton version (with formatting examples removed) as the starting point for their own reports. The pnnreport.cls class file and pnnl.bst bibliography style file contain the required formatting specifications for reports to the Department of Energy. Options are also provided for formatting PNWD (non-1830) reports. This documentation and the referenced files are meant to provide a complete package of PNNL particulars for authors and editors who wish to prepare technical reports using LaTeX. The example material in this document was borrowed from real reports and edited for demonstration purposes. The subject matter content of the example material is not relevant here and generally does not make literal sense in the context of this document. Brackets "[]" are used to denote large blocks of example text. The PDF file for this report contains hyperlinks to facilitate navigation. Hyperlinks are provided for all cross-referenced material, including section headings, figures, tables, and references. Not all hyperlinks are colored but will be obvious when you move your mouse over them.
Publications - PIR 2002-3 | Alaska Division of Geological & Geophysical Surveys
Bibliographic reference: Stevens, D.S.P., 2014, Engineering-geologic map of the Philip Smith Mountains. Digital geospatial data are listed by data layer, file format, and file size.
NASA Astrophysics Data System (ADS)
Mosca, Pietro; Mounier, Claude
2016-03-01
The automatic construction of evolution chains recently implemented in the GALILEE system is based on the analysis of several ENDF files: the multigroup production cross sections present in the GENDF files processed by NJOY from the ENDF evaluation, the decay file, and the fission product yields (FPY) file. In this context, this paper highlights the importance of nucleus identification for properly interconnecting the data mentioned above. The first part of the paper describes the present status of nucleus identification among the several ENDF files, focusing in particular on the use of the excited state number and of the isomeric state number. The second part reviews the problems encountered during the automatic construction of the depletion chains using recent ENDF data. The processing of JEFF-3.1.1 and ENDF/B-VII.0 (decay and FPY) and JEFF-3.2 (production cross sections) points out cases where nucleus identifiers do not comply with the ENDF-6 format, and sometimes inconsistencies among the various ENDF files. In addition, the analysis of EAF-2003 and EAF-2010 shows some inconsistency between the ZA product identifier and the reaction identifier MT for the reactions (n, pα) and (n, 2np). As the main result of this work, our suggestion is to change the ENDF format to use the isomeric state number systematically to identify nuclei. This proposal is already consistent with the large amount of ENDF data that does not follow the present ENDF format. This choice is the most convenient because, ultimately, it allows one to give human-readable names to the nuclei of the depletion chains.
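The identifier convention at issue can be illustrated in a few lines: ENDF's ZA packs the charge and mass numbers as ZA = 1000*Z + A, so metastable states need a separate isomeric state number (LISO), the very field the authors propose using systematically. The small helper below is illustrative, not GALILEE code.

```python
# Sketch: decode the ENDF ZA identifier (ZA = 1000*Z + A); without the
# isomeric state number LISO, Am-242 and Am-242m are indistinguishable.
def decode_za(za: int, liso: int = 0) -> str:
    z, a = divmod(za, 1000)
    suffix = f"m{liso}" if liso else ""
    return f"Z={z} A={a}{suffix}"

print(decode_za(95242))          # Z=95 A=242   (ground state assumed)
print(decode_za(95242, liso=1))  # Z=95 A=242m1 (the metastable isomer)
```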
BOREAS RSS-14 Level-1a GOES-8 Visible, IR and Water Vapor Images
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Newcomer, Jeffrey A.; Faysash, David; Cooper, Harry J.; Smith, Eric A.
2000-01-01
The BOREAS RSS-14 team collected and processed several GOES-7 and GOES-8 image data sets that covered the BOREAS study region. The level-1a GOES-8 images were created by BORIS personnel from the level-1 images delivered by FSU personnel. The data cover 14-Jul-1995 to 21-Sep-1995 and 12-Feb-1996 to 03-Oct-1996. The data start out as three bands with 8-bit pixel values and end up as five bands with 10-bit pixel values. No major problems with the data have been identified. The differences between the level-1 and level-1a GOES-8 data are the formatting and packaging of the data. The images missing from the temporal series of level-1 GOES-8 images were zero-filled by BORIS staff to create files consistent in size and format. In addition, BORIS staff packaged all the images of a given type from a given day into a single file, removed the header information from the individual level-1 files, and placed it into a single descriptive ASCII header file. The data are contained in binary image format files. Due to the large size of the images, the level-1a GOES-8 data are not contained on the BOREAS CD-ROM set. An inventory listing file is supplied on the CD-ROM to inform users of what data were collected. The level-1a GOES-8 image data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). See sections 15 and 16 for more information. The data files are available on a CD-ROM (see document number 20010000884).
Main image file tape description
Warriner, Howard W.
1980-01-01
This Main Image File Tape document defines the data content and file structure of the Main Image File Tape (MIFT) produced by the EROS Data Center (EDC). This document also defines an INQUIRY tape, which is just a subset of the MIFT. The format of the INQUIRY tape is identical to the MIFT except for two records; therefore, with the exception of these two records (described elsewhere in this document), every remark made about the MIFT is true for the INQUIRY tape.
Exposing Coverage Data to the Semantic Web within the MELODIES project: Challenges and Solutions
NASA Astrophysics Data System (ADS)
Riechert, Maik; Blower, Jon; Griffiths, Guy
2016-04-01
Coverage data, typically large in volume, assign values to a given set of spatiotemporal positions, together with metadata on how to interpret those values. Existing storage formats like netCDF, HDF and GeoTIFF all have various restrictions that prevent them from being preferred formats for use over the web, especially the semantic web. Factors that are relevant here are the processing complexity, the semantic richness of the metadata, and the ability to request partial information, such as a subset or just the appropriate metadata. Making coverage data available within web browsers opens the door to new ways of working with such data, including new types of visualization and on-the-fly processing. As part of the European project MELODIES (http://melodiesproject.eu) we look into the challenges of exposing such coverage data in an interoperable and web-friendly way, and propose solutions using a host of emerging technologies like JSON-LD, the DCAT and GeoDCAT-AP ontologies, the CoverageJSON format, and new approaches to REST APIs for coverage data. We developed the CoverageJSON format within the MELODIES project as an additional way to expose coverage data to the web, next to having simple rendered images available using standards like OGC's WMS. CoverageJSON partially incorporates JSON-LD but does not encode individual data values as semantic resources, making use of the technology in a practical manner. Development also focused on making CoverageJSON a potential output format for OGC WCS. We will demonstrate how existing netCDF data can be exposed as CoverageJSON resources on the web together with a REST API that allows users to explore the data and run operations such as spatiotemporal subsetting. We will show various use cases from the MELODIES project, including client-side reclassification of a land-cover dataset within the browser, with the ability for the user to influence the reclassification result using the above technologies.
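As a rough illustration of why CoverageJSON is web-friendly, the sketch below builds a tiny grid coverage as an ordinary JSON document. The field names follow the CoverageJSON specification as we understand it (see https://covjson.org for the normative schema), and the values are invented:

```python
import json

coverage = {
    "type": "Coverage",
    "domain": {
        "type": "Domain",
        "domainType": "Grid",
        "axes": {                        # a 2x2 lon/lat grid
            "x": {"values": [-10.0, -9.0]},
            "y": {"values": [50.0, 51.0]},
        },
    },
    "ranges": {
        "temperature": {
            "type": "NdArray",
            "dataType": "float",
            "axisNames": ["y", "x"],
            "shape": [2, 2],
            "values": [280.1, 280.4, 279.8, 280.0],
        }
    },
}
print(json.dumps(coverage, indent=2))    # ready to serve to a browser client
```

Because the document is plain JSON, a browser can parse it natively, and a server can return just the "domain" member or a subset of "ranges" in response to a REST query, which is precisely the partial-access property described above.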
View From Within 'Perseverance Valley' on Mars (Enhanced Color)
2017-12-06
This enhanced-color view from within "Perseverance Valley," on the inner slope of the western rim of Endeavour Crater on Mars, includes wheel tracks from the Opportunity rover's descent of the valley. The Panoramic Camera (Pancam) on Opportunity's mast took the component images of the scene during the period Sept. 4 through Oct. 6, 2017, corresponding to sols (Martian days) 4840 through 4871 of the rover's work on Mars. Perseverance Valley is a system of shallow troughs descending eastward about the length of two football fields from the crest of the crater rim to the floor of the crater. This panorama spans from northeast on the left to northwest on the right, including portions of the crater floor (eastward) in the left half and of the rim (westward) in the right half. Opportunity began descending Perseverance Valley in mid-2017 (see map) as part of an investigation into how the valley formed. Rover wheel tracks are darker brown, between two patches of bright bedrock, receding toward the horizon in the right half of the scene. This view combines multiple images taken through three different Pancam filters. The selected filters admit light centered on wavelengths of 753 nanometers (near-infrared), 535 nanometers (green) and 432 nanometers (violet). The three color bands are combined here with enhancement to make differences in surface materials easier to see. A map and full-resolution TIFF file are available at https://photojournal.jpl.nasa.gov/catalog/PIA22073
lcps: Light curve pre-selection
NASA Astrophysics Data System (ADS)
Schlecker, Martin
2018-05-01
lcps searches for transit-like features (i.e., dips) in photometric data. Its main purpose is to restrict large sets of light curves to a number of files that show interesting behavior, such as drops in flux. While lcps is adaptable to any time-series format, its I/O module is designed specifically for photometry from the Kepler spacecraft. It extracts the pre-conditioned PDCSAP data from light curve files created by the standard Kepler pipeline, and it can also handle CSV-formatted ASCII files. lcps uses a sliding-window technique to compare a section of a flux time series with its surroundings. A dip is detected if the flux within the window is lower than a threshold fraction of the surrounding fluxes.
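The sliding-window idea lends itself to a compact sketch. The following is not the lcps source, just a minimal illustration of the detection rule described above (window length and threshold are arbitrary):

```python
import numpy as np

def find_dips(flux, window=16, threshold=0.99):
    """Flag window start indices where the median flux inside the window
    drops below a threshold fraction of the median flux around it."""
    dips = []
    for start in range(window, len(flux) - 2 * window):
        inside = np.median(flux[start:start + window])
        around = np.median(np.r_[flux[start - window:start],
                                 flux[start + window:start + 2 * window]])
        if inside < threshold * around:
            dips.append(start)
    return dips

flux = np.ones(500)
flux[240:256] -= 0.02        # inject a 2% transit-like dip
print(find_dips(flux)[:3])   # window positions overlapping the dip are flagged
```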
17 CFR 16.06 - Errors or omissions.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., reporting markets shall file corrections to errors or omissions in data previously filed with the Commission pursuant to §§ 16.00 and 16.01 in the format and using the coding structure and electronic data submission...
Publications - PR 121 | Alaska Division of Geological & Geophysical Surveys
Surficial-geology map data for the Philip Smith Mountains Quadrangle, Alaska; digital data files are available for download (see the publication sales page for more information).
Publications - RI 2001-1C | Alaska Division of Geological & Geophysical Surveys
Surficial-geologic map of the Chulitna region, southcentral Alaska, scale 1:63,360 (7.5 M); digital geospatial data files for the Chulitna region surficial geology are available for download.
Publications - RDF 2015-17 | Alaska Division of Geological & Geophysical Surveys
Tonsina geochemistry: DGGS samples. Report rdf2015_017.pdf (347.0 K) and digital geospatial data files are available for download (https://doi.org/10.14509/29519).
VizieR Online Data Catalog: Horizontal temperature at Venus upper atmosphere (Peralta+, 2016)
NASA Astrophysics Data System (ADS)
Peralta, J.; Lopez-Valverde, M. A.; Gilli, G.; Piccialli, A.
2015-11-01
The dayside atmospheric temperatures in the UMLT of Venus (displayed in Figure 7A of this article) are listed as a CSV data file. These values consist of averages in bins of 5° in latitude and 0.25 hours in local time from dayside temperatures covering five years of data (from 2006/05/14 to 2011/06/05). These temperatures were inferred from the CO2 NLTE nadir spectra measured by the instrument VIRTIS-H onboard Venus Express (see article for full description of the procedure), and are representative of the atmospheric region between 10^-2 and 10^-5 mb. Along with the temperatures, we also provide the corresponding error and the number of temperatures averaged in each bin. The format of the CSV file is close to the format expected for the data files to be provided in the future version of the Venus International Reference Atmosphere (VIRA). (1 data file).
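Since the catalog is a plain CSV file, the bins are easy to use from any analysis environment. A minimal sketch follows; the column names here (lat_bin, lt_bin, temp, temp_err, n_avg) are illustrative assumptions, not the catalog's actual header:

```python
import pandas as pd

# Select the bins nearest local noon and inspect them.
df = pd.read_csv("venus_umlt_temps.csv")
noon = df[(df["lt_bin"] >= 11.75) & (df["lt_bin"] <= 12.25)]
print(noon[["lat_bin", "temp", "temp_err", "n_avg"]].head())
```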
Profex: a graphical user interface for the Rietveld refinement program BGMN
Doebelin, Nicola; Kleeberg, Reinhard
2015-01-01
Profex is a graphical user interface for the Rietveld refinement program BGMN. Its interface focuses on preserving BGMN’s powerful and flexible scripting features by giving direct access to BGMN input files. Very efficient workflows for single or batch refinements are achieved by managing refinement control files and structure files, by providing dialogues and shortcuts for many operations, by performing operations in the background, and by providing import filters for CIF and XML crystal structure files. Refinement results can be easily exported for further processing. State-of-the-art graphical export of diffraction patterns to pixel and vector graphics formats allows the creation of publication-quality graphs with minimum effort. Profex reads and converts a variety of proprietary raw data formats and is thus largely instrument independent. Profex and BGMN are available under an open-source license for Windows, Linux and OS X operating systems. PMID:26500466
Desktop document delivery using portable document format (PDF) files and the Web.
Shipman, J P; Gembala, W L; Reeder, J M; Zick, B A; Rainwater, M J
1998-01-01
Desktop access to electronic full-text literature was rated one of the most desirable services in a client survey conducted by the University of Washington Libraries. The University of Washington Health Sciences Libraries (UW HSL) conducted a ten-month pilot test from August 1996 to May 1997 to determine the feasibility of delivering electronic journal articles via the Internet to remote faculty. Articles were scanned into Adobe Acrobat Portable Document Format (PDF) files and delivered to individuals using Multipurpose Internet Mail Extensions (MIME) standard e-mail attachments and the Web. Participants retrieved scanned articles and used the Adobe Acrobat Reader software to view and print files. The pilot test required a special programming effort to automate the client notification and file deletion processes. Test participants were satisfied with the pilot test despite some technical difficulties. Desktop delivery is now offered as a routine delivery method from the UW HSL. PMID:9681165
"AFacet": a geometry based format and visualizer to support SAR and multisensor signature generation
NASA Astrophysics Data System (ADS)
Rosencrantz, Stephen; Nehrbass, John; Zelnio, Ed; Sudkamp, Beth
2018-04-01
When simulating multisensor signature data (including SAR, LIDAR, EO, IR, etc.), geometry data are required that accurately represent the target. Most vehicular targets can, in real life, exist in many possible configurations. Examples of these configurations might include a rotated turret, an open door, a missing roof rack, or a seat made of metal or wood. Previously we have used the Modelman (.mmp) format and tool to represent and manipulate our articulable models. Unfortunately, Modelman is now an unsupported tool and an undocumented binary format. Some work has been done to reverse engineer a reader in Matlab so that the format could continue to be useful; this work was tedious and resulted in an incomplete conversion. In addition, the resulting articulable models could not be altered and re-saved in the Modelman format. The AFacet (.afacet) articulable facet file format is a replacement for the binary Modelman (.mmp) file format, with a one-time, straightforward conversion path from Modelman to AFacet. It is a simple ASCII, comma-separated, self-documenting format that is easily readable (and in many cases usefully editable) by a human with any text editor, preventing future obsolescence. In addition, because the format is simple, it is relatively easy for even the most novice programmer to create a program to read and write AFacet files in any language without any special libraries. This paper presents the AFacet format, as well as a suite of tools for creating, articulating, manipulating, viewing, and converting the 370+ models (at the time of writing) that have been converted to the AFacet format.
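The claim that a novice can parse such a format without special libraries is easy to believe for any comma-separated ASCII layout. The sketch below is purely illustrative (the real AFacet record layout is defined in the paper; the "FACET" tag and field order here are invented):

```python
import csv

def read_facets(path):
    """Collect facet records from a hypothetical AFacet-like ASCII file."""
    facets = []
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if not row or row[0].startswith("#"):  # skip blanks and comments
                continue
            if row[0] == "FACET":                  # hypothetical record tag
                facets.append([float(v) for v in row[1:]])
    return facets
```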
Bouyssié, David; Dubois, Marc; Nasso, Sara; Gonzalez de Peredo, Anne; Burlet-Schiltz, Odile; Aebersold, Ruedi; Monsarrat, Bernard
2015-01-01
The analysis and management of MS data, especially those generated by data-independent MS acquisition, exemplified by SWATH-MS, pose significant challenges for proteomics bioinformatics. The large size and vast amount of information inherent to these data sets need to be properly structured to enable an efficient and straightforward extraction of the signals used to identify specific target peptides. Standard XML-based formats are not well suited to large MS data files, for example, those generated by SWATH-MS, and compromise high-throughput data processing and storage. We developed mzDB, an efficient file format for large MS data sets. It relies on the SQLite software library and consists of a standardized and portable server-less single-file database. An optimized 3D indexing approach is adopted, where the LC-MS coordinates (retention time and m/z), along with the precursor m/z for SWATH-MS data, are used to query the database for data extraction. In comparison with XML formats, mzDB saves ∼25% of storage space and improves access times by factors from two up to 2000, depending on the particular data access. Similarly, mzDB also shows slightly to significantly lower access times in comparison with other formats like mz5. Both C++ and Java implementations, converting raw or XML formats to mzDB and providing access methods, will be released under a permissive license. mzDB can be easily accessed by the SQLite C library and its drivers for all major languages, and browsed with existing dedicated GUIs. The mzDB format described here can boost existing mass spectrometry data analysis pipelines, offering unprecedented performance in terms of efficiency, portability, compactness, and flexibility. PMID:25505153
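Because an mzDB archive is a single SQLite file, any language with an SQLite driver can query it directly. A minimal sketch of such a range query follows; the table and column names (spectrum, rt, mz_min, mz_max) are illustrative assumptions, not the published mzDB schema:

```python
import sqlite3

# Server-less access: open the single-file database and ask for spectra
# whose retention-time and m/z bounds intersect an LC-MS window.
con = sqlite3.connect("run01.mzdb")
rows = con.execute(
    "SELECT id FROM spectrum "
    "WHERE rt BETWEEN ? AND ? AND mz_min <= ? AND mz_max >= ?",
    (25.0, 26.5, 500.0, 500.0),
).fetchall()
print(f"{len(rows)} spectra intersect the LC-MS window")
con.close()
```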
ArrayBridge: Interweaving declarative array processing with high-performance computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Haoyuan; Floratos, Sofoklis; Blanas, Spyros
Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.
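For readers unfamiliar with the file-centric side that ArrayBridge interoperates with, the sketch below shows the plain-HDF5 half of the picture: writing an array and reading back a single hyperslab. File and dataset names are illustrative:

```python
import h5py
import numpy as np

# Write an array to HDF5, then read back only a 100x100 slab of it,
# the kind of partial access an imperative kernel typically performs.
with h5py.File("simulation.h5", "w") as f:
    f.create_dataset("pressure", data=np.random.rand(1000, 1000))

with h5py.File("simulation.h5", "r") as f:
    tile = f["pressure"][100:200, 100:200]   # only this slab is read
print(tile.mean())
```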
User's guide to HYPOINVERSE-2000, a Fortran program to solve for earthquake locations and magnitudes
Klein, Fred W.
2002-01-01
Hypoinverse is a computer program that processes files of seismic station data for an earthquake (such as P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user's guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, station list, and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands. History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version, and it supersedes the earlier documents. It serves as a detailed user's guide to the current version running on Unix and VAX-Alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN Unix computers, the executable currently is ~klein/hyp2000/hyp2000. New features. The Y2000 version of Hypoinverse includes all of the previous capabilities, but adds Y2000 formats to those defined earlier. In most cases, the new formats add 2 digits to the year field to accommodate the century. Other fields are sometimes rearranged or expanded to accommodate a better field order. The Y2000 formats are invoked with the “200” command. When the Y2000 flag is turned on, all files are read and written in the new format, and there is no mixing of format types in a single run. Some formats without a date field, like station files, have not changed. A separate program called 2000CONV has been written to convert old formats to new. Other new features, like expanded station names, calculating amplitude magnitudes from a variety of digital seismometers, station history files, interactive earthquake processing, and locations from CUSP (Caltech USGS Seismic Processing) binary files, have been added. General features. Hypoinverse will locate any number of events in an input file, which can be in one of several different formats. Any or all of printout, summary, or archive output may be produced. Hypoinverse is driven by user commands. The various commands define input and output files, set adjustable parameters, and solve for locations of a file of earthquake data using the parameters and files currently set. It is both interactive and "batch" in that commands may be executed either from the keyboard or from a file. You execute the commands in a file by typing @filename at the Hypoinverse prompt.
Users may either supply parameters on the command line, or omit them and be prompted interactively. The current parameter values are displayed and may be taken as defaults by pressing just the RETURN key after the prompt. This makes the program very easy to use, provided you can remember the names of the commands. Combining commands with and without their required parameters into a command file permits a variety of customized procedures, such as automatic input of crustal model and station data but prompting for a different phase file each time. All commands are 3 letters long and most require one or more parameters or file names. If they appear on a line with a command, character strings such as filenames must be enclosed in apostrophes (single quotes). Appendix 1 gives this and other free-format rules for supplying parameters, which are parsed in Fortran. When several parameters are required following a command, any of them may be omitted by replacing them with null fields (see appendix 1). A null field leaves that parameter unchanged from its current or default value. When you start HYPOINVERSE, default values are in effect for all parameters except file names. Hypoinverse is a complicated program with many features and options. Many of these "advanced" or seldom-used features are documented here, but in more detail than a typical user needs when first starting with the program. I have put some of this material in smaller type so that a first-time user can concentrate on the more important information.
2011-05-01
iTunes illustrate the difference between the centralized approach of digital library systems and the distributed approach of container file formats...metadata in a container file format. Apple’s iTunes uses a centralized metadata approach and allows users to maintain song metadata in a single...one iTunes library to another the metadata must be copied separately or reentered in the new library. This demonstrates the utility of storing metadata
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D.
2014-03-31
In November 2012, the Working Party on Evaluation Cooperation Subgroup 38 (WPEC-SG38) began the task of developing a nuclear data format and supporting infrastructure to replace the now nearly 50-year-old ENDF format. The first step in this process is to develop requirements for the new format and infrastructure. In this talk, I will review the status of ENDF's Thermal Scattering Law (TSL) formats as well as support for these data in the GND format (from which the new format is expected to evolve). Finally, I hope to begin a dialog with members of the thermal neutron scattering community so that their data needs can be accurately and easily accommodated by the new format and tools, as captured by the requirements document. During this discussion, we must keep in mind that the new tools and format must: support what is in existing data files; support new things we want to put in data files; and be flexible enough for us to adapt to future unanticipated challenges.
Ma, Lina; Sherrod, David R.; Scott, William E.
2014-01-01
This geodatabase contains information derived from legacy mapping that was published in 1995 as U.S. Geological Survey Open-File Report 95-219. The main component of this publication is a geologic map database prepared using geographic information system (GIS) applications. Included are pdf files to view or print the map sheet, the accompanying pamphlet from Open-File Report 95-219, and links to the original publication, which is available as scanned files in pdf format.
cljam: a library for handling DNA sequence alignment/map (SAM) with parallel processing.
Takeuchi, Toshiki; Yamada, Atsuo; Aoki, Takashi; Nishimura, Kunihiro
2016-01-01
Next-generation sequencing determines DNA bases, and the resulting sequence alignments are generally stored in files in the Sequence Alignment/Map (SAM) format or its compressed binary version (BAM). SAMtools is a typical tool for dealing with files in the SAM/BAM format. SAMtools has various functions, including detection of variants, visualization of alignments, indexing, extraction of parts of the data and loci, and conversion of file formats. It is written in C and executes quickly. However, SAMtools requires additional implementation work to be used in parallel with, for example, OpenMP (Open Multi-Processing) libraries. Given the accumulation of next-generation sequencing data, a simple parallelization program that can support cloud and PC-cluster environments is required. We have developed cljam using the Clojure programming language, which simplifies parallel programming, to handle SAM/BAM data. Cljam can run in a Java runtime environment (e.g., Windows, Linux, Mac OS X) with Clojure. Cljam can process and analyze SAM/BAM files in parallel and at high speed. The execution time with cljam is almost the same as with SAMtools. The cljam code is written in Clojure and has fewer lines than other similar tools.
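cljam itself is used from Clojure; as a rough Python analogue of the same indexed SAM/BAM access pattern, the pysam bindings to the SAMtools/HTSlib code can fetch reads from a region of a coordinate-sorted, indexed BAM (file name and region invented):

```python
import pysam

# Fetch reads overlapping a genomic region; requires a .bai index
# alongside the BAM file.
with pysam.AlignmentFile("sample.bam", "rb") as bam:
    for read in bam.fetch("chr1", 10_000, 10_500):
        print(read.query_name, read.reference_start)
```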
WhopGenome: high-speed access to whole-genome variation and sequence data in R.
Wittelsbürger, Ulrich; Pfeifer, Bastian; Lercher, Martin J
2015-02-01
The statistical programming language R has become a de facto standard for the analysis of many types of biological data, and is well suited for the rapid development of new algorithms. However, variant call data from population-scale resequencing projects are typically too large to be read and processed efficiently with R's built-in I/O capabilities. WhopGenome can efficiently read whole-genome variation data stored in the widely used variant call format (VCF) file format into several R data types. VCF files can be accessed either on local hard drives or on remote servers. WhopGenome can associate variants with annotations such as those available from the UCSC genome browser, and can accelerate the reading process by filtering loci according to user-defined criteria. WhopGenome can also read other Tabix-indexed files and create indices to allow fast selective access to FASTA-formatted sequence files. The WhopGenome R package is available on CRAN at http://cran.r-project.org/web/packages/WhopGenome/. A Bioconductor package has been submitted.
Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette
2013-06-01
High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated.
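The second procedure, retrofitting standardization onto existing files, might look like the following sketch: sweep a directory of Excel workbooks and re-export each one into a single agreed-upon CSV layout. The directory and column names are illustrative assumptions:

```python
from pathlib import Path
import pandas as pd

# Re-export every workbook that contains the required columns into a
# standardized CSV; report the files that need manual attention.
REQUIRED = ["compound", "concentration", "response"]

for xlsx in Path("raw_data").glob("*.xlsx"):
    df = pd.read_excel(xlsx)
    missing = [c for c in REQUIRED if c not in df.columns]
    if missing:
        print(f"{xlsx.name}: skipped, missing columns {missing}")
        continue
    df[REQUIRED].to_csv(xlsx.with_suffix(".csv"), index=False)
```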
Barnes, David G.; Vidiassov, Michail; Ruthensteiner, Bernhard; Fluke, Christopher J.; Quayle, Michelle R.; McHenry, Colin R.
2013-01-01
With the latest release of the S2PLOT graphics library, embedding interactive, 3-dimensional (3-d) scientific figures in Adobe Portable Document Format (PDF) files is simple, and can be accomplished without commercial software. In this paper, we motivate the need for embedding 3-d figures in scholarly articles. We explain how 3-d figures can be created using the S2PLOT graphics library, exported to Product Representation Compact (PRC) format, and included as fully interactive, 3-d figures in PDF files using the movie15 LaTeX package. We present new examples of 3-d PDF figures, explain how they have been made, validate them, and comment on their advantages over traditional, static 2-dimensional (2-d) figures. With the judicious use of 3-d rather than 2-d figures, scientists can now publish, share and archive more useful, flexible and faithful representations of their study outcomes. The article you are reading does not have embedded 3-d figures. The full paper, with embedded 3-d figures, is recommended and is available as a supplementary download from PLoS ONE (File S2). PMID:24086243
Cannon, William F.; Woodruff, Laurel G.
2003-01-01
This data set consists of nine files of geochemical information on various types of surficial deposits in northwestern Wisconsin and immediately adjacent parts of Michigan and Minnesota. The files are presented in two formats: dBASE IV files and Microsoft Excel files. The data present multi-element chemical analyses of soils, stream sediments, and lake sediments. Latitude and longitude values are provided in each file so that the dbf files can be readily imported into GIS applications. Metadata files are provided in outline form, question-and-answer form, and text form. The metadata include information on procedures for sample collection, sample preparation, and chemical analyses, including sensitivity and precision.
ProMC: Input-output data format for HEP applications using varint encoding
NASA Astrophysics Data System (ADS)
Chekanov, S. V.; May, E.; Strand, K.; Van Gemmeren, P.
2014-10-01
A new data format for Monte Carlo (MC) events, or any structured data, including experimental data, is discussed. The format is designed to store data in a compact binary form using variable-size integer encoding, as implemented in Google's Protocol Buffers package. This approach is implemented in the ProMC library, which produces smaller file sizes for MC records compared to the existing input-output libraries used in high-energy physics (HEP). Other important features of the proposed format are a separation of abstract data layouts from concrete programming implementations, self-description, and random access. Data stored in ProMC files can be written, read, and manipulated in a number of programming languages, such as C++, Java, FORTRAN, and Python.
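The space saving comes from the varint encoding itself: small integers, which dominate MC records, occupy a single byte. A minimal sketch of the protobuf-style encoding (seven payload bits per byte, with the high bit set while more bytes follow):

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a protobuf-style varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F          # low seven payload bits
        n >>= 7
        out.append(byte | 0x80 if n else byte)  # set high bit if more follow
        if not n:
            return bytes(out)

print(encode_varint(1).hex())    # '01'   -> small values cost one byte
print(encode_varint(300).hex())  # 'ac02' -> matches the protobuf docs example
```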
ERIC Educational Resources Information Center
Falk, Howard
1998-01-01
Discussion of CD (compact disc) recorders describes recording applications, including storing large graphic files, creating audio CDs, and storing material downloaded from the Internet; backing up files; lifespan; CD recording formats; continuous recording; recording software; recorder media; vulnerability of CDs; basic computer requirements; and…
14 CFR 221.31 - Rules and regulations governing passenger fares and services.
Code of Federal Regulations, 2010 CFR
2010-01-01
... TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Manner of Filing Tariffs § 221.31 Rules and... (b) of this section may be filed in a paper format, subject to the requirements of this part and...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-20
... through DEMD's in-house databases; Well log interpretation, including correlation of formation tops.... Files must have descriptive file names to help DEMD quickly locate specific components of the proposal...
Publications - RDF 2007-1 | Alaska Division of Geological & Geophysical Surveys
Fairbanks Mining District geochemical data. Report rdf2007_001.pdf (443.0 K) and digital geospatial data files are available for download (https://doi.org/10.14509/15759).
Publications - RDF 2011-4 v. 2 | Alaska Division of Geological & Geophysical Surveys
Moran geochemistry data. Report rdf2011_004.pdf (519.0 K) and digital geospatial data files are available for download (https://doi.org/10.14509/23002).
Publications - RI 2001-1D | Alaska Division of Geological & Geophysical Surveys
Engineering-geologic map of the Chulitna region, southcentral Alaska, scale 1:63,360 (16.0 M); digital geospatial data files for the Chulitna region engineering geology are available for download.
1984-02-01
[OCR-damaged table-of-contents fragment; recoverable entries:] "An Introduction to Geometric Programming," Patrick D. Allen and David W. Baker; "Space and Time ...," Zarwyn, US Army Electronics R&D Command; "Geometric Programming Space and Time Analysis in Dynamic Programming Algorithms," [author illegible]. ... The physical and parameter space can be connected by asymptotic matching. The purpose of the asymptotic analysis is to define the simplest problems
Griss, Johannes; Jones, Andrew R.; Sachsenberg, Timo; Walzer, Mathias; Gatto, Laurent; Hartler, Jürgen; Thallinger, Gerhard G.; Salek, Reza M.; Steinbeck, Christoph; Neuhauser, Nadin; Cox, Jürgen; Neumann, Steffen; Fan, Jun; Reisinger, Florian; Xu, Qing-Wei; del Toro, Noemi; Pérez-Riverol, Yasset; Ghali, Fawaz; Bandeira, Nuno; Xenarios, Ioannis; Kohlbacher, Oliver; Vizcaíno, Juan Antonio; Hermjakob, Henning
2014-01-01
The HUPO Proteomics Standards Initiative has developed several standardized data formats to facilitate data sharing in mass spectrometry (MS)-based proteomics. These allow researchers to report their complete results in a unified way. However, at present, there is no format to describe the final qualitative and quantitative results for proteomics and metabolomics experiments in a simple tabular format. Many downstream analysis use cases are only concerned with the final results of an experiment and require an easily accessible format, compatible with tools such as Microsoft Excel or R. We developed the mzTab file format for MS-based proteomics and metabolomics results to meet this need. mzTab is intended as a lightweight supplement to the existing standard XML-based file formats (mzML, mzIdentML, mzQuantML), providing a comprehensive summary, similar in concept to the supplemental material of a scientific publication. mzTab files can contain protein, peptide, and small molecule identifications together with experimental metadata and basic quantitative information. The format is not intended to store the complete experimental evidence but provides mechanisms to report results at different levels of detail. These range from a simple summary of the final results to a representation of the results including the experimental design. This format is ideally suited to make MS-based proteomics and metabolomics results available to a wider biological community outside the field of MS. Several software tools for proteomics and metabolomics have already adapted the format as an output format. The comprehensive mzTab specification document and extensive additional documentation can be found online. PMID:24980485
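Because mzTab is a tab-separated text format with a section code at the start of each line, a few lines of code suffice to pull out results. The sketch below handles only the metadata (MTD) and protein (PRT) sections; consult the specification for the full set of section codes and columns:

```python
# Read an mzTab file as the plain tab-separated table it is.
metadata, proteins = {}, []
with open("results.mztab") as fh:
    for line in fh:
        fields = line.rstrip("\n").split("\t")
        if fields[0] == "MTD" and len(fields) >= 3:
            metadata[fields[1]] = fields[2]     # key/value metadata rows
        elif fields[0] == "PRT":
            proteins.append(fields[1:])          # one protein per row
print(len(metadata), "metadata keys,", len(proteins), "protein rows")
```

This is exactly the accessibility argument made above: the same file also opens directly in Microsoft Excel or R with no special tooling.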
FEQinput—An editor for the full equations (FEQ) hydraulic modeling system
Ancalle, David S.; Ancalle, Pablo J.; Domanski, Marian M.
2017-10-30
Introduction: The Full Equations Model (FEQ) is a computer program that solves the full, dynamic equations of motion for one-dimensional unsteady hydraulic flow in open channels and through control structures. As a result, hydrologists have used FEQ to design and operate flood-control structures, delineate inundation maps, and analyze peak-flow impacts. To aid in fighting floods, hydrologists are using the software to develop a system that uses flood-plain models to simulate real-time streamflow. Input files for FEQ are composed of text files that contain large amounts of parameters, data, and instructions written in a format exclusive to FEQ. Although documentation exists that can aid in the creation and editing of these input files, new users face a steep learning curve in order to understand the specific format and language of the files. FEQinput provides a set of tools to help a new user overcome the steep learning curve associated with creating and modifying input files for the FEQ hydraulic model and the related utility tool, Full Equations Utilities (FEQUTL).
NASA Technical Reports Server (NTRS)
Ryan, J. W.; Ma, C.; Schupler, B. R.
1980-01-01
A data base handler which would act to tie Mark 3 system programs together is discussed. The data base handler is written in FORTRAN and is implemented on the Hewlett-Packard 21MX and the IBM 360/91. The system design objectives were to (1) provide for an easily specified method of data interchange among programs, (2) provide for a high level of data integrity, (3) accommodate changing requirements, (4) promote program accountability, (5) provide a single source of program constants, and (6) provide a central point for data archiving. The system consists of two distinct parts: a set of files existing on disk packs and tapes, and a set of utility subroutines which allow users to access the information in these files. Users never directly read or write the files and need not know the details of how the data are formatted in the files. To the users, the storage medium is format free. A user does need to know something about the sequencing of his data in the files, but nothing about data in which he has no interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, Daniel H.; Ao, Tommy
The Sandia Data Archive (SDA) format is a specific implementation of the HDF5 (Hierarchical Data Format version 5) standard. The format was developed for storing data in a universally accessible manner. SDA files may contain one or more data records, each associated with a distinct text label. Primitive records provide basic data storage, while compound records support more elaborate grouping. External records allow text/binary files to be carried inside an archive and later recovered. This report documents version 1.0 of the SDA standard. The information provided here is sufficient for reading from and writing to an archive. Although the format was originally designed for use in MATLAB, broader use is encouraged.
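Since SDA is implemented on top of HDF5, any HDF5 library can work with an archive. The sketch below shows the labeled-record idea only; it does not reproduce the exact group and attribute layout that the report defines normatively:

```python
import h5py
import numpy as np

# Store a record under a text label, with a description, then read it back.
with h5py.File("archive.sda", "w") as f:
    grp = f.create_group("shot_042/velocity")     # label chosen by the user
    grp.create_dataset("data", data=np.linspace(0.0, 2.5, 100))
    grp.attrs["Description"] = "free-surface velocity, km/s"

with h5py.File("archive.sda", "r") as f:
    rec = f["shot_042/velocity"]
    print(rec.attrs["Description"], rec["data"][:5])
```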
Do you also have problems with the file format syndrome?
De Cuyper, B; Nyssen, E; Christophe, Y; Cornelis, J
1991-11-01
In a biomedical data processing environment, an essential requirement is the ability to integrate a large class of standard modules for the acquisition, processing and display of the (image) data. Our approach to the management and manipulation of the different data formats is based on the specification of a common standard for the representation of data formats, called 'data nature descriptions' to emphasise that this representation not only specifies the structure but also the contents of data objects (files). The idea behind this concept is to associate each hardware and software component that produces or uses medical data with a description of the data objects manipulated by that component. In our approach a special software module (a format convertor generator) takes care of the appropriate data format conversions required when two or more components of the system exchange data.
Use of Schema on Read in Earth Science Data Archives
NASA Astrophysics Data System (ADS)
Petrenko, M.; Hegde, M.; Smit, C.; Pilone, P.; Pham, L.
2017-12-01
Traditionally, NASA Earth Science data archives have file-based storage using proprietary data file formats, such as HDF and HDF-EOS, which are optimized to support fast and efficient storage of spaceborne and model data as they are generated. The use of file-based storage essentially imposes an indexing strategy based on data dimensions. In most cases, NASA Earth Science data uses time as the primary index, leading to poor performance in accessing data in spatial dimensions. For example, producing a time series for a single spatial grid cell involves accessing a large number of data files. With exponential growth in data volume due to the ever-increasing spatial and temporal resolution of the data, using file-based archives poses significant performance and cost barriers to data discovery and access. Storing and disseminating data in proprietary data formats imposes an additional access barrier for users outside the mainstream research community. At the NASA Goddard Earth Sciences Data Information Services Center (GES DISC), we have evaluated applying the "schema-on-read" principle to data access and distribution. We used Apache Parquet to store geospatial data, and have exposed data through Amazon Web Services (AWS) Athena, AWS Simple Storage Service (S3), and Apache Spark. Using the "schema-on-read" approach allows customization of indexing—spatial or temporal—to suit the data access pattern. The storage of data in open formats such as Apache Parquet has widespread support in popular programming languages. A wide range of solutions for handling big data lowers the access barrier for all users. This presentation will discuss formats used for data storage, frameworks with support for "schema-on-read" used for data access, and common use cases covering data usage patterns seen in a geospatial data archive.
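As a small illustration of schema-on-read with Parquet, the sketch below reads only the columns and latitude band a query needs, letting the file's embedded schema and column statistics do the indexing. File and column names are illustrative:

```python
import pyarrow.parquet as pq

# Predicate pushdown: only row groups intersecting the latitude band
# are read from storage, and only the requested columns are decoded.
table = pq.read_table(
    "precip_2016.parquet",
    columns=["lat", "lon", "time", "precip"],
    filters=[("lat", ">=", 35.0), ("lat", "<=", 36.0)],
)
print(table.num_rows, "rows for the 35-36N latitude band")
```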
17 CFR 240.13d-2 - Filing of amendments to Schedules 13D or 13G.
Code of Federal Regulations, 2013 CFR
2013-04-01
...) The first electronic amendment to a paper format Schedule 13D (§ 240.13d-101 of this chapter) or... 17 Commodity and Securities Exchanges 3 2013-04-01 2013-04-01 false Filing of amendments to... Under the Securities Exchange Act of 1934 Regulation 13d-G § 240.13d-2 Filing of amendments to Schedules...
17 CFR 240.13d-2 - Filing of amendments to Schedules 13D or 13G.
Code of Federal Regulations, 2014 CFR
2014-04-01
...) The first electronic amendment to a paper format Schedule 13D (§ 240.13d-101 of this chapter) or... 17 Commodity and Securities Exchanges 4 2014-04-01 2014-04-01 false Filing of amendments to... Under the Securities Exchange Act of 1934 Regulation 13d-G § 240.13d-2 Filing of amendments to Schedules...
DOT National Transportation Integrated Search
2017-07-26
This zip file contains POSTDATA.ATT (.ATT); Print to File (.PRN); Portable Document Format (.PDF); and document (.DOCX) files of data to support FHWA-JPO-16-385, Analysis, modeling, and simulation (AMS) testbed development and evaluation to support d...
Smieszek, Tomas W.; Granato, Gregory E.
2000-01-01
Spatial data are important for interpretation of water-quality information on a regional or national scale. Geographic information systems (GIS) facilitate interpretation and integration of spatial data. The geographic information and data compiled for the conterminous United States during the National Highway Runoff Water-Quality Data and Methodology Synthesis project is described in this document, which also includes information on the structure, file types, and the geographic information in the data files. This 'geodata' directory contains two subdirectories, labeled 'gisdata' and 'gisimage.' The 'gisdata' directory contains ArcInfo coverages, ArcInfo export files, shapefiles (used in ArcView), Spatial Data Transfer Standard Topological Vector Profile format files, and meta files in subdirectories organized by file type. The 'gisimage' directory contains the GIS data in common image-file formats. The spatial geodata includes two rain-zone region maps and a map of national ecosystems originally published by the U.S. Environmental Protection Agency; regional estimates of mean annual streamflow, and water hardness published by the Federal Highway Administration; and mean monthly temperature, mean annual precipitation, and mean monthly snowfall modified from data published by the National Climatic Data Center and made available to the public by the Oregon Climate Service at Oregon State University. These GIS files were compiled for qualitative spatial analysis of available data on a national and(or) regional scale and therefore should be considered as qualitative representations, not precise geographic location information.
Electronic hand-drafting and picture management system.
Yang, Tsung-Han; Ku, Cheng-Yuan; Yen, David C; Hsieh, Wen-Huai
2012-08-01
The Department of Health of the Executive Yuan in Taiwan (R.O.C.) is implementing a five-stage project, entitled Electronic Medical Record (EMR), to convert all health records from written to electronic form. Traditionally, physicians record patients' symptoms, related examinations, and suggested treatments on paper medical records. Currently, when implementing the EMR, all text files and image files in the Hospital Information System (HIS) and Picture Archiving and Communication Systems (PACS) are kept separate. The current medical system environment is unable to combine text files, hand-drafted files, and photographs in the same system, making it difficult to support physicians in recording medical data. Furthermore, in surgical and other related departments, physicians need immediate access to medical records in order to understand the details of a patient's condition. In order to address these problems, the Department of Health has implemented an EMR project, with the primary goal of building an electronic hand-drafting and picture management system (HDP system) that can be used by medical personnel to record medical information in a convenient way. This system can simultaneously edit text files, hand-drafted files, and image files and then integrate these data into Portable Document Format (PDF) files. In addition, the output is designed to fit a variety of formats in order to meet various laws and regulations. By combining the HDP system with HIS and PACS, the applicability can be enhanced to fit various scenarios and assist the medical industry in moving into the final phase of EMR.
Organic geochemistry data of Alaska
compiled by Threlkeld, Charles N.; Obuch, Raymond C.; Gunther, G.L.
2000-01-01
In order to archive the results of various petroleum geochemical analyses from the Alaska resource assessment, the USGS developed the Alaskan Organic Geochemical Data Base (AOGDB) in 1978 to house the data generated by USGS and subcontracted laboratories. Prior to the AOGDB, the accumulated data resided in a flat data file entitled 'PGS' that was maintained by Petroleum Information Corporation with technical input from the USGS. The information herein is a breakout of the master flat-file format into a relational database table format (akdata).
As-built design specification for the CLASFYG program
NASA Technical Reports Server (NTRS)
Horton, C. L. (Principal Investigator)
1981-01-01
This program produces a file with a Universal-formatted header and data records in a nonstandard format. Trajectory coefficients are calculated from 5 to 8 acquisitions of radiance values in the training field corresponding to an agricultural product. These coefficients are then used to calculate a time of emergence and corresponding trajectory coefficients for each pixel in the test field. The time of emergence, two of the coefficients, and the sigma value for each pixel are written to the file.
Atmospheric Science Data Center
2018-04-12
SSE Global Data: text files of monthly averaged data for the entire globe. Version: V6. Spatial coverage: global (90N-90S, 180W-180E). File format: ASCII.
Easy Online Access to Helpful Internet Guides.
ERIC Educational Resources Information Center
Tuss, Joan
1993-01-01
Lists recommended guides to the Internet that are available electronically. Basic commands needed to use anonymous ftp (file transfer protocol) are explained. An annotation and command formats to access, scan, retrieve, and exit each file are included for 11 titles. (EAM)
Publications - RI 94-25 | Alaska Division of Geological & Geophysical Surveys
Derivative-materials map of the Anchorage C-7 NW Quadrangle, Alaska, scale 1:25,000 (1.4 M); digital geospatial data files are available for download.
Publications - RI 94-26 | Alaska Division of Geological & Geophysical Surveys
Derivative-materials map of the Anchorage C-8 NE Quadrangle, Alaska, scale 1:25,000 (3.8 M); digital geospatial data files are available for download.
Publications - RI 94-27 | Alaska Division of Geological & Geophysical Surveys
Derivative-materials map of the Anchorage C-8 NW Quadrangle, Alaska, scale 1:25,000 (676.0 M); digital geospatial data files are available for download.
Publications - RI 94-24 | Alaska Division of Geological & Geophysical Surveys
Derivative-materials map of the Anchorage C-7 NE Quadrangle, Alaska, scale 1:25,000 (2.4 M); digital geospatial data files are available for download.
Astronomical Instrumentation System Markup Language
NASA Astrophysics Data System (ADS)
Goldbaum, Jesse M.
2016-05-01
The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed, followed by the reasons why XML was chosen as the format. Next, it is shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments, as well as one for a sample AIS, are provided. The files demonstrate how AISML can be utilized for tasks ranging from web-page generation and programming interfaces to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.
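The XML-building pattern behind an AISML file is straightforward, as the sketch below shows. The element names here are invented for illustration; the actual AISML vocabulary is defined by its schema:

```python
import xml.etree.ElementTree as ET

# Build a small instrument description and serialize it as XML.
instrument = ET.Element("Instrument", name="16in-f10-SCT")
optics = ET.SubElement(instrument, "Optics")
ET.SubElement(optics, "Aperture", units="mm").text = "406"
ET.SubElement(optics, "FocalLength", units="mm").text = "4064"
print(ET.tostring(instrument, encoding="unicode"))
```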
An extended BET format for LaRC shuttle experiments: Definition and development
NASA Technical Reports Server (NTRS)
Findlay, J. T.; Kelly, G. M.; Henry, M. W.
1981-01-01
A program for shuttle post-flight data reduction is discussed. An extended Best Estimate Trajectory (BET) file was developed. The extended format results in some subtle changes to the header record. The major change is the addition of twenty-six words to each data record. These words include atmospheric related parameters, body axis rate and acceleration data, computed aerodynamic coefficients, and angular accelerations. These parameters were added to facilitate post-flight aerodynamic coefficient determinations as well as shuttle entry air data sensor analyses. Software (NEWBET) was developed to generate the extended BET file utilizing the previously defined ENTREE BET, a dynamic data file which may be either derived inertial measurement unit data or aerodynamic coefficient instrument package data, and some atmospheric information.
Filtering NetCDF Files by Using the EverVIEW Slice and Dice Tool
Conzelmann, Craig; Romañach, Stephanie S.
2010-01-01
Network Common Data Form (NetCDF) is a self-describing, machine-independent file format for storing array-oriented scientific data. It was created to provide a common interface between applications and real-time meteorological and other scientific data. Over the past few years, there has been a growing movement within the community of natural resource managers in The Everglades, Fla., to use NetCDF as the standard data container for datasets based on multidimensional arrays. As a consequence, a need surfaced for additional tools to view and manipulate NetCDF datasets, specifically to filter the files by creating subsets of large NetCDF files. The U.S. Geological Survey (USGS) and the Joint Ecosystem Modeling (JEM) group are working to address these needs with applications like the EverVIEW Slice and Dice Tool, which allows users to filter grid-based NetCDF files, thus targeting those data most important to them. The major functions of this tool are as follows: (1) to create subsets of NetCDF files temporally, spatially, and by data value; (2) to view the NetCDF data in table form; and (3) to export the filtered data to a comma-separated value (CSV) file format. The USGS and JEM will continue to work with scientists and natural resource managers across The Everglades to solve complex restoration problems through technological advances.
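The kind of slicing the Slice and Dice Tool wraps in a GUI takes only a few lines against the NetCDF API directly, as in this sketch (file, variable, and dimension layout are illustrative):

```python
from netCDF4 import Dataset

# Cut a spatial/temporal subset out of a grid-based NetCDF variable.
with Dataset("stage_depth.nc") as nc:
    depth = nc.variables["depth"]           # assumed dims: (time, y, x)
    subset = depth[0:30, 100:200, 100:200]  # first 30 time steps, one tile
print(subset.shape, subset.mean())
```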
Morph
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodall, John; Iannacone, Mike; Athalye, Anish
2013-08-01
Morph is a framework and domain-specific language (DSL) that helps parse and transform structured documents. It currently supports several file formats, including XML, JSON, and CSV, and custom formats can be supported as well.
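Morph's own DSL is not reproduced here, but the kind of parse-and-transform step it automates can be illustrated with the Python standard library; the input structure and field names below are invented for the example.

import csv
import json

# Illustrative JSON input; structure and field names are invented.
doc = json.loads('{"records": [{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}]}')

# Transform the structured document into CSV.
with open("records.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name"])
    writer.writeheader()
    writer.writerows(doc["records"])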
ISMRM Raw Data Format: A Proposed Standard for MRI Raw Datasets
Inati, Souheil J.; Naegele, Joseph D.; Zwart, Nicholas R.; Roopchansingh, Vinai; Lizak, Martin J.; Hansen, David C.; Liu, Chia-Ying; Atkinson, David; Kellman, Peter; Kozerke, Sebastian; Xue, Hui; Campbell-Washburn, Adrienne E.; Sørensen, Thomas S.; Hansen, Michael S.
2015-01-01
Purpose This work proposes the ISMRM Raw Data (ISMRMRD) format as a common MR raw data format, which promotes algorithm and data sharing. Methods A file format consisting of a flexible header and tagged frames of k-space data was designed. Application Programming Interfaces were implemented in C/C++, MATLAB, and Python. Converters for Bruker, General Electric, Philips, and Siemens proprietary file formats were implemented in C++. Raw data were collected using MRI scanners from four vendors, converted to ISMRMRD format, and reconstructed using software implemented in three programming languages (C++, MATLAB, Python). Results Images were obtained by reconstructing the raw data from all vendors. The source code, raw data, and images comprising this work are shared online, serving as an example of an image reconstruction project following a paradigm of reproducible research. Conclusion The proposed raw data format solves a practical problem for the MRI community. It may serve as a foundation for reproducible research and collaborations. The ISMRMRD format is a completely open and community-driven format, and the scientific community is invited (including commercial vendors) to participate either as users or developers. PMID:26822475
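A minimal sketch of reading an ISMRMRD file with the project's Python API follows; the file name is an assumption, and the exact calls should be checked against the ismrmrd-python package documentation.

import ismrmrd

# Open an existing ISMRMRD (HDF5-based) dataset; the file name is hypothetical.
dset = ismrmrd.Dataset("scan.h5", create_if_needed=False)

# The flexible header is stored as XML and can be deserialized.
header = ismrmrd.xsd.CreateFromDocument(dset.read_xml_header())
print(header.encoding[0].encodedSpace.matrixSize.x)

# Iterate over the tagged frames of k-space data.
for i in range(dset.number_of_acquisitions()):
    acq = dset.read_acquisition(i)
    print(acq.idx.kspace_encode_step_1, acq.data.shape)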
Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, the outcomes are difficult to generalize because of a lack of color reproducibility and image standardization. Our study explores tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts were chosen to identify the selected tongue pictures, taken by the TDA-1 tongue imaging device in TIFF format and corrected with an ICC profile. We then compare the mean L*a*b* values of the different tongue colors and evaluate tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm can increase classification accuracy by addressing the imbalance among the color samples. Conclusions. Given standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in TCM is feasible. PMID:28050555
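The pipeline described (oversampling the minority color classes with SMOTE, then training a random forest) can be sketched with scikit-learn and imbalanced-learn; the feature and label arrays below are random placeholders, not the study's data.

import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder data: L*a*b* color features and five tongue-color classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))        # columns stand in for L*, a*, b*
y = rng.integers(0, 5, size=300)     # five color classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SMOTE balances the minority classes before training.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))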
Integrated Multibeam and LIDAR Bathymetry Data Offshore of New London and Niantic, Connecticut
Poppe, L.J.; Danforth, W.W.; McMullen, K.Y.; Parker, Castle E.; Lewit, P.G.; Doran, E.F.
2010-01-01
Nearshore areas within Long Island Sound are of great interest to the Connecticut and New York research and resource management communities because of their ecological, recreational, and commercial importance. Although advances in multibeam echosounder technology permit the construction of high-resolution representations of sea-floor topography in deeper waters, limitations inherent in collecting fixed-angle multibeam data make using this technology in shallower waters (less than 10 meters deep) difficult and expensive. These limitations have often resulted in data gaps between areas for which multibeam bathymetric datasets are available and the adjacent shoreline. To address this problem, the geospatial data sets released in this report seamlessly integrate complete-coverage multibeam bathymetric data acquired off New London and Niantic Bay, Connecticut, with hydrographic Light Detection and Ranging (LIDAR) data acquired along the nearshore. The result is a more continuous sea floor representation and a much smaller gap between the digital bathymetric data and the shoreline than previously available. These data sets are provided online and on CD-ROM in Environmental Systems Research Institute (ESRI) raster-grid and GeoTIFF formats in order to facilitate access, compatibility, and utility.
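GeoTIFF grids such as these can be read programmatically as well. A minimal Python sketch with the rasterio package follows; the file name is an assumption.

import rasterio

# Open a bathymetric GeoTIFF; the file name is hypothetical.
with rasterio.open("new_london_bathymetry.tif") as src:
    depths = src.read(1)          # first band as a 2-D array
    print(src.crs, src.bounds)    # coordinate reference system and extent
    print("grid size:", depths.shape)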