Compiler-Assisted Multiple Instruction Rollback Recovery Using a Read Buffer. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Alewine, Neal Jon
1993-01-01
Multiple instruction rollback (MIR) is a technique to provide rapid recovery from transient processor failures and has been implemented in hardware, including in mainframe computers. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs were also developed which remove rollback data hazards directly with data-flow manipulations, thus eliminating the need for most data redundancy hardware. Compiler-assisted techniques to achieve multiple instruction rollback recovery are addressed. It is observed that some data hazards resulting from instruction rollback can be resolved more efficiently by providing hardware redundancy while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations were conducted which indicate improved efficiency over previous hardware-based and compiler-based schemes. Various enhancements to the compiler transformations and to the data redundancy hardware developed for the compiler-assisted MIR scheme are described and evaluated. The final topic deals with the application of compiler-assisted MIR techniques to aid in exception repair and branch repair in a speculative execution architecture.
Compiler-assisted multiple instruction rollback recovery using a read buffer
NASA Technical Reports Server (NTRS)
Alewine, N. J.; Chen, S.-K.; Fuchs, W. K.; Hwu, W.-M.
1993-01-01
Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper focuses on compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. A compiler-assisted multiple instruction rollback scheme is developed which combines hardware-implemented data redundancy with compiler-driven hazard removal transformations. Experimental performance evaluations indicate improved efficiency over previous hardware-based and compiler-based schemes.
Compiler-assisted multiple instruction rollback recovery using a read buffer
NASA Technical Reports Server (NTRS)
Alewine, Neal J.; Chen, Shyh-Kwei; Fuchs, W. Kent; Hwu, Wen-Mei W.
1995-01-01
Multiple instruction rollback (MIR) is a technique that has been implemented in mainframe computers to provide rapid recovery from transient processor failures. Hardware-based MIR designs eliminate rollback data hazards by providing data redundancy implemented in hardware. Compiler-based MIR designs have also been developed which remove rollback data hazards directly with data-flow transformations. This paper describes compiler-assisted techniques to achieve multiple instruction rollback recovery. We observe that some data hazards resulting from instruction rollback can be resolved efficiently by providing an operand read buffer while others are resolved more efficiently with compiler transformations. The compiler-assisted scheme presented consists of hardware that is less complex than shadow files, history files, history buffers, or delayed write buffers, while experimental evaluation indicates performance improvement over compiler-based schemes.
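The read-buffer idea that runs through these three records can be sketched in a few lines: a buffer retains the source operand values read by the most recent N instructions, so that after a rollback of up to N instructions the re-executed code can recover operands that later instructions may have overwritten. The sketch below is purely illustrative; the class name, interface, and buffer organization are assumptions, not details taken from the thesis or papers.

```python
from collections import deque

class ReadBuffer:
    """Toy operand read buffer supporting rollback of up to `depth` instructions.
    (Illustrative sketch only, not the hardware design from the papers.)"""

    def __init__(self, depth):
        self.depth = depth                   # maximum rollback distance N
        self.entries = deque(maxlen=depth)   # one entry per retired instruction

    def record(self, pc, operands):
        """Save the source-operand values an instruction actually read."""
        self.entries.append((pc, dict(operands)))

    def operands_for(self, pc):
        """After a rollback, let a re-executed instruction recover its original
        operand values even if the registers were overwritten in the meantime."""
        for saved_pc, operands in reversed(self.entries):
            if saved_pc == pc:
                return operands
        return None

# Example: the instruction at pc=12 read r1=7; even if r1 is later clobbered,
# a rollback past pc=12 can still re-supply the value that was originally read.
buf = ReadBuffer(depth=4)
buf.record(12, {"r1": 7})
buf.record(13, {"r2": 3})
assert buf.operands_for(12) == {"r1": 7}
```

Entries older than the maximum rollback distance can be discarded, which is why a fixed-depth buffer suffices.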
12 CFR 1003.4 - Compilation of loan data.
Code of Federal Regulations, 2012 CFR
2012-01-01
Section 1003.4 Compilation of loan data. (a) Data format and itemization. A financial institution shall collect data regarding applications for, and originations and purchases of, home purchase loans, home...
NASA Astrophysics Data System (ADS)
Flinders, Ashton F.; Mayer, Larry A.; Calder, Brian A.; Armstrong, Andrew A.
2014-05-01
We document a new high-resolution multibeam bathymetry compilation for the Canada Basin and Chukchi Borderland in the Arctic Ocean - United States Arctic Multibeam Compilation (USAMBC Version 1.0). The compilation preserves the highest native resolution of the bathymetric data, allowing for more detailed interpretation of seafloor morphology than has been previously possible. The compilation was created from multibeam bathymetry data available through openly accessible government and academic repositories. Much of the new data were collected during dedicated mapping cruises in support of the United States effort to map extended continental shelf regions beyond the 200 nm Exclusive Economic Zone. Data quality was evaluated using nadir-beam crossover-error statistics, making it possible to assess the precision of multibeam depth soundings collected from a wide range of vessels and sonar systems. Data were compiled into a single high-resolution grid through a vertical stacking method, preserving the highest quality data source in any specific grid cell. The crossover-error analysis and method of data compilation can be applied to other multi-source multibeam data sets, and are particularly useful for government agencies targeting extended continental shelf regions but with limited hydrographic capabilities. Both the gridded compilation and an easily distributed geospatial PDF map are freely available through the University of New Hampshire's Center for Coastal and Ocean Mapping (ccom.unh.edu/theme/law-sea). The geospatial PDF is a full-resolution, small-file-size product that supports interpretation of Arctic seafloor morphology without the need for specialized gridding/visualization software.
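The vertical-stacking step mentioned in this abstract is easy to illustrate: co-registered grids are ordered from highest to lowest quality, and each output cell takes its value from the best source that covers it. The snippet below is a minimal sketch of that idea under those assumptions, not the USAMBC processing code; registration, resampling, and the crossover-error ranking are taken as already done.

```python
import numpy as np

def stack_by_quality(grids):
    """Combine co-registered bathymetry grids, keeping in each cell the value
    from the highest-quality source that covers it (NaN marks no coverage).
    `grids` must share one shape and be ordered best-quality first."""
    out = np.full(grids[0].shape, np.nan)
    for g in grids:
        fill = np.isnan(out) & ~np.isnan(g)   # cells still empty that g can fill
        out[fill] = g[fill]
    return out

# Tiny example: the better survey wins wherever both sources overlap.
best = np.array([[np.nan, -3500.0], [-3600.0, np.nan]])
fallback = np.array([[-3400.0, -3450.0], [-3650.0, -3700.0]])
print(stack_by_quality([best, fallback]))   # [[-3400. -3500.] [-3600. -3700.]]
```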
Compilation of seismic-refraction crustal data in the Soviet Union
Rodriguez, Robert; Durbin, William P.; Healy, J.H.; Warren, David H.
1964-01-01
The U.S. Geological Survey is preparing a series of terrain atlases of the Sino-Soviet bloc of nations for use in a possible nuclear-test detection program. Part of this project is concerned with the compilation and evaluation of crustal-structure data. To date, a compilation has been made of data from Russian publications that discuss seismic refraction and gravity studies of crustal structure. Although this compilation deals mainly with explosion seismic-refraction measurements, some results from earthquake studies are also included. None of the data have been evaluated.
HAL/S-FC compiler system functional specification
NASA Technical Reports Server (NTRS)
1974-01-01
Compiler organization is discussed, including overall compiler structure, internal data transfer, compiler development, and code optimization. The user, system, and SDL interfaces are described, along with compiler system requirements. The run-time software support package and the restrictions and dependencies of the HAL/S-FC system are also considered.
12 CFR 338.8 - Compilation of loan data in register format.
Code of Federal Regulations, 2010 CFR
2010-01-01
Fair Housing; Recordkeeping. § 338.8 Compilation of loan data in register format. Banks and other lenders required to file a Home Mortgage Disclosure Act loan application register (LAR...
12 CFR 338.8 - Compilation of loan data in register format.
Code of Federal Regulations, 2012 CFR
2012-01-01
Fair Housing; Recordkeeping. § 338.8 Compilation of loan data in register format. Banks and other lenders required to file a Home Mortgage Disclosure Act loan application register (LAR...
12 CFR 338.8 - Compilation of loan data in register format.
Code of Federal Regulations, 2011 CFR
2011-01-01
Fair Housing; Recordkeeping. § 338.8 Compilation of loan data in register format. Banks and other lenders required to file a Home Mortgage Disclosure Act loan application register (LAR...
The paradigm compiler: Mapping a functional language for the connection machine
NASA Technical Reports Server (NTRS)
Dennis, Jack B.
1989-01-01
The Paradigm Compiler implements a new approach to compiling programs written in high level languages for execution on highly parallel computers. The general approach is to identify the principal data structures constructed by the program and to map these structures onto the processing elements of the target machine. The mapping is chosen to maximize performance as determined through compile time global analysis of the source program. The source language is Sisal, a functional language designed for scientific computations, and the target language is Paris, the published low level interface to the Connection Machine. The data structures considered are multidimensional arrays whose dimensions are known at compile time. Computations that build such arrays usually offer opportunities for highly parallel execution; they are data parallel. The Connection Machine is an attractive target for these computations, and the parallel for construct of the Sisal language is a convenient high level notation for data parallel algorithms. The principles and organization of the Paradigm Compiler are discussed.
Obtaining correct compile results by absorbing mismatches between data types representations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
Obtaining correct compile results by absorbing mismatches between data types representations
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio
2017-03-21
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
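The mechanism in this record (and the related records nearby) reduces to a table-driven tree rewrite with an error-node fallback. The sketch below illustrates it with invented node kinds and table entries; only the conversion-table lookup and the error node that preserves the original token for unparsing follow the abstract.

```python
# Hypothetical node kinds and table entries, for illustration only.
CONVERSION_TABLE = {
    "IntLiteral": "JIntConst",
    "Add":        "JBinaryPlus",
}

class Node:
    def __init__(self, kind, children=(), token=None):
        self.kind, self.children, self.token = kind, list(children), token

def convert(node):
    """Convert a first-language AST into the second language's AST."""
    target_kind = CONVERSION_TABLE.get(node.kind)
    if target_kind is None:
        # Compilation error for this construct: store the error token in a
        # special node so it can be output verbatim when unparsing.
        return Node("ErrorNode", token=node.token)
    return Node(target_kind, [convert(c) for c in node.children], token=node.token)

def unparse(node):
    if node.kind == "ErrorNode":
        return node.token or ""
    parts = [unparse(c) for c in node.children]
    return " ".join(parts) if parts else (node.token or "")

tree = Node("Add", [Node("IntLiteral", token="1"), Node("Mystery", token="x#y")])
print(unparse(convert(tree)))   # "1 x#y" -- the unknown construct survives as source text
```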
Obtaining correct compile results by absorbing mismatches between data types representations
Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio
2017-11-21
Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
McAdoo, Mitchell A.; Kozar, Mark D.
2017-11-14
This report describes a compilation of existing water-quality data associated with groundwater resources originating from abandoned underground coal mines in West Virginia. Data were compiled from multiple sources for the purpose of understanding the suitability of groundwater from abandoned underground coal mines for public supply, industrial, agricultural, and other uses. This compilation includes data collected for multiple individual studies conducted from July 13, 1973 through September 7, 2016. Analytical methods varied by the time period of data collection and requirements of the independent studies. This project identified 770 water-quality samples from 294 sites that could be attributed to abandoned underground coal mine aquifers originating from multiple coal seams in West Virginia.
Compiler writing system detail design specification. Volume 2: Component specification
NASA Technical Reports Server (NTRS)
Arthur, W. J.
1974-01-01
The logic modules and data structures composing the Meta-translator module are described. This module is responsible for the actual generation of the executable language compiler as a function of the input Meta-language. Machine definitions are also processed and are placed as encoded data on the compiler library data file. The transformation of intermediate language into target-language object text is described.
Gillespie, Cindy L.; Grauch, V.J.S.; Oshetski, Kim; Keller, Gordon R.
2000-01-01
Principal facts for 156 new gravity stations in the southern Albuquerque basin are presented. These data fill a gap in existing data coverage. The compilation of the new data and two existing data sets into a regional data set of 5562 stations that cover the Albuquerque basin and vicinity is also described. Bouguer anomaly and isostatic residual gravity data for this regional compilation are available in digital form from ftp://greenwood.cr.usgs.gov/pub/openfile-reports/ofr-00-490.
Runtime support and compilation methods for user-specified data distributions
NASA Technical Reports Server (NTRS)
Ponnusamy, Ravi; Saltz, Joel; Choudhury, Alok; Hwang, Yuan-Shin; Fox, Geoffrey
1993-01-01
This paper describes two new ideas by which an HPF compiler can deal with irregular computations effectively. The first mechanism invokes a user specified mapping procedure via a set of compiler directives. The directives allow use of program arrays to describe graph connectivity, spatial location of array elements, and computational load. The second mechanism is a simple conservative method that in many cases enables a compiler to recognize that it is possible to reuse previously computed information from inspectors (e.g. communication schedules, loop iteration partitions, information that associates off-processor data copies with on-processor buffer locations). We present performance results for these mechanisms from a Fortran 90D compiler implementation.
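Both mechanisms are easiest to see in miniature. The sketch below imitates the inspector phase for an irregular loop: it works out which indirectly addressed elements live off-processor and caches the resulting communication schedule, reusing it conservatively as long as the indirection array is unchanged. The data layout, ownership function, and cache key are assumptions for illustration, not the Fortran 90D implementation.

```python
_schedule_cache = {}

def inspector(index_array, my_rank, owner_of):
    """Build (or reuse) a communication schedule for an irregular loop:
    a map {remote rank: [global indices to fetch]} for this processor."""
    # Conservative reuse: only when the indirection array is exactly the same.
    key = (tuple(index_array), my_rank)
    if key in _schedule_cache:
        return _schedule_cache[key]
    schedule = {}
    for i in index_array:
        owner = owner_of(i)
        if owner != my_rank:
            schedule.setdefault(owner, []).append(i)
    _schedule_cache[key] = schedule
    return schedule

# Example: 8 elements block-distributed over 2 ranks (owner = index // 4).
print(inspector([0, 5, 2, 7], my_rank=0, owner_of=lambda i: i // 4))   # {1: [5, 7]}
```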
Columbia River Component Data Evaluation Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
C.S. Cearlock
2006-08-02
The purpose of the Columbia River Component Data Compilation and Evaluation task was to compile, review, and evaluate existing information for constituents that may have been released to the Columbia River due to Hanford Site operations. Through this effort an extensive compilation of information pertaining to Hanford Site-related contaminants released to the Columbia River has been completed for almost 965 km of the river.
Distributed memory compiler design for sparse problems
NASA Technical Reports Server (NTRS)
Wu, Janet; Saltz, Joel; Berryman, Harry; Hiranandani, Seema
1991-01-01
A compiler and runtime support mechanism is described and demonstrated. The methods presented are capable of solving a wide range of sparse and unstructured problems in scientific computing. The compiler takes as input a FORTRAN 77 program enhanced with specifications for distributing data, and the compiler outputs a message passing program that runs on a distributed memory computer. The runtime support for this compiler is a library of primitives designed to efficiently support irregular patterns of distributed array accesses and irregular distributed array partitions. A variety of Intel iPSC/860 performance results obtained through the use of this compiler are presented.
Radiometric Survey in Western Afghanistan: A Website for Distribution of Data
Sweeney, Ronald E.; Kucks, Robert P.; Hill, Patricia L.; Finn, Carol A.
2007-01-01
Radiometric (uranium content, thorium content, potassium content, and gamma-ray intensity) and related data were digitized from radiometric and survey route location maps of western Afghanistan published in 1976. The uranium content data were digitized along contour lines from 33 maps in a series entitled 'Map of Uranium (Radium) Contents of Afghanistan (Western Area),' compiled by V. N. Kirsanov and R. S. Dershimanov. The thorium content data were digitized along contour lines from 33 maps in a series entitled 'Map of Thorium Contents of Afghanistan (Western Area),' compiled by V. N. Kirsanov and R. S. Dershimanov. The potassium content data were digitized along contour lines from 33 maps in a series entitled 'Map of Potassium Contents of Afghanistan (Western Area),' compiled by V. N. Kirsanov and R. S. Dershimanov. The gamma-ray intensity data were digitized along contour lines from 33 maps in a series entitled 'Map of Gamma-Field of Afghanistan (Western Area),' compiled by V. N. Kirsanov and R. S. Dershimanov. The survey route location data were digitized along flight-lines located on 33 maps in a series entitled 'Survey Routes Location and Contours of Flight Equal Altitudes. Western Area of Afghanistan,' compiled by Z. A. Alpatova, V. G. Kurnosov, and F. A. Grebneva.
The New Southern FIA Data Compilation System
V. Clark Baldwin; Larry Royer
2001-01-01
In general, the major national Forest Inventory and Analysis annual inventory emphasis has been on data-base design and not on data processing and calculation of various new attributes. Two key programming techniques required for efficient data processing are indexing and modularization. The Southern Research Station Compilation System utilizes modular and indexing...
The IUPAC aqueous and non-aqueous experimental pKa data repositories of organic acids and bases.
Slater, Anthony Michael
2014-10-01
Accurate and well-curated experimental pKa data of organic acids and bases in both aqueous and non-aqueous media are invaluable in many areas of chemical research, including pharmaceutical, agrochemical, specialty chemical and property prediction research. In pharmaceutical research, pKa data are relevant in ligand design, protein binding, absorption, distribution, metabolism, elimination as well as solubility and dissolution rate. The pKa data compilations of the International Union of Pure and Applied Chemistry, originally in book form, have been carefully converted into computer-readable form, with value being added in the process, in the form of ionisation assignments and tautomer enumeration. These compilations offer a broad range of chemistry in both aqueous and non-aqueous media and the experimental conditions and original reference for all pKa determinations are supplied. The statistics for these compilations are presented and the utility of the computer-readable form of these compilations is examined in comparison to other pKa compilations. Finally, information is provided about how to access these databases.
The IUPAC aqueous and non-aqueous experimental pKa data repositories of organic acids and bases
NASA Astrophysics Data System (ADS)
Slater, Anthony Michael
2014-10-01
Accurate and well-curated experimental pKa data of organic acids and bases in both aqueous and non-aqueous media are invaluable in many areas of chemical research, including pharmaceutical, agrochemical, specialty chemical and property prediction research. In pharmaceutical research, pKa data are relevant in ligand design, protein binding, absorption, distribution, metabolism, elimination as well as solubility and dissolution rate. The pKa data compilations of the International Union of Pure and Applied Chemistry, originally in book form, have been carefully converted into computer-readable form, with value being added in the process, in the form of ionisation assignments and tautomer enumeration. These compilations offer a broad range of chemistry in both aqueous and non-aqueous media and the experimental conditions and original reference for all pKa determinations are supplied. The statistics for these compilations are presented and the utility of the computer-readable form of these compilations is examined in comparison to other pKa compilations. Finally, information is provided about how to access these databases.
NASA Technical Reports Server (NTRS)
Rice, R. C.; Reynolds, J. L.
1976-01-01
Fatigue, fatigue-crack-propagation, and fracture data compiled and stored on magnetic tape are documented. Data for 2024 and 7075 aluminum alloys, Ti-6Al-4V titanium alloy, and 300M steel are included in the compilation. Approximately 4,500 fatigue, 6,500 fatigue-crack-propagation, and 1,500 fracture data points are stored on magnetic tape. Descriptions of the data, an index to the data on the magnetic tape, information on data storage format on the tape, a listing of all data source references, and abstracts of other pertinent test information from each data source reference are included.
Compilation of gallium resource data for bauxite deposits
Schulte, Ruth F.; Foley, Nora K.
2014-01-01
Gallium (Ga) concentrations for bauxite deposits worldwide have been compiled from the literature to provide a basis for research regarding the occurrence and distribution of Ga worldwide, as well as between types of bauxite deposits. In addition, this report is an attempt to bring together reported Ga concentration data into one database to supplement ongoing U.S. Geological Survey studies of critical mineral resources. The compilation of Ga data consists of location, deposit size, bauxite type and host rock, development status, major oxide data, trace element (Ga) data and analytical method(s) used to derive the data, and tonnage values for deposits within bauxite provinces and districts worldwide. The range in Ga concentrations for bauxite deposits worldwide is
Dickens, Jade M.; Forbes, Brandon T.; Cobean, Dylan S.; Tadayon, Saeid
2011-01-01
An indirect method for estimating irrigation withdrawals is presented and results are compared to the 2005 USGS-reported irrigation withdrawals for selected States. This method is meant to demonstrate a way to check data reported or received from a third party, if metered data are unavailable. Of the 11 States where this method was applied, 8 States had estimated irrigation withdrawals that were within 15 percent of what was reported in the 2005 water-use compilation, and 3 States had estimated irrigation withdrawals that differed by more than 20 percent from what was reported in 2005. Recommendations for improving estimates of irrigated acreage and irrigation withdrawals also are presented in this report. Conveyance losses and irrigation-system efficiencies should be considered in order to achieve a more accurate representation of irrigation withdrawals. Better documentation of data sources and methods used can help lead to more consistent information in future irrigation water-use compilations. Finally, a summary of data sources and methods used to estimate irrigated acreage and irrigation withdrawals for the 2000 and 2005 compilations for each WSC is presented in appendix 1.
Compiling global name-space programs for distributed execution
NASA Technical Reports Server (NTRS)
Koelbel, Charles; Mehrotra, Piyush
1990-01-01
Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high-level source program and its translation to a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile-time. Otherwise, run-time code is generated to implement the required data movement. The analysis required in both situations is described and the performance of the generated code on the Intel iPSC/2 is presented.
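A concrete instance of the translation described here is the address arithmetic a compiler inserts for a block-distributed array: each global reference becomes an owning processor plus a local offset, with a message generated whenever the owner is remote. The sketch below assumes a simple contiguous block distribution and is illustrative only, not the analysis from the report.

```python
def owner_and_offset(global_index, n_elements, n_procs):
    """Map a global index of a block-distributed array to
    (owning processor, local index), assuming contiguous equal-sized blocks
    (the last block may be shorter)."""
    block = -(-n_elements // n_procs)   # ceiling division
    return global_index // block, global_index % block

# A read of a(i) on processor p stays local when owner == p; otherwise the
# compiler emits code for p to receive the value from `owner`.
print(owner_and_offset(10, n_elements=16, n_procs=4))   # (2, 2)
```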
Compilation of giant electric dipole resonances built on excited states
NASA Astrophysics Data System (ADS)
Schiller, A.; Thoennessen, M.
2007-07-01
Giant Electric Dipole Resonance (GDR) parameters for γ decay to excited states with finite spin and temperature are compiled. Over 100 original works have been reviewed and from some 70 of them, about 350 sets of hot GDR parameters for different isotopes, excitation energies, and spin regions have been extracted. All parameter sets have been brought onto a common footing by calculating the equivalent Lorentzian parameters. The current compilation is complementary to an earlier compilation by Samuel S. Dietrich and Barry L. Berman (At. Data Nucl. Data Tables 38 (1988) 199-338) on ground-state photo-neutron and photo-absorption cross sections and their Lorentzian parameters. A comparison of the two may help shed light on the evolution of GDR parameters with temperature and spin. The present compilation is current as of July 2006.
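The "equivalent Lorentzian parameters" referred to here are the centroid energy, width, and peak cross section of the Lorentzian line shape conventionally used for the GDR. As a reminder of that common footing, the function below evaluates the usual single-component Lorentzian; the exact parameterization adopted in the compilation should be checked against the article itself.

```python
def lorentzian_gdr(e_gamma, e0, gamma, sigma0):
    """Conventional single-component Lorentzian GDR cross section:
        sigma(E) = sigma0 * (E*Gamma)^2 / ((E^2 - E0^2)^2 + (E*Gamma)^2)
    with centroid e0, width gamma, and peak cross section sigma0
    (e_gamma, e0, and gamma in the same energy units)."""
    num = (e_gamma * gamma) ** 2
    return sigma0 * num / ((e_gamma ** 2 - e0 ** 2) ** 2 + num)

# At the centroid energy the cross section equals sigma0.
assert abs(lorentzian_gdr(15.0, 15.0, 5.0, 300.0) - 300.0) < 1e-9
```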
NASA Astrophysics Data System (ADS)
Gerber, Florian; Mösinger, Kaspar; Furrer, Reinhard
2017-07-01
Software packages for spatial data often implement a hybrid approach of interpreted and compiled programming languages. The compiled parts are usually written in C, C++, or Fortran, and are efficient in terms of computational speed and memory usage. Conversely, the interpreted part serves as a convenient user interface and calls the compiled code for computationally demanding operations. The price paid for the user friendliness of the interpreted component is, besides performance, the limited access to low-level and optimized code. An example of such a restriction is the 64-bit vector support of the widely used statistical language R. On the R side, users do not need to change existing code and may not even notice the extension. On the other hand, interfacing 64-bit compiled code efficiently is challenging. Since many R packages for spatial data could benefit from 64-bit vectors, we investigate strategies to efficiently pass 64-bit vectors to compiled languages. More precisely, we show how to simply extend existing R packages using the foreign function interface to seamlessly support 64-bit vectors. This extension is shown with the sparse matrix algebra R package spam. The new capabilities are illustrated with an example of GIMMS NDVI3g data featuring a parametric modeling approach for a non-stationary covariance matrix.
Further developments in generating type-safe messaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neswold, R.; King, C.; /Fermilab
2011-11-01
At ICALEPCS 09, we introduced a source code generator that allows processes to communicate safely using data types native to each host language. In this paper, we discuss further development that has occurred since the conference in Kobe, Japan, including the addition of three more client languages, an optimization in network packet size and the addition of a new protocol data type. The protocol compiler is continuing to prove itself as an easy and robust way to get applications written in different languages hosted on different computer architectures to communicate. We have two active Erlang projects that are using the protocol compiler to access ACNET data at high data rates. We also used the protocol compiler output to deliver ACNET data to an iPhone/iPad application. Since it takes an average of two weeks to support a new language, we're willing to expand the protocol compiler to support new languages that our community uses.
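To make the idea of a protocol compiler concrete, the toy generator below turns a declarative message description into source code for a typed message class with fixed-layout pack/unpack methods. The spec format, field names, and generated API are invented for illustration; they are not the input language or output of the ACNET protocol compiler.

```python
import struct

# Hypothetical message description; a real protocol compiler reads its own IDL.
SPEC = {"name": "Reading",
        "fields": [("value", "d"),     # float64
                   ("stamp", "I")]}    # uint32

def generate(spec):
    """Emit Python source for a message class with pack/unpack methods
    matching a fixed big-endian wire layout."""
    fmt = ">" + "".join(code for _, code in spec["fields"])
    names = [n for n, _ in spec["fields"]]
    src = [f"class {spec['name']}:",
           f"    _fmt = {fmt!r}",
           f"    def __init__(self, {', '.join(names)}):"]
    src += [f"        self.{n} = {n}" for n in names]
    src += ["    def pack(self):",
            f"        return struct.pack(self._fmt, {', '.join('self.' + n for n in names)})",
            "    @classmethod",
            "    def unpack(cls, data):",
            "        return cls(*struct.unpack(cls._fmt, data))"]
    return "\n".join(src)

exec(generate(SPEC))   # normally the source would be written out, one file per target language
msg = Reading(value=72.5, stamp=42)
assert Reading.unpack(msg.pack()).stamp == 42   # round trip through the generated class
```

Generating every language binding from a single description is what keeps senders and receivers written in different languages type-consistent.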
Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gupta, Manish
1992-01-01
Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, which accepts Fortran 77 programs and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.
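The constraints-based approach lends itself to a small sketch: each statement contributes an (array, desired distribution, quality) triple, and for each array the compiler keeps the distribution with the largest accumulated quality. The arrays, distribution names, and weights below are invented; in Paradigm itself the quality measures come from static performance estimation rather than hand-assigned numbers.

```python
from collections import defaultdict

# Made-up constraints: (array, proposed distribution, quality measure).
constraints = [
    ("A", "block_rows", 5.0),   # e.g. favours communication-free row updates
    ("A", "block_cols", 1.5),
    ("B", "block_rows", 2.0),
    ("B", "replicated", 0.5),
]

def resolve(constraints):
    """Pick, for each array, the distribution with the highest total quality."""
    scores = defaultdict(float)
    for array, dist, quality in constraints:
        scores[(array, dist)] += quality
    best = {}
    for (array, dist), q in scores.items():
        if array not in best or q > best[array][1]:
            best[array] = (dist, q)
    return {a: d for a, (d, _) in best.items()}

print(resolve(constraints))   # {'A': 'block_rows', 'B': 'block_rows'}
```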
A Compilation of Global Bio-Optical in Situ Data for Ocean-Colour Satellite Applications
NASA Technical Reports Server (NTRS)
Valente, Andre; Sathyendranath, Shubha; Brotas, Vanda; Groom, Steve; Grant, Michael; Taberner, Malcolm; Antoine, David; Arnone, Robert; Balch, William M.; Barker, Kathryn;
2016-01-01
A compiled set of in situ data is important to evaluate the quality of ocean-colour satellite-data records. Here we describe the data compiled for the validation of the ocean-colour products from the ESA Ocean Colour Climate Change Initiative (OC-CCI). The data were acquired from several sources (MOBY, BOUSSOLE, AERONET-OC, SeaBASS, NOMAD, MERMAID, AMT, ICES, HOT, GeP&CO), span between 1997 and 2012, and have a global distribution. Observations of the following variables were compiled: spectral remote-sensing reflectances, concentrations of chlorophyll a, spectral inherent optical properties and spectral diffuse attenuation coefficients. The data were from multi-project archives acquired via the open internet services or from individual projects, acquired directly from data providers. Methodologies were implemented for homogenisation, quality control and merging of all data. No changes were made to the original data, other than averaging of observations that were close in time and space, elimination of some points after quality control and conversion to a standard format. The final result is a merged table designed for validation of satellite-derived ocean-colour products and available in text format. Metadata of each in situ measurement (original source, cruise or experiment, principal investigator) were preserved throughout the work and made available in the final table. Using all the data in a validation exercise increases the number of matchups and enhances the representativeness of different marine regimes. By making available the metadata, it is also possible to analyse each set of data separately. The compiled data are available at doi:10.1594/PANGAEA.854832 (Valente et al., 2015).
A compilation and analysis of helicopter handling qualities data. Volume 1: Data compilation
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Jewell, W. F.; Lehman, J. M.; Vanwinkle, R. A.
1979-01-01
A collection of basic descriptive data, stability derivatives, and transfer functions for a six-degree-of-freedom, quasi-static model is introduced. The data are arranged in a common, compact format for each of the five helicopters represented. The vehicles studied include the BO-105, the AH-1H, and the CH-53D.
VizieR Online Data Catalog: Habitable zones around main-sequence stars (Kopparapu+, 2014)
NASA Astrophysics Data System (ADS)
Kopparapu, R. K.; Ramirez, R. M.; Schottelkotte, J.; Kasting, J. F.; Domagal-Goldman, S.; Eymet, V.
2017-08-01
Language: Fortran 90. Code tested under the following compilers/operating systems: ifort/CentOS Linux. Description of input data: no input necessary. Description of output data: output files HZs.dat, HZ_coefficients.dat. System requirements: no major system requirement; a Fortran compiler is necessary. Calls to external routines: none. Additional comments: none. (1 data file).
Compilation of VS30 Data for the United States
Yong, Alan; Thompson, Eric M.; Wald, David J.; Knudsen, Keith L.; Odum, Jack K.; Stephenson, William J.; Haefner, Scott
2016-01-01
VS30, the time-averaged shear-wave velocity (VS) to a depth of 30 meters, is a key index adopted by the earthquake engineering community to account for seismic site conditions. VS30 is typically based on geophysical measurements of VS derived from invasive and noninvasive techniques at sites of interest. Owing to cost considerations, as well as logistical and environmental concerns, VS30 data are sparse or not readily available for most areas. Where data are available, VS30 values are often assembled in assorted formats that are accessible from disparate and (or) impermanent Web sites. To help remedy this situation, we compiled VS30 measurements obtained by studies funded by the U.S. Geological Survey (USGS) and other governmental agencies. Thus far, we have compiled VS30 values for 2,997 sites in the United States, along with metadata for each measurement from government-sponsored reports, Web sites, and scientific and engineering journals. Most of the data in our VS30 compilation originated from publications directly reporting the work of field investigators. A small subset (less than 20 percent) of VS30 values was previously compiled by the USGS and other research institutions. Whenever possible, VS30 originating from these earlier compilations were crosschecked against published reports. Both downhole and surface-based VS30 estimates are represented in our VS30 compilation. Most of the VS30 data are for sites in the western contiguous United States (2,141 sites), whereas 786 VS30 values are for sites in the Central and Eastern United States; 70 values are for sites in other parts of the United States, including Alaska (15 sites), Hawaii (30 sites), and Puerto Rico (25 sites). An interactive map is hosted on the primary USGS Web site for accessing VS30 data (http://earthquake.usgs.gov/research/vs30/).
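For readers unfamiliar with the index, VS30 is the time-averaged (not arithmetically averaged) shear-wave velocity of the upper 30 meters: total depth divided by total vertical travel time. The layer values in the example below are made up; the formula itself is the standard definition.

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear-wave velocity over the top 30 m:
    VS30 = 30 / sum(h_i / v_i), with layer thicknesses h_i summing to 30 m."""
    assert abs(sum(thicknesses_m) - 30.0) < 1e-6, "profile must cover exactly 30 m"
    travel_time = sum(h / v for h, v in zip(thicknesses_m, velocities_mps))
    return 30.0 / travel_time

# Hypothetical profile: 5 m at 200 m/s, 10 m at 350 m/s, 15 m at 600 m/s.
print(round(vs30([5, 10, 15], [200, 350, 600]), 1))   # ~381.8 m/s
```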
How do I resolve problems reading the binary data?
Atmospheric Science Data Center
2014-12-08
... affecting compilation would be differing versions of the operating system and compilers that the read software is being run on. Big ... Unix machines are Big Endian architecture, while Linux systems are Little Endian architecture. Data generated on a Unix machine are ...
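The byte-order mismatch described in this answer can be reproduced and fixed in any language that lets the byte order be stated explicitly. The snippet below is a generic illustration (the three-field record layout is invented, not the actual product format): bytes packed big-endian, as on a traditional Unix workstation, are misread with native little-endian unpacking and read correctly when big-endian order is forced.

```python
import struct

# Bytes as written on a big-endian (Unix) machine; the layout (two float32
# values and a uint32 count) is invented for illustration.
raw = struct.pack(">ffI", 36.5, -97.25, 1024)

# Wrong: native little-endian interpretation (typical Linux/x86) scrambles the values.
print(struct.unpack("<ffI", raw))
# Right: force big-endian interpretation regardless of the host architecture.
print(struct.unpack(">ffI", raw))   # (36.5, -97.25, 1024)
```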
Efficient Type Representation in TAL
NASA Technical Reports Server (NTRS)
Chen, Juan
2009-01-01
Certifying compilers generate proofs for low-level code that guarantee safety properties of the code. Type information is an essential part of safety proofs. But the size of type information remains a concern for certifying compilers in practice. This paper demonstrates type representation techniques in a large-scale compiler that achieves both concise type information and efficient type checking. In our 200,000-line certifying compiler, the size of type information is about 36% of the size of pure code and data for our benchmarks, the best result to the best of our knowledge. The type checking time is about 2% of the compilation time.
5 CFR 9701.524 - Compilation and publication of data.
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 9701.524 Compilation and publication of data. (a) The HSLRB must maintain a file of its proceedings and copies of all available... actions taken under § 9701.519. (b) All files maintained under paragraph (a) of this section must be open...
5 CFR 9701.524 - Compilation and publication of data.
Code of Federal Regulations, 2011 CFR
2011-01-01
§ 9701.524 Compilation and publication of data. (a) The HSLRB must maintain a file of its proceedings and copies of all available... actions taken under § 9701.519. (b) All files maintained under paragraph (a) of this section must be open...
5 CFR 9701.524 - Compilation and publication of data.
Code of Federal Regulations, 2014 CFR
2014-01-01
§ 9701.524 Compilation and publication of data. (a) The HSLRB must maintain a file of its proceedings and copies of all available... actions taken under § 9701.519. (b) All files maintained under paragraph (a) of this section must be open...
5 CFR 9701.524 - Compilation and publication of data.
Code of Federal Regulations, 2012 CFR
2012-01-01
§ 9701.524 Compilation and publication of data. (a) The HSLRB must maintain a file of its proceedings and copies of all available... actions taken under § 9701.519. (b) All files maintained under paragraph (a) of this section must be open...
5 CFR 9701.524 - Compilation and publication of data.
Code of Federal Regulations, 2013 CFR
2013-01-01
§ 9701.524 Compilation and publication of data. (a) The HSLRB must maintain a file of its proceedings and copies of all available... actions taken under § 9701.519. (b) All files maintained under paragraph (a) of this section must be open...
Outgassing Data for Selecting Spacecraft Materials
NASA Technical Reports Server (NTRS)
Campbell, William A., Jr.; Marriott, Richard S.; Park, John J.
1984-01-01
Outgassing data, derived from tests at 396 K (125 C) for 24 hours in vacuum as per ASTM E 595-77, have been compiled for numerous materials for spacecraft use. The data presented are the total mass loss (TML) and the collected volatile condensable materials (CVCM). The various materials are compiled by likely usage and alphabetically.
Publications - GPR 2016-1 | Alaska Division of Geological & Geophysical
Electromagnetic and magnetic airborne geophysical survey data compilation. Authors: Burns, L.E., Fugro Airborne Surveys. Alaska Division of Geological & Geophysical Surveys.
Publications - GPR 2015-4 | Alaska Division of Geological & Geophysical
Airborne geophysical survey data compilation. Authors: Burns, L.E., Geoterrex-Dighem, Stevens Exploration. Alaska Division of Geological & Geophysical Surveys.
Publications - GPR 2015-3 | Alaska Division of Geological & Geophysical
Electromagnetic and magnetic airborne geophysical survey data compilation. Authors: Burns, L.E., Fugro Airborne Surveys. Alaska Division of Geological & Geophysical Surveys.
A Compilation of Information on Computer Applications in Nutrition and Food Service.
ERIC Educational Resources Information Center
Casbergue, John P.
Information on the application of computer technology to nutrition and food service is compiled. It is designed to assist dieticians and nutritionists interested in applying electronic data processing to food service and related industries. The compilation is indexed by subject area. Included for each subject area are: (1) bibliographic references,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semkova, Valentina; Otuka, Naohiko; Mikhailiukova, Marina
Members of the International Network of Nuclear Reaction Data Centres (NRDC) have collaborated since the 1960s on the worldwide collection, compilation and dissemination of experimental nuclear reaction data. New publications are systematically compiled, and all agreed data are assembled and incorporated within the EXFOR database. Here, recent upgrades to achieve greater completeness of the contents are described, along with reviews and adjustments of the compilation rules for specific types of data.
Modal Composition and Age of Intrusions in North-Central and Northeast Nevada
du Bray, Edward A.; Crafford, A. Elizabeth Jones
2007-01-01
Introduction Data presented in this report characterize igneous intrusions of north-central and northeast Nevada and were compiled as part of the Metallogeny of the Great Basin project conducted by the U.S. Geological Survey (USGS) between 2001 and 2007. The compilation pertains to the area bounded by lats 38.5 and 42 N., long 118.5 W., and the Nevada-Utah border (fig. 1). The area contains numerous large plutons and smaller stocks but also contains equally numerous smaller, shallowly emplaced intrusions, including dikes, sills, and endogenous dome complexes. Igneous intrusions (hereafter, intrusions) of multiple ages are major constituents of the geologic framework of north-central and northeast Nevada (Stewart and Carlson, 1978). Mesozoic and Cenozoic intrusions are particularly numerous and considered to be related to subduction along the west edge of the North American plate during this time. Henry and Ressel (2000) and Ressel and others (2000) have highlighted the association between magmatism and ore deposits along the Carlin trend. Similarly, Theodore (2000) has demonstrated the association between intrusions and ore deposits in the Battle Mountain area. Decades of geologic investigations in north-central and northeast Nevada (hereafter, the study area) demonstrate that most hydrothermal ore deposits are spatially, and probably temporally and genetically, associated with intrusions. Because of these associations, studies of many individual intrusions have been conducted, including those by a large number of Master's and Doctoral thesis students (particularly University of Nevada at Reno students and associated faculty), economic geologists working on behalf of exploration and mining companies, and USGS earth scientists. Although the volume of study area intrusions is large and many are associated with ore deposits, no synthesis of available data that characterize these rocks has been assembled. Compilations that have been produced for intrusions in Nevada pertain to relatively restricted geographic areas and (or) do not include the broad array of data that would best aid interpretation of these rocks. For example, Smith and others (1971) presented potassium-argon geochronologic and basic petrographic data for a limited number of intrusions in northcentral Nevada. Similarly, Silberman and McKee (1971) presented potassium-argon geochronologic data for a significant number of central Nevada intrusions. More recently, Mortensen and others (2000) presented uranium-lead geochronology for a small number of central Nevada intrusions. Sloan and others (2003) released a national geochronologic database that contains age determinations made prior to 1991 for rocks of Nevada. Finally, C.D. Henry (Nevada Bureau of Mines and Geology, written commun., 2006) has assembled geochronologic data for igneous rocks of Nevada produced subsequent to completion of the Sloan and others (2003) compilation. Consequently, although age data for igneous rocks of Nevada have been compiled, data pertaining to other features of these rocks have not been systematically synthesized. Maldonado and others (1988) compiled the distribution and some basic characteristics of intrusions throughout Nevada. Lee (1984), John (1983, 1987, and 1992), John and others (1994), and Ressel (2005) have compiled data that partially characterize intrusions in some parts of the study area. 
This report documents the first phase of an effort to compile a robust database for study area intrusions; in this initial phase, modal composition and age data are synthesized. In the next phase, geochemical data available for these rocks will be compiled. The ultimate goal is to compile data as a basis for an evaluation of the time-space-compositional evolution of Mesozoic and Cenozoic magmatism in the study area and identification of genetic associations between magmatism and mineralizing processes in this region.
Compilation of current high energy physics experiments - Sept. 1978
DOE Office of Scientific and Technical Information (OSTI.GOV)
Addis, L.; Odian, A.; Row, G. M.
1978-09-01
This compilation of current high-energy physics experiments is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and the nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. Nominally, the compilation includes summaries of all high-energy physics experiments at the above laboratories that were approved (and not subsequently withdrawn) before about June 1978, and had not completed taking of data by 1 January 1975. The experimental summaries are supplemented with three indexes to the compilation, several vocabulary lists giving names or abbreviations used, and a short summary of the beams at each of the laboratories (except Rutherford). The summaries themselves are included on microfiche. (RWR)
A compilation of global bio-optical in situ data for ocean-colour satellite applications
NASA Astrophysics Data System (ADS)
Valente, André; Sathyendranath, Shubha; Brotas, Vanda; Groom, Steve; Grant, Michael; Taberner, Malcolm; Antoine, David; Arnone, Robert; Balch, William M.; Barker, Kathryn; Barlow, Ray; Bélanger, Simon; Berthon, Jean-François; Beşiktepe, Şükrü; Brando, Vittorio; Canuti, Elisabetta; Chavez, Francisco; Claustre, Hervé; Crout, Richard; Frouin, Robert; García-Soto, Carlos; Gibb, Stuart W.; Gould, Richard; Hooker, Stanford; Kahru, Mati; Klein, Holger; Kratzer, Susanne; Loisel, Hubert; McKee, David; Mitchell, Brian G.; Moisan, Tiffany; Muller-Karger, Frank; O'Dowd, Leonie; Ondrusek, Michael; Poulton, Alex J.; Repecaud, Michel; Smyth, Timothy; Sosik, Heidi M.; Twardowski, Michael; Voss, Kenneth; Werdell, Jeremy; Wernand, Marcel; Zibordi, Giuseppe
2016-06-01
A compiled set of in situ data is important to evaluate the quality of ocean-colour satellite-data records. Here we describe the data compiled for the validation of the ocean-colour products from the ESA Ocean Colour Climate Change Initiative (OC-CCI). The data were acquired from several sources (MOBY, BOUSSOLE, AERONET-OC, SeaBASS, NOMAD, MERMAID, AMT, ICES, HOT, GeP&CO), span between 1997 and 2012, and have a global distribution. Observations of the following variables were compiled: spectral remote-sensing reflectances, concentrations of chlorophyll a, spectral inherent optical properties and spectral diffuse attenuation coefficients. The data were from multi-project archives acquired via the open internet services or from individual projects, acquired directly from data providers. Methodologies were implemented for homogenisation, quality control and merging of all data. No changes were made to the original data, other than averaging of observations that were close in time and space, elimination of some points after quality control and conversion to a standard format. The final result is a merged table designed for validation of satellite-derived ocean-colour products and available in text format. Metadata of each in situ measurement (original source, cruise or experiment, principal investigator) were preserved throughout the work and made available in the final table. Using all the data in a validation exercise increases the number of matchups and enhances the representativeness of different marine regimes. By making available the metadata, it is also possible to analyse each set of data separately. The compiled data are available at doi:10.1594/PANGAEA.854832 (Valente et al., 2015).
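Of the processing steps listed in this abstract, the only one that alters values is the averaging of observations that are close in time and space. A minimal sketch of that binning step follows; the bin sizes, record fields, and the variable being averaged are placeholders, not the OC-CCI choices.

```python
from collections import defaultdict

def average_close_observations(records, time_bin_s=3600, space_bin_deg=0.01):
    """Average a variable over observations that fall in the same
    time/space cell; bin sizes and field names are illustrative only."""
    cells = defaultdict(list)
    for r in records:   # r: {'time': seconds, 'lat': deg, 'lon': deg, 'chl_a': value}
        key = (int(r["time"] // time_bin_s),
               round(r["lat"] / space_bin_deg),
               round(r["lon"] / space_bin_deg))
        cells[key].append(r["chl_a"])
    return {k: sum(v) / len(v) for k, v in cells.items()}

obs = [{"time": 100, "lat": 38.701, "lon": -9.302, "chl_a": 0.40},
       {"time": 900, "lat": 38.702, "lon": -9.303, "chl_a": 0.44}]
print(average_close_observations(obs))   # the two nearby observations collapse to one mean
```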
Ada technology support for NASA-GSFC
NASA Technical Reports Server (NTRS)
1986-01-01
Utilization of the Ada programming language and environments to perform directorate functions was reviewed. The Mission and Data Operations Directorate Network (MNET) conversion effort was chosen as the first task for evaluation and assistance. The MNET project required the rewriting of the existing Network Control Program (NCP) in the Ada programming language. The DEC Ada compiler running on the VAX under VMS was used for the initial development efforts. Stress tests on the newly delivered version of the DEC Ada compiler were performed. The new Alsys Ada compiler was purchased for the IBM PC AT. A prevalidated version of the compiler was obtained. The compiler was then validated.
Andrew Fowler
2015-10-01
Compilation of rare earth element and associated major and minor dissolved constituent analytical data for USA geothermal fields and global seafloor hydrothermal vents. Data is in original units. Reference to and use of this data should be attributed to the original authors and publications according to the provisions outlined therein.
Outgassing Data for Selecting Spacecraft Materials. Revision 4
NASA Technical Reports Server (NTRS)
Walter, Neil A.; Scialdone, John J.
1997-01-01
Outgassing data, derived from tests at 398 K (125 C) for 24 hours in a vacuum as per ASTM E 595-84, have been compiled for numerous materials for spacecraft use. The data presented are the total mass loss (TML) and the collected volatile condensable materials (CVCM). The various materials are compiled by likely usage and alphabetically.
76 FR 4703 - Statement of Organization, Functions, and Delegations of Authority
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-26
... regarding medical loss ratio standards and the insurance premium rate review process, and issues premium... Oriented Plan program. Collects, compiles and maintains comparative pricing data for an Internet portal... benefit from the new health insurance system. Collects, compiles and maintains comparative pricing data...
Code of Federal Regulations, 2011 CFR
2011-04-01
... moderate income areas that house various non-legislative functions or services provided by the government... from data compiled and published by the United States Bureau of the Census available from the latest... persons per room, based on data compiled and published by the United States Bureau of the Census available...
A Model (Formula) for Deriving A Hazard Index of Rail-Highway Grade Crossings.
ERIC Educational Resources Information Center
Coburn, James Minton
The purpose of this research was to compile data for use as related information in the education of drivers, and to derive a formula for computing a hazard index for rail-highway intersections. Data for the study were compiled from: (1) all crossings on which field data were collected, (2) reports of 642 accidents, and (3) data collected from…
A compilation and analysis of helicopter handling qualities data. Volume 2: Data analysis
NASA Technical Reports Server (NTRS)
Heffley, R. K.
1979-01-01
A compilation and an analysis of helicopter handling qualities data are presented. Multiloop manual control methods are used to analyze the descriptive data, stability derivatives, and transfer functions for a six-degree-of-freedom, quasi-static model. A compensatory loop structure is applied to coupled longitudinal, lateral, and directional equations in such a way that key handling qualities features are examined directly.
Massive Data Collection: Scientists' Nuggets and Basis for the Future
NASA Astrophysics Data System (ADS)
Kisimoto, K.
2007-12-01
Particularly since the advent of multibeam echosounders (swath mapping systems), the accumulation of high-resolution bathymetric data has had a tremendous impact on marine sciences worldwide. Over the last two decades, rapid improvements in swath-mapping technology have greatly advanced our knowledge and view of the seafloor. Japan, as a coastal state, has been conducting an extensive and intensive "Continental Shelf Survey" for many years and has likewise benefited from swath mapping technology. The Japanese EEZ covers a wide area of the northwestern Pacific and borders tectonically complex and scientifically challenging neighboring regions. The "Continental Shelf Survey" of Japan is a multi-institutional effort by private, academic, and governmental sectors, administered by the government. Huge amounts of marine geological, geophysical, and bathymetric data are still being collected, compiled, and analyzed by each sector according to its own responsibilities and priorities, so full access to the compiled scientific data, or its disclosure to the science community, will take some more time. Parts of the scientific results and data, however, have been presented and published as they become available, at meetings and in journals, in keeping with the policy of the administering government. An "eyes-only" preview of the ongoing compilation is not prohibited, so discussion of international scientific cooperation, for example, could begin earlier, and a session like this is an excellent opportunity for marine scientists to become aware of what we have, and what we still need, to take regional and global science to the next step; such data sets are generally costly to pursue separately. I will present and discuss the compiled bathymetric map of the northwestern Pacific together with geophysical data and metadata compiled for the same region, e.g., gravity, magnetic, and seismic data.
Additions to the rust fungi (Pucciniales) from northern Oman
USDA-ARS?s Scientific Manuscript database
The first compilation of the rust fungi occurring in the Sultanate of Oman is presented based on historical records and numerous recent collections, primarily from agricultural hosts. The study compiles data for 16 species of Pucciniales in northern Oman, along with voucher and sequence data and pre...
DOT National Transportation Integrated Search
1998-11-01
In this annual report, Traffic Safety Facts 1997: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...
DOT National Transportation Integrated Search
2007-01-01
In this annual report, Traffic Safety Facts 2007: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descript...
DOT National Transportation Integrated Search
2008-01-01
In this annual report, Traffic Safety Facts 2008: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...
DOT National Transportation Integrated Search
2009-01-01
In this annual report, Traffic Safety Facts 2009: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis Reporting System and the General Estimates System, the National Highway Traffic Safety Administration (NHTSA) presents descriptive ...
Modular implementation of a digital hardware design automation system
NASA Astrophysics Data System (ADS)
Masud, M.
An automation system based on AHPL (A Hardware Programming Language) was developed. The project may be divided into three distinct phases: (1) upgrading of AHPL to make it more universally applicable; (2) implementation of a compiler for the language; and (3) illustration of how the compiler may be used to support several phases of design activities. Several new features were added to AHPL. These include application-dependent parameters, multiple clocks, asynchronous results, functional registers, and primitive functions. The new language, called Universal AHPL, has been defined rigorously. The compiler design is modular. The parsing is done by an automatic parser generated from the SLR(1) BNF grammar of the language. The compiler produces two data bases from the AHPL description of a circuit. The first one is a tabular representation of the circuit, and the second one is a detailed interconnection linked list. The two data bases provide a means to interface the compiler to application-dependent CAD systems.
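As a rough illustration of the two circuit representations described above (not the compiler's actual output format), a tabular record per circuit element plus an interconnection list might look like the following Python sketch; the element and net names are invented for the example.

# Hypothetical tabular representation: one record per circuit element.
circuit_table = [
    {"name": "R1", "kind": "register", "width": 8, "clock": "clk_a"},
    {"name": "ADD1", "kind": "combinational", "inputs": ["R1", "BUSIN"], "output": "SUM"},
]

# Hypothetical interconnection list: (driver, net, receiver) triples that a
# downstream CAD tool could traverse like a linked list of connections.
interconnections = [
    ("R1", "R1_out", "ADD1"),
    ("BUSIN", "bus_in", "ADD1"),
    ("ADD1", "SUM", "R1"),
]

for driver, net, receiver in interconnections:
    print(f"{driver} --{net}--> {receiver}")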
Mashburn, Shana L.; Winton, Kimberly T.
2010-01-01
This CD-ROM contains spatial datasets that describe natural and anthropogenic features and county-level estimates of agricultural pesticide use and pesticide data for surface-water, groundwater, and biological specimens in the state of Oklahoma. County-level estimates of pesticide use were compiled from the Pesticide National Synthesis Project of the U.S. Geological Survey, National Water-Quality Assessment Program. Pesticide data for surface water, groundwater, and biological specimens were compiled from U.S. Geological Survey National Water Information System database. These spatial datasets that describe natural and manmade features were compiled from several agencies and contain information collected by the U.S. Geological Survey. The U.S. Geological Survey datasets were not collected specifically for this compilation, but were previously collected for projects with various objectives. The spatial datasets were created by different agencies from sources with varied quality. As a result, features common to multiple layers may not overlay exactly. Users should check the metadata to determine proper use of these spatial datasets. These data were not checked for accuracy or completeness. If a question of accuracy or completeness arise, the user should contact the originator cited in the metadata.
Defense and Development in Sub-Saharan Africa: Codebook.
1988-03-01
...countries by presenting the different data sources and explaining how they were compiled. The statistics in the database cover 41 African countries... Finally, in addition to the economic and military data, some statistics have been compiled that monitor social and political conditions. The report also includes sources and notes on the collection of the data.
A data structure for describing sampling designs to aid in compilation of stand attributes
John C. Byrne; Albert R. Stage
1988-01-01
Maintaining permanent plot data with different sampling designs over long periods within an organization, and sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is composed of just those variables and their relationships needed to compile...
Brock Stewart; Chris J. Cieszewski; Michal Zasada
2005-01-01
This paper presents a sensitivity analysis of the impact of various definitions and inclusions of different variables in the Forest Inventory and Analysis (FIA) inventory on data compilation results. FIA manuals have been changing recently to make the inventory consistent between all the States. Our analysis demonstrates the importance (or insignificance) of different...
Development of Nautical Almanac at Korea Astronomy Observatory
NASA Astrophysics Data System (ADS)
Han, In-Woo; Shin, Junho
1994-12-01
At the Korea Astronomy Observatory, we developed a software package to compile the Korean Nautical Almanac. We describe the motivation for developing the software and explain the package in general terms. In the appendix, we describe in more detail the procedure for calculating the Polaris table. In developing the software, we paid close attention to producing accurate data. We also made a great effort to automate the compilation of the Nautical Almanac as far as possible, since the compilation is time-consuming and labour-intensive. As a result, the software turns out to be accurate and efficient for compiling the Nautical Almanac; in fact, we could compile a Korean Nautical Almanac in a few days.
Workflow with pitfalls to derive a regional airborne magnetic compilation
NASA Astrophysics Data System (ADS)
Brönner, Marco; Baykiev, Eldar; Ebbing, Jörg
2017-04-01
Today, large-scale magnetic maps are usually a patchwork of different airborne surveys of different size, resolution, and vintage. Airborne magnetic acquisition is a fast and economical method to map and gain geological and tectonic information for large areas, onshore and offshore. Depending on the aim of a survey, acquisition parameters such as altitude and profile spacing are usually adjusted to match the purpose of the investigation. The subsequent data processing commonly follows a standardized workflow comprising core-field subtraction and line leveling to yield a coherent crustal-field magnetic grid for the survey area. The resulting data make it possible to correlate anomalies with geological and tectonic features in the subsurface, which is important for, e.g., oil and mineral exploration. Crustal-scale magnetic interpretation and modeling demand regional compilations of magnetic data and the merging of adjacent magnetic surveys. These studies focus not only on shallower sources, reflected by short- to intermediate-wavelength magnetic anomalies, but also have a particular interest in the long wavelengths deriving from deep-seated sources. However, while the workflow to produce such a merger is supported by quite a few powerful routines, the resulting compilation contains several pitfalls and limitations, which have been discussed before but are still little recognized. The maximum wavelength that can be resolved in each individual survey is directly related to the survey size, and consequently a merger will contribute erroneous long-wavelength components to the magnetic data compilation. To minimize this problem and to homogenize the longer wavelengths, a first-order approach is to combine the airborne compilation with satellite magnetic data, which is sufficient only under particular preconditions. A more advanced approach considers the gap in frequencies between airborne and satellite data, which motivated countries like Sweden and Australia (AWAGS) to collect high-altitude, long-distance airborne magnetic data for the entire country in order to homogenize the high-resolution magnetic data before merging with satellite data. We present the compilation of a regional magnetic map for an area in northern Europe and discuss the problems and pitfalls of applying a common workflow.
NIST Databases on Atomic Spectra
NASA Astrophysics Data System (ADS)
Reader, J.; Wiese, W. L.; Martin, W. C.; Musgrove, A.; Fuhr, J. R.
2002-11-01
The NIST atomic and molecular spectroscopic databases now available on the World Wide Web through the NIST Physics Laboratory homepage include Atomic Spectra Database, Ground Levels and Ionization Energies for the Neutral Atoms, Spectrum of Platinum Lamp for Ultraviolet Spectrograph Calibration, Bibliographic Database on Atomic Transition Probabilities, Bibliographic Database on Atomic Spectral Line Broadening, and Electron-Impact Ionization Cross Section Database. The Atomic Spectra Database (ASD) [1] offers evaluated data on energy levels, wavelengths, and transition probabilities for atoms and atomic ions. Data are given for some 950 spectra and 70,000 energy levels. About 91,000 spectral lines are included, with transition probabilities for about half of these. Additional data resulting from our ongoing critical compilations will be included in successive new versions of ASD. We plan to include, for example, our recently published data for some 16,000 transitions covering most ions of the iron-group elements, as well as Cu, Kr, and Mo [2]. Our compilations benefit greatly from experimental and theoretical atomic-data research being carried out in the NIST Atomic Physics Division. A new compilation covering spectra of the rare gases in all stages of ionization, for example, revealed a need for improved data in the infrared. We have thus measured these needed data with our high-resolution Fourier transform spectrometer [3]. An upcoming new database will give wavelengths and intensities for the stronger lines of all neutral and singly-ionized atoms, along with energy levels and transition probabilities for the persistent lines [4]. A critical compilation of the transition probabilities of Ba I and Ba II [5] has been completed and several other compilations of atomic transition probabilities are nearing completion. These include data for all spectra of Na, Mg, Al, and Si [6]. Newly compiled data for selected ions of Ne, Mg, Si and S, will form the basis for a new database intended to assist interpretation of soft x-ray astronomical spectra, such as from the Chandra X-ray Observatory. These data will be available soon on the World Wide Web [7].
Guidelines for preparation of state water-use estimates for 2000
Kenny, Joan F.
2004-01-01
This report describes the water-use categories and data elements required for the 2000 national water-use compilation conducted by the U.S. Geological Survey (USGS) as part of its National Water Use Information Program. It identifies sources of water-use information, guidelines for estimating water use, and required documentation for preparation of the national compilation by State for the United States, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. The data are published in USGS Circular 1268, Estimated Use of Water in the United States in 2000. USGS has published circulars on estimated use of water in the United States at 5-year intervals since 1950. As part of this USGS program to document water use on a national scale for the year 2000, all States prepare estimates of water withdrawals for public supply, industrial, irrigation, and thermoelectric power generation water uses at the county level. All States prepare estimates of domestic use and population served by public supply at least at the State level. All States provide estimates of irrigated acres by irrigation system type (sprinkler, surface, or microirrigation) at the county level. County-level estimates of withdrawals for mining, livestock, and aquaculture uses are compiled by selected States that comprised the largest percentage of national use in 1995 for these categories, and are optional for other States. Ground-water withdrawals for public-supply, industrial, and irrigation use are aggregated by principal aquifer or aquifer system, as identified by the USGS Office of Ground Water. Some categories and data elements that were mandatory in previous compilations are optional for the 2000 compilation, in response to budget considerations at the State level. Optional categories are commercial, hydroelectric, and wastewater treatment. Estimation of deliveries from public supply to domestic, commercial, industrial, and thermoelectric uses, consumptive use for any category, and irrigation conveyance loss are optional data elements. Aggregation of data by the eight-digit hydrologic cataloging unit is optional. Water-use data compiled by the States are stored in the USGS Aggregated Water-Use Data System (AWUDS). This database is designed to store both mandatory and optional data elements. AWUDS contains several routines that can be used for quality assurance and quality control of the data, and also produces tables of water-use data compiled for 1985, 1990, 1995, and 2000. These water-use data are used by USGS, other agencies, organizations, academic institutions, and the public for research, water-management decisions, trend analysis, and forecasting.
The NASA earth resources spectral information system: A data compilation
NASA Technical Reports Server (NTRS)
Leeman, V.; Earing, D.; Vincent, R. K.; Ladd, S.
1971-01-01
The NASA Earth Resources Spectral Information System and the information contained therein are described. It contains an ordered, indexed compilation of natural targets in the optical region from 0.3 to 45.0 microns. The data compilation includes approximately 100 rock and mineral, 2600 vegetation, 1000 soil, and 60 water spectral reflectance, transmittance, and emittance curves. Most of the data have been categorized by subject, and the curves in those subject areas have been plotted on a single graph. Categories with too few curves, and miscellaneous categories, have been plotted as single-curve graphs. Each graph, composite or single, is fully titled to indicate curve source and is indexed by subject to facilitate user retrieval.
SEGY to ASCII: Conversion and Plotting Program
Goldman, Mark R.
1999-01-01
This report documents a computer program to convert standard 4-byte, IBM floating-point SEGY files to ASCII xyz format. The program then optionally plots the seismic data using the GMT plotting package. The material for this publication is contained in a standard tar file (of99-126.tar) that is uncompressed and 726 K in size. It can be downloaded onto any Unix machine. Move the tar file to the directory you wish to use it in, then type 'tar xvf of99-126.tar'. The archive files (and diskette) contain a NOTE file, a README file, a version-history file, source code, a makefile for easy compilation, and an ASCII version of the documentation. The archive files (and diskette) also contain example test files, including a typical SEGY file along with the resulting ASCII xyz and postscript files. Compiling the source code into an executable requires a C++ compiler. The program has been successfully compiled using Gnu's g++ version 2.8.1; use of other compilers may require modifications to the existing source code. The g++ compiler is a free, high-quality C++ compiler and may be downloaded from the ftp site: ftp://ftp.gnu.org/gnu. Plotting the seismic data requires the GMT plotting package, which may be downloaded from the web site: http://www.soest.hawaii.edu/gmt/
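For illustration, the core numeric step, converting a 4-byte IBM System/360 floating-point word to a native float before writing ASCII xyz values, can be sketched as follows. This is a generic Python reimplementation of the well-known IBM format (base-16 exponent, excess-64), not the program's C++ source; the sample bytes are invented for the example.

import struct

def ibm32_to_float(word_bytes):
    # Convert one big-endian 4-byte IBM System/360 single-precision word.
    (word,) = struct.unpack(">I", word_bytes)
    sign = -1.0 if word >> 31 else 1.0
    exponent = (word >> 24) & 0x7F            # base-16 exponent, excess-64
    fraction = (word & 0x00FFFFFF) / float(1 << 24)
    return sign * fraction * 16.0 ** (exponent - 64)

# Example: convert a small buffer of trace samples and print them as z values.
samples = b"\x41\x10\x00\x00" * 3             # three IBM-encoded samples equal to 1.0
for i in range(0, len(samples), 4):
    print(ibm32_to_float(samples[i:i + 4]))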
Solid state technology: A compilation. [on semiconductor devices
NASA Technical Reports Server (NTRS)
1973-01-01
A compilation, covering selected solid state devices developed and integrated into systems by NASA to improve performance, is presented. Data are also given on device shielding in hostile radiation environments.
Unwin, Ian; Jansen-van der Vliet, Martine; Westenbrink, Susanne; Presser, Karl; Infanger, Esther; Porubska, Janka; Roe, Mark; Finglas, Paul
2016-02-15
The EuroFIR Document and Data Repositories are being developed as accessible collections of source documents, including grey literature, and the food composition data reported in them. These Repositories will contain source information available to food composition database compilers when selecting their nutritional data. The Document Repository was implemented as searchable bibliographic records in the Europe PubMed Central database, which links to the documents online. The Data Repository will contain original data from source documents in the Document Repository. Testing confirmed the FoodCASE food database management system as a suitable tool for the input, documentation and quality assessment of Data Repository information. Data management requirements for the input and documentation of reported analytical results were established, including record identification and method documentation specifications. Document access and data preparation using the Repositories will provide information resources for compilers, eliminating duplicated work and supporting unambiguous referencing of data contributing to their compiled data. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Astin, Helen S.; Cross, Patricia H.
Data tables are compiled on the characteristics of black freshmen entering a representative sample of 393 predominantly black and predominantly white academic institutions. Using a ten percent random subsample of original data compiled by Alexander W. Astin for the Cooperative Institutional Research Program, the researchers present extensive…
Application of spatial technologies in wildlife biology.
Thomas A. O' Neil; Pete Bettinger; Bruce G. Marcot; B. Wayne Luscombe; Gregory T. Koeln; Howard J. Bruner; Charley Barrett; Jennifer A. Pollock; Susan Bernatas
2005-01-01
The Information Age is here, and technology has a large and important role in gathering, compiling, and synthesizing data. The old adage of analyzing wildlife data over "time and space" today entails using technologies to help gather, compile, and synthesize remotely sensed information, and to integrate results into research, monitoring and evaluation. Thus,...
EXFOR – a global experimental nuclear reaction data repository: Status and new developments
Semkova, Valentina; Otuka, Naohiko; Mikhailiukova, Marina; ...
2017-09-13
Members of the International Network of Nuclear Reaction Data Centres (NRDC) have collaborated since the 1960s on the worldwide collection, compilation and dissemination of experimental nuclear reaction data. New publications are systematically compiled, and all agreed data assembled and incorporated within the EXFOR database. Here, recent upgrades to achieve greater completeness of the contents are described, along with reviews and adjustments of the compilation rules for specific types of data.
1988-07-28
Ada Compiler Validation Summary Report. Compiler Name: DACS-386/UNIX, Version 4.2. Certificate Number: 880728S1.09141. Host... Tests which have the STORAGE_SIZE length clause were changed to comment lines under the direction of the AVF Manager. These modified tests ran to a successful...
Map and Data for Quaternary Faults and Fault Systems on the Island of Hawai`i
Cannon, Eric C.; Burgmann, Roland; Crone, Anthony J.; Machette, Michael N.; Dart, Richard L.
2007-01-01
Introduction: This report and digitally prepared, GIS-based map is one of a series of similar products covering individual states or regions of the United States that show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. It is part of a continuing effort to compile a comprehensive Quaternary fault and fold map and database for the United States, which is supported by the U.S. Geological Survey's (USGS) Earthquake Hazards Program. Guidelines for the compilation of the Quaternary fault and fold maps for the United States were published by Haller and others (1993) at the onset of this project. This compilation of Quaternary surface faulting and folding in Hawai`i is one of several similar state and regional compilations that were planned for the United States. Reports published to date include West Texas (Collins and others, 1996), New Mexico (Machette and others, 1998), Arizona (Pearthree, 1998), Colorado (Widmann and others, 1998), Montana (Stickney and others, 2000), Idaho (Haller and others, 2005), and Washington (Lidke and others, 2003). Reports for other states such as California and Alaska are still in preparation. The primary intention of this compilation is to aid in seismic-hazard evaluations. The report contains detailed information on the location and style of faulting and the time of most recent movement, and assigns each feature to a slip-rate category (as a proxy for fault activity). It also contains the name and affiliation of the compiler, date of compilation, geographic and other paleoseismologic parameters, as well as an extensive set of references for each feature. The map (plate 1) shows faults, volcanic rift zones, and lineaments that show evidence of Quaternary surface movement related to faulting, including data on the time of most recent movement, sense of movement, slip rate, and continuity of surface expression. This compilation is presented as a digitally prepared map product and catalog of data, both in Adobe Acrobat PDF format. The senior authors (Eric C. Cannon and Roland Burgmann) compiled the fault data as part of ongoing studies of active faulting on the Island of Hawai`i. The USGS is responsible for organizing and integrating the State or regional products under their National Seismic Hazard Mapping project, including the coordination and oversight of contributions from individuals and groups (Michael N. Machette and Anthony J. Crone), database design and management (Kathleen M. Haller), and digitization and analysis of map data (Richard L. Dart). After being released as an Open-File Report, the data in this report will be available online at http://earthquake.usgs.gov/regional/qfaults/, the USGS Quaternary Fault and Fold Database of the United States.
ERIC Educational Resources Information Center
California Child Care Resource and Referral Network, San Francisco.
This report compiles standardized data on child care supply and requests for care in California. The report provides county and statewide information based on responses from about 42,000 child care providers and more than 55,000 parents over a 3-month period and on data from state and federal government agencies, including: (1) demographic…
Investigation using data in Alabama from ERTS-A
NASA Technical Reports Server (NTRS)
Henry, H. R. (Principal Investigator)
1972-01-01
There are no author-identified significant results in this report. Brief summaries are presented of accomplishments by the state of Alabama in the areas of: (1) investigation of environmental factors; (2) land use compilation; (3) data processing for land use compilation; (4) photo-reproduction and unsupervised land use classification from digital tape; (5) data collection buoys; and (6) activities of the Geological Survey of Alabama.
Parliman, D.J.
1982-01-01
Well-inventory and groundwater-quality data for 665 sites with a total of 1,318 chemical analyses were compiled from Elmore, Owyhee, Ada, and Canyon Counties. Data are sorted by water temperature (less than 20 degrees Celsius is considered nonthermal; 20 degrees Celsius or greater is considered thermal) to facilitate their use.
Bove, Dana J.; Knepper, Daniel H.
2000-01-01
This data set covering the western part of Colorado includes water quality data from eight different sources (points), nine U.S. Geological Survey Digital Raster Graph (DRG) files for topographic bases, a compilation of Tertiary age intrusions (polygons and lines), and two geotiff files showing areas of hydrothermally altered rock. These data were compiled for use with an ongoing mineral resource assessment of the Grand Mesa, Uncompahgre, and Gunnison National Forests (GMUG) and intervening Bureau of Land Management (BLM) lands. This compilation was assembled to give federal land managers a preliminary view of water within sub-basinal areas, and to show possible relationships to Tertiary age intrusions and areas of hydrothermal alteration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, G.C.; Stevens, P.R.; Rittenberg, A.
A compilation is presented of reaction data taken from experimental high energy physics journal articles, reports, preprints, theses, and other sources. Listings of all the data are given, and the data points are indexed by reaction and momentum, as well as by their source document. Much of the original compilation was done by others working in the field. The data presented also exist in the form of a computer-readable and searchable database; primitive access facilities for this database are available.
Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning
NASA Technical Reports Server (NTRS)
Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri
1991-01-01
Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on iPSC/860 to demonstrate the usefulness of our methods.
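The copy-reuse idea can be sketched abstractly as a software cache of off-processor values, keyed by owning processor and element index, fetched only on a miss and dropped when the owner may have written new values. This is only an illustrative Python sketch of the concept under those assumptions, not the paper's runtime library or its interface.

class RemoteCopyCache:
    # Illustrative sketch of off-processor data reuse: fetched remote values are
    # kept and reused across loops until the owning processor's data may change.
    def __init__(self, fetch):
        self._fetch = fetch          # callable (owner, index) -> value; models one message
        self._copies = {}            # (owner, index) -> cached value

    def get(self, owner, index):
        key = (owner, index)
        if key not in self._copies:  # communicate only on a miss
            self._copies[key] = self._fetch(owner, index)
        return self._copies[key]

    def invalidate(self, owner):
        # Drop copies once 'owner' may have assigned new values to its elements.
        self._copies = {k: v for k, v in self._copies.items() if k[0] != owner}

# Usage: two loops touching the same remote elements trigger only one fetch each.
cache = RemoteCopyCache(fetch=lambda owner, index: 0.0)
for loop in range(2):
    for i in range(4):
        cache.get(owner=1, index=i)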
COMPILATION OF CONVERSION COEFFICIENTS FOR THE DOSE TO THE LENS OF THE EYE
2017-01-01
A compilation of fluence-to-absorbed dose conversion coefficients for the dose to the lens of the eye is presented. The compilation consists of both previously published data and newly calculated values: photon data (5 keV–50 MeV for both kerma approximation and full electron transport), electron data (10 keV–50 MeV), and positron data (1 keV–50 MeV); neutron data will be published separately. Values are given for angles of incidence from 0° up to 90° in steps of 15° and for rotational irradiation. The data presented can be downloaded from this article's website and they are ready for use by Report Committee (RC) 26. This committee has been set up by the International Commission on Radiation Units and Measurements (ICRU) and is working on a 'proposal for a redefinition of the operational quantities for external radiation exposure'. PMID: 27542816
Analytical techniques: A compilation
NASA Technical Reports Server (NTRS)
1975-01-01
A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.
libvaxdata: VAX data format conversion routines
Baker, Lawrence M.
2005-01-01
libvaxdata provides a collection of routines for converting numeric data (integer and floating-point) to and from the formats used on a Digital Equipment Corporation (DEC) VAX 32-bit minicomputer (Brunner, 1991). Since the VAX numeric data formats are inherited from those used on a DEC PDP-11 16-bit minicomputer, these routines can be used to convert PDP-11 data as well. VAX numeric data formats are also the default data formats used on DEC Alpha 64-bit minicomputers running OpenVMS. The libvaxdata routines are callable from Fortran or C. They require that the caller use two's-complement format for integer data and IEEE 754 format (ANSI/IEEE, 1985) for floating-point data. They also require that the 'natural' size of a C int type (integer) is 32 bits. That is the case for most modern 32-bit and 64-bit computer systems. Nevertheless, you may wish to consult the Fortran or C compiler documentation on your system to be sure. Some Fortran compilers support conversion of VAX numeric data on-the-fly when reading or writing unformatted files, either as a compiler option or a run-time I/O option. This feature may be easier to use than the libvaxdata routines. Consult the Fortran compiler documentation on your system to determine if this alternative is available to you. (DEC later became Compaq Computer Corporation, now Hewlett-Packard Company.)
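As an illustration of the kind of conversion such routines perform (a generic sketch, not the libvaxdata API or its function names), a VAX F_floating word whose logical bit layout has already been assembled (1 sign bit, an 8-bit excess-128 exponent, a 23-bit fraction with the hidden bit just right of the binary point) can be turned into a native float as follows; the word-swapping from the VAX on-disk byte order is assumed to be done beforehand.

def vax_f_to_float(word):
    # word: 32-bit integer holding the logical VAX F_floating layout (see lead-in).
    sign = -1.0 if (word >> 31) & 1 else 1.0
    exponent = (word >> 23) & 0xFF       # excess-128
    fraction = word & 0x7FFFFF           # 23 explicit fraction bits
    if exponent == 0:
        # Sign 0 is true zero; sign 1 is a VAX "reserved operand" (no IEEE equivalent).
        return 0.0 if sign > 0 else float("nan")
    # Mantissa is 0.1fff... in binary, i.e. 0.5 plus the fraction scaled by 2**-24.
    return sign * (0.5 + fraction / float(1 << 24)) * 2.0 ** (exponent - 128)

# Example: exponent 129, zero fraction encodes 1.0 (0.5 * 2**1).
print(vax_f_to_float(129 << 23))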
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-04
... use of statistical compilations of data under section 7216 of the Internal Revenue Code (Code) by a... preparation business, including identification of additional limited circumstances when a tax return preparer... tax return business under Sec. 301.7216-2(n); disclose and use statistical compilations of data...
A compiled catalog of rotation measures of radio point sources
NASA Astrophysics Data System (ADS)
Xu, Jun; Han, Jin-Lin
2014-08-01
We compiled a catalog of Faraday rotation measures (RMs) for 4553 extragalactic radio point sources published in the literature. These RMs were derived from multi-frequency polarization observations. The RM data are compared to those in the NRAO VLA Sky Survey (NVSS) RM catalog. We reveal a systematic uncertainty of about 10.0 ± 1.5 rad m-2 in the NVSS RM catalog. The Galactic foreground RM is calculated through a weighted averaging method by using the compiled RM catalog together with the NVSS RM catalog, with careful consideration of uncertainties in the RM data. The data from the catalog and the interface for the Galactic foreground RM calculations are publicly available on the webpage: http://zmtt.bao.ac.cn/RM/.
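A minimal sketch of an uncertainty-weighted averaging step of the kind described above is an inverse-variance weighted mean; the published foreground calculation involves additional considerations (e.g. how sources are selected around a sky position) that are not reproduced here, and the example values are invented.

import numpy as np

def weighted_mean_rm(rm, rm_err):
    # Inverse-variance weighted mean of rotation measures and its formal uncertainty.
    rm = np.asarray(rm, dtype=float)
    rm_err = np.asarray(rm_err, dtype=float)
    w = 1.0 / rm_err**2
    mean = np.sum(w * rm) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return mean, err

# Example with made-up RMs (rad/m^2) and their uncertainties.
print(weighted_mean_rm([12.0, 18.5, 15.2], [3.0, 6.0, 4.5]))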
Summary Report of the Workshop on The Experimental Nuclear Reaction Data Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semkova, V.; Pritychenko, B.
2014-12-01
The Workshop on the Experimental Nuclear Reaction Data Database (EXFOR) was held at IAEA Headquarters in Vienna from 6 to 10 October 2014. The workshop was organized to discuss various aspects of the EXFOR compilation process including compilation rules, different techniques for nuclear reaction data measurements, software developments, etc. A summary of the presentations and discussions that took place during the workshop is reported here.
Outgassing data for spacecraft materials
NASA Technical Reports Server (NTRS)
Campbell, W. A., Jr.; Marriott, R. S.; Park, J. J.
1980-01-01
A system for determining the mass loss in vacuum and for collecting the outgassed compounds was developed. Outgassing data, derived from tests at 398 K (125 degrees C) for 24 hours in vacuum as per ASTM E 595-77, are compiled for numerous materials for spacecraft use. The data presented are the total mass loss (TML) and the collected volatile condensable materials (CVCM). The various materials are compiled by likely usage and alphabetically.
20 CFR 637.230 - Use of incentive bonuses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... in paragraph (d) of this section, technical assistance, data and information collection and compilation, management information systems, post-program followup activities, and research and evaluation... information collection and compilation, recordkeeping, or the preparation of applications for incentive...
Tectonic evaluation of the Nubian shield of Northeastern Sudan using thematic mapper imagery
NASA Technical Reports Server (NTRS)
1986-01-01
Bechtel is nearing completion of a one-year program that uses digitally enhanced LANDSAT Thematic Mapper (TM) data to compile the first comprehensive regional tectonic map of the Proterozoic Nubian Shield exposed in the northern Red Sea Hills of northeastern Sudan. The status of the significant objectives of this study is given. Pertinent published and unpublished geologic literature and maps of the northern Red Sea Hills were reviewed to establish the geologic framework of the region. Thematic Mapper imagery was processed for optimal base-map enhancements. Photo mosaics of enhanced images to serve as base maps for compilation of geologic information were completed. Interpretation of TM imagery to define and delineate structural and lithologic provinces was completed. Geologic information (petrologic and radiometric data) was compiled from the literature review onto base-map overlays. Evaluation of the tectonic evolution of the Nubian Shield based on the image interpretation and the compiled tectonic maps is continuing.
Mariano, John; Grauch, V.J.
1988-01-01
Aeromagnetic anomalies are produced by variations in the strength and direction of the magnetic field of rocks that include magnetic minerals, commonly magnetite. Patterns of anomalies on aeromagnetic maps can reveal structures - for example, faults which have juxtaposed magnetic rocks against non-magnetic rocks, or areas of alteration where magnetic minerals have been destroyed by hydrothermal activity. Tectonic features of regional extent may not become apparent until a number of aeromagnetic surveys have been compiled and plotted at the same scale. Commonly the compilation involves piecing together data from surveys that were flown at different times with widely disparate flight specifications and data reduction procedures. The data may be compiled into a composite map, where all the pieces are plotted onto one map without regard to the difference in flight elevation and datum, or they may be compiled into a merged map, where all survey data are analytically reduced to a common flight elevation and datum, and then digitally merged at the survey boundaries. The composite map retains the original resolution of all the survey data, but computer methods to enhance regional features crossing the survey boundaries may not be applied. On the other hand, computer methods can be applied to the merged data, but the accuracy of the data may be slightly diminished.
Low-Temperature Hydrothermal Resource Potential
Katherine Young
2016-06-30
Compilation of data (spreadsheet and shapefiles) for several low-temperature resource types, including isolated springs and wells, delineated area convection systems, sedimentary basins and coastal plains sedimentary systems. For each system, we include estimates of the accessible resource base, mean extractable resource and beneficial heat. Data compiled from USGS and other sources. The paper (submitted to GRC 2016) describing the methodology and analysis is also included.
Heat Capacity of Room-Temperature Ionic Liquids: A Critical Review
NASA Astrophysics Data System (ADS)
Paulechka, Yauheni U.
2010-09-01
Experimental data on heat capacity of room-temperature ionic liquids in the liquid state were compiled and critically evaluated. The compilation contains data for 102 aprotic ionic liquids from 63 literature references and covers the period of time from 1998 through the end of February 2010. Parameters of correlating equations for temperature dependence of the heat capacities were developed.
Occupation and Skill Change in the European Retail Sector. A Study for CECD and EURO-FIET.
ERIC Educational Resources Information Center
Spilsbury, Mark; And Others
A study examined occupational and skill change within the retail sector in Europe so that training and development schemes can be put in place. Data were collected in the following ways: compilation of information on the level of employment, skills, and training in the retail sector of European countries; compilation of national data on…
Establishing Malware Attribution and Binary Provenance Using Multicompilation Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramshaw, M. J.
2017-07-28
Malware is a serious problem for computer systems and costs businesses and customers billions of dollars a year in addition to compromising their private information. Detecting malware is particularly difficult because malware source code can be compiled in many different ways and generate many different digital signatures, which causes problems for most anti-malware programs that rely on static signature detection. Our project uses a convolutional neural network to identify malware programs but these require large amounts of data to be effective. Towards that end, we gather thousands of source code files from publicly available programming contest sites and compile them with several different compilers and flags. Building upon current research, we then transform these binary files into image representations and use them to train a long-term recurrent convolutional neural network that will eventually be used to identify how a malware binary was compiled. This information will include the compiler, version of the compiler and the options used in compilation, information which can be critical in determining where a malware program came from and even who authored it.
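The binary-to-image step mentioned above is commonly done by treating each byte of the binary as one grayscale pixel and wrapping the byte stream into fixed-width rows. The following sketch shows that common transformation; the row width is an arbitrary choice here, and this is not necessarily the project's exact preprocessing pipeline.

import numpy as np

def binary_to_image(path, width=256):
    # Read the raw bytes of a compiled binary and reshape them into a 2-D
    # uint8 array; each byte becomes one grayscale pixel.
    data = np.fromfile(path, dtype=np.uint8)
    rows = len(data) // width
    return data[: rows * width].reshape(rows, width)

# Usage (path is a placeholder): the resulting array can be fed to an image model.
# image = binary_to_image("/tmp/sample_binary", width=256)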
The Health Impact Assessment (HIA) Resource and Tool Compilation
The compilation includes tools and resources related to the HIA process and can be used to collect and analyze data, establish a baseline profile, assess potential health impacts, and establish benchmarks and indicators for monitoring and evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheehan, M.A.
1997-04-01
This compilation is the annual cumulation of bibliographic data and abstracts for the formal regulatory and technical reports issued by the U.S. Nuclear Regulatory Commission (NRC) Staff and its contractors.
Cables and connectors: A compilation
NASA Technical Reports Server (NTRS)
1974-01-01
A technological compilation on devices and techniques for various types of electrical cables and connections is presented. Data are reported under three sections: flat conductor cable technology, newly developed electrical connectors, and miscellaneous articles and information on cables and connector techniques.
Description of water-resource-related data compiled for Reno County, south-central Kansas
Hansen, C.V.
1993-01-01
Water-resource-related data for sites in Reno County, Kansas were compiled in cooperation with the Reno County Health Department as part of the Kansas Department of Health and Environment's Local Environmental Protection Program (LEPP). These data were entered into a relational data-base management system (RDBMS) to facilitate the spatial analysis required to meet the LEPP goals of developing plans for nonpoint-source management and for public- water-supply protection. The data in the RDBMS are organized into digital data sets. The data sets contain the water-resource-related data compiled by the U.S. Geological Survey for 958 wells; by the Kansas Department of Health and Environment for 3,936 wells; by the Kansas Department of Health and Environment for 51 wells, 18 public-water-supply distribution systems, and 7 streams; by the Kansas State Board of Agriculture for 643 wells and 23 streams or surface-water impoundments; and by well-drilling contractors and the Kansas Geological Survey for 96 wells. The data in these five data sets are available from the Reno County Health Department in Hutchinson, Kansas. (USGS)
NASA Astrophysics Data System (ADS)
Arndt, Jan Erik; Schenke, Hans Werner; Jakobsson, Martin; Nitsche, Frank O.; Buys, Gwen; Goleby, Bruce; Rebesco, Michele; Bohoyo, Fernando; Hong, Jongkuk; Black, Jenny; Greku, Rudolf; Udintsev, Gleb; Barrios, Felipe; Reynoso-Peralta, Walter; Taisei, Morishita; Wigley, Rochelle
2013-06-01
International Bathymetric Chart of the Southern Ocean (IBCSO) Version 1.0 is a new digital bathymetric model (DBM) portraying the seafloor of the circum-Antarctic waters south of 60°S. IBCSO is a regional mapping project of the General Bathymetric Chart of the Oceans (GEBCO). The IBCSO Version 1.0 DBM has been compiled from all available bathymetric data collectively gathered by more than 30 institutions from 15 countries. These data include multibeam and single-beam echo soundings, digitized depths from nautical charts, regional bathymetric gridded compilations, and predicted bathymetry. Specific gridding techniques were applied to compile the DBM from the bathymetric data of different origin, spatial distribution, resolution, and quality. The IBCSO Version 1.0 DBM has a resolution of 500 × 500 m, based on a polar stereographic projection, and is publicly available together with a digital chart for printing from the project website (www.ibcso.org) and at
Shear-wave velocity compilation for Northridge strong-motion recording sites
Borcherdt, Roger D.; Fumal, Thomas E.
2002-01-01
Borehole and other geotechnical information collected at the strong-motion recording sites of the Northridge earthquake of January 17, 1994 provide an important new basis for the characterization of local site conditions. These geotechnical data, when combined with analysis of strong-motion recordings, provide an empirical basis to evaluate site coefficients used in current versions of US building codes. Shear-wave-velocity estimates to a depth of 30 meters are derived for 176 strong-motion recording sites. The estimates are based on borehole shear-velocity logs, physical property logs, correlations with physical properties and digital geologic maps. Surface-wave velocity measurements and standard penetration data are compiled as additional constraints. These data as compiled from a variety of databases are presented via GIS maps and corresponding tables to facilitate use by other investigators.
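The 30-meter estimates referred to above are conventionally expressed as the time-averaged shear-wave velocity over the top 30 m: thirty meters divided by the vertical travel time through the layers above 30 m depth. The sketch below shows that standard calculation only; it does not reproduce the authors' correlations with physical-property logs or geologic maps, and the layer values in the example are invented.

def vs30(thicknesses_m, velocities_mps):
    # Time-averaged shear-wave velocity over the top 30 m: 30 m divided by the
    # total travel time through the layers, truncating the profile at 30 m depth.
    depth, travel_time = 0.0, 0.0
    for thickness, velocity in zip(thicknesses_m, velocities_mps):
        use = min(thickness, 30.0 - depth)
        travel_time += use / velocity
        depth += use
        if depth >= 30.0:
            break
    return 30.0 / travel_time

# Example layered profile (thicknesses in m, velocities in m/s).
print(vs30([5.0, 10.0, 20.0], [180.0, 300.0, 500.0]))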
Time-resolved x-ray scattering instrumentation
Borso, C.S.
1985-11-21
An apparatus and method for increased speed and efficiency of data compilation and analysis in real time is presented in this disclosure. Data is sensed and grouped in combinations in accordance with predetermined logic. The combinations are grouped so that a simplified reduced signal results, such as pairwise summing of data values having offsetting algebraic signs, thereby reducing the magnitude of the net pair sum. Bit storage requirements are reduced and speed of data compilation and analysis is increased by manipulation of shorter bit length data values, making real time evaluation possible.
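A toy sketch of the pairing idea in the disclosure: adjacent values whose signs offset are summed, so the stored magnitudes (and hence the bit widths needed to hold them) shrink. The grouping rule below is invented for illustration and is not the patented logic.

def pairwise_reduce(values):
    # Combine adjacent values of opposite sign into their (small) net sum;
    # values that do not pair off are passed through unchanged.
    out, i = [], 0
    while i < len(values):
        if i + 1 < len(values) and values[i] * values[i + 1] < 0:
            out.append(values[i] + values[i + 1])
            i += 2
        else:
            out.append(values[i])
            i += 1
    return out

print(pairwise_reduce([1000, -998, 7, 512, -509]))   # [2, 7, 3]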
COMPILATION OF CONVERSION COEFFICIENTS FOR THE DOSE TO THE LENS OF THE EYE.
Behrens, R
2017-04-28
A compilation of fluence-to-absorbed dose conversion coefficients for the dose to the lens of the eye is presented. The compilation consists of both previously published data and newly calculated values: photon data (5 keV-50 MeV for both kerma approximation and full electron transport), electron data (10 keV-50 MeV), and positron data (1 keV-50 MeV) - neutron data will be published separately. Values are given for angles of incidence from 0° up to 90° in steps of 15° and for rotational irradiation. The data presented can be downloaded from this article's website and they are ready for use by Report Committee (RC) 26. This committee has been set up by the International Commission on Radiation Units and Measurements (ICRU) and is working on a 'proposal for a redefinition of the operational quantities for external radiation exposure'. © The Author 2016. Published by Oxford University Press.
Naumova, Vera V.; Patuk, Mikhail I.; Kapitanchuk, Marina Yu.; Nokleberg, Warren J.; Khanchuk, Alexander I.; Parfenov, Leonid M.; Rodionov, Sergey M.; Miller, Robert J.; Diggles, Michael F.
2006-01-01
This is the online version of a CD-ROM publication. It contains all of the data that are on the disc but extra files have been removed: index files, software installers, and Windows autolaunch files. The purpose of this publication is to provide a high-quality spatial data compilation (Geographical Information System or GIS) of geodynamic, mineral deposit, and metallogenic belt maps, and descriptive data for Northeast Asia for customers and users. This area consists of Eastern Siberia, Russian Far East, Mongolia, northern China, South Korea, and Japan. The GIS compilation contains integrated spatial data for: (1) a geodynamics map at a scale of 1:5,000,000; (2) a mineral deposit location map; (3) metallogenic belt maps; (4) detailed descriptions of geologic units, including tectonostratigraphic terranes, cratons, major melange zones, and overlap assemblages, with references; (5) detailed descriptions of metallogenic belts with references; (6) detailed mineral deposit descriptions with references; and (7) page-size stratigraphic columns for major terranes.
Compilation of the data-base of the star catalogue by ADABAS.
NASA Astrophysics Data System (ADS)
Ishikawa, T.
A data-base of the FK4 Star Catalogue is compiled by using the HITAC M-280H in the Computer Center of Tokyo University and a commercial data-base management system (DBMS), ADABAS. The purpose of this attempt is to examine whether ADABAS, which could be regarded as representative of the currently available DBMSs developed mainly for business and information-retrieval purposes, proves itself useful for handling mass numerical data like the star catalogue data. It is concluded that the data-base could really be a convenient way of storing and utilizing the star catalogue data.
Martian Lobate Debris Aprons: Compilation of a New GIS-Based Global Map
NASA Astrophysics Data System (ADS)
Chuang, F. C.; Crown, D. A.; Berman, D. C.; Skinner, J. A.; Tanaka, K. L.
2011-03-01
Compilation of a new GIS-based global map of lobate debris aprons is underway to better understand the global inventory of these relict ice-rich features. We welcome contributions of GIS-based data from other investigators.
Status and future of extraterrestrial mapping programs
NASA Technical Reports Server (NTRS)
Batson, R. M.
1981-01-01
Extensive mapping programs have been completed for the Earth's Moon and for the planet Mercury. Mars, Venus, and the Galilean satellites of Jupiter (Io, Europa, Ganymede, and Callisto), are currently being mapped. The two Voyager spacecraft are expected to return data from which maps can be made of as many as six of the satellites of Saturn and two or more of the satellites of Uranus. The standard reconnaissance mapping scales used for the planets are 1:25,000,000 and 1:5,000,000; where resolution of data warrants, maps are compiled at the larger scales of 1:2,000,000, 1:1,000,000 and 1:250,000. Planimetric maps of a particular planet are compiled first. The first spacecraft to visit a planet is not designed to return data from which elevations can be determined. As exploration becomes more intensive, more sophisticated missions return photogrammetric and other data to permit compilation of contour maps.
System Data Model (SDM) Source Code
2012-08-23
Excerpts from the build files in the source distribution:
CROSS_COMPILE=/opt/gumstix/build_arm_nofpu/staging_dir/bin/arm-linux-uclibcgnueabi-
CC=$(CROSS_COMPILE)gcc
CXX=$(CROSS_COMPILE)g++
AR...
...and flags to pass to it
LEX=flex
LEXFLAGS=-B
## The parser generator to invoke and flags to pass to it
YACC=bison
YACCFLAGS...
# Point to default PetaLinux root directory
ifndef ROOTDIR
ROOTDIR=$(PETALINUX)/software/petalinux-dist
endif
PATH:=$(PATH...
Low-Temperature Hydrothermal Resource Potential Estimate
Katherine Young
2016-06-30
Compilation of data (spreadsheet and shapefiles) for several low-temperature resource types, including isolated springs and wells, delineated area convection systems, sedimentary basins and coastal plains sedimentary systems. For each system, we include estimates of the accessible resource base, mean extractable resource and beneficial heat. Data compiled from USGS and other sources. The paper (submitted to GRC 2016) describing the methodology and analysis is also included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
John McCord
2007-09-01
This report documents transport data and data analyses for Yucca Flat/Climax Mine CAU 97. The purpose of the data compilation and related analyses is to provide the primary reference to support parameterization of the Yucca Flat/Climax Mine CAU transport model. Specific task objectives were as follows: • Identify and compile currently available transport parameter data and supporting information that may be relevant to the Yucca Flat/Climax Mine CAU. • Assess the level of quality of the data and associated documentation. • Analyze the data to derive expected values and estimates of the associated uncertainty and variability. The scope of this document includes the compilation and assessment of data and information relevant to transport parameters for the Yucca Flat/Climax Mine CAU subsurface within the context of unclassified source-term contamination. Data types of interest include mineralogy, aqueous chemistry, matrix and effective porosity, dispersivity, matrix diffusion, matrix and fracture sorption, and colloid-facilitated transport parameters.
Chernoff, Carlotta B.; Orris, G.J.
2002-01-01
An inventory of more than 1,600 world phosphate mines, deposits, and occurrences was compiled from smaller data sets collected as part of multiple research efforts by Carlotta Chernoff, University of Arizona, and Greta Orris, U.S. Geological Survey. These data have been utilized during studies of black shale depositional environments and to construct phosphate deposit models. The compiled data have been edited for consistency and additional location information has been added where possible. The database of compiled phosphate information is being released in two sections: the geologic data in one section and the location and mineral economic data in the second. This report, U.S. Geological Survey Open-File Report 02–156–A, contains the geologic data and is best used with the complementary data contained in Open-File Report 02–156–B. U.S. Geological Survey Open-File Report 02–156–B contains commodity data, location and analytical data, a variety of mineral economic data, reference information, and pointers to related records in the U.S. Geological Survey national mineral databases, MAS/MILS and MRDS.
Reaeration equations derived from U.S. geological survey database
Melching, C.S.; Flores, H.E.
1999-01-01
Accurate estimation of the reaeration-rate coefficient (K2) is extremely important for waste-load allocation. Currently available K2 estimation equations generally yield poor estimates when applied to stream conditions different from those for which the equations were derived, because they were derived from small databases composed of potentially highly inaccurate measurements. A large data set of K2 measurements made with tracer-gas methods was compiled from U.S. Geological Survey studies. This compilation included 493 reaches on 166 streams in 23 states. Careful screening to detect and eliminate erroneous measurements reduced the data set to 371 measurements. These measurements were divided into four subgroups on the basis of flow regime (channel control or pool and riffle) and stream scale (discharge greater than or less than 0.556 m3/s). Multiple linear regression in logarithms was applied to relate K2 to 12 stream hydraulic and water-quality characteristics. The resulting best-estimation equations had the form of semiempirical equations that included the rate of energy dissipation and discharge or depth and width as variables. For equation verification, a data set of K2 measurements made with tracer-gas procedures by other agencies was compiled from the literature. This compilation included 127 reaches on at least 24 streams in at least seven states. The standard error of estimate obtained when applying the developed equations to the U.S. Geological Survey data set ranged from 44 to 61%, whereas the standard error of estimate was 78% when applied to the verification data set.
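The estimation equations described above come from multiple linear regression in logarithms; a minimal sketch of that fitting step is shown below. The predictor names and the data values are placeholders, and the study's variable selection, subgrouping by flow regime, and screening are not reproduced here.

import numpy as np

def fit_log_regression(k2, predictors):
    # Fit log(K2) = b0 + b1*log(x1) + b2*log(x2) + ... by ordinary least squares.
    X = np.column_stack(
        [np.ones(len(k2))] + [np.log(np.asarray(p, dtype=float)) for p in predictors]
    )
    y = np.log(np.asarray(k2, dtype=float))
    coefficients, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return coefficients

# Example with made-up data: energy dissipation rate and discharge as predictors.
k2 = [5.2, 12.0, 3.1, 20.5]
energy_dissipation = [0.8, 3.0, 0.4, 6.1]
discharge = [1.2, 0.4, 2.5, 0.3]
print(fit_log_regression(k2, [energy_dissipation, discharge]))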
Scalable and Accurate SMT-Based Model Checking of Data Flow Systems
2013-10-31
CVC4 can be accessed from C, C++, Java, and OCaml, and provisions have been made to support other languages. CVC4 can be compiled and run on various flavors of Linux, Mac OS...
1987-06-21
AVF Control Number: AVF-VSR-100.0987 87-04-09-VRX. Ada Compiler Validation Summary Report.
HAL/S - The programming language for Shuttle
NASA Technical Reports Server (NTRS)
Martin, F. H.
1974-01-01
HAL/S is a higher order language and system, now operational, adopted by NASA for programming Space Shuttle on-board software. Program reliability is enhanced through language clarity and readability, modularity through program structure, and protection of code and data. Salient features of HAL/S include output orientation, automatic checking (with strictly enforced compiler rules), the availability of linear algebra, real-time control, a statement-level simulator, and compiler transferability (for applying HAL/S to additional object and host computers). The compiler is described briefly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audi, G., E-mail: amdc.audi@gmail.com; Blaum, K.; Block, M.
In order to avoid errors and confusion that may arise from the recent publication of a paper entitled “Atomic Mass Compilation 2012”, we explain the important difference between a compilation and an evaluation; the former is a necessary but insufficient condition for the latter. The simple list of averaged mass values offered by the “Atomic Mass Compilation” uses none of the numerous links and correlations present in the large body of input data that are carefully maintained within the “Atomic Mass Evaluation”. As such, the mere compilation can only produce results of inferior accuracy. Illustrative examples are given.
Snyder, Stephen L.; Geister, Daniel W.; Daniels, David L.; Ervin, C. Patrick
2004-01-01
Principal facts for 40,488 gravity stations covering the entire state of Wisconsin are presented here in digital form. This is a compilation of previously published data collected between 1948 and 1992 from numerous sources, along with over 10,000 new gravity stations collected by the USGS since 1999. Also included are 550 gravity stations from previously unpublished sources. Observed gravity and complete-Bouguer gravity anomaly data for this statewide compilation are included here. Altogether, 14 individual surveys are presented here.
NASA Technical Reports Server (NTRS)
Warren, Wayne H., Jr.
1989-01-01
The machine readable version of the compilation, as it is currently being distributed from the Astronomical Data Center, is described. The catalog contains redshifts and velocity dispersions for all Abell clusters for which these data had been published up to 1986 July. Also included are 1950 equatorial coordinates for the centers of the listed clusters, numbers of observations used to determine the redshifts, and bibliographical references citing the data sources.
Compile-time estimation of communication costs in multicomputers
NASA Technical Reports Server (NTRS)
Gupta, Manish; Banerjee, Prithviraj
1991-01-01
An important problem facing numerous research projects on parallelizing compilers for distributed memory machines is that of automatically determining a suitable data partitioning scheme for a program. Any strategy for automatic data partitioning needs a mechanism for estimating the performance of a program under a given partitioning scheme, the most crucial part of which involves determining the communication costs incurred by the program. A methodology is described for estimating the communication costs at compile-time as functions of the numbers of processors over which various arrays are distributed. A strategy is described along with its theoretical basis, for making program transformations that expose opportunities for combining of messages, leading to considerable savings in the communication costs. For certain loops with regular dependences, the compiler can detect the possibility of pipelining, and thus estimate communication costs more accurately than it could otherwise. These results are of great significance to any parallelization system supporting numeric applications on multicomputers. In particular, they lay down a framework for effective synthesis of communication on multicomputers from sequential program references.
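The cost model below is only an illustrative sketch of the kind of compile-time estimate the abstract describes, not the authors' methodology: the communication cost of a simple nearest-neighbour shift on a block-distributed array is written as a function of the number of processors P, with assumed startup (alpha) and per-element (beta) parameters, and a second function shows how combining messages reduces the startup term.

```python
# Illustrative sketch only: symbolic-style communication-cost estimates as a
# function of the processor count P; alpha and beta are assumed machine parameters.
def shift_cost(P, alpha=100.0, beta=1.0):
    """Nearest-neighbour shift of one halo element across P block-distributed pieces."""
    if P <= 1:
        return 0.0
    return (P - 1) * (alpha + beta * 1)   # one message per interior boundary

def combined_shift_cost(P, k, alpha=100.0, beta=1.0):
    """Combining k element transfers into one message pays the startup cost once."""
    if P <= 1:
        return 0.0
    return (P - 1) * (alpha + beta * k)

# Example: combining 8 single-element messages per boundary into one larger message.
print(8 * shift_cost(16), combined_shift_cost(16, 8))
```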
Aeromagnetic map compilation: Procedures for merging and an example from Washington
Finn, C.
1999-01-01
Rocks in Antarctica and offshore have widely diverse magnetic properties. Consequently, aeromagnetic data collected there can improve knowledge of the geologic, tectonic and geothermal characteristics of the region. Aeromagnetic data can map concealed structures such as faults, folds and dikes, ascertain basin thickness, and locate buried volcanic rocks, as well as some intrusive and metamorphic rocks. Gridded, composite data sets allow a view of continental-scale trends that individual data sets do not provide and link widely separated areas of outcrop and disparate geologic studies. Individual magnetic surveys must be processed so that they match adjacent surveys prior to merging. A consistent representation of the Earth's magnetic field (International Geomagnetic Reference Field (IGRF)) must be removed from each data set. All data sets need to be analytically continued to the same flight elevation with their datums shifted to match adjacent data. I advocate minimal processing to best represent the individual surveys in the merged compilation. An example of a compilation of aeromagnetic surveys from Washington illustrates the utility of aeromagnetic maps for providing synoptic views of regional tectonic features.
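A minimal numerical sketch of the datum-matching step mentioned above follows; it assumes both surveys have already had the IGRF removed and have been continued to a common elevation, and it simply shifts one grid by the mean difference in the overlap region. This is an assumption-laden toy, not the author's processing sequence.

```python
# Minimal sketch of leveling one survey to its neighbour before merging.
import numpy as np

def datum_shift(survey_a, survey_b, overlap_mask):
    """Shift survey_b so its mean matches survey_a within the overlap region.

    survey_a, survey_b : 2-D anomaly grids (nT) on the same mesh, NaN where no data
    overlap_mask       : boolean array, True where both grids have valid data
    """
    shift = np.nanmean(survey_a[overlap_mask] - survey_b[overlap_mask])
    return survey_b + shift
```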
VizieR Online Data Catalog: X-ray supernova remnants in LMC (Maggi+, 2016)
NASA Astrophysics Data System (ADS)
Maggi, P.; Haberl, F.; Kavanagh, P. J.; Sasaki, M.; Bozzetto, L. M.; Filipovic, M. D.; Vasilopoulos, G.; Pietsch, W.; Points, S. D.; Chu, Y.-H.; Dickel, J.; Ehle, M.; Williams, R.; Greiner, J.
2016-03-01
The processing of all available XMM-Newton data in the LMC region, and those of the VLP survey in particular, was done with the data reduction pipeline developed in our research group over several years. Various non-X-ray data were used to supplement the XMM-Newton observations. They allow us to assess e.g. the relation between the population of SNRs and large scale structure of the LMC, or to evaluate doubtful candidates in the sample compilation. We compiled a sample of 59 definite SNRs, cleaned of misclassified objects and doubtful candidates. (2 data files).
Compiler analysis for irregular problems in FORTRAN D
NASA Technical Reports Server (NTRS)
Vonhanxleden, Reinhard; Kennedy, Ken; Koelbel, Charles; Das, Raja; Saltz, Joel
1992-01-01
We developed a dataflow framework which provides a basis for rigorously defining strategies to make use of runtime preprocessing methods for distributed memory multiprocessors. In many programs, several loops access the same off-processor memory locations. Our runtime support gives us a mechanism for tracking and reusing copies of off-processor data. A key aspect of our compiler analysis strategy is to determine when it is safe to reuse copies of off-processor data. Another crucial function of the compiler analysis is to identify situations which allow runtime preprocessing overheads to be amortized. This dataflow analysis will make it possible to effectively use the results of interprocedural analysis in our efforts to reduce interprocessor communication and the need for runtime preprocessing.
Compilation of 3D global conductivity model of the Earth for space weather applications
NASA Astrophysics Data System (ADS)
Alekseev, Dmitry; Kuvshinov, Alexey; Palshin, Nikolay
2015-07-01
We have compiled a global three-dimensional (3D) conductivity model of the Earth with the ultimate goal of realistic simulation of geomagnetically induced currents (GIC), which pose a potential threat to man-made electric systems. Bearing in mind the intrinsic frequency range of the most intense disturbances (magnetospheric substorms), with typical periods ranging from a few minutes to a few hours, the compiled 3D model represents the structure in the depth range of 0-100 km, including seawater, sediments, the Earth's crust, and partly the lithosphere/asthenosphere. More explicitly, the model consists of a series of spherical layers whose vertical and lateral boundaries are established based on available data. To compile the model, global maps of bathymetry, sediment thickness, upper and lower crust thicknesses, and lithosphere thickness are utilized. All maps are re-interpolated on a common grid of 0.25×0.25 degree lateral spacing. Once the geometry of the different structures is specified, each element of the structure is assigned either a certain conductivity value or a conductivity-versus-depth distribution, according to available laboratory data and conversion laws. A numerical formalism developed for compilation of the model allows for its further refinement by incorporating regional 3D conductivity distributions inferred from real electromagnetic data. So far we have included in our model four regional conductivity models available from recent publications, namely, a surface conductance model of Russia and 3D conductivity models of Fennoscandia, Australia, and the northwestern United States.
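The fragment below is a conceptual sketch (not the authors' code) of the assembly step described above: thickness maps are assumed to sit on the common 0.25 x 0.25 degree grid, and each structural element is assigned a bulk conductivity. The thickness and conductivity numbers are placeholders rather than values from the model.

```python
# Conceptual sketch: depth-integrated conductance of layered elements on a
# common 0.25-degree grid; all numbers below are placeholders.
import numpy as np

lat = np.arange(-90.0, 90.0, 0.25)
lon = np.arange(-180.0, 180.0, 0.25)

seawater_km = np.zeros((lat.size, lon.size))       # would come from bathymetry maps
sediment_km = np.full((lat.size, lon.size), 1.0)   # would come from sediment-thickness maps

sigma = {"seawater": 3.2, "sediments": 0.1}        # assumed bulk conductivities, S/m

def layer_conductance(thickness_km, sigma_s_per_m):
    """Depth-integrated conductance (S) of one layer: thickness times conductivity."""
    return thickness_km * 1.0e3 * sigma_s_per_m

total_conductance = (layer_conductance(seawater_km, sigma["seawater"]) +
                     layer_conductance(sediment_km, sigma["sediments"]))
```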
Zuellig, Robert E.; Heinold, Brian D.; Kondratieff, Boris C.; Ruiter, David E.
2012-01-01
The U.S. Geological Survey, in cooperation with the C.P. Gillette Museum of Arthropod Diversity (Colorado State University, Fort Collins, Colorado), compiled collection record data to document the historical and present-day occurrence of mayfly, stonefly, and caddisfly species in the South Platte River Basin. Data were compiled from records collected between 1873 and 2010 to identify where regional knowledge about species occurrence in the basin is lacking and to encourage future researchers to locate additional populations of these poorly understood but very important organisms. This report provides a description of how data were compiled, a map of approximate collection locations, a listing of the most recent collection records from unique locations, general remarks for each species, a species list with selected summary information, and distribution maps of species collection records.
NASA Technical Reports Server (NTRS)
Wier, C. E.; Wobber, F. J. (Principal Investigator); Russell, O. R.; Amato, R. V.
1972-01-01
The author has identified the following significant results. Various data compilation and analysis activities in support of ERTS-1 imagery interpretation are in progress or are completed. These include the compilation of mine accident data, areas of mine roof instability, and the analysis of high altitude color infrared photography and low altitude color and color infrared photography which was acquired by NASA in support of the project. The photography reveals that many fracture lineaments are detectable through a varied thickness of glacial till. These data will be compiled on a series of 1:250,000 scale base maps and evaluated for a correlation between fracture zones and mine accidents and roof falls. Due to the high occurrence of cloud cover in the project area and to the delay in imagery shipments, little progress has been made in the analysis of ERTS-1 imagery.
Compilation of DNA sequences of Escherichia coli (update 1991)
Kröger, Manfred; Wahl, Ralf; Rice, Peter
1991-01-01
We have compiled the DNA sequence data for E.coli available from the GENBANK and EMBL data libraries and, over a period of several years, independently from the literature. This is the third listing, replacing the former listing and increasing it by roughly one fifth. However, in order to save space this printed version contains DNA sequence information only. The complete compilation is now available in machine readable form from the EMBL data library (ECD release 6). After deletion of all detected overlaps, a total of 1 492 282 individual bp is found to be determined as of the beginning of 1991. This corresponds to a total of 31.62% of the entire E.coli chromosome, consisting of about 4,720 kbp. This number may actually be higher by some extra 2.5% derived from lysogenic bacteriophage lambda and various DNA sequences already received for statistical purposes only. PMID:2041799
Simulation and analysis of support hardware for multiple instruction rollback
NASA Technical Reports Server (NTRS)
Alewine, Neil J.
1992-01-01
Recently, a compiler-assisted approach to multiple instruction retry was developed. In this scheme, a read buffer of size 2N, where N represents the maximum instruction rollback distance, is used to resolve one type of data hazard. This hardware support helps to reduce code growth, compilation time, and some of the performance impacts associated with hazard resolution. The 2N read buffer size requirement of the compiler-assisted approach is worst case, assuring data redundancy for all data required but also providing some unnecessary redundancy. By adding extra bits in the operand field for source 1 and source 2 it becomes possible to design the read buffer to save only those values required, thus reducing the read buffer size requirement. This study measures the effect on performance of a DECstation 3100 running 10 application programs using 6 read buffer configurations at varying read buffer sizes.
Estimated water use, by county, in North Carolina, 1990
Terziotti, Silvia; Schrader, Tony P.; Treece, M.W.
1994-01-01
Data on water use in North Carolina were compiled for 1990 as part of a cooperative agreement between the U.S. Geological Survey and the Division of Water Resources of the North Carolina Department of Environment, Health, and Natural Resources. Data were compiled from a number of Federal, State, and private sources for the offstream water-use categories of public supply, domestic, commercial, industrial, mining, livestock, irrigation, and thermoelectric-power generation. Data also were collected for instream use from hydroelectric facilities. Total estimated offstream water use in the State for 1990 was about 8,940 million gallons per day. About 95 percent of the water withdrawn was from surface-water sources. Thermoelectric-power generation accounted for about 81 percent of all withdrawals. Data for instream water use for hydroelectric-power generation also were compiled. This instream water use totaled about 66,900 million gallons per day. Each water-use category is summarized in this report by county and source of water supply.
Karl, Susan M.; Blodgett, R.B.; Labay, Keith A.; Box, S.E.; Bradley, D.C.; Miller, M.L.; Wallace, W.K.; Baichtal, J.F.
2011-01-01
Information about fossils collected by U.S. Geological Survey, State of Alaska, academic, and industry geologists that has been reported in the literature or archived in reports from the former U.S. Geological Survey Branch of Paleontology and Stratigraphy is compiled on a plate and table in this report to provide comprehensive paleontologic age data for the Taylor Mountains quadrangle area in southwestern Alaska. The reports used to compile the table in this report were submitted by recognized paleontologic experts. Some of the information is derived from reports that date back almost 100 years. Many of the data are available in more detail in the Alaska Paleontological Database (http://www.alaskafossil.org/). The 287 entries in this table are shown on the accompanying plate, on which symbols representing the entries are color-coded by geologic age. This report represents the most comprehensive and most recently updated compilation of paleontologic data for this area.
OSCAR API for Real-Time Low-Power Multicores and Its Performance on Multicores and SMP Servers
NASA Astrophysics Data System (ADS)
Kimura, Keiji; Mase, Masayoshi; Mikami, Hiroki; Miyamoto, Takamichi; Shirako, Jun; Kasahara, Hironori
OSCAR (Optimally Scheduled Advanced Multiprocessor) API has been designed for real-time embedded low-power multicores to generate parallel programs for various multicores from different vendors by using the OSCAR parallelizing compiler. The OSCAR API has been developed by Waseda University in collaboration with Fujitsu Laboratory, Hitachi, NEC, Panasonic, Renesas Technology, and Toshiba in a METI/NEDO project entitled "Multicore Technology for Realtime Consumer Electronics." By using the OSCAR API as an interface between the OSCAR compiler and backend compilers, the OSCAR compiler enables hierarchical multigrain parallel processing with memory optimization under capacity restriction for cache memory, local memory, distributed shared memory, and on-chip/off-chip shared memory; data transfer using a DMA controller; and power reduction control using DVFS (Dynamic Voltage and Frequency Scaling), clock gating, and power gating for various embedded multicores. In addition, a parallelized program automatically generated by the OSCAR compiler with the OSCAR API can be compiled by ordinary OpenMP compilers since the OSCAR API is designed as a subset of OpenMP. This paper describes the OSCAR API and its compatibility with the OSCAR compiler by showing code examples. Performance evaluations of the OSCAR compiler and the OSCAR API are carried out using an IBM Power5+ workstation, an IBM Power6 high-end SMP server, and a newly developed consumer electronics multicore chip RP2 by Renesas, Hitachi, and Waseda. From the results of the scalability evaluation, it is found that, on average, the OSCAR compiler with the OSCAR API can exploit 5.8 times speedup over sequential execution on the Power5+ workstation with eight cores and 2.9 times speedup on RP2 with four cores, respectively. In addition, the OSCAR compiler can accelerate an IBM XL Fortran compiler up to 3.3 times on the Power6 SMP server. Due to low-power optimization on RP2, the OSCAR compiler with the OSCAR API achieves a maximum power reduction of 84% in the real-time execution mode.
NASA Astrophysics Data System (ADS)
Spencer, S.; Ogle, S. M.; Wirth, T. C.; Sivakami, G.
2016-12-01
The Intergovernmental Panel on Climate Change (IPCC) provides methods and guidance for estimating anthropogenic greenhouse gas emissions for reporting to the United Nations Framework Convention on Climate Change. The methods are comprehensive and require extensive data compilation, management, aggregation, documentation, and calculations of source and sink categories to achieve robust emissions estimates. IPCC Guidelines describe three estimation tiers that require increasing levels of country-specific data and method complexity. Use of higher tiers should improve overall accuracy and reduce uncertainty in estimates. The AFOLU sector represents a complex set of methods for estimating greenhouse gas emissions and carbon sinks. Major AFOLU emissions and sinks include carbon dioxide (CO2) from carbon stock change in biomass, dead organic matter and soils, urea or lime application to soils, and oxidation of carbon in drained organic soils; nitrous oxide (N2O) and methane (CH4) emissions from livestock management and biomass burning; N2O from organic amendments and fertilizer application to soils; and CH4 emissions from rice cultivation. To assist inventory compilers with calculating AFOLU-sector estimates, the Agriculture and Land Use Greenhouse Gas Inventory Tool (ALU) was designed to implement Tier 1 and 2 methods using IPCC Good Practice Guidance. It guides the compiler through activity data entry, emission factor assignment, and emissions calculations while carefully maintaining data integrity. ALU also provides IPCC defaults and can estimate uncertainty. ALU was designed to simplify the AFOLU inventory compilation process at regional or national scales; disaggregating the process into a series of steps reduces the potential for errors in the compilation process. An example application has been developed using ALU to estimate methane emissions from rice production in the United States.
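The snippet below sketches the basic Tier 1 pattern that a tool such as ALU automates (activity data multiplied by an emission factor); the numbers are hypothetical and are not IPCC defaults.

```python
# Minimal sketch of an IPCC Tier 1 style estimate: emissions = activity x factor.
def tier1_emissions(activity_data, emission_factor):
    """Units follow the inputs, e.g. hectare-days times kg CH4 per hectare-day."""
    return activity_data * emission_factor

# Hypothetical example for a rice-cultivation source category.
rice_ch4_kg = tier1_emissions(activity_data=1_000_000, emission_factor=1.3)
```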
Nitrogen and phosphorus data for surface water in the Upper Colorado River basin, Colorado, 1980-94
Wynn, K.H.; Spahr, N.E.
1997-01-01
This report documents, summarizes, and provides on 3.5-in. diskette the surface-water data collected from January 1980 through August 1994 for nitrogen and phosphorus in the Upper Colorado River Basin from the Colorado-Utah State line to the Continental Divide. Ancillary data for parameters, such as water temperature, streamflow, specific conductance, dissolved oxygen, pH, and alkalinity, also are compiled, if available. Data were retrieved from the U.S. Geological Survey National Water Information System and the U.S. Environmental Protection Agency STORET (STOrage and RETrieval) system. The water-quality data are presented for sites having five or more nutrient analyses that reflect ambient stream conditions. The compiled data base contains 4,927 samples from 123 sites. The median sample period of record for individual sites is 2.5 years, and the seventy-fifth percentile is about 12 years. Sixteen sites have only five samples each. The median number of samples per site is 14 samples, whereas the seventy-fifth percentile is 65 samples. The compiled data set was used in the design of a basinwide sampling network that incorporates sites that lack historic surface-water-quality data.
Data for Regional Heat flow Studies in and around Japan and its relationship to seismogenic layer
NASA Astrophysics Data System (ADS)
Tanaka, A.
2017-12-01
Heat flow is a fundamental parameter to constrain the thermal structure of the lithosphere. It also provides a constraint on lithospheric rheology, which is sensitive to temperature. General features of the heat flow distribution in and around Japan had been revealed by the early 1970s, and heat flow data have been continuously updated by further data compilation from mainly published data and investigations. These include additional data which were not published individually but were included in site-specific reports. Also, thermal conductivity measurements were conducted on cores from boreholes using a line-source device with a half-space type box probe and an optical scanning device, and previously unpublished thermal conductivities were compiled. It has been more than 10 years since the last published compilation and analysis of heat flow data by Tanaka et al. (2004), which published all of the heat flow data in the northwestern Pacific area (from 0° to 60°N and from 120° to 160°E) and geothermal gradient data in and around Japan. Because these added data and information are drawn from various sources, the updated database is compiled as separate datasets: heat flow, geothermal gradient, and thermal conductivity. The updated and improved database represents a considerable improvement over past updates and presents an opportunity to revisit the thermal state of the lithosphere along with other geophysical/geochemical constraints on heat flow extrapolation. The spatial distribution of the cut-off depth of shallow seismicity of Japan using relocated hypocentres during the last decade (Omuralieva et al., 2012) and this updated database are used to quantify the concept of temperature as a fundamental parameter for determining the seismogenic thickness.
Computer programs: Information retrieval and data analysis, a compilation
NASA Technical Reports Server (NTRS)
1972-01-01
The items presented in this compilation are divided into two sections. Section one treats of computer usage devoted to the retrieval of information that affords the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce it to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.
mmpdb: An Open-Source Matched Molecular Pair Platform for Large Multiproperty Data Sets.
Dalke, Andrew; Hert, Jérôme; Kramer, Christian
2018-05-29
Matched molecular pair analysis (MMPA) enables the automated and systematic compilation of medicinal chemistry rules from compound/property data sets. Here we present mmpdb, an open-source matched molecular pair (MMP) platform to create, compile, store, retrieve, and use MMP rules. mmpdb is suitable for the large data sets typically found in pharmaceutical and agrochemical companies and provides new algorithms for fragment canonicalization and stereochemistry handling. The platform is written in Python and based on the RDKit toolkit. It is freely available from https://github.com/rdkit/mmpdb .
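To make the matched-molecular-pair idea concrete, here is a toy sketch of the concept rather than mmpdb's actual API or algorithms: compounds sharing a constant "core" and differing by a small fragment are paired, and the property difference is attributed to the fragment transformation. The cores, fragments, and property values are invented.

```python
# Toy illustration of matched molecular pairs; not the mmpdb implementation.
from itertools import combinations
from collections import defaultdict

compounds = {
    # name: (shared core, variable fragment, measured property)
    "cpd1": ("phenyl-amide", "H", 5.2),
    "cpd2": ("phenyl-amide", "F", 5.9),
    "cpd3": ("phenyl-amide", "Cl", 6.1),
}

by_core = defaultdict(list)
for name, (core, frag, prop) in compounds.items():
    by_core[core].append((name, frag, prop))

rules = []
for members in by_core.values():
    for (n1, f1, p1), (n2, f2, p2) in combinations(members, 2):
        rules.append((f"{f1}>>{f2}", round(p2 - p1, 2)))  # transformation, property delta
print(rules)
```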
VizieR Online Data Catalog: Butterfly diagram wings (Leussu+, 2017)
NASA Astrophysics Data System (ADS)
Leussu, R.; Usoskin, I. G.; Senthamizh Pavai, V.; Diercke, A.; Arlt, R.; Mursula, K.
2016-11-01
fig1data.dat contains the separated wings in a butterfly diagram for sunspot groups from three different origins: Sunspot observations by S.H. Schwabe and G. Spoerer, and the RGO/SOON compilation. The latitudes for sunspot groups from the Schwabe and Spoerer data are given as size-weighted averages from sunspots belonging to each group. Latitudes for the RGO compilation are given as they are stated in the original data. The columns report the year, month, day, date [yr], latitude [deg], cycle, hemisphere, and data set tag. Northern hemisphere wings are tagged with "1" and southern hemisphere wings with "2". The data set tag is "1" for Schwabe data, "2" for Spoerer data and "3" for RGO data. (1 data file).
The National Geographic Names Data Base: Phase II instructions
Orth, Donald J.; Payne, Roger L.
1987-01-01
not recorded on topographic maps be added. The systematic collection of names from other sources, including maps, charts, and texts, is termed Phase II. In addition, specific types of features not compiled during Phase I are encoded and added to the data base. Other names of importance to researchers and users, such as historical and variant names, are also included. The rules and procedures for Phase II research, compilation, and encoding are contained in this publication.
Energygrams: Brief descriptions of energy technology
NASA Astrophysics Data System (ADS)
Simpson, W. F., Jr.
This compilation of technical notes (called Energygrams) is published by the Technical Information Center. Energygrams are usually one-page illustrated bulletins describing DOE technology or data and telling how to obtain the technical reports or other material on which they are based. Frequently a personal contact is given who can provide program information in addition to the data found in the reports. The compilation is organized by subject categories, and, within each category, Energygrams are presented alphabetically by Energygram title.
The Effect of Resolution on Detecting Visually Salient Preattentive Features
2015-06-01
Resolutions are shown in descending order (a–e). The plot compiles the areas of interest displayed in the images, and each symbol represents one of the images. Attention is drawn to particular regions in a scene by highly salient features, for example, the color of the flower discussed in the previous example.
Taylor, Charles J.; Nelson, Hugh L.
2008-01-01
Geospatial data needed to visualize and evaluate the hydrogeologic framework and distribution of karst features in the Interior Low Plateaus physiographic region of the central United States were compiled during 2004-2007 as part of the Ground-Water Resources Program Karst Hydrology Initiative (KHI) project. Because of the potential usefulness to environmental and water-resources regulators, private consultants, academic researchers, and others, the geospatial data files created during the KHI project are being made available to the public as a provisional regional karst dataset. To enhance accessibility and visualization, the geospatial data files have been compiled as ESRI ArcReader data folders and user interactive Published Map Files (.pmf files), all of which are catalogued by the boundaries of surface watersheds using U.S. Geological Survey (USGS) eight-digit hydrologic unit codes (HUC-8s). Specific karst features included in the dataset include mapped sinkhole locations, sinking (or disappearing) streams, internally drained catchments, karst springs inventoried in the USGS National Water Information System (NWIS) database, relic stream valleys, and karst flow paths obtained from results of previously reported water-tracer tests.
Huff, Julia A.; Clark, Dennis A.; Martin, Peter
2002-01-01
Lithologic and ground-water data were collected at 85 monitoring sites constructed in the Mojave Water Agency Management area in San Bernardino County, California, as part of a series of cooperative studies between the U.S. Geological Survey and the Mojave Water Agency. The data are being used to evaluate and address water-supply and water-quality issues. This report presents a compilation of the data collected at these sites from 1992 through 1998, including location and design of the monitoring sites, lithologic data, geophysical logs, ground-water-level measurements, and water-quality analyses. One to five small (generally 2-inch) diameter wells were installed at each of the 85 monitoring sites to collect depth-dependent hydrologic data. Lithologic logs were compiled from descriptions of drill cuttings collected at each site and from observations recorded during the drilling of the borehole. Generalized stratigraphic columns were compiled by grouping similar lithologic units. Geophysical logs provide information on the character of the lithologic units and on the presence of ground water and the chemical characteristics of that water. Water-level and water-quality data were collected periodically from the sites during 1992 through 1998.
Compilation of historical water-quality data for selected springs in Texas, by ecoregion
Heitmuller, Franklin T.; Williams, Iona P.
2006-01-01
Springs are important hydrologic features in Texas. A database of about 2,000 historically documented springs and available spring-flow measurements previously has been compiled and published, but water-quality data remain scattered in published sources. This report by the U.S. Geological Survey, in cooperation with the Texas Parks and Wildlife Department, documents the compilation of data for 232 springs in Texas on the basis of a set of criteria and the development of a water-quality database for the selected springs. The selection of springs for compilation of historical water-quality data in Texas was made using existing digital and hard-copy data, responses to mailed surveys, selection criteria established by various stakeholders, geographic information systems, and digital database queries. Most springs were selected by computing the highest mean spring flows for each Texas level III ecoregion. A brief assessment of the water-quality data for springs in Texas shows that few data are available in the Arizona/New Mexico Mountains, High Plains, East Central Texas Plains, Western Gulf Coastal Plain, and South Central Plains ecoregions. Water-quality data are more abundant for the Chihuahuan Deserts, Edwards Plateau, and Texas Blackland Prairies ecoregions. Selected constituent concentrations in Texas springs, including silica, calcium, magnesium, sodium, potassium, strontium, sulfate, chloride, fluoride, nitrate (nitrogen), dissolved solids, and hardness (as calcium carbonate) are comparatively high in the Chihuahuan Deserts, Southwestern Tablelands, Central Great Plains, and Cross Timbers ecoregions, mostly as a result of subsurface geology. Comparatively low concentrations of selected constituents in Texas springs are associated with the Arizona/New Mexico Mountains, Southern Texas Plains, East Central Texas Plains, and South Central Plains ecoregions.
Usefulness of Compile-Time Restructuring of LGDF Programs in Throughput- Critical Applications
1993-09-01
efficiency of the buffers. Much overhead can be reduced effectively by using the node and arc attributes of the data flow graph at compile-time to ... intolerable delays and insufficient buffer space, especially under high loads. THESIS SCOPE AND CONTRIBUTION: The focus of this work is on compile-time
Recently published protein sequences. I.
NASA Technical Reports Server (NTRS)
Jukes, T. H.; Holmquist, R.
1972-01-01
Some polypeptide sequences that have been published in the 1972 scientific literature are listed. Only selected sequences are included. The compilation has two objectives: to assemble current information between periods when more comprehensive compilations are published, and to encourage the use of data that do not include arrangements of unsequenced peptides for 'maximum homology'.
Communications techniques and equipment: A compilation
NASA Technical Reports Server (NTRS)
1975-01-01
This Compilation is devoted to equipment and techniques in the field of communications. It contains three sections. One section is on telemetry, including articles on radar and antennas. The second section describes techniques and equipment for coding and handling data. The third and final section includes descriptions of amplifiers, receivers, and other communications subsystems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Chao
Sparx, a new environment for Cryo-EM image processing; Cryo-EM, Single particle reconstruction, principal component analysis; Hardware Req.: PC, MAC, Supercomputer, Mainframe, Multiplatform, Workstation. Software Req.: operating system is Unix; Compiler C++; type of files: source code, object library, executable modules, compilation instructions; sample problem input data. Location/transmission: http://sparx-em.org; User manual & paper: http://sparx-em.org;
An Enrollment Profile of Nebraska Postsecondary Education...A Staff Report to the Commission.
ERIC Educational Resources Information Center
Nebraska Coordinating Commission for Postsecondary Education, Lincoln.
This enrollment profile is a compilation of enrollment information pertinent to postsecondary education planning. The document contains a compilation of enrollment information collected from a variety of state and national data sources including IPEDS and NEEDS surveys. Nebraska's public and private postsecondary institutions have seen a stable…
Read buffer optimizations to support compiler-assisted multiple instruction retry
NASA Technical Reports Server (NTRS)
Alewine, N. J.; Fuchs, W. K.; Hwu, W. M.
1993-01-01
Multiple instruction retry is a recovery mechanism for transient processor faults. We previously developed a compiler-assisted approach to multiple instruction retry in which a read buffer of size 2N (where N represents the maximum instruction rollback distance) was used to resolve some data hazards while the compiler resolved the remaining hazards. The compiler-assisted scheme was shown to reduce the performance overhead and/or hardware complexity normally associated with hardware-only retry schemes. This paper examines the size and design of the read buffer. We establish a practical lower bound and average size requirement for the read buffer by modifying the scheme to save only the data required for rollback. The study measures the effect on the performance of a DECstation 3100 running ten application programs using six read buffer configurations with varying read buffer sizes. Two alternative configurations are shown to be the most efficient and differed depending on whether split-cycle-saves are assumed. Up to a 55 percent read buffer size reduction is achievable with an average reduction of 39 percent given the most efficient read buffer configuration and a variety of applications.
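The toy simulation below is only a sketch of the sizing argument in this abstract, under invented assumptions about the instruction trace: the worst-case buffer must hold two source operands for each of the N instructions in the rollback window, while saving only operands actually read yields the smaller occupancy the paper measures.

```python
# Illustrative sketch of worst-case (2N) versus actually-needed read-buffer entries.
def read_buffer_occupancy(trace, N):
    """Return (worst_case_entries, max_entries_actually_needed) over a window of N."""
    worst_case = 2 * N
    max_needed = 0
    for i in range(len(trace)):
        window = trace[max(0, i - N + 1): i + 1]
        max_needed = max(max_needed, sum(len(srcs) for _, srcs in window))
    return worst_case, max_needed

# Hypothetical trace: (destination register, tuple of source registers read).
trace = [("r1", ("r2", "r3")), ("r4", ("r1",)), ("r5", ()), ("r6", ("r4", "r5"))]
print(read_buffer_occupancy(trace, N=4))
```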
JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.
Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter
2016-01-01
Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance, and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.
NASA Technical Reports Server (NTRS)
Olson, R. J.; Scurlock, J. M. O.; Turner, R. S.; Jennings, S. V.
1995-01-01
Estimating terrestrial net primary production (NPP) using remote-sensing tools and ecosystem models requires adequate ground-based measurements for calibration, parameterization, and validation. These data needs were strongly endorsed at a recent meeting of ecosystem modelers organized by the International Geosphere-Biosphere Program's (IGBP's) Data and Information System (DIS) and its Global Analysis, Interpretation, and Modelling (GAIM) Task Force. To meet these needs, a multinational, multiagency project is being coordinated by the IGBP DIS to compile existing NPP data from field sites and to regionalize NPP point estimates to various-sized grid cells. Progress at Oak Ridge National Laboratory (ORNL) on compiling NPP data for grasslands as part of the IGBP DIS data initiative is described. Site data and associated documentation from diverse field studies are being acquired for selected grasslands and are being reviewed for completeness, consistency, and adequacy of documentation, including a description of sampling methods. Data are being compiled in a database with spatial, temporal, and thematic characteristics relevant to remote sensing and global modeling. NPP data are available from the ORNL Distributed Active Archive Center (DAAC) for biogeochemical dynamics. The ORNL DAAC is part of the Earth Observing System Data and Information System, of the US National Aeronautics and Space Administration.
Data compilation and assessment for water resources in Pennsylvania state forest and park lands
Galeone, Daniel G.
2011-01-01
As a result of a cooperative study between the U.S. Geological Survey and the Pennsylvania Department of Conservation and Natural Resources (PaDCNR), available electronic data were compiled for Pennsylvania state lands (state forests and parks) to allow PaDCNR to initially determine if data exist to make an objective evaluation of water resources for specific basins. The data compiled included water-quantity and water-quality data and sample locations for benthic macroinvertebrates within state-owned lands (including a 100-meter buffer around each land parcel) in Pennsylvania. In addition, internet links or contacts for geographic information system coverages pertinent to water-resources studies also were compiled. Water-quantity and water-quality data primarily available through January 2007 were compiled and summarized for site types that included streams, lakes, ground-water wells, springs, and precipitation. Data were categorized relative to 35 watershed boundaries defined by the Pennsylvania Department of Environmental Protection for resource-management purposes. The primary sources of continuous water-quantity data for Pennsylvania state lands were the U.S. Geological Survey (USGS) and the National Weather Service (NWS). The USGS has streamflow data for 93 surface-water sites located in state lands; 38 of these sites have continuous-recording data available. As of January 2007, 22 of these 38 streamflow-gaging stations were active; the majority of active gaging stations have over 40 years of continuous record. The USGS database also contains continuous ground-water elevation data for 32 wells in Pennsylvania state lands, 18 of which were active as of January 2007. Sixty-eight active precipitation stations (primarily from the NWS network) are located in state lands. The four sources of available water-quality data for Pennsylvania state lands were the USGS, U.S. Environmental Protection Agency, Pennsylvania Department of Environmental Protection (PaDEP), and the Susquehanna River Basin Commission. The water-quality data, which were primarily collected after 1970, were summarized by categorizing the analytical data for each site into major groups (for example, trace metals, pesticides, major ions, etc.) for each type (streams, lakes, ground-water wells, and springs) of data compiled. The number of samples and number of detections for each analyte within each group also were summarized. A total of 410 stream sites and 205 ground-water wells in state lands had water-quality data from the available data sets, and these sites were well-distributed across the state. A total of 107 lakes and 47 springs in state lands had water-quality data from the available data sets, but these data types were not well-distributed across the state; the majority of water-quality data for lakes was in the western or eastern sections of the state and water-quality data for springs was primarily located in the central part of the Lower Susquehanna River Valley. The most common types of water-quality data collected were major ions, trace elements, and nutrients. Physical parameters, such as water temperature, stream discharge, or water level, typically were collected for most water-quality samples. Given the large database available from PaDEP for benthic macroinvertebrates, along with some data from other agencies, there is very good distribution of benthic-macroinvertebrate data for state lands. Benthic macroinvertebrate samples were collected at 1,077 locations in state lands from 1973 to 2006. 
Most (980 samples) of the benthic-macroinvertebrate samples were collected by PaDEP as part of the state assessment of stream conditions required by the Clean Water Act. Data compiled in this report can be used for various water-resource issues, such as basin-wide water-budget analysis, studies of ecological or instream flow, or water-quality assessments. The determination of an annual water budget in selected basins is best supported by the availab
NASA Astrophysics Data System (ADS)
Roediger, Joel C.; Courteau, Stéphane; Graves, Genevieve; Schiavon, Ricardo P.
2014-01-01
We present an extensive literature compilation of age, metallicity, and chemical abundance pattern information for the 41 Galactic globular clusters (GGCs) studied by Schiavon et al. Our compilation constitutes a notable improvement over previous similar work, particularly in terms of chemical abundances. Its primary purpose is to enable detailed evaluations of and refinements to stellar population synthesis models designed to recover the above information for unresolved stellar systems based on their integrated spectra. However, since the Schiavon sample spans a wide range of the known GGC parameter space, our compilation may also benefit investigations related to a variety of astrophysical endeavors, such as the early formation of the Milky Way, the chemical evolution of GGCs, and stellar evolution and nucleosynthesis. For instance, we confirm with our compiled data that the GGC system has a bimodal metallicity distribution and is uniformly enhanced in the α elements. When paired with the ages of our clusters, we find evidence that supports a scenario whereby the Milky Way obtained its globular clusters through two channels: in situ formation and accretion of satellite galaxies. The distributions of C, N, O, and Na abundances and the dispersions thereof per cluster corroborate the known fact that all GGCs studied so far with respect to multiple stellar populations have been found to harbor them. Finally, using data on individual stars, we verify that stellar atmospheres become progressively polluted by CN(O)-processed material after they leave the main sequence. We also uncover evidence which suggests that the α elements Mg and Ca may originate from more than one nucleosynthetic production site. We estimate that our compilation incorporates all relevant analyses from the literature up to mid-2012. As an aid to investigators in the fields named above, we provide detailed electronic tables of the data upon which our work is based at http://www.astro.queensu.ca/people/Stephane_Courteau/roediger2013/index.html.
Map and data for Quaternary faults and folds in New Mexico
Machette, M.N.; Personius, S.F.; Kelson, K.I.; Haller, K.M.; Dart, R.L.
1998-01-01
The "World Map of Major Active Faults" Task Group is compiling a series of digital maps for the United States and other countries in the Western Hemisphere that show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds; the companion database includes published information on these seismogenic features. The Western Hemisphere effort is sponsored by International Lithosphere Program (ILP) Task Group H-2, whereas the effort to compile a new map and database for the United States is funded by the Earthquake Reduction Program (ERP) through the U.S. Geological Survey. The maps and accompanying databases represent a key contribution to the new Global Seismic Hazards Assessment Program (ILP Task Group II-O) for the International Decade for Natural Disaster Reduction. This compilation, which describes evidence for surface faulting and folding in New Mexico, is the third of many similar State and regional compilations that are planned for the U.S. The compilation for West Texas is available as U.S. Geological Survey Open-File Report 96-002 (Collins and others, 1996 #993) and the compilation for Montana will be released as a Montana Bureau of Mines product (Haller and others, in press #1750).
Pearson, Daniel K.; Bumgarner, Johnathan R.; Houston, Natalie A.; Stanton, Gregory P.; Teeple, Andrew; Thomas, Jonathan V.
2012-01-01
The U.S. Geological Survey, in cooperation with Middle Pecos Groundwater Conservation District, Pecos County, City of Fort Stockton, Brewster County, and Pecos County Water Control and Improvement District No. 1, compiled groundwater, surface-water, water-quality, geophysical, and geologic data for site locations in the Pecos County region, Texas, and developed a geodatabase to facilitate use of this information. Data were compiled for an approximately 4,700 square mile area of the Pecos County region, Texas. The geodatabase contains data from 8,242 sampling locations; it was designed to organize and store field-collected geochemical and geophysical data, as well as digital database resources from the U.S. Geological Survey, Middle Pecos Groundwater Conservation District, Texas Water Development Board, Texas Commission on Environmental Quality, and numerous other State and local databases. The geodatabase combines these disparate database resources into a simple data model. Site locations are geospatially enabled and stored in a geodatabase feature class for cartographic visualization and spatial analysis within a Geographic Information System. The sampling locations are related to hydrogeologic information through the use of geodatabase relationship classes. The geodatabase relationship classes provide the ability to perform complex spatial and data-driven queries to explore data stored in the geodatabase.
Poem: A Fast Monte Carlo Code for the Calculation of X-Ray Transition Zone Dose and Current
1975-01-15
stored on the photon interaction data tape. Following the photoelectric ionization, the atom will relax, emitting either a fluorescent photon or an Auger electron. ... shell fluorescence yields have been obtained from the Storm and Israel and the Bambynek, et al. compilations, with preference given to the Bambynek compilation, and stored on the photon interaction data tape. The mean M fluorescence yield wM is approximated by zero. The total electron source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heistand, R.N.; Atwood, R.A.; Richardson, K.L.
1980-06-01
From 1973 to 1978, Development Engineering, Inc. (DEI), a subsidiary of Paraho Development Corporation, demonstrated the Paraho technology for surface oil shale retorting at Anvil Points, Colorado. A considerable amount of environmentally-related research was also conducted. This body of data represents the most comprehensive environmental data base relating to surface retorting that is currently available. In order to make this information available, the DOE Office of Environment has undertaken to compile, assemble, and publish this environmental data. The compilation has been prepared by DEI. This report includes the process characterization, air quality, and water quality categories.
NASA Astrophysics Data System (ADS)
Takács, S.; Vasváry, L.; Tárkányi, F.
1994-05-01
Excitation functions of proton induced reactions on natFe(p, xn) 56Co have been remeasured in the energy region up to 18 MeV using stacked foil technique and standard high resolution gamma-ray spectrometry at the Debrecen MGC-20E cyclotron. Compilation of the available data measured between 1959 and 1993 has been made. The corresponding excitation functions have been reviewed, critical comparison of all the available data was done to obtain the most accurate data set. The feasibility of the evaluated data set was checked by reproducing experimental calibration curves for TLA by calculation.
Carle, S.F.; Glen, J.M.; Langenheim, V.E.; Smith, R.B.; Oliver, H.W.
1990-01-01
The report presents the principal facts for gravity stations compiled for Yellowstone National Park and vicinity. The gravity data were compiled from three sources: Defense Mapping Agency, University of Utah, and U.S. Geological Survey. Part A of the report is a paper copy describing how the compilation was done and presenting the data in tabular format as well as a map; part B is a 5-1/4 inch floppy diskette containing only the data files in ASCII format. Requirements for part B: IBM PC or compatible, DOS v. 2.0 or higher. Files contained on this diskette: DOD.ISO -- File containing the principal facts of the 514 gravity stations obtained from the Defense Mapping Agency. The data are in Plouff format* (see file PFTAB.TEX). UTAH.ISO -- File containing the principal facts of 153 gravity stations obtained from the University of Utah. Data are in Plouff format. USGS.ISO -- File containing the principal facts of 27 gravity stations collected by the U.S. Geological Survey in July 1987. Data are in Plouff format. PFTAB.TXT -- File containing explanation of principal fact format. ACC.TXT -- File containing explanation of accuracy codes.
Water use trends in Washington, 1985-2005
Lane, R.C.
2010-01-01
Since 1950, the U.S. Geological Survey Washington Water Science Center (USGS-WAWSC) has collected, compiled, and published, at 5-year intervals, statewide estimates of the amounts of water withdrawn and used for various purposes in Washington State. As new data and methods became available, some of the original datasets were recompiled. The most recent versions of these datasets were used in this fact sheet. The datasets are available online along with other USGS-WAWSC water-use publications at the USGS-WAWSC water use web page: http://wa.water.usgs.gov/data/wuse/. Values on these datasets and in this fact sheet may not sum to the indicated total due to independent rounding. Due to variations in data requirements, collection methods, terminology, and data sources, the direct assessment of water-use trends between compilations is difficult. This fact sheet focuses on the trends in total State and public-supplied populations, freshwater withdrawals and use, public-supply withdrawals and deliveries, and crop irrigation withdrawals and acreage in Washington from 1985 through 2005. These four categories were included in all five compilations and were the most stable in terms of data requirements, collection methods, terminology, and data sources.
Alaska digital aeromagnetic database description
Connard, G.G.; Saltus, R.W.; Hill, P.L.; Carlson, L.; Milicevic, B.
1999-01-01
Northwest Geophysical Associates, Inc. (NGA) was contracted by the U.S. Geological Survey (USGS) to construct a database containing original aeromagnetic data (in digital form) from surveys, maps and grids for the State of Alaska from existing public-domain magnetic data. This database facilitates the detailed study and interpretation of aeromagnetic data along flightline profiles and allows construction of custom grids for selected regions of Alaska. The database is linked to and reflects the work from the statewide gridded compilation completed under a prior contract. The statewide gridded compilation is also described in Saltus and Simmons (1997) and in Saltus and others (1999). The database area generally covers the on-shore portion of the State of Alaska and the northern Gulf of Alaska excluding the Aleutian Islands. The area extends from 54°N to 72°N latitude and 129°W to 169°W longitude. The database includes the 85 surveys that were included in the previous statewide gridded compilation. Figure (1) shows the extents of the 85 individual data sets included in the statewide grids. NGA subcontracted a significant portion of the work described in this report to Paterson, Grant, and Watson Limited (PGW). Prior work by PGW (described in Meyer and Saltus, 1995 and Meyer and others, 1998) for the interior portion of Alaska (INTAK) is included in this present study. The previous PGW project compiled 25 of the 85 surveys included in the statewide grids. PGW also contributed 10 additional data sets that were not included in either of the prior contracts or the statewide grids. These additional data sets are included in the current project in the interest of making the database as complete as possible. Figure (2) shows the location of the additional data sets.
Assessing the quality of data aggregated by antiretroviral treatment clinics in Malawi.
Makombe, Simon D; Hochgesang, Mindy; Jahn, Andreas; Tweya, Hannock; Hedt, Bethany; Chuka, Stuart; Yu, Joseph Kwong-Leung; Aberle-Grasse, John; Pasulani, Olesi; Bailey, Christopher; Kamoto, Kelita; Schouten, Erik J; Harries, Anthony D
2008-04-01
As national antiretroviral treatment (ART) programmes scale-up, it is essential that information is complete, timely and accurate for site monitoring and national planning. The accuracy and completeness of reports independently compiled by ART facilities, however, is often not known. This study assessed the quality of quarterly aggregate summary data for April to June 2006 compiled and reported by ART facilities ("site report") as compared to the "gold standard" facility summary data compiled independently by the Ministry of Health supervision team ("supervision report"). Completeness and accuracy of key case registration and outcome variables were compared. Data were considered inaccurate if variables from the site reports were missing or differed by more than 5% from the supervision reports. Additionally, we compared the national summaries obtained from the two data sources. Monitoring and evaluation of Malawi's national ART programme is based on WHO's recommended tools for ART monitoring. It includes one master card for each ART patient and one patient register at each ART facility. Each quarter, sites complete cumulative cohort analyses and teams from the Ministry of Health conduct supervisory visits to all public sector ART sites to ensure the quality of reported data. Most sites had complete case registration and outcome data; however many sites did not report accurate data for several critical data fields, including reason for starting, outcome and regimen. The national summary using the site reports resulted in a 12% undercount in the national total number of persons on first-line treatment. Several facility-level characteristics were associated with data quality. While many sites are able to generate complete data summaries, the accuracy of facility reports is not yet adequate for national monitoring. The Ministry of Health and its partners should continue to identify and support interventions such as supportive supervision to build sites' capacity to maintain and compile quality data to ensure that accurate information is available for site monitoring and national planning.
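A minimal sketch of the accuracy rule used in the study follows; the field names and counts are hypothetical, and the 5 percent tolerance is the threshold described in the abstract.

```python
# Sketch: a site-reported value is inaccurate if missing or >5% off the gold standard.
def is_accurate(site_value, supervision_value, tolerance=0.05):
    if site_value is None or supervision_value is None:
        return False
    if supervision_value == 0:
        return site_value == 0
    return abs(site_value - supervision_value) / abs(supervision_value) <= tolerance

site_report = {"on_first_line_art": 950, "died": None}
supervision_report = {"on_first_line_art": 1000, "died": 12}
flags = {field: is_accurate(site_report.get(field), supervision_report.get(field))
         for field in supervision_report}
print(flags)   # {'on_first_line_art': True, 'died': False}
```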
On the derivation of a full life table from mortality data recorded in five-year age groups.
Pollard, J H
1989-01-01
Mortality data are often gathered using 5-year age groups rather than individual years of life. Furthermore, it is common practice to use a large open-ended interval (such as 85 and over) for mortality data at the older ages. These limitations of the data pose problems for the actuary or demographer who wishes to compile a full and accurate life table using individual years of life. The author devises formulae which handle these problems. He also devises methods for handling mortality during the 1st year of life and for dealing with other technical problems which arise in the compilation of the full life table from grouped data.
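As one small, hedged illustration of moving from five-year groups to single years (not the author's formulae), the snippet below converts a five-year probability of death to a single-year probability under the simplifying assumption of a constant force of mortality within each group.

```python
# Constant-hazard split of a five-year mortality probability into single years.
def single_year_q(five_year_q):
    """Return q such that (1 - q)**5 = 1 - 5qx."""
    return 1.0 - (1.0 - five_year_q) ** (1.0 / 5.0)

# Hypothetical grouped data: age-group start -> probability of dying within the group.
grouped = {60: 0.08, 65: 0.12, 70: 0.18}
single = {age: round(single_year_q(q), 4) for age, q in grouped.items()}
print(single)
```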
Nuclear Data Sheets for A = 21
DOE Office of Scientific and Technical Information (OSTI.GOV)
Firestone, R.B.
2015-07-15
This evaluation of A = 21 has been updated from previous evaluations published in 2004Fi10, 1998En04, 1990En08, and 1978En02. Coverage includes properties of adopted levels and γ-rays, decay-scheme data (energies, intensities and placement of radiations), and cross reference entries. The following tables continue the tradition of showing the systematic relationships between levels in A = 21. Much of the new data in this evaluation were taken directly from the XUNDL database, compiled under the direction of Balraj Singh, McMaster University. The evaluator is particularly appreciative of the efforts of the XUNDL compilers.
Clark, Allan K.; Pedraza, Diane E.
2013-01-01
Data for 141 springs within and surrounding the Trinity aquifer outcrops in northern Bexar County were compiled from existing reports and databases. These data were augmented with selected data collected onsite, including the location, discharge, and water-quality characteristics of selected springs, and were entered into the geodatabase. The Trinity aquifer in central Texas is commonly divided into the upper, middle, and lower Trinity aquifers; all of the information that was compiled pertaining to the aquifer is for the upper and middle Trinity aquifers.
Final report: Compiled MPI. Cost-Effective Exascale Application Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gropp, William Douglas
2015-12-21
This is the final report on Compiled MPI: Cost-Effective Exascale Application Development, and summarizes the results under this project. The project investigated runtime environments that improve the performance of MPI (Message-Passing Interface) programs; work at Illinois in the last period of this project looked at data access optimizations expressed with MPI datatypes.
ERIC Educational Resources Information Center
Novak, Gordon S., Jr.
GLISP is a high-level computer language (based on Lisp and including Lisp as a sublanguage) which is compiled into Lisp. GLISP programs are compiled relative to a knowledge base of object descriptions, a form of abstract datatypes. A primary goal of the use of abstract datatypes in GLISP is to allow program code to be written in terms of objects,…
NASA Astrophysics Data System (ADS)
Dossing, A.; Olesen, A. V.; Forsberg, R.
2010-12-01
Results of an 800 x 800 km aero-gravity and aeromagnetic survey (LOMGRAV) of the southern Lomonosov Ridge and surrounding area are presented. The survey was acquired by the Danish National Space Center, DTU, in cooperation with Natural Resources Canada in spring 2009 as a net of ~NE-SW flight lines spaced 8-10 km apart. Nominal flight level was 2000 ft. We have compiled a detailed 2.5 x 2.5 km gravity anomaly grid based on the LOMGRAV data and existing data from the southern Arctic Ocean (NRL98/99) and the North Greenland continental margin (KMS98/99). The gravity grid reveals detailed, elongated high-low anomaly patterns over the Lomonosov Ridge, which are interpreted as indicating the presence of narrow ridges and subbasins. Distinct local topography is also interpreted over the southernmost part of the Lomonosov Ridge, where existing bathymetry compilations suggest a smooth topography due to the lack of data. A new bathymetry model, predicted by formalized inversion of the available gravity data, is presented for the region. Finally, a detailed magnetic anomaly grid has been compiled from the LOMGRAV data and existing NRL98/99 and PMAP data. New tectonic features are revealed, particularly in the Amerasia Basin, compared with existing magnetic anomaly data from the region.
Digital depth horizon compilations of the Alaskan North Slope and adjacent Arctic regions
Saltus, Richard W.; Bird, Kenneth J.
2003-01-01
Data have been digitized and combined to create four detailed depth horizon grids spanning the Alaskan North Slope and adjacent offshore areas. These map horizon compilations were created to aid in petroleum system modeling and related studies. Topography/bathymetry is extracted from a recent Arctic compilation of global onshore DEM and satellite altimetry and ship soundings offshore. The Lower Cretaceous Unconformity (LCU), the top of the Triassic Shublik Formation, and the pre-Carboniferous acoustic basement horizon grids are created from numerous seismic studies, drill hole information, and interpolation. These horizons were selected because they mark critical times in the geologic evolution of the region as it relates to petroleum. The various horizons clearly show the major tectonic elements of this region including the Brooks Range, Colville Trough, Barrow Arch, Hanna Trough, Chukchi Platform, Nuwuk Basin, Kaktovik Basin, and Canada Basin. The gridded data are available in a variety of data formats for use in regional studies.
du Bray, Edward A.; John, David A.; Putirka, Keith; Cousens, Brian L.
2009-01-01
Volcanic rocks that form the southern segment of the Cascades magmatic arc are an important manifestation of Cenozoic subduction and associated magmatism in western North America. Until recently, these rocks had been little studied and no systematic compilation of existing composition data had been assembled. This report is a compilation of all available chemical data for igneous rocks that constitute the southern segment of the ancestral Cascades magmatic arc and complements a previously completed companion compilation pertaining to rocks of the northern segment of the arc. Data for more than 2,000 samples from a diversity of sources were identified and incorporated in the database. The association between these igneous rocks and spatially and temporally associated mineral deposits is well established and suggests a probable genetic relationship. The ultimate goal of the related research is an evaluation of the time-space-compositional evolution of magmatism associated with the southern Cascades arc segment and identification of genetic associations between magmatism and mineral deposits in this region.
PCMdb: Pancreatic Cancer Methylation Database
NASA Astrophysics Data System (ADS)
Nagpal, Gandharva; Sharma, Minakshi; Kumar, Shailesh; Chaudhary, Kumardeep; Gupta, Sudheer; Gautam, Ankur; Raghava, Gajendra P. S.
2014-02-01
Pancreatic cancer is the fifth most aggressive malignancy and urgently requires new biomarkers to facilitate early detection. To provide impetus to biomarker discovery, we have developed the Pancreatic Cancer Methylation Database (PCMdb, http://crdd.osdd.net/raghava/pcmdb/), a comprehensive resource dedicated to methylation of genes in pancreatic cancer. Data were collected and compiled manually from published literature. PCMdb has 65907 entries for the methylation status of 4342 unique genes. In PCMdb, data were compiled for both cancer cell lines (53565 entries for 88 cell lines) and cancer tissues (12342 entries for 3078 tissue samples). Among these entries, 47.22% of the entries reported a high level of methylation for the corresponding genes while 10.87% of the entries reported a low level of methylation. PCMdb covers five major subtypes of pancreatic cancer; however, most of the entries were compiled for adenocarcinomas (88.38%) and mucinous neoplasms (5.76%). A user-friendly interface has been developed for data browsing, searching and analysis. We anticipate that PCMdb will be helpful for pancreatic cancer biomarker discovery.
Wood, David B.
2007-11-01
Between 1951 and 1992, 828 underground tests were conducted on the Nevada National Security Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada National Security Site. From these rock samples, rock properties were analyzed, interpreted, and compiled into project files and published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.
Hadley, Heidi K.
2000-01-01
Selected nitrogen and phosphorus (nutrient), suspended-sediment and total suspended-solids surface-water data were compiled from January 1980 through December 1995 within the Great Salt Lake Basins National Water-Quality Assessment study unit, which extends from southeastern Idaho to west-central Utah and from Great Salt Lake to the Wasatch and western Uinta Mountains. The data were retrieved from the U.S. Geological Survey National Water Information System and the State of Utah, Department of Environmental Quality, Division of Water Quality database. The Division of Water Quality database includes data that are submitted to the U.S. Environmental Protection Agency STOrage and RETrieval system. Water-quality data included in this report were selected for surface-water sites (rivers, streams, and canals) that had three or more nutrient, suspended-sediment, or total suspended-solids analyses. Also, 33 percent or more of the measurements at a site had to include discharge, and, for non-U.S. Geological Survey sites, there had to be 2 or more years of data. Ancillary data for parameters such as water temperature, pH, specific conductance, streamflow (discharge), dissolved oxygen, biochemical oxygen demand, alkalinity, and turbidity also were compiled, as available. The compiled nutrient database contains 13,511 samples from 191 selected sites. The compiled suspended-sediment and total suspended-solids database contains 11,642 samples from 142 selected sites. For the nutrient database, the median (50th percentile) sample period for individual sites is 6 years, and the 75th percentile is 14 years. The median number of samples per site is 52 and the 75th percentile is 110 samples. For the suspended-sediment and total suspended-solids database, the median sample period for individual sites is 9 years, and the 75th percentile is 14 years. The median number of samples per site is 76 and the 75th percentile is 120 samples. The compiled historical data are being used in the basinwide sampling strategy to characterize the broad-scale geographic and seasonal water-quality conditions in relation to major contaminant sources and background conditions. Data for this report are stored on a compact disc.
Effective Vectorization with OpenMP 4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huber, Joseph N.; Hernandez, Oscar R.; Lopez, Matthew Graham
This paper describes how the Single Instruction Multiple Data (SIMD) model and its extensions in OpenMP work, and how these are implemented in different compilers. Modern processors are highly parallel computational machines which often include multiple processors capable of executing several instructions in parallel. Understanding SIMD and executing instructions in parallel allows the processor to achieve higher performance without increasing the power required to run it. SIMD instructions can significantly reduce the runtime of code by executing a single operation on large groups of data. The SIMD model is so integral to the processor's potential performance that, if SIMD is not utilized, less than half of the processor is ever actually used. Unfortunately, using SIMD instructions is a challenge in higher-level languages because most programming languages do not have a way to describe them. Most compilers are capable of vectorizing code by using the SIMD instructions, but there are many code features important for SIMD vectorization that the compiler cannot determine at compile time. OpenMP attempts to solve this by extending the C/C++ and Fortran programming languages with compiler directives that express SIMD parallelism. OpenMP is used to pass hints to the compiler about the code to be executed in SIMD. This is a key resource for making optimized code, but it does not change whether or not the code can use SIMD operations. However, in many cases critical functions are limited by a poor understanding of how SIMD instructions are actually implemented, as SIMD can be implemented through vector instructions or simultaneous multi-threading (SMT). We have found that it is often the case that code cannot be vectorized, or is vectorized poorly, because the programmer does not have sufficient knowledge of how SIMD instructions work.
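A minimal C sketch of the OpenMP `simd` construct discussed above; the loops, array names, and sizes are illustrative. The directive asserts that iterations are independent so the compiler may emit vector instructions; it typically requires a flag such as -fopenmp or -fopenmp-simd.

```c
/* Minimal sketch of OpenMP 4.5 SIMD directives.  Arrays and sizes are
 * illustrative only. */
#include <stdio.h>
#define N 1024

int main(void) {
    float a[N], b[N], c[N];
    float sum = 0.0f;

    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    #pragma omp simd
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];             /* element-wise add, vectorizable */

    #pragma omp simd reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += c[i];                    /* vectorized reduction */

    printf("sum = %f\n", sum);
    return 0;
}
```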
A Regional, Integrated Monitoring System for the Hydrology of the Pan-Arctic Land Mass
NASA Technical Reports Server (NTRS)
Serreze, Mark; Barry, Roger; Nolin, Anne; Armstrong, Richard; Zhang, Ting-Jung; Vorosmarty, Charles; Lammers, Richard; Frolking, Steven; Bromwich, David; McDonald, Kyle
2005-01-01
Work under this NASA contract developed a system for monitoring and historical analysis of the major components of the pan-Arctic terrestrial water cycle. It is known as Arctic-RIMS (Regional Integrated Hydrological Monitoring System for the Pan-Arctic Landmass). The system uses products from EOS-era satellites, numerical weather prediction models, station records and other data sets in conjunction with an atmosphere-land surface water budgeting scheme. The intent was to compile operational (at 1-2 month time lags) gridded fields of precipitation (P), evapotranspiration (ET), P-ET, soil moisture, soil freeze/thaw state, active layer thickness, snow extent and its water equivalent, soil water storage, runoff and simulated discharge along with estimates of non-closure in the water budget. Using "baseline" water budgeting schemes in conjunction with atmospheric reanalyses and pre-EOS satellite data, water budget fields were compiled to provide historical time series. The goals as outlined in the original proposal can be summarized as follows: 1) Use EOS data to compile hydrologic products for the pan-Arctic terrestrial regions including snowcover/snow water equivalent (SSM/A MODIS, AMSR) and near-surface freeze/thaw dynamics (Sea Winds on QuikSCAT and ADEOS I4 SSMI and AMSR). 2) Implement Arctic-RIMS to use EOS data streams, allied fields and hydrologic models to produce allied outputs that fully characterize pan-Arctic terrestrial and aerological water budgets. 3) Compile hydrologically-based historical products providing a long-term baseline of spatial and temporal variability in the water cycle.
Greninger, Mark L.; Klemperer, Simon L.; Nokleberg, Warren J.
1999-01-01
The accompanying directory structure contains a Geographic Information Systems (GIS) compilation of geophysical, geological, and tectonic data for the Circum-North Pacific. This area includes the Russian Far East, Alaska, the Canadian Cordillera, linking continental shelves, and adjacent oceans. This GIS compilation extends from 120°E to 115°W, and from 40°N to 80°N. This area encompasses: (1) to the south, the modern Pacific plate boundary of the Japan-Kuril and Aleutian subduction zones, the Queen Charlotte transform fault, and the Cascadia subduction zone; (2) to the north, the continent-ocean transition from the Eurasian and North American continents to the Arctic Ocean; (3) to the west, the diffuse Eurasian-North American plate boundary, including the probable Okhotsk plate; and (4) to the east, the Alaskan-Canadian Cordilleran fold belt. This compilation should be useful for: (1) studying the Mesozoic and Cenozoic collisional and accretionary tectonics that assembled the continental crust of this region; (2) studying the neotectonics of active and passive plate margins in this region; and (3) constructing and interpreting geophysical, geologic, and tectonic models of the region. Geographic Information Systems (GIS) programs provide powerful tools for managing and analyzing spatial databases. Geological applications include regional tectonics, geophysics, mineral and petroleum exploration, resource management, and land-use planning. This CD-ROM contains thematic layers of spatial data-sets for geology, gravity field, magnetic field, oceanic plates, overlap assemblages, seismology (earthquakes), tectonostratigraphic terranes, topography, and volcanoes. The GIS compilation can be viewed, manipulated, and plotted with commercial software (ArcView and ArcInfo) or through a freeware program (ArcExplorer) that can be downloaded from http://www.esri.com for both Unix and Windows computers.
Recent Efforts in Data Compilations for Nuclear Astrophysics
NASA Astrophysics Data System (ADS)
Dillmann, Iris
2008-05-01
Some recent efforts in compiling data for astrophysical purposes are introduced, which were discussed during a JINA-CARINA Collaboration meeting on "Nuclear Physics Data Compilation for Nucleosynthesis Modeling" held at the ECT* in Trento/Italy from May 29th-June 3rd, 2007. The main goal of this collaboration is to develop an updated and unified nuclear reaction database for modeling a wide variety of stellar nucleosynthesis scenarios. Presently a large number of different reaction libraries (REACLIB) are used by the astrophysics community. The "JINA Reaclib Database" on http://www.nscl.msu.edu/~nero/db/ aims to merge and fit the latest experimental stellar cross sections and reaction rate data of various compilations, e.g. NACRE and its extension for Big Bang nucleosynthesis, Caughlan and Fowler, Iliadis et al., and KADoNiS. The KADoNiS (Karlsruhe Astrophysical Database of Nucleosynthesis in Stars, http://nuclear-astrophysics.fzk.de/kadonis) project is an online database for neutron capture cross sections relevant to the s process. The present version v0.2 is already included in a REACLIB file from Basel university (http://download.nucastro.org/astro/reaclib). The present status of experimental stellar (n,γ) cross sections in KADoNiS is shown. It contains recommended cross sections for 355 isotopes between 1H and 210Bi, over 80% of them deduced from experimental data. A "high priority list" for measurements and evaluations for light charged-particle reactions set up by the JINA-CARINA collaboration is presented. The central web access point to submit and evaluate new data is provided by the Oak Ridge group via the http://www.nucastrodata.org homepage. "Workflow tools" aim to make the evaluation process transparent and allow users to follow the progress.
The pre-orogenic detrital zircon record of the Variscan orogeny: Preliminary results
NASA Astrophysics Data System (ADS)
Stephan, Tobias; Kroner, Uwe
2017-04-01
To test plate-tectonic configurations in light of the long-term development of sedimentary transport paths, temporally and spatially highly resolved records of provenance analysis are mandatory. Existing studies focus their interpretation on small-scale areas within an orogen, thereby neglecting the differing distribution of provenance data across the entire orogenic system. This study reviews a large data set of compiled geochronological data to document the development of pre-orogenic tectonic units, using the Variscan orogeny as an example. Constrained by tectonic and geological models, the temporal distribution of U-Pb detrital zircon ages, used as a proxy for sedimentary provenance, shows that some minima and maxima of zircon abundance are nearly synchronous for thousands of kilometres along the orogen. Age spectra of Precambrian to Lower Palaeozoic samples were constructed on the basis of 38729 U-Pb ages from 685 samples that were compiled from 102 publications. The age compilation combines thermal ionization mass spectrometry (TIMS), laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), sensitive high-resolution ion microprobe (SHRIMP), and secondary ion mass spectrometry (SIMS) analyses. The data were re-processed using a common age calculation and concordance filter to ensure comparability. The concordance of each zircon grain was calculated from 206Pb/238U and 207Pb/235U ages to guarantee that only concordant grains, i.e., those with <10% normal and <5% reverse discordance, were included in the age compilation. To exclude ages affected by metamorphic overprinting, which would blur the younger part of the age spectra, the compilation is restricted to ages older than 400 Ma. If a precise sample age is not documented by the author, the weighted-mean age of the youngest zircon population (n > 3) is used as the maximum depositional age. In addition to the locations of >600 samples, the precise depositional ages provide high spatial and temporal resolution. To avoid the different levels of analytical precision of the compiled TIMS, LA-ICP-MS, SHRIMP, and SIMS data, detrital zircon ages are plotted as kernel density estimates. The spatial and temporal distributions of the kernel density estimates, together with further statistical techniques (e.g., multidimensional scaling), are used to discriminate groups of similar age distributions. Preliminary results reveal four major sources for the pre-orogenic sedimentary units (the Saharan Metacraton, West African craton, Amazonas craton, and Fennoscandian shield). The mixing of several source signals in Gondwana-derived sediment spectra points to vast deltaic systems along the Gondwanan shelf area.
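The concordance filter described above can be illustrated with the hedged sketch below, which uses one common convention, discordance (%) = 100 · (1 − age(206Pb/238U)/age(207Pb/235U)); the exact formula used by the authors may differ, and the grain ages shown are hypothetical.

```c
/* Illustrative concordance filter for compiled detrital zircon ages.
 * Convention assumed here: positive discordance = normal, negative =
 * reverse; thresholds (<10% normal, <5% reverse) follow the abstract. */
#include <stdio.h>

static int is_concordant(double age_206_238, double age_207_235) {
    double disc = 100.0 * (1.0 - age_206_238 / age_207_235);
    return disc < 10.0 && disc > -5.0;
}

int main(void) {
    /* hypothetical ages in Ma: {206Pb/238U, 207Pb/235U} */
    double grains[][2] = { {612.0, 640.0}, {980.0, 1250.0}, {2045.0, 2010.0} };
    for (int i = 0; i < 3; i++)
        printf("grain %d: %s\n", i,
               is_concordant(grains[i][0], grains[i][1]) ? "keep" : "reject");
    return 0;
}
```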
Constraints on cosmological parameters in power-law cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rani, Sarita; Singh, J.K.; Altaibayeva, A.
In this paper, we examine observational constraints on power-law cosmology, which essentially depends on two parameters: H_0 (the Hubble constant) and q (the deceleration parameter). We investigate the constraints on these parameters using the latest 28 points of H(z) data and the 580 points of the Union2.1 compilation, and compare the results with those of ΛCDM. We also forecast constraints using a simulated data set for the future JDEM supernovae survey. Our study gives better insight into power-law cosmology than the earlier analysis by Kumar [arXiv:1109.6924], indicating that it agrees well with the Union2.1 compilation data but not with the H(z) data. However, the constraints obtained on the average values of H_0 and q using the simulated data set for the future JDEM supernovae survey are found to be inconsistent with the values obtained from the H(z) and Union2.1 compilation data. We also perform the statefinder analysis and find that the power-law cosmological models approach the standard ΛCDM model as q → −1. Finally, we observe that although power-law cosmology explains several prominent features of the evolution of the Universe, it fails in the details.
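For a power-law scale factor a(t) ∝ t^α, the Hubble parameter is H(z) = H_0 (1+z)^(1+q) with q = 1/α − 1, so (H_0, q) can be constrained by a chi-square over H(z) points. The sketch below is a minimal illustration of that fit; the three data points are hypothetical placeholders, not the 28-point compilation used in the paper.

```c
/* Minimal sketch of fitting power-law cosmology to H(z) data by a
 * brute-force grid scan of the chi-square.  Data points are
 * hypothetical placeholders. */
#include <math.h>
#include <stdio.h>

typedef struct { double z, H, sigma; } HzPoint;   /* H in km/s/Mpc */

static double chi2(double H0, double q, const HzPoint *d, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        double model = H0 * pow(1.0 + d[i].z, 1.0 + q);   /* H(z) = H0 (1+z)^(1+q) */
        double r = (d[i].H - model) / d[i].sigma;
        s += r * r;
    }
    return s;
}

int main(void) {
    HzPoint data[] = { {0.1, 69.0, 12.0}, {0.5, 88.0, 60.0}, {1.0, 120.0, 17.0} };
    double best_chi2 = 1e300, best_H0 = 0.0, best_q = 0.0;
    for (double H0 = 50.0; H0 <= 90.0; H0 += 0.5)
        for (double q = -1.0; q <= 1.0; q += 0.01) {
            double c = chi2(H0, q, data, 3);
            if (c < best_chi2) { best_chi2 = c; best_H0 = H0; best_q = q; }
        }
    printf("best fit: H0 = %.1f, q = %.2f (chi2 = %.2f)\n", best_H0, best_q, best_chi2);
    return 0;
}
```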
Dataset used to improve liquid water absorption models in the microwave
Turner, David
2015-12-14
Two datasets, one a compilation of laboratory data and one a compilation from three field sites, are provided here. These datasets provide measurements of the real and imaginary refractive indices and absorption as a function of cloud temperature. These datasets were used in the development of the new liquid water absorption model that was published in Turner et al. 2015.
Scientific names of organisms: attribution, rights, and licensing
2014-01-01
Background As biological disciplines extend into the ‘big data’ world, they will need a names-based infrastructure to index and interconnect distributed data. The infrastructure must have access to all names of all organisms if it is to manage all information. Those who compile lists of species hold different views as to the intellectual property rights that apply to the lists. This creates uncertainty that impedes the development of a much-needed infrastructure for sharing biological data in the digital world. Findings The laws in the United States of America and European Union are consistent with the position that scientific names of organisms and their compilation in checklists, classifications or taxonomic revisions are not subject to copyright. Compilations of names, such as classifications or checklists, are not creative in the sense of copyright law. Many content providers desire credit for their efforts. Conclusions A ‘blue list’ identifies elements of checklists, classifications and monographs to which intellectual property rights do not apply. To promote sharing, authors of taxonomic content, compilers, intermediaries, and aggregators should receive citable recognition for their contributions, with the greatest recognition being given to the originating authors. Mechanisms for achieving this are discussed. PMID:24495358
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haas, Nicholas Q; Gillen, Robert E; Karnowski, Thomas P
MathWorks' MATLAB is widely used in academia and industry for prototyping, data analysis, data processing, etc. Many users compile their programs using the MATLAB Compiler to run on workstations/computing clusters via the free MATLAB Compiler Runtime (MCR). The MCR facilitates the execution of code calling Application Programming Interface (API) functions from both base MATLAB and MATLAB toolboxes. In a Linux environment, a sizable number of third-party runtime dependencies (i.e. shared libraries) are necessary. Unfortunately, to the MATLAB community's knowledge, these dependencies are not documented, leaving system administrators and/or end-users to find and install the necessary libraries either by chasing runtime errors caused by missing libraries or by inspecting the header information of Executable and Linkable Format (ELF) libraries of the MCR to determine which ones are missing from the system. To address various shortcomings, Docker images based on Community Enterprise Operating System (CentOS) 7, a derivative of Redhat Enterprise Linux (RHEL) 7, containing recent (2015-2017) MCR releases and their dependencies were created. These images, along with a provided sample Docker Compose YAML script, can be used to create a simulated computing cluster where MATLAB Compiler-created binaries can be executed using a sample Slurm Workload Manager script.
NASA Technical Reports Server (NTRS)
Malvestuto, Frank S.; Gale, Lawrence J.; Wood, John H.
1947-01-01
A compilation of free-spinning-airplane model data on the spin and recovery characteristics of 111 airplanes is presented. These data were previously published in separate memorandum reports and were obtained from free-spinning tests in the Langley 15-foot and the Langley 20-foot free-spinning tunnels. The model test data presented include the steady-spin and recovery characteristics of each model for various combinations of aileron and elevator deflections and for various loadings and dimensional configurations. Dimensional data, mass data, and a three-view drawing of the corresponding free-spinning tunnel model are also presented for each airplane. The data presented should be of value to designers and should facilitate the design of airplanes incorporating satisfactory spin-recovery characteristics.
Recent advances in the compilation of holocene relative Sea-level database in North America
NASA Astrophysics Data System (ADS)
Horton, B.; Vacchi, M.; Engelhart, S. E.; Nikitina, D.
2015-12-01
Reconstruction of relative sea level (RSL) has implications for investigation of crustal movements, calibration of earth rheology models and the reconstruction of ice sheets. In recent years, efforts were made to create RSL databases following a standardized methodology. These regional databases provided a framework for developing our understanding of the primary mechanisms of RSL change since the Last Glacial Maximum and a long-term baseline against which to gauge changes in sea-level during the 20th century and forecasts for the 21st. Here we present two quality-controlled Holocene RSL databases compiled for North America. Along the Pacific coast of North America (British Columbia, Canada to California, USA), our re-evaluation of sea-level indicators from geological and archaeological investigations yields 841 RSL data-points, mainly from salt and freshwater wetlands or adjacent estuarine sediment as well as from isolation basins. Along the Atlantic coast of North America (Hudson Bay, Canada to South Carolina, USA), we are currently compiling a database including more than 2000 RSL data-points from isolation basins, salt and freshwater wetlands, beach ridges and intratidal deposits. We outline the difficulties encountered and the solutions adopted to compile databases across such different depositional environments. We address complex tectonics and the framework needed to compare such a large variety of RSL data-points. We discuss the implications of our results for glacio-isostatic adjustment (GIA) models in the two studied regions.
Estimated use of water in the New England States, 1990
Korzendorfer, B.A.; Horn, M.A.
1995-01-01
Data on freshwater withdrawals in 1990 were compiled for the New England States. An estimated 4,160 Mgal/d (million gallons per day) of freshwater was withdrawn in 1990 in the six States. Of this total, 1,430 Mgal/d was withdrawn by public suppliers and delivered to users, and 2,720 Mgal/d was withdrawn by domestic, commercial, industrial, agricultural, mining, and thermoelectric power-generation users. More than 83 percent of the freshwater was from surface-water sources. Massachusetts, with the largest population, had the largest withdrawals of water. Data on saline water withdrawals and instream flow at hydroelectric plants were also compiled. An estimated 9,170 Mgal/d of saline water was used for thermoelectric-power generation and industrial use in Connecticut, Maine, Massachusetts, New Hampshire, and Rhode Island. Return flow from public wastewater-treatment plants totaled 1,750 Mgal/d; more than half (55 percent) of this return flow was in Massachusetts. In addition, about 178,000 Mgal/d was used for instream hydroelectric power generation; the largest users were Maine (about 83,000 Mgal/d) and New Hampshire (46,000 Mgal/d). These data, some of which were based on site-specific water-use information and some based on estimation techniques, were compiled through joint efforts by the U.S. Geological Survey and State cooperators for the 1990 national water-use compilation.
Photogrammetric application of viking orbital photography
Wu, S.S.C.; Elassal, A.A.; Jordan, R.; Schafer, F.J.
1982-01-01
Special techniques are described for the photogrammetric compilation of topographic maps and profiles from stereoscopic photographs taken by the two Viking Orbiter spacecraft. These techniques were developed because the extremely narrow field of view of the Viking cameras precludes compilation by conventional photogrammetric methods. The techniques adjust the Supplementary Experiment Data Record (SEDR, the record of spacecraft orientation when photographs were taken) for internal consistency and compute the geometric orientation parameters of the stereo models. A series of contour maps of Mars is being compiled by these new methods using a wide variety of Viking Orbiter photographs, to provide the planetary research community with topographic information.
Hemingway, Bruch S.; Seal, Robert R.; Chou, I-Ming
2002-01-01
Enthalpy of formation, Gibbs energy of formation, and entropy values have been compiled from the literature for the hydrated ferrous sulfate minerals melanterite, rozenite, and szomolnokite, and a variety of other hydrated sulfate compounds. On the basis of this compilation, it appears that there is no evidence for an excess enthalpy of mixing for sulfate-H2O systems, except for the first H2O molecule of crystallization. The enthalpy and Gibbs energy of formation of each H2O molecule of crystallization, except the first, in the iron(II) sulfate - H2O system is -295.15 and -238.0 kJ·mol-1, respectively. The absence of an excess enthalpy of mixing is used as the basis for estimating thermodynamic values for a variety of ferrous, ferric, and mixed-valence sulfate salts of relevance to acid-mine drainage systems.
Application of remote sensor data to geologic analysis of the Bonanza Test Site Colorado
NASA Technical Reports Server (NTRS)
Lee, K. (Compiler)
1973-01-01
A geologic map of the Bonanza Test Site is nearing completion. Using published large scale geologic maps from various sources, the geology of the area is being compiled on a base scaled at 1:250,000. Sources of previously published geologic mapping include: (1) USGS Bulletins; (2) professional papers and geologic quadrangle maps; (3) Bureau of Mines reports; (4) Colorado School of Mines quarterlies; and (5) Rocky Mountain Association of Geologists Guidebooks. This compilation will be used to evaluate ERTS, Skylab, and remote sensing underflight data.
NACA Investigation of Fuel Performance in Piston-Type Engines
NASA Technical Reports Server (NTRS)
Barnett, Henry C
1951-01-01
This report is a compilation of many of the pertinent research data acquired by the National Advisory Committee for Aeronautics on fuel performance in piston engines. The original data for this compilation are contained in many separate NACA reports which have in the present report been assembled in logical chapters that summarize the main conclusions of the various investigations. Complete details of each investigation are not included in this summary; however, such details may be found in the original reports cited at the end of each chapter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Supinski, B.; Caliga, D.
2017-09-28
The primary objective of this project was to develop memory optimization technology to efficiently deliver data to, and distribute data within, the SRC-6's Field Programmable Gate Array (FPGA)-based Multi-Adaptive Processors (MAPs). The hardware/software approach was to explore efficient MAP configurations and generate the compiler technology to exploit those configurations. This memory accessing technology represents an important step towards making reconfigurable symmetric multi-processor (SMP) architectures that will be a cost-effective solution for large-scale scientific computing.
An implementation and analysis of the Abstract Syntax Notation One and the basic encoding rules
NASA Technical Reports Server (NTRS)
Harvey, James D.; Weaver, Alfred C.
1990-01-01
The details of the Abstract Syntax Notation One (ASN.1) standard and the Basic Encoding Rules (BER) standard, which collectively solve the problem of data transfer across incompatible host environments, are presented, and a compiler that was built to automate their use is described. Experiences with this compiler are also discussed, providing a quantitative analysis of the performance costs associated with the application of these standards. An evaluation is offered as to how well suited ASN.1 and BER are to solving the common data representation problem.
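As a flavor of what a BER encoder emits, the hedged sketch below encodes an ASN.1 INTEGER in tag-length-value form; it handles only small non-negative values with short-form lengths and is not the compiler described in the paper, which generates this kind of code automatically from the abstract syntax.

```c
/* Illustrative BER encoding of an ASN.1 INTEGER (tag 0x02): tag byte,
 * short-form length byte, then minimal big-endian two's-complement
 * content octets.  Small non-negative values only. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static size_t ber_encode_uint(uint32_t value, uint8_t *out) {
    uint8_t content[5];
    int n = 0;
    do {                                   /* collect content octets, little-endian first */
        content[n++] = value & 0xFF;
        value >>= 8;
    } while (value != 0);
    if (content[n - 1] & 0x80)             /* leading 0x00 keeps the value non-negative */
        content[n++] = 0x00;

    out[0] = 0x02;                         /* INTEGER tag */
    out[1] = (uint8_t)n;                   /* short-form length */
    for (int i = 0; i < n; i++)
        out[2 + i] = content[n - 1 - i];   /* reverse into big-endian order */
    return 2 + (size_t)n;
}

int main(void) {
    uint8_t buf[8];
    size_t len = ber_encode_uint(300, buf); /* 300 encodes as 02 02 01 2C */
    for (size_t i = 0; i < len; i++)
        printf("%02X ", buf[i]);
    printf("\n");
    return 0;
}
```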
Krause, Richard E.
1984-01-01
A compilation of ground-water data that have been collected for nearly 100 years in the coastal area of Georgia is presented in this report. The compilation of pertinent data indicates what information is available for use in the evaluation of the ground-water resources of the 13 counties of coastal Georgia. Also included in this report is a fairly complete discussion of previous and ongoing investigations and monitoring networks, and an extensive list of references. Maps at 1:24,000 and 1:1,000,000 scales contain well locations and identifiers for all wells in the Ground Water Site Inventory (GWSI) data base of the National Water Data Storage and Retrieval System (WATSTORE). Tabular summaries of selected site information from GWSI, including well identifiers and names, latitude-longitude location, depth of well, altitude of land surface, and use of water are presented. Water-use data from the National Water Use Data System, and water use for irrigation from the University of Georgia, Department of Agriculture survey, also are tabulated. Also included are pertinent information on geophysical surveys and data obtained, and proposed project activities, particularly test-monitor well drilling. The data in this report were collected and compiled as part of the cooperative activities between the U.S. Geological Survey and other agencies.
The NASA earth resources spectral information system: A data compilation, second supplement
NASA Technical Reports Server (NTRS)
Vincent, R. K.
1973-01-01
The NASA Earth Resources Spectral Information System (ERSIS) and the information contained therein are described. It is intended for use as a second supplement to the NASA Earth Resources Spectral Information System: A Data Compilation, NASA CR-31650-24-T, May 1971. The current supplement includes approximately 100 rock and mineral, and 375 vegetation directional reflectance spectral curves in the optical region from 0.2 to 22.0 microns. The data were categorized by subject and each curve plotted on a single graph. Each graph is fully titled to indicate curve source and indexed by subject to facilitate user retrieval from ERSIS magnetic tape records.
42 CFR 456.244 - Data sources for studies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... appropriate hospital data. (b) External organizations that compile statistics, design profiles, and produce other comparative data. (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
42 CFR 456.144 - Data sources for studies.
Code of Federal Regulations, 2011 CFR
2011-10-01
... appropriate hospital data; (b) External organizations that compile statistics, design profiles, and produce other comparative data; (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
42 CFR 456.144 - Data sources for studies.
Code of Federal Regulations, 2010 CFR
2010-10-01
... appropriate hospital data; (b) External organizations that compile statistics, design profiles, and produce other comparative data; (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
42 CFR 456.144 - Data sources for studies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... appropriate hospital data; (b) External organizations that compile statistics, design profiles, and produce other comparative data; (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
42 CFR 456.244 - Data sources for studies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... appropriate hospital data. (b) External organizations that compile statistics, design profiles, and produce other comparative data. (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
42 CFR 456.244 - Data sources for studies.
Code of Federal Regulations, 2013 CFR
2013-10-01
... appropriate hospital data. (b) External organizations that compile statistics, design profiles, and produce other comparative data. (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
42 CFR 456.144 - Data sources for studies.
Code of Federal Regulations, 2014 CFR
2014-10-01
... appropriate hospital data; (b) External organizations that compile statistics, design profiles, and produce other comparative data; (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
42 CFR 456.244 - Data sources for studies.
Code of Federal Regulations, 2010 CFR
2010-10-01
... appropriate hospital data. (b) External organizations that compile statistics, design profiles, and produce other comparative data. (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
42 CFR 456.244 - Data sources for studies.
Code of Federal Regulations, 2011 CFR
2011-10-01
... appropriate hospital data. (b) External organizations that compile statistics, design profiles, and produce other comparative data. (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
42 CFR 456.144 - Data sources for studies.
Code of Federal Regulations, 2012 CFR
2012-10-01
... appropriate hospital data; (b) External organizations that compile statistics, design profiles, and produce other comparative data; (c) Cooperative endeavors with— (1) QIOs; (2) Fiscal agents; (3) Other service...
Minor, S.A.; Vick, G.S.; Carr, M.D.; Wahl, R.R.
1996-01-01
This map database, identified as Faults, lineaments, and earthquake epicenters digital map of the Pahute Mesa 30' X 60' quadrangle, Nevada, has been approved for release and publication by the Director of the USGS. Although this database has been subjected to rigorous review and is substantially complete, the USGS reserves the right to revise the data pursuant to further analysis and review. Furthermore, it is released on condition that neither the USGS nor the United States Government may be held liable for any damages resulting from its authorized or unauthorized use. This digital map compilation incorporates fault, air photo lineament, and earthquake epicenter data from within the Pahute Mesa 30' by 60' quadrangle, southern Nye County, Nevada (fig. 1). The compilation contributes to the U.S. Department of Energy's Yucca Mountain Project, established to determine whether or not the Yucca Mountain site is suitable for the disposal of high-level nuclear waste. Studies of local and regional faulting and earthquake activity, including the features depicted in this compilation, are carried out to help characterize seismic hazards and tectonic processes that may be relevant to the future stability of Yucca Mountain. The Yucca Mountain site is located in the central part of the Beatty 30' by 60' quadrangle approximately 15 km south of the south edge of the Pahute Mesa quadrangle (fig. 1). The U.S. Geological Survey participates in studies of the Yucca Mountain site under Interagency Agreement DE-AI08-78ET44802. The map compilation is only available online as a digital database in ARC/INFO ASCII (Generate) and export formats. The database can be downloaded via 'anonymous ftp' from a USGS system named greenwood.cr.usgs.gov (136.177.48.5). The files are located in a directory named /pub/open-file-reports/ofr-96-0262. This directory contains a text document named 'README.1ST' that contains database technical and explanatory documentation, including instructions for uncompressing the bundled (tar) file. In displaying the compilation it is important to note that the map data set is considered accurate when depicted at a scale of about 1:100,000; displaying the compilation at scales significantly larger than this may result in distortions and (or) mislocations of the data.
Therapeutic and toxic blood concentrations of nearly 1,000 drugs and other xenobiotics
2012-01-01
Introduction In order to assess the significance of drug levels measured in intensive care medicine, clinical and forensic toxicology, as well as for therapeutic drug monitoring, it is essential that a comprehensive collection of data is readily available. Therefore, it makes sense to offer a carefully referenced compilation of therapeutic and toxic plasma concentration ranges, as well as half-lives, of a large number of drugs and other xenobiotics for quick and comprehensive information. Methods Data have been abstracted from original papers and text books, as well as from previous compilations, and have been completed with data collected in our own forensic and clinical toxicology laboratory. The data presented in the table and corresponding annotations have been developed over the past 20 years and longer. A previous compilation has been completely revised and updated. In addition, more than 170 substances, especially drugs that have been introduced to the market since 2003 as well as illegal drugs, which became known to cause intoxications, were added. All data were carefully referenced and more than 200 new references were included. Moreover, the annotations providing details were completely revised and more than 100 annotations were added. Results For nearly 1,000 drugs and other xenobiotics, therapeutic ("normal") and, if data were available, toxic and comatose-fatal blood-plasma concentrations and elimination half-lives were compiled in a table. Conclusions In case of intoxications, the concentration of the ingested substances and/or metabolites in blood plasma better predicts the clinical severity of the case when compared to the assumed amount and time of ingestion. Comparing and contrasting the clinical case against the data provided, including the half-life, may support the decision for or against further intensive care. In addition, the data provided are useful for the therapeutic monitoring of pharmacotherapies, to facilitate the diagnostic assessment and monitoring of acute and chronic intoxications, and to support forensic and clinical expert opinions. PMID:22835221
Hydrogeologic and chemical data for the O-Field area, Aberdeen Proving Ground, Maryland
Nemoff, P.R.; Vroblesky, D.A.
1989-01-01
O-Field, located at the Edgewood area of Aberdeen Proving Ground, Maryland, was periodically used for disposal of munitions, waste chemicals, and chemical-warfare agents from World War II through the 1950s. This report includes various physical, geologic, chemical, and hydrologic data obtained from well-core, groundwater, surface water, and bottom-sediment sampling sites at and near the O-Field disposal area. The data are presented in tables and hydrographs. Three site-location maps are also included. Well-core data include lithologic logs for 11 well-cluster sites, grain-size distributions, various chemical characteristics, and confining unit characteristics. Groundwater data include groundwater chemistry, method blanks for volatile organic carbon, available data on volatile and base/neutral organics, and compilation of corresponding method blanks, chemical-warfare agents, explosive-related products, radionuclides, herbicides, and groundwater levels. Surface-water data include field-measured characteristics; concentrations of various inorganic constituents including arsenic; selected organic constituents with method blanks; detection limits of organics; and a compilation of information on corresponding acids, volatiles, and semivolatiles. Bottom-sediment data include inorganic properties and constituents; organic chemistry; detection limits for organic chemicals; a compilation of information on acids, volatiles, and semivolatiles; and method blanks corresponding to acids, volatiles, and semivolatiles. A set of 15 water-level hydrographs for the period March 1986 through September 1987 also is included in the report. (USGS)
A new North American fire scar network for reconstructing historical pyrogeography, 1600-1900 AD
Donald A. Falk; Thomas Swetnam; Thomas Kitzberger; Elaine Sutherland; Peter Brown; Erica Bigio; Matthew Hall
2013-01-01
The Fire and Climate Synthesis (FACS) project is a collaboration of about 50 fire ecologists to compile and synthesize fire and climate data for western North America. We have compiled nearly 900 multi-century fire-scar based fire histories from the western United States, Canada, and Mexico. The resulting tree-ring based fire history is the largest and most spatially...
NASA Technical Reports Server (NTRS)
Borchardt, G. C.
1994-01-01
The Simple Tool for Automated Reasoning program (STAR) is an interactive, interpreted programming language for the development and operation of artificial intelligence (AI) application systems. STAR provides an environment for integrating traditional AI symbolic processing with functions and data structures defined in compiled languages such as C, FORTRAN and PASCAL. This type of integration occurs in a number of AI applications including interpretation of numerical sensor data, construction of intelligent user interfaces to existing compiled software packages, and coupling AI techniques with numerical simulation techniques and control systems software. The STAR language was created as part of an AI project for the evaluation of imaging spectrometer data at NASA's Jet Propulsion Laboratory. Programming in STAR is similar to other symbolic processing languages such as LISP and CLIP. STAR includes seven primitive data types and associated operations for the manipulation of these structures. A semantic network is used to organize data in STAR, with capabilities for inheritance of values and generation of side effects. The AI knowledge base of STAR can be a simple repository of records or it can be a highly interdependent association of implicit and explicit components. The symbolic processing environment of STAR may be extended by linking the interpreter with functions defined in conventional compiled languages. These external routines interact with STAR through function calls in either direction, and through the exchange of references to data structures. The hybrid knowledge base may thus be accessed and processed in general by either side of the application. STAR is initially used to link externally compiled routines and data structures. It is then invoked to interpret the STAR rules and symbolic structures. In a typical interactive session, the user enters an expression to be evaluated, STAR parses the input, evaluates the expression, performs any file input/output required, and displays the results. The STAR interpreter is written in the C language for interactive execution. It has been implemented on a VAX 11/780 computer operating under VMS, and the UNIX version has been implemented on a Sun Microsystems 2/170 workstation. STAR has a memory requirement of approximately 200K of 8 bit bytes, excluding externally compiled functions and application-dependent symbolic definitions. This program was developed in 1985.
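The external-linking mechanism described above can be illustrated generically: a table of function pointers lets an interpreter dispatch by name to compiled C routines. The names and calling convention below are illustrative only and are not STAR's actual interface.

```c
/* Generic sketch of an interpreter calling externally compiled C
 * routines through a registration table of function pointers.
 * Illustrative only; not STAR's real API. */
#include <stdio.h>
#include <string.h>

typedef double (*ext_fn)(double);

struct ext_entry { const char *name; ext_fn fn; };

static double calibrate(double raw) { return 0.5 * raw + 1.0; }  /* a compiled numeric routine */

static struct ext_entry table[] = { { "calibrate", calibrate } };

/* interpreter-side dispatch: look the symbol up by name and apply it */
static double call_external(const char *name, double arg) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].name, name) == 0)
            return table[i].fn(arg);
    return 0.0;  /* unknown symbol */
}

int main(void) {
    printf("%f\n", call_external("calibrate", 4.0));  /* prints 3.000000 */
    return 0;
}
```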
From Sea to Sea: Canada's Three Oceans of Biodiversity
Archambault, Philippe; Snelgrove, Paul V. R.; Fisher, Jonathan A. D.; Gagnon, Jean-Marc; Garbary, David J.; Harvey, Michel; Kenchington, Ellen L.; Lesage, Véronique; Levesque, Mélanie; Lovejoy, Connie; Mackas, David L.; McKindsey, Christopher W.; Nelson, John R.; Pepin, Pierre; Piché, Laurence; Poulin, Michel
2010-01-01
Evaluating and understanding biodiversity in marine ecosystems are both necessary and challenging for conservation. This paper compiles and summarizes current knowledge of the diversity of marine taxa in Canada's three oceans while recognizing that this compilation is incomplete and will change in the future. That Canada has the longest coastline in the world and incorporates distinctly different biogeographic provinces and ecoregions (e.g., temperate through ice-covered areas) constrains this analysis. The taxonomic groups presented here include microbes, phytoplankton, macroalgae, zooplankton, benthic infauna, fishes, and marine mammals. The minimum number of species or taxa compiled here is 15,988 for the three Canadian oceans. However, this number clearly underestimates in several ways the total number of taxa present. First, there are significant gaps in the published literature. Second, the diversity of many habitats has not been compiled for all taxonomic groups (e.g., intertidal rocky shores, deep sea), and data compilations are based on short-term, directed research programs or longer-term monitoring activities with limited spatial resolution. Third, the biodiversity of large organisms is well known, but this is not true of smaller organisms. Finally, the greatest constraint on this summary is the willingness and capacity of those who collected the data to make it available to those interested in biodiversity meta-analyses. Confirmation of identities and intercomparison of studies are also constrained by the disturbing rate of decline in the number of taxonomists and systematists specializing on marine taxa in Canada. This decline is mostly the result of retirements of current specialists and to a lack of training and employment opportunities for new ones. Considering the difficulties encountered in compiling an overview of biogeographic data and the diversity of species or taxa in Canada's three oceans, this synthesis is intended to serve as a biodiversity baseline for a new program on marine biodiversity, the Canadian Healthy Ocean Network. A major effort needs to be undertaken to establish a complete baseline of Canadian marine biodiversity of all taxonomic groups, especially if we are to understand and conserve this part of Canada's natural heritage. PMID:20824204
DOT National Transportation Integrated Search
2014-05-01
The federally mandated materials clearance process requires state transportation agencies to subject all construction field samples to quality control/assurance testing in order to pass standardized state inspections....
Method and apparatus for data decoding and processing
Hunter, Timothy M.; Levy, Arthur J.
1992-01-01
A system and technique are disclosed for automatically controlling the decoding and digitization of an analog tape. The system includes the use of a tape data format which includes a plurality of digital codes recorded on the analog tape in a predetermined proximity to a period of recorded analog data. The codes associated with each period of analog data include digital identification codes prior to the analog data, a start of data code coincident with the analog data recording, and an end of data code subsequent to the associated period of recorded analog data. The formatted tape is decoded in a processing and digitization system which includes an analog tape player coupled to a digitizer to transmit analog information from the recorded tape over at least one channel to the digitizer. At the same time, the tape player is coupled to a decoder and interface system which detects and decodes the digital codes on the tape corresponding to each period of recorded analog data and controls tape movement and digitizer initiation in response to preprogrammed modes. A host computer is also coupled to the decoder and interface system and the digitizer and programmed to initiate specific modes of data decoding through the decoder and interface system including the automatic compilation and storage of digital identification information and digitized data for the period of recorded analog data corresponding to the digital identification data, compilation and storage of selected digitized data representing periods of recorded analog data, and compilation of digital identification information related to each of the periods of recorded analog data.
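A hedged sketch of parsing such a framed stream: identification bytes, a start-of-data code, data, then an end-of-data code. The specific code values and record layout below are arbitrary choices for illustration, not the format claimed in the patent.

```c
/* Illustrative parser for a framed record stream of the kind described
 * above.  The code values (0x01/0x02) and layout are arbitrary. */
#include <stdio.h>

#define START_CODE 0x01
#define END_CODE   0x02

static void parse_stream(const unsigned char *buf, int len) {
    int i = 0;
    while (i < len) {
        int id = buf[i++];                       /* identification byte */
        if (i < len && buf[i] == START_CODE) {
            i++;                                 /* skip start-of-data code */
            printf("record %d:", id);
            while (i < len && buf[i] != END_CODE)
                printf(" %02X", buf[i++]);       /* digitize/store the data */
            if (i < len) i++;                    /* skip end-of-data code */
            printf("\n");
        }
    }
}

int main(void) {
    unsigned char tape[] = { 0x10, START_CODE, 0xAA, 0xBB, END_CODE,
                             0x11, START_CODE, 0xCC, END_CODE };
    parse_stream(tape, (int)(sizeof tape));
    return 0;
}
```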
NASA Technical Reports Server (NTRS)
Carter, W. D. (Principal Investigator)
1973-01-01
The author has identified the following significant results. ERTS-1 data is ideally suited for small-scale geologic mapping and structural analysis of remote, inaccessible areas such as the Andes of South America. The synoptic view of large areas, low sun-angle and multispectral nature of the images provide the right ingredients for improving existing geologic and other maps of the regions. In most areas it has been possible to compile geologic, drainage, and cultural interpretive overlays to individual scenes mainly using MSS bands 4, 5, and 7. A test image mosaic using MSS band 6 is being compiled for Test Area 7 (La Paz, Bolivia). It will be at a scale of 1:1,000,000 and cover 4 x 6 degrees of latitude and longitude and will serve as a compilation base on which to join the overlays. Repetitive data shows changes in river channels and sedimentation plumes, changes in lake shorelines, and surface moisture distribution. Vegetation and snow line changes in the Andes have been recognized. A year of seasonal data, however, has not yet been acquired due to tape recorder failure.
Shah, Sachin D.; Maltby, David R.
2010-01-01
The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, compiled salinity-related water-quality data and information in a geodatabase containing more than 6,000 sampling sites. The geodatabase was designed as a tool for water-resource management and includes readily available digital data sources from the U.S. Geological Survey, U.S. Environmental Protection Agency, New Mexico Interstate Stream Commission, Sustainability of semi-Arid Hydrology and Riparian Areas, Paso del Norte Watershed Council, numerous other State and local databases, and selected databases maintained by the University of Arizona and New Mexico State University. Salinity information was compiled for an approximately 26,000-square-mile area of the Rio Grande Basin from the Rio Arriba-Sandoval County line, New Mexico, to Presidio, Texas. The geodatabase relates the spatial location of sampling sites with salinity-related water-quality data reported by multiple agencies. The sampling sites are stored in a geodatabase feature class; each site is linked by a relationship class to the corresponding sample and results stored in data tables.
Map and database of Quaternary faults in Venezuela and its offshore regions
Audemard, F.A.; Machette, M.N.; Cox, J.W.; Dart, R.L.; Haller, K.M.
2000-01-01
As part of the International Lithosphere Program’s “World Map of Major Active Faults,” the U.S. Geological Survey is assisting in the compilation of a series of digital maps of Quaternary faults and folds in Western Hemisphere countries. The maps show the locations, ages, and activity rates of major earthquake-related features such as faults and fault-related folds. They are accompanied by databases that describe these features and document current information on their activity in the Quaternary. The project is a key part of the Global Seismic Hazards Assessment Program (ILP Project II-0) for the International Decade for Natural Hazard Disaster Reduction. The project is sponsored by the International Lithosphere Program and funded by the USGS’s National Earthquake Hazards Reduction Program. The primary elements of the project are general supervision and interpretation of geologic/tectonic information, data compilation and entry for the fault catalog, database design and management, and digitization and manipulation of data in ARC/INFO. For the compilation of data, we engaged experts in Quaternary faulting, neotectonics, paleoseismology, and seismology.
Aspects of Voyager photogrammetry
NASA Technical Reports Server (NTRS)
Wu, Sherman S. C.; Schafer, Francis J.; Jordan, Raymond; Howington, Annie-Elpis
1987-01-01
In January 1986, Voyager 2 took a series of pictures of Uranus and its satellites with the Imaging Science System (ISS) on board the spacecraft. Based on six stereo images from the ISS narrow-angle camera, a topographic map was compiled of the Southern Hemisphere of Miranda, one of Uranus' moons. Assuming a spherical figure, a 20-km surface relief is shown on the map. With three additional images from the ISS wide-angle camera, a control network of Miranda's Southern Hemisphere was established by analytical photogrammetry, producing 88 ground points for the control of multiple-model compilation on the AS-11AM analytical stereoplotter. Digital terrain data from the topographic map of Miranda have also been produced. By combining these data and the image data from the Voyager 2 mission, perspective views or even a movie of the mapped area can be made. The application of these newly developed techniques to Voyager 1 imagery, which includes a few overlapping pictures of Io and Ganymede, permits the compilation of contour maps or topographic profiles of these bodies on the analytical stereoplotters.
The cartography of Venus with Magellan data
NASA Technical Reports Server (NTRS)
Kirk, R. L.; Morgan, H. F.; Russell, J. F.
1993-01-01
Maps of Venus based on Magellan data are being compiled at 1:50,000,000, 1:5,000,000 and 1:1,500,000 scales. Topographic contour lines based on radar altimetry data are overprinted on the image maps, along with feature nomenclature. Map controls are based on existing knowledge of the spacecraft orbit; photogrammetric triangulation, a traditional basis for geodetic control for bodies where framing cameras were used, is not feasible with the radar images of Venus. Preliminary synthetic aperture radar (SAR) image maps have some data gaps and cosmetic inconsistencies, which will be corrected on final compilations. Eventual revision of geodetic controls and of the adopted Venusian spin-axis location will result in geometric adjustments, particularly on large-scale maps.
Palmquist, Emily C.; Ralston, Barbara E.; Sarr, Daniel; Merritt, David; Shafroth, Patrick B.; Scott, Julian
2017-01-01
Trait-based approaches to vegetation analyses are becoming more prevalent in studies of riparian vegetation dynamics, including responses to flow regulation, groundwater pumping, and climate change. These analyses require species trait data compiled from the literature and floras or original field measurements. Gathering such data makes trait-based research time intensive at best and impracticable in some cases. To support trait-based analysis of vegetation along the Colorado River through Grand Canyon, a data set of 20 biological traits and ecological affinities for 179 species occurring in that study area was compiled. This diverse flora shares species with many riparian areas in the western USA and includes species that occur across a wide moisture gradient. Data were compiled from published scientific papers, unpublished reports, plant fact sheets, existing trait databases, regional floras, and plant guides. Data for ordinal environmental tolerances were more readily available than were quantitative traits. More publicly available data are needed for traits of both common and rare southwestern U.S. plant species to facilitate comprehensive, trait-based research. The trait data set is free to use and can be downloaded from ScienceBase: https://www.sciencebase.gov/catalog/item/58af41dee4b01ccd54f9f2ff and https://dx.doi.org/10.5066/F7QV3JN1
Guidelines for preparation of State water-use estimates for 2005
Hutson, Susan S.
2007-01-01
The U.S. Geological Survey (USGS) has estimated the use of water in the United States at 5-year intervals since 1950. This report describes the water-use categories and data elements required for the 2005 national water-use compilation conducted as part of the USGS National Water Use Information Program. The report identifies sources of water-use information, provides standard methods and techniques for estimating water use at the county level, and outlines steps for preparing documentation for the United States, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. As part of this USGS program to document water use on a national scale for the year 2005, estimates of water withdrawals for the categories of public supply, self-supplied domestic, industrial, irrigation, and thermoelectric power at the county level are prepared for each State using the guidelines in this report. Estimates of water withdrawals for aquaculture, livestock, and mining are prepared for each State using a county-based national model, although study chiefs in each State have the option of producing independent county estimates of water withdrawals for these categories. Estimates of deliveries of water from public supplies for domestic use by county also will be prepared for each State for 2005. As a result, domestic water use can be determined for each State by combining self-supplied domestic withdrawals and publicly supplied domestic deliveries. Fresh ground-water and surface-water estimates will be prepared for all categories of use; and saline ground-water and surface-water estimates by county will be prepared for the categories of public supply, industrial, and thermoelectric power. Power production for thermoelectric power will be compiled for 2005. If data are available, reclaimed wastewater use will be compiled for the industrial and irrigation categories. Optional water-use categories are commercial, hydroelectric power, and wastewater treatment. Optional data elements are public-supply deliveries to commercial, industrial, and thermoelectric-power users; consumptive use; irrigation conveyance loss; and number of facilities. Aggregation of water-use data by eight-digit hydrologic cataloging unit and by principal aquifer also is optional. Water-use data compiled by the States will be stored in the USGS Aggregate Water-Use Data System (AWUDS). This database is a comprehensive aggregated database designed to store both mandatory and optional data elements. AWUDS contains several routines that can be used for quality assurance and quality control of the data, and produces tables of water-use data compiled for 1985, 1990, 1995, and 2000.
VizieR Online Data Catalog: Rotation measures of radio point sources (Xu+, 2014)
NASA Astrophysics Data System (ADS)
Xu, J.; Han, J.-L.
2015-04-01
We compiled a catalog of Faraday rotation measures (RMs) for 4553 extragalactic radio point sources published in the literature. These RMs were derived from multi-frequency polarization observations. The RM data are compared to those in the NRAO VLA Sky Survey (NVSS) RM catalog. We reveal a systematic uncertainty of about 10.0+/-1.5 rad/m2 in the NVSS RM catalog. The Galactic foreground RM is calculated through a weighted averaging method by using the compiled RM catalog together with the NVSS RM catalog, with careful consideration of uncertainties in the RM data. The data from the catalog and the interface for the Galactic foreground RM calculations are publicly available on the webpage: http://zmtt.bao.ac.cn/RM/ . (2 data files).
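The weighted-averaging step described above lends itself to a short illustration. The sketch below is not the authors' code; it simply shows an inverse-variance weighted mean of the RMs of sources near a target direction, with the search radius, array inputs, and function name all assumed for the example.

```python
import numpy as np

def foreground_rm(rm, rm_err, sep_deg, radius_deg=3.0):
    """Inverse-variance weighted mean of source RMs within radius_deg of a direction."""
    rm, rm_err, sep_deg = map(np.asarray, (rm, rm_err, sep_deg))
    sel = sep_deg < radius_deg
    if not np.any(sel):
        return np.nan, np.nan
    w = 1.0 / rm_err[sel] ** 2                   # weight each RM by 1/sigma^2
    mean = np.sum(w * rm[sel]) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))               # formal error of the weighted mean
    return mean, err

# made-up example: four sources at various separations from the target direction
print(foreground_rm([12.0, 18.5, -3.0, 15.2], [2.0, 5.0, 9.0, 3.0], [0.5, 1.2, 4.8, 2.1]))
```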
Aquatic toxicity information retrieval data base (AQUIRE). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The purpose of AQUIRE is to provide scientists and managers quick access to a comprehensive, systematic, computerized compilation of aquatic toxicity data. Scientific papers published both nationally and internationally on the toxicity of chemicals to aquatic organisms and plants are collected and reviewed for AQUIRE. Independently compiled data files that meet AQUIRE parameter and quality assurance criteria are also included. Relevant toxicity test results and related test information for any individual chemical analyzed using freshwater and marine organisms under laboratory and field conditions are included in the database. During 1992 and early 1993, nine data updates were made to the AQUIRE system. AQUIRE now contains 109,338 individual aquatic toxicity test results for 5,159 chemicals, 2,429 organisms, and over 160 endpoints reviewed from 7,517 publications.
du Bray, Edward A.; Day, Warren C.; Meighan, Corey J.
2018-04-16
The purpose of this report is to present recently acquired as well as previously published geochemical and modal petrographic data for igneous rocks in the St. Francois Mountains, southeast Missouri, as part of an ongoing effort to understand the regional geology and ore deposits of the Mesoproterozoic basement rocks of southeast Missouri, USA. The report includes geochemical data that are (1) newly acquired by the U.S. Geological Survey and (2) compiled from numerous sources published during the last fifty-five years. These data are required for ongoing petrogenetic investigations of these rocks. Voluminous Mesoproterozoic igneous rocks in the St. Francois Mountains of southeast Missouri constitute the basement buried beneath Paleozoic sedimentary rock that is over 600 meters thick in places. The Mesoproterozoic rocks of southeast Missouri represent a significant component of approximately 1.4 billion-year-old (Ga) igneous rocks that crop out extensively in North America along the southeast margin of Laurentia; subsequent researchers suggested that iron oxide-copper deposits in the St. Francois Mountains are genetically associated with ca. 1.4 Ga magmatism in this region. The geochemical and modal data sets described herein were compiled to support investigations concerning the tectonic setting and petrologic processes responsible for the associated magmatism.
A global compilation of coral sea-level benchmarks: Implications and new challenges
NASA Astrophysics Data System (ADS)
Medina-Elizalde, Martín
2013-01-01
I present a quality-controlled compilation of sea-level data from U-Th dated corals, encompassing 30 studies of 13 locations around the world. The compilation contains relative sea level (RSL) data from each location based on both conventional and open-system U-Th ages. I have applied a commonly used age quality control criterion based on the initial 234U/238U activity ratios of corals in order to select reliable ages and to reconstruct sea level histories for the last 150,000 yr. This analysis reveals scatter of RSL estimates among coeval coral benchmarks both within individual locations and between locations, particularly during Marine Isotope Stage (MIS) 5a and the glacial inception following the last interglacial. The character of data scatter during these time intervals implies that uncertainties still exist regarding tectonics, glacio-isostasy, U-series dating, and/or coral position. To elucidate robust underlying patterns, with confidence limits, I performed a Monte Carlo-style statistical analysis of the compiled coral data considering appropriate age and sea-level uncertainties. By its nature, such an analysis has the tendency to smooth/obscure millennial-scale (and finer) details that may be important in individual datasets, and favour the major underlying patterns that are supported by all datasets. This statistical analysis is thus useful for illustrating major trends that are statistically robust ('what we know'), trends that are suggested but still are supported by few data ('what we might know, subject to addition of more supporting data and improved corrections'), and which patterns/data are clear outliers ('unlikely to be realistic given the rest of the global data and possibly needing further adjustments'). Prior to the last glacial maximum and with the possible exception of the 130-120 ka period, available coral data generally have insufficient temporal resolution and unexplained scatter, which hinders identification of a well-defined pattern with usefully narrow confidence limits. This analysis thus provides a framework that objectively identifies critical targets for new data collection, improved corrections, and integration of coral data with independent, stratigraphically continuous methods of sea-level reconstruction.
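As a rough illustration of the Monte Carlo-style analysis mentioned above (not the paper's actual procedure), one can repeatedly perturb each coral benchmark within its age and sea-level uncertainties and summarize the ensemble per time bin; the bin width, iteration count, and input layout below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_rsl(age_ka, age_err, rsl_m, rsl_err, n_iter=2000, bin_ka=2.0):
    """Median RSL curve and 95% envelope from repeatedly perturbed coral benchmarks."""
    age_ka, age_err, rsl_m, rsl_err = map(np.asarray, (age_ka, age_err, rsl_m, rsl_err))
    edges = np.arange(0.0, 150.0 + bin_ka, bin_ka)
    centers = 0.5 * (edges[:-1] + edges[1:])
    stacks = np.full((n_iter, centers.size), np.nan)
    for i in range(n_iter):
        a = rng.normal(age_ka, age_err)          # perturb ages within uncertainty
        r = rng.normal(rsl_m, rsl_err)           # perturb sea-level estimates
        idx = np.digitize(a, edges) - 1
        for j in range(centers.size):
            hit = idx == j
            if np.any(hit):
                stacks[i, j] = np.mean(r[hit])
    lo, med, hi = np.nanpercentile(stacks, [2.5, 50.0, 97.5], axis=0)
    return centers, lo, med, hi
```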
Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.
2013-01-01
SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
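For illustration of the third analysis step (identifying sequences with microsatellites that conform to user-specified parameters), here is a minimal, hypothetical Python sketch; it is not SSR_pipeline's own code, and the motif lengths and minimum repeat count are assumed parameters.

```python
import re

def find_ssrs(seq, motif_lengths=(2, 3, 4), min_repeats=5):
    """Return (start, motif, n_repeats) for perfect tandem repeats in seq."""
    hits = []
    for k in motif_lengths:
        # ([ACGT]{k}) captures a candidate motif; \1{min_repeats-1,} requires it to repeat
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1))
        for m in pattern.finditer(seq.upper()):
            motif = m.group(1)
            hits.append((m.start(), motif, len(m.group(0)) // k))
    return hits

print(find_ssrs("TTGACACACACACACAGGT"))   # detects the (AC)n repeat
```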
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-01-17
This library is an implementation of the Sparse Approximate Matrix Multiplication (SpAMM) algorithm. It provides a matrix data type and an approximate matrix product, which exhibits linear-scaling computational complexity for matrices with decay. The product error and the performance of the multiply can be tuned by choosing an appropriate tolerance. The library can be compiled for serial execution or parallel execution on shared memory systems with an OpenMP-capable compiler.
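The following is a minimal, illustrative sketch of the SpAMM idea rather than this library's implementation: the product is built recursively from sub-blocks, and any sub-product whose norm bound falls below the tolerance is skipped, which is what yields linear scaling for matrices with decay. The leaf size and tolerance are assumptions.

```python
import numpy as np

def spamm(a, b, tol=1e-8, leaf=64):
    """Approximate a @ b; sub-products with a negligible norm bound are skipped."""
    if np.linalg.norm(a) * np.linalg.norm(b) <= tol:
        return np.zeros((a.shape[0], b.shape[1]))      # contribution is tiny: skip it
    if max(a.shape + b.shape) <= leaf:
        return a @ b                                   # dense product at the leaves
    r, c, m = a.shape[0], b.shape[1], a.shape[1]
    out = np.zeros((r, c))
    for i in (slice(0, r // 2), slice(r // 2, r)):
        for j in (slice(0, c // 2), slice(c // 2, c)):
            for k in (slice(0, m // 2), slice(m // 2, m)):
                out[i, j] += spamm(a[i, k], b[k, j], tol, leaf)
    return out

# made-up example: a matrix whose entries decay away from the diagonal
n = 256
a = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
print(np.max(np.abs(spamm(a, a, tol=1e-6) - a @ a)))   # small approximation error
```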
Valve technology: A compilation
NASA Technical Reports Server (NTRS)
1971-01-01
A technical compilation on the types, applications and modifications to certain valves is presented. Data cover the following: (1) valves that feature automatic response to stimuli (thermal, electrical, fluid pressure, etc.), (2) modified valves changed by redesign of components to increase initial design effectiveness or give the item versatility beyond its basic design capability, and (3) special purpose valves with limited application as presented, but lending themselves to other uses with minor changes.
Geochronology Database for Central Colorado
Klein, T.L.; Evans, K.V.; deWitt, E.H.
2010-01-01
This database is a compilation of published and some unpublished isotopic and fission track age determinations in central Colorado. The compiled area extends from the southern Wyoming border to the northern New Mexico border and from approximately the longitude of Denver on the east to Gunnison on the west. Data for the tephrochronology of Pleistocene volcanic ash, carbon-14, Pb-alpha, common-lead, and U-Pb determinations on uranium ore minerals have been excluded.
High-Performance Design Patterns for Modern Fortran
Haveraaen, Magne; Morris, Karla; Rouson, Damian; ...
2015-01-01
This paper presents ideas for using coordinate-free numerics in modern Fortran to achieve code flexibility in the partial differential equation (PDE) domain. We also show how Fortran, over the last few decades, has changed to become a language well-suited for state-of-the-art software development. Fortran’s new coarray distributed data structure, the language’s class mechanism, and its side-effect-free, pure procedure capability provide the scaffolding on which we implement HPC software. These features empower compilers to organize parallel computations with efficient communication. We present some programming patterns that support asynchronous evaluation of expressions comprised of parallel operations on distributed data. We implemented these patterns using coarrays and the message passing interface (MPI). We compared the codes’ complexity and performance. The MPI code is much more complex and depends on external libraries. The MPI code on Cray hardware using the Cray compiler is 1.5–2 times faster than the coarray code on the same hardware. The Intel compiler implements coarrays atop Intel’s MPI library with the result apparently being 2–2.5 times slower than manually coded MPI despite exhibiting nearly linear scaling efficiency. As compilers mature and further improvements to coarrays come in Fortran 2015, we expect this performance gap to narrow.
Aquatic Toxicity Information Retrieval Data Base (ACQUIRE). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The purpose of ACQUIRE is to provide scientists and managers quick access to a comprehensive, systematic, computerized compilation of aquatic toxicity data. Scientific papers published both nationally and internationally on the toxicity of chemicals to aquatic organisms and plants are collected and reviewed for ACQUIRE. Independently compiled data files that meet ACQUIRE parameter and quality assurance criteria are also included. Selected toxicity test results and related testing information for any individual chemical from laboratory and field aquatic toxicity effects are included for tests with freshwater and marine organisms. The total number of data records in ACQUIRE is now over 105,300. This includes data from 6000 references, for 5200 chemicals and 2400 test species. A major data file, Acute Toxicity of Organic Chemicals (ATOC), has been incorporated into ACQUIRE. The ATOC file contains laboratory acute test data on 525 organic chemicals using juvenile fathead minnows.
7 CFR 989.75 - Confidential information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Secretary, information and data of a general nature, compilations of data affecting handlers as a group, and any data affecting one or more handlers, so long as the identity of the individual handlers involved...
7 CFR 989.75 - Confidential information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Secretary, information and data of a general nature, compilations of data affecting handlers as a group, and any data affecting one or more handlers, so long as the identity of the individual handlers involved...
7 CFR 989.75 - Confidential information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Secretary, information and data of a general nature, compilations of data affecting handlers as a group, and any data affecting one or more handlers, so long as the identity of the individual handlers involved...
7 CFR 989.75 - Confidential information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Secretary, information and data of a general nature, compilations of data affecting handlers as a group, and any data affecting one or more handlers, so long as the identity of the individual handlers involved...
NASA Astrophysics Data System (ADS)
Melle, W.; Runge, J. A.; Head, E.; Plourde, S.; Castellani, C.; Licandro, P.; Pierson, J.; Jónasdóttir, S. H.; Johnson, C.; Broms, C.; Debes, H.; Falkenhaug, T.; Gaard, E.; Gislason, A.; Heath, M. R.; Niehoff, B.; Nielsen, T. G.; Pepin, P.; Stenevik, E. K.; Chust, G.
2015-08-01
Here we present a new, pan-North-Atlantic compilation of data on key mesozooplankton species, including the most important copepod, Calanus finmarchicus. Distributional data of eight representative zooplankton taxa, from recent (2000-2009) Continuous Plankton Recorder data, are presented, along with basin-scale data of the phytoplankton colour index. Then we present a compilation of data on C. finmarchicus, including observations of abundance, demography, egg production and female size, with accompanying data on temperature and chlorophyll. This is a contribution by Canadian, European and US scientists and their institutions: http://doi.pangaea.de/10.1594/PANGAEA.820732, http://doi.pangaea.de/10.1594/PANGAEA.824423, http://doi.pangaea.de/10.1594/PANGAEA.828393 (please also see Melle et al., 2013; Castellani and Licandro, 2013; Jónasdóttir et al., 2014).
Previous experience in manned space flight: A survey of human factors lessons learned
NASA Technical Reports Server (NTRS)
Chandlee, George O.; Woolford, Barbara
1993-01-01
Previous experience in manned space flight programs can be used to compile a data base of human factors lessons learned for the purpose of developing aids in the future design of inhabited spacecraft. The objectives are to gather information available from relevant sources, to develop a taxonomy of human factors data, and to produce a data base that can be used in the future for those people involved in the design of manned spacecraft operations. A study is currently underway at the Johnson Space Center with the objective of compiling, classifying, and summarizing relevant human factors data bearing on the lessons learned from previous manned space flights. The research reported defines sources of data, methods for collection, and proposes a classification for human factors data that may be a model for other human factors disciplines.
ADMAP-2: The next-generation Antarctic magnetic anomaly map
NASA Astrophysics Data System (ADS)
Golynsky, Alexander; Golynsky, Dmitry; Ferraccioli, Fausto; Jordan, Tom; Damaske, Detlef; Blankenship, Don; Holt, Jack; Young, Duncan; Ivanov, Sergey; Kiselev, Alexander; Jokat, Wilfried; Gohl, Karsten; Eagles, Graeme; Bell, Robin; Armadillo, Egidio; Bozzo, Emanuelle; Caneva, Giorgio; Finn, Carol; Forsberg, Rene; Aitken, Alan
2017-04-01
The Antarctic Digital Magnetic Anomaly Project compiled the first international magnetic anomaly map of the Antarctic region south of 60°S (ADMAP-1) some six years after its 1995 launch (Golynsky et al., 2001; Golynsky et al., 2007; von Frese et al., 2007). This magnetic anomaly compilation provided new insights into the structure and evolution of Antarctica, including its Proterozoic-Archaean cratons, Proterozoic-Palaeozoic orogens, Palaeozoic-Cenozoic magmatic arc systems, continental rift systems and rifted margins, large igneous provinces and the surrounding oceanic gateways. The international working group produced the ADMAP-1 database from more than 1.5 million line-kilometres of terrestrial, airborne, marine and satellite magnetic observations collected during the IGY 1957-58 through 1999. Since the publication of the first magnetic anomaly map, the international geomagnetic community has acquired more than 1.9 million line-km of new airborne and marine data. This implies that the amount of magnetic anomaly data over the Antarctic continent has more than doubled. These new data provide important constraints on the geology of the enigmatic Gamburtsev Subglacial Mountains and Prince Charles Mountains, Wilkes Land, Dronning Maud Land, and other largely unexplored Antarctic areas (Ferraccioli et al., 2011, Aitken et al., 2014, Mieth & Jokat, 2014, Golynsky et al., 2013). The processing of the recently acquired data involved quality assessments by careful statistical analysis of the crossover errors. All magnetic data used in the ADMAP-2 compilation were delivered as profiles, although several of them were in raw form. Some datasets were decimated or upward continued to altitudes of 4 km or higher with the higher frequency geological signals smoothed out. The line data used for the ADMAP-1 compilation were reprocessed for obvious errors and residual corrugations. The new near-surface magnetic data were corrected for the international geomagnetic reference field and diurnal effects, edited for high-frequency errors, and levelled to minimize line-correlated noise. The magnetic anomaly data collected mainly in the 21st century clearly cannot be simply stitched together with the previous surveys. Thus, mutual levelling adjustments were required to accommodate overlaps in these surveys. The final compilation merged all the available aeromagnetic and marine grids to create the new composite grid of the Antarctic with minimal mismatch along the boundaries between the datasets. Regional coverage gaps in the composite grid will be filled with anomaly estimates constrained by both the near-surface data and satellite magnetic observations taken mainly from the CHAMP and Swarm missions. Magnetic data compilations are providing tantalizing new views into regional-scale subglacial geology and crustal architecture in the interior of East and West Antarctica. The ADMAP-2 map provides a new geophysical foundation to better understand the geological structure and tectonic history of Antarctica and surrounding marine areas. In particular, it will provide improved constraints on the lithospheric transition of Antarctica to its oceanic basins, and thus enable improved interpretation of the geodynamic evolution of the Antarctic lithosphere that was a key component in the assembly and break-up of the Rodinia and Gondwana supercontinents. This work was supported by the Korea Polar Research Institute.
A compilation of quantitative functional traits for marine and freshwater crustacean zooplankton.
Hébert, Marie-Pier; Beisner, Beatrix E; Maranger, Roxane
2016-04-01
This data compilation synthesizes 8609 individual observations and ranges of 13 traits from 201 freshwater and 191 marine crustacean taxa belonging to either Copepoda or Cladocera, two important zooplankton groups across all major aquatic habitats. Most data were gathered from the literature, with the balance being provided by zooplankton ecologists. With the aim of more fully assessing zooplankton effects on elemental processes such as nitrogen (N), phosphorus (P) and carbon (C) stocks and fluxes in aquatic ecosystems, this data set provides information on the following traits: body size (length and mass), trophic group, elemental and biochemical corporal composition (N, P, C, lipid and protein content), respiration rates, N- and P-excretion rates, as well as stoichiometric ratios. Although relationships for zooplankton metabolism as a function of body mass or requirements have been explored in the past three decades, data have not been systematically compiled nor examined from an integrative and large-scale perspective across crustacean taxa and habitat types. While this contribution likely represents the most comprehensive assembly of traits for both marine and freshwater species, this data set is not exhaustive either. As a result, this compilation also identifies knowledge gaps: a fact that should encourage researchers to disclose information they may have to help complete such databases. This trait matrix is made available for the first time in this data paper; prior to its release, the data set has been analyzed in a meta-analysis published as a companion paper. This data set should prove extremely valuable for aquatic ecologists for trait-based characterization of plankton community structure as well as biogeochemical modeling. These data are also well-suited for deriving shortcut relationships that predict more difficult to measure trait values, most of which can be directly related to ecosystem properties (i.e., effect traits), from simpler traits (e.g., body size), and for exploring patterns of trait variation within and amongst taxonomic units or ecosystem types. Overall, this data set is likely to provide new insights into the functional structure of zooplankton communities and increase our mechanistic understanding of the influence of these pivotal organisms on aquatic ecosystems. © 2016 by the Ecological Society of America.
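As an illustration of the "shortcut relationships" mentioned above (predicting a harder-to-measure trait from a simpler one), a log-log allometric regression of respiration rate on body mass could look like the sketch below; the numbers are invented and the power-law form is only one common choice, not a result from this data set.

```python
import numpy as np

# hypothetical paired observations: dry mass (ug) and respiration rate (uL O2 / h)
mass_ug = np.array([2.0, 5.0, 12.0, 40.0, 90.0])
resp = np.array([0.05, 0.11, 0.22, 0.60, 1.10])

# fit log10(R) = intercept + slope * log10(M), i.e. R = a * M^b
slope, intercept = np.polyfit(np.log10(mass_ug), np.log10(resp), 1)

def predict_resp(mass):
    return 10.0 ** (intercept + slope * np.log10(mass))

print(f"R ~ {10**intercept:.3f} * M^{slope:.2f}; predicted R(20 ug) = {predict_resp(20):.2f}")
```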
Radiation data definitions and compilation for equipment qualification data bank
NASA Technical Reports Server (NTRS)
Bouquet, F. L.; Winslow, J. W.
1986-01-01
Dose definitions, physical properties, mechanical properties, electrical properties, and particle definitions are listed for insulators and dielectrics, elastomeric seals and gaskets, lubricants, adhesives, and coatings.
Transportation networks : data, analysis, methodology development and visualization.
DOT National Transportation Integrated Search
2007-12-29
This project provides data compilation, analysis methodology and visualization methodology for the current network data assets of the Alabama Department of Transportation (ALDOT). This study finds that ALDOT is faced with a considerable number of...
Applications and use of transportation data
DOT National Transportation Integrated Search
1979-01-01
This research paper is a compilation of seven documents which focus on the application and use of transportation data. The following papers are included: "Field data collection and sampling procedures for measuring regional vehicle classification and...
Gazan, Rozenn; Barré, Tangui; Perignon, Marlène; Maillot, Matthieu; Darmon, Nicole; Vieux, Florent
2018-01-01
The holistic approach required to assess diet sustainability is hindered by the lack of comprehensive databases compiling relevant food metrics. Those metrics are generally scattered in different data sources with various levels of aggregation, hampering their matching. The objective was to develop a general methodology to compile food metrics describing diet sustainability dimensions into a single database and to apply it to the French context. Each step of the methodology is detailed: indicator and food metric identification and selection, food list definition, food matching and value assignment. For the French case, nutrient and contaminant content, bioavailability factors, distribution of dietary intakes, portion sizes, food prices, greenhouse gas emission, acidification and marine eutrophication estimates were allocated to 212 commonly consumed generic foods. This generic database compiling 279 metrics will allow the simultaneous evaluation of the four dimensions of diet sustainability, namely the health, economic, social and environmental dimensions. Copyright © 2016 Elsevier Ltd. All rights reserved.
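A toy sketch of the matching-and-assignment steps described above (food list definition, food matching, value assignment) might look as follows; the source labels, metric names, and values are purely illustrative and are not taken from the French database.

```python
# metrics as reported by two hypothetical sources, each using its own food labels
nutrient_src = {"apple, raw": {"energy_kcal": 52}, "bread, wheat": {"energy_kcal": 265}}
ghge_src = {"apples": {"ghge_kgco2e_per_kg": 0.4}, "wheat bread": {"ghge_kgco2e_per_kg": 1.1}}

# food matching: correspondence from the generic food list to each source's label
matching = {
    "apple": {"nutrients": "apple, raw", "ghge": "apples"},
    "bread": {"nutrients": "bread, wheat", "ghge": "wheat bread"},
}

def compile_metrics(matching, nutrient_src, ghge_src):
    """Value assignment: merge each source's metrics onto the generic food list."""
    db = {}
    for food, labels in matching.items():
        row = {}
        row.update(nutrient_src.get(labels["nutrients"], {}))
        row.update(ghge_src.get(labels["ghge"], {}))
        db[food] = row
    return db

print(compile_metrics(matching, nutrient_src, ghge_src))
```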
Compiling software for a hierarchical distributed processing system
Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E
2013-12-31
Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.
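A minimal, hypothetical sketch of the scheme summarized above: the compiling node compiles what it receives, keeps the artifacts targeted at itself, and forwards to each next-tier node only the artifacts destined for that node or its descendants. The tree representation, function names, and the stand-in "compile" step are assumptions, not the patented implementation.

```python
def descendants(tree, node):
    """All nodes in the subtree rooted at node (excluding node itself)."""
    out = []
    for child in tree.get(node, []):
        out.append(child)
        out.extend(descendants(tree, child))
    return out

def distribute(tree, compiling_node, software):
    """software maps target node -> source; returns what is kept or forwarded."""
    compiled = {target: f"compiled({src})" for target, src in software.items()}
    kept = {compiling_node: compiled.get(compiling_node)}   # keep own artifacts
    for child in tree.get(compiling_node, []):
        wanted = {child, *descendants(tree, child)}
        subset = {t: c for t, c in compiled.items() if t in wanted}
        if subset:
            # in the real system this subset would be sent to child, which then
            # repeats the selection for its own next tier
            kept[child] = subset
    return kept

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
software = {"root": "r.c", "a1": "x.c", "b": "y.c"}
print(distribute(tree, "root", software))
```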
Program package for multicanonical simulations of U(1) lattice gauge theory-Second version
NASA Astrophysics Data System (ADS)
Bazavov, Alexei; Berg, Bernd A.
2013-03-01
A new version STMCMUCA_V1_1 of our program package is available. It eliminates compatibility problems of our Fortran 77 code, originally developed for the g77 compiler, with Fortran 90 and 95 compilers. New version program summary. Program title: STMC_U1MUCA_v1_1 Catalogue identifier: AEET_v1_1 Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html Programming language: Fortran 77 compatible with Fortran 90 and 95 Computers: Any capable of compiling and executing Fortran code Operating systems: Any capable of compiling and executing Fortran code RAM: 10 MB and up depending on lattice size used No. of lines in distributed program, including test data, etc.: 15059 No. of bytes in distributed program, including test data, etc.: 215733 Keywords: Markov chain Monte Carlo, multicanonical, Wang-Landau recursion, Fortran, lattice gauge theory, U(1) gauge group, phase transitions of continuous systems Classification: 11.5 Catalogue identifier of previous version: AEET_v1_0 Journal Reference of previous version: Computer Physics Communications 180 (2009) 2339-2347 Does the new version supersede the previous version?: Yes Nature of problem: Efficient Markov chain Monte Carlo simulation of U(1) lattice gauge theory (or other continuous systems) close to its phase transition. Measurements and analysis of the action per plaquette, the specific heat, Polyakov loops and their structure factors. Solution method: Multicanonical simulations with an initial Wang-Landau recursion to determine suitable weight factors. Reweighting to physical values using logarithmic coding and calculating jackknife error bars. Reasons for the new version: The previous version was developed for the g77 Fortran 77 compiler. Compiler errors were encountered with Fortran 90 and Fortran 95 compilers (specified below). Summary of revisions: epsilon=one/10**10 is replaced by epsilon/10.0D10 in the parameter statements of the subroutines u1_bmha.f, u1_mucabmha.f, u1wl_backup.f, u1wlread_backup.f of the folder Libs/U1_par. For the tested compilers, script files are added in the folder ExampleRuns and readme.txt files are now provided in all subfolders of ExampleRuns. The gnuplot driver files produced by the routine hist_gnu.f of Libs/Fortran are adapted to syntax required by gnuplot version 4.0 and higher. Restrictions: Due to the use of explicit real*8 initialization the conversion into real*4 will require extra changes besides replacing the implicit.sta file by its real*4 version. Unusual features: The programs have to be compiled using the script files like those contained in the folder ExampleRuns as explained in the original paper. Running time: The prepared test runs took up to 74 minutes to execute on a 2 GHz PC.
The anomaly data base of screwworm information
NASA Technical Reports Server (NTRS)
Giddings, L. E.
1976-01-01
Standard statistical processing of anomaly data in the screwworm eradication data system is possible from data compiled on magnetic tapes with the Univac 1108 computer. The format and organization of the data in the data base, which is also available on dedicated disc storage, are described.
ESIP Earth Sciences Data Analytics (ESDA) Cluster - Work in Progress
NASA Technical Reports Server (NTRS)
Kempler, Steven
2015-01-01
The purpose of this poster is to promote a common understanding of the usefulness of, and activities that pertain to, Data Analytics and, more broadly, the Data Scientist; to facilitate collaborations to better understand the cross usage of heterogeneous datasets and to provide accommodating data analytics expertise, now and as the needs evolve into the future; and to identify gaps that, once filled, will further collaborative activities. Objectives: provide a forum for academic discussions that gives ESIP members a better understanding of the various aspects of Earth Science Data Analytics; bring in guest speakers to describe external efforts and further teach us about the broader use of Data Analytics; perform activities that (1) compile use cases generated from specific community needs to cross-analyze heterogeneous data, (2) compile sources of analytics tools, in particular to satisfy the needs of the above data users, (3) examine gaps between needs and sources, (4) examine gaps between needs and community expertise, and (5) document specific data analytics expertise needed to perform Earth science data analytics; and seek graduate data analytics/Data Science student internship opportunities.
Trippi, Michael H.; Belkin, Harvey E.
2015-09-10
Geographic information system (GIS) information may facilitate energy studies, which in turn provide input for energy policy decisions. The U.S. Geological Survey (USGS) has compiled GIS data representing coal mines, deposits (including those with and without coal mines), occurrences, areas, basins, and provinces of Mongolia as of 2009. These data are now available for download, and may be used in a GIS for a variety of energy resource and environmental studies of Mongolia. Chemical data for 37 coal samples from a previous USGS study of Mongolia (Tewalt and others, 2010) are included in a downloadable GIS point shapefile and shown on the map of Mongolia. A brief report summarizes the methodology used for creation of the shapefiles and the chemical analyses run on the samples.
Michigan Magnetic and Gravity Maps and Data: A Website for the Distribution of Data
Daniels, David L.; Kucks, Robert P.; Hill, Patricia L.; Snyder, Stephen L.
2009-01-01
This web site provides the best available, public-domain, aeromagnetic and gravity data in the State of Michigan and merges these data into composite grids that are available for downloading. The magnetic grid is compiled from 25 separate magnetic surveys that have been knit together to form a single composite digital grid and map. The magnetic survey grids have been continued to 305 meters (1,000 feet) above ground and merged together to form the State compilation. A separate map shows the location of the aeromagnetic surveys, color-coded to the survey flight-line spacing. In addition, a complete Bouguer gravity anomaly grid and map were generated from more than 20,000 gravity station measurements from 33 surveys. A table provides the facts about each gravity survey where known.
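The upward continuation mentioned above (bringing each survey grid to a common height before merging) is conventionally done in the wavenumber domain; the sketch below is a generic illustration of that standard filter, not the USGS processing code, and the grid spacing and continuation height are assumed inputs.

```python
import numpy as np

def upward_continue(grid, dx, dz):
    """Continue a 2-D anomaly grid upward by dz (same length units as the spacing dx).

    Assumes equal grid spacing in both directions; the spectrum is attenuated by
    exp(-dz*|k|), which damps short wavelengths as if observed dz higher.
    """
    ny, nx = grid.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)   # radial wavenumber
    spec = np.fft.fft2(grid) * np.exp(-dz * k)
    return np.real(np.fft.ifft2(spec))

# e.g. continue a grid with 500 m spacing up to 305 m above ground
# continued = upward_continue(anomaly_grid, dx=500.0, dz=305.0)
```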
Physical properties of alternatives to the fully halogenated chlorofluorocarbons
NASA Technical Reports Server (NTRS)
Mclinden, Mark O.
1990-01-01
Presented here are recommended values and correlations of selected physical properties of several alternatives to the fully halogenated chlorofluorocarbons. The quality of the data used in this compilation varies widely, ranging from well-documented, high-accuracy measurements from published sources to completely undocumented values listed on anonymous data sheets. That some of the properties for some fluids are available only from the latter type of source is clearly not the desired state of affairs. While some would reject all such data, the compilation given here is presented in the spirit of laying out the present state of knowledge and making available a set of data in a timely manner, even though its quality is sometimes uncertain. The correlations presented here are certain to change quickly as additional information becomes available.
NASA Astrophysics Data System (ADS)
Xu, Y.; Takahashi, K.; Goriely, S.; Arnould, M.; Ohta, M.; Utsunomiya, H.
2013-11-01
An update of the NACRE compilation [3] is presented. This new compilation, referred to as NACRE II, reports thermonuclear reaction rates for 34 charged-particle induced, two-body exoergic reactions on nuclides with mass number A<16, of which fifteen are particle-transfer reactions and the rest radiative capture reactions. When compared with NACRE, NACRE II features in particular (1) the addition to the experimental data collected in NACRE of those reported later, preferentially in the major journals of the field by early 2013, and (2) the adoption of potential models as the primary tool for extrapolation to very low energies of astrophysical S-factors, with a systematic evaluation of uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-02-01
The Electric Power Research Institute (EPRI) has been studying the feasibility of a Low Salinity Hydrothermal Demonstration Plant as part of its Geothermal Energy Program. The Heber area of the Imperial Valley was selected as one of the candidate geothermal reservoirs. Documentation of the environmental conditions presently existing in the Heber area is required for assessment of environmental impacts of future development. An environmental baseline data acquisition program to compile available data on the environment of the Heber area is reported. The program included a review of pertinent existing literature, interviews with academic, governmental and private entities, combined with field investigations and meteorological monitoring to collect primary data. Results of the data acquisition program are compiled in terms of three elements: the physical, the biological and socioeconomic settings.
Sizing the science data processing requirements for EOS
NASA Technical Reports Server (NTRS)
Wharton, Stephen W.; Chang, Hyo D.; Krupp, Brian; Lu, Yun-Chi
1991-01-01
The methodology used in the compilation and synthesis of baseline science requirements associated with the 30 + EOS (Earth Observing System) instruments and over 2,400 EOS data products (both output and required input) proposed by EOS investigators is discussed. A brief background on EOS and the EOS Data and Information System (EOSDIS) is presented, and the approach is outlined in terms of a multilayer model. The methodology used to compile, synthesize, and tabulate requirements within the model is described. The principal benefit of this approach is the reduction of effort needed to update the analysis and maintain the accuracy of the science data processing requirements in response to changes in EOS platforms, instruments, data products, processing center allocations, or other model input parameters. The spreadsheets used in the model provide a compact representation, thereby facilitating review and presentation of the information content.
National Geothermal Data System State Contributions by Data Type (Appendix A1-b)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Love, Diane
Multi-page spreadsheet listing an inventory of data submissions for the State contributions to the National Geothermal Data System project, by services, by state, by metadata compilations, metadata, and map count, including a summary of information.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-15
... product of the Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Stock Assessment...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-21
... Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Assessment Process is a stock...
Urban Data Book : Volume 1. Urban Data - Atlanta-Miami
DOT National Transportation Integrated Search
1975-11-01
A quick reference compilation of certain population, socio-economic, employment, and modal split characteristics of the 35 largest Standard Metropolitan Statistical Areas (SMSA) in the United States is presented. The three basic groups of urban data ...
Compilation of Steady State Automotive Engine Test Data
DOT National Transportation Integrated Search
1978-09-01
Experimental data were obtained in dynamometer tests of automotive engines used in the United States. The objective of this program is to obtain engine performance data for determining fuel consumption and emissions (carbon monoxide, hydrocarbons, an...
Pesticide Data Submitters List (PDSL)
The Pesticide Data Submitters List is a compilation of names and addresses of registrants who wish to be notified and offered compensation for use of their data. It was developed to assist pesticide applicants in fulfilling their obligation under FIFRA.
IUPAC-NIST Solubility Data Series. 95. Alkaline Earth Carbonates in Aqueous Systems. Part 2. Ca
NASA Astrophysics Data System (ADS)
Vanderdeelen, Jan
2012-06-01
The alkaline earth carbonates are an important class of minerals. This article is part of a volume in the IUPAC-NIST Solubility Data Series that compiles and critically evaluates solubility data of the alkaline earth carbonates in water and in simple aqueous electrolyte solutions. Part 1 outlined the procedure adopted in this volume, and presented the beryllium and magnesium carbonates. Part 2, the current paper, compiles and critically evaluates the solubility data of calcium carbonate. The chemical forms included are the anhydrous CaCO3 types calcite, aragonite, and vaterite, the monohydrate monohydrocalcite (CaCO3·H2O), the hexahydrate ikaite (CaCO3·6H2O), and an amorphous form. The data were analyzed with two model variants, and thermodynamic data of each form consistent with each of the models and with the CODATA key values for thermodynamics are presented.
Sousa, Filipa L; Parente, Daniel J; Hessman, Jacob A; Chazelle, Allen; Teichmann, Sarah A; Swint-Kruse, Liskin
2016-09-01
The AlloRep database (www.AlloRep.org) (Sousa et al., 2016) [1] compiles extensive sequence, mutagenesis, and structural information for the LacI/GalR family of transcription regulators. Sequence alignments are presented for >3000 proteins in 45 paralog subfamilies and as a subsampled alignment of the whole family. Phenotypic and biochemical data on almost 6000 mutants have been compiled from an exhaustive search of the literature; citations for these data are included herein. These data include information about oligomerization state, stability, DNA binding and allosteric regulation. Protein structural data for 65 proteins are presented as easily-accessible, residue-contact networks. Finally, this article includes example queries to enable the use of the AlloRep database. See the related article, "AlloRep: a repository of sequence, structural and mutagenesis data for the LacI/GalR transcription regulators" (Sousa et al., 2016) [1].
Celescope catalog of ultraviolet stellar observations
NASA Technical Reports Server (NTRS)
Davis, R. J.; Deutschman, W. A.; Haramundanis, K. L.
1973-01-01
The catalog contains the observational results obtained by the Celescope Experiment during the first 16 months of operation of NASA's Orbiting Astronomical Observatory (OAO-2). It lists the results of the stellar observations, along with selected ground-based information obtained from the available literature. Lunar observations (Ahmad and Deutschman, 1972), as well as other analyses of the data, are being published as separate papers. These data are available in two forms: (1) magnetic tapes and the necessary utility programs for reading and printing the contents of the tapes; and (2) this catalog, transcribed from the magnetic-tape catalog. The magnetic tape version contains not only the compiled results but also the results of the individual observations from which these averaged data were compiled.
Contaminated sediments database for the Gulf of Maine
Buchholtz ten Brink, Marilyn R.; Manheim, F.T.; Mecray, E.L.; Hastings, M.E.; Currence, J.M.; Farrington, J.W.; Jones, S.H.; Larsen, P.F.; Tripp, B.W.; Wallace, G.T.; Ward, L.G.; Fredette, T.J.; Liebman, M.L.; Smith Leo, W.
2002-01-01
Bottom sediments in the Gulf of Maine and its estuaries have accumulated pollutants of many types, including metals and organic compounds of agricultural, industrial, and household derivation. Much analytical and descriptive data has been obtained on these sediments over the past decades, but only a small effort had been made, prior to this project, to compile and edit the published and unpublished data in forms suitable for a variety of users. The Contaminated Sediments Database for the Gulf of Maine provides a compilation and synthesis of existing data to help establish the environmental status of our coastal sediments and the transport paths and fate of contaminants in this region. This information, in turn, forms one of the essential bases for developing successful remediation and resource management policies.
NASA Technical Reports Server (NTRS)
Campbell, M. E.; Thompson, M. B.
1972-01-01
This handbook provides a ready reference for many of the solid and liquid lubricants used in the space industry. Lubricants and lubricant properties are arranged systematically so that designers, engineers, and maintenance personnel in the space industry can conveniently locate data needed for their work. The handbook is divided into two major parts. Part A is a compilation of chemical and physical property data of more than 250 solid lubricants, bonded solid lubricants, dispersions and composites. Part B is a compilation of chemical and physical property data of more than 250 liquid lubricants, greases, oils, compounds and fluids. The listed materials cover a broad spectrum, from manufacturing and ground support to hardware applications for missiles and spacecraft.
Selected historic agricultural data important to environmental quality in the United States
Grey, Katia M.; Capel, Paul D.; Baker, Nancy T.; Thelin, Gail P.
2012-01-01
This report and the accompanying tables summarize some of the important changes in American agriculture in the form of a timeline and a compilation of selected annual time-series data that can be broadly related to environmental quality. Although these changes have been beneficial for increasing agricultural production, some of them have resulted in environmental concerns. The agriculture timeline is divided into four categories: (1) crop and animal changes, (2) mechanical changes, (3) biological and chemical changes, and (4) regulatory and societal changes. The timeline attempts to compile events that have had a lasting impact on agriculture in the United States. The events and data presented in this report may help to improve the connections between agricultural activities and environmental concerns.
NASA Technical Reports Server (NTRS)
Husson, N.; Barbe, A.; Brown, L. R.; Carli, B.; Goldman, A.; Pickett, H. M.; Roche, A. E.; Rothman, L. S.; Smith, M. A. H.
1985-01-01
Several aspects of quantitative atmospheric spectroscopy are considered, using a classification of the molecules according to the gas amounts in the stratosphere and upper troposphere, and reviews of quantitative atmospheric high-resolution spectroscopic measurements and field measurement systems are given. Laboratory spectroscopy and spectral analysis and prediction are presented with a summary of current laboratory spectroscopy capabilities. Spectroscopic data requirements for accurate derivation of atmospheric composition are discussed, where examples are given for space-based remote sensing experiments of the atmosphere: the ATMOS (Atmospheric Trace Molecule Spectroscopy) and UARS (Upper Atmosphere Research Satellite) experiments. A review of the basic parameters involved in the data compilations; a summary of information on line parameter compilations already in existence; and a summary of current laboratory spectroscopy studies are used to assess the data base.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, J.M.; Brock, M.L.; Garland, P.A.
1979-07-01
This bibliography, a compilation of 490 references, is the second in a series compiled from the National Uranium Resource Evaluation (NURE) Bibliographic Data Base. This data base is one of six data bases created by the Ecological Sciences Information Center, Oak Ridge National Laboratory, for the Grand Junction Office of the Department of Energy. Major emphasis for this volume has been placed on uranium geology, encompassing deposition, genesis of ore deposits, and ore controls; and prospecting techniques, including geochemistry and aerial reconnaissance. The following indexes are provided to aid the user in locating references of interest: author, geographic location, quadrangle name, geoformational feature, taxonomic name, and keyword.
High performance solid and liquid lubricants: An industrial guide
NASA Technical Reports Server (NTRS)
Mcmurtrey, Ernest L.
1987-01-01
This handbook is intended to provide a ready reference for many of the solid and liquid lubricants used in the space industry. Lubricants and lubricant properties are arranged systematically so that designers, engineers, and maintenance personnel can conveniently locate data needed for their work. This handbook is divided into two major parts (A and B). Part A is a compilation of solid lubricant suppliers' information on chemical and physical property data for more than 250 solid lubricants, bonded solid lubricants, dispersions, and composites. Part B is a compilation of chemical and physical property data of more than 250 liquid lubricants, greases, oils, compounds, and fluids. The listed materials cover a broad spectrum from manufacturing and ground support to hardware applications of spacecraft.
Nitrogen Source and Loading Data for EPA Estuary Data Mapper
Nitrogen source and loading data have been compiled and aggregated at the scale of estuaries and associated watersheds of the conterminous United States, using the spatial framework in EPA's Estuary Data Mapper (EDM) to provide system boundaries. Original sources of data include...
EPA Office of Water (OW): 2002 Impaired Waters Baseline NHDPlus Indexed Dataset
This dataset consists of geospatial and attribute data identifying the spatial extent of state-reported impaired waters (EPA's Integrated Reporting categories 4a, 4b, 4c and 5)* available in EPA's Reach Address Database (RAD) at the time of extraction. For the 2002 baseline reporting year, EPA compiled state-submitted GIS data to create a seamless and nationally consistent picture of the Nation's impaired waters for measuring progress. EPA's Assessment and TMDL Tracking and Implementation System (ATTAINS) is a national compilation of states' 303(d) listings and TMDL development information, spanning several years of tracking over 40,000 impaired waters.
Sulfite-containing Canadian pharmaceutical products available in 1991.
Miyata, M; Schuster, B; Schellenberg, R
1992-01-01
OBJECTIVE: To compile an inclusive list of Canadian pharmaceutical products available in 1991 that contained sulfites. DATA SOURCES: Written and oral responses from 94 pharmaceutical companies selected from the 1989 Compendium of Pharmaceuticals and Specialties. RESULTS: A list of sulfite-containing pharmaceutical products was compiled from data supplied by the 90 responding companies. Companies whose products contained no sulfites were separately identified. CONCLUSIONS: Sulfites are present in many pharmaceutical products and are one of many excipients and additives that have been reported to cause severe adverse reactions. The provided list should be a useful aid for health care practitioners when prescribing pharmaceutical products for sulfite-sensitive patients. PMID:1483237
VizieR Online Data Catalog: Carbon-enhanced metal-poor (CEMP) star abundances (Yoon+, 2016)
NASA Astrophysics Data System (ADS)
Yoon, J.; Beers, T. C.; Placco, V. M.; Rasmussen, K. C.; Carollo, D.; He, S.; Hansen, T. T.; Roederer, I. U.; Zeanah, J.
2017-03-01
We have endeavored to compile a list that is as complete as possible of carbon-enhanced metal-poor (CEMP) stars, including CEMP-s (and CEMP-r/s) and CEMP-no stars, having [Fe/H]<-1.0 and [C/Fe]>=+0.7 with available high-resolution spectroscopic abundance information. We have only considered stars with claimed detections or lower limits for carbon, along with several critical elemental-abundance ratios, such as [Ba/Fe] and [Eu/Fe]. The great majority of our sample comes from the literature compilation of Placco+ (2014, J/ApJ/797/21). See section 2 for further details. (2 data files).
The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States
Horton, John D.; San Juan, Carma A.; Stoeser, Douglas B.
2017-06-30
The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States (https://doi.org/10.5066/F7WH2N65) represents a seamless, spatial database of 48 State geologic maps that range from 1:50,000 to 1:1,000,000 scale. A national digital geologic map database is essential in interpreting other datasets that support numerous types of national-scale studies and assessments, such as those that provide geochemistry, remote sensing, or geophysical data. The SGMC is a compilation of the individual U.S. Geological Survey releases of the Preliminary Integrated Geologic Map Databases for the United States. The SGMC geodatabase also contains updated data for seven States and seven entirely new State geologic maps that have been added since the preliminary databases were published. Numerous errors have been corrected and enhancements added to the preliminary datasets using thorough quality assurance/quality control procedures. The SGMC is not a truly integrated geologic map database because geologic units have not been reconciled across State boundaries. However, the geologic data contained in each State geologic map have been standardized to allow spatial analyses of lithology, age, and stratigraphy at a national scale.
NASA Astrophysics Data System (ADS)
Angelitsa, Varvara; Loupasakis, Constantinos; Anagnwstopoulou, Christina
2015-04-01
Landslides, as a major type of geological hazard, represent one of the natural events that occur most frequently worldwide after hydro-meteorological events. Landslides occur when the stability of a slope changes due to a number of factors, such as steep terrain and prolonged precipitation. Identification of landslides and compilation of landslide susceptibility, hazard and risk maps are very important issues for the public authorities, providing substantial information for the strategic planning and management of land use. Although landslides cannot be predicted accurately, many attempts have been made to compile these maps. Important factors for the compilation of reliable maps are the quality and the amount of available data and the selection of the best method for the analysis. Numerous studies and publications providing landslide susceptibility, hazard and risk maps for different regions of Greece have been completed up to now. Their common characteristic is that they are static, taking into account parameters like geology, mean annual precipitation, slope, aspect, distance from roads, faults and drainage network, soil capability, land use etc., without introducing the dimension of time. The current study focuses on the Pelion Mountain, which is located at the southeastern part of Thessaly in Central Greece, aiming to compile "dynamic" susceptibility and hazard maps depending on climate changes. For this purpose, past and future precipitation data from regional climate model (RCM) datasets are introduced as input parameters for the compilation of "dynamic" landslide hazard maps. Moreover, land motion mapping data produced by Persistent Scatterer Interferometry (PSI) are used for the validation of landslide occurrence during the period from June 1992 to December 2003 and, as a result, for the calibration of the mapping procedure. The PSI data can be applied at a regional scale as support for land motion mapping and at a local scale for the monitoring of single well-known ground motion events. The PSI data were produced within the framework of the Terrafirma project. Terrafirma is a pan-European ground motion information service focused on seismic risk, flood defense, coastal lowland subsidence, inactive mines and hydrogeological risks. The produced maps provide substantial information for land use planning and the civil protection of an area presenting excellent natural beauty and numerous preservable traditional villages. Keywords: landslide, PSI technique, regional climate models, landslide susceptibility maps, Greece
Kinetics and photochemistry
NASA Technical Reports Server (NTRS)
Demore, W. B.; Golden, R. F.; Howard, C. J.; Kurylo, M. J.; Margitan, J. J.; Molina, M. J.; Ravishankara, A. R.; Watson, R. T.; Hampson, R. F.
1985-01-01
Data for chemical kinetics rate constants and photochemical cross sections, taken from a compilation prepared in early 1985 and entitled Chemical Kinetics and Photochemical Data for Use in Stratospheric Modeling, are presented.
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2010 CFR
2010-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
16 CFR 2.7 - Compulsory process in investigations.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., graphs, charts, photographs, sound recordings, images and other data or data compilations stored in any... other tangible things, for inspection, copying, testing, or sampling. (j) Manner and form of production...
16 CFR 2.7 - Compulsory process in investigations.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., graphs, charts, photographs, sound recordings, images and other data or data compilations stored in any... other tangible things, for inspection, copying, testing, or sampling. (j) Manner and form of production...
Thomas, Jonathan V.; Stanton, Gregory P.; Bumgarner, Johnathan R.; Pearson, Daniel K.; Teeple, Andrew; Houston, Natalie A.; Payne, Jason; Musgrove, MaryLynn
2013-01-01
Several previous studies have been done to compile or collect physical and chemical data, describe the hydrogeologic processes, and develop conceptual and numerical groundwater-flow models of the Edwards-Trinity aquifer in the Trans-Pecos region. Documented methods were used to compile and collect groundwater, surface-water, geochemical, geophysical, and geologic information that subsequently were used to develop this conceptual model.
Cables and connectors: A compilation
NASA Technical Reports Server (NTRS)
1974-01-01
A compilation is presented reflecting uses, adaptations, and maintenance and service innovations derived from problem solutions in the space R and D programs, both in house and by NASA and AEC contractors. Data cover: (1) technology relevant to the employment of flat conductor cables and their adaptation to and within conventional systems, (2) connectors and various adaptations, and (3) maintenance and service technology, and shop hints useful in the installation and care of cables and connectors.
Lessons from India’s Counterinsurgency Campaign in Jammu and Kashmir
2013-12-10
counterproductive in reducing the overall violence in Kashmir. Figures compiled from data kept by the Indian Ministry of Home Affairs by Indian journalist Praveen ... force would only gain more support for the ... Schofield, Kashmir in Conflict, 148-150. ... The 1990 fatality figures compiled by Praveen Swami (India ... India, Pakistan, and the Unending War. New York: I.B. Tauris & Co Ltd, 2003. Swami, Praveen. "Failed Threats and Flawed Fences: India's Military
I-880 field experiment : analysis of incident data.
DOT National Transportation Integrated Search
1997-01-01
The I-880 field experiment has produced one of the largest data bases on incidents and freeway traffic-flow characteristics ever compiled. Field data on incidents were collected through observations of probe-vehicle drivers before and after the imple...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-08
... the Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Stock Assessment Process is a...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-04
... Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Assessment Process is a stock...
Hinkle, Stephen R.; Shapiro, Stephanie D.; Plummer, Niel; Busenberg, Eurybiades; Widman, Peggy K.; Casile, Gerolamo C.; Wayland, Julian E.
2011-01-01
This report documents selected age data interpreted from measured concentrations of environmental tracers in groundwater from 1,399 National Water-Quality Assessment (NAWQA) Program groundwater sites across the United States. The tracers of interest were chlorofluorocarbons (CFCs), sulfur hexafluoride (SF6), and tritium/helium-3 (3H/3He). Tracer data compiled for this analysis primarily were from wells representing two types of NAWQA groundwater studies - Land-Use Studies (shallow wells, usually monitoring wells, in recharge areas under dominant land-use settings) and Major-Aquifer Studies (wells, usually domestic supply wells, in principal aquifers and representing the shallow, used resource). Reference wells (wells representing groundwater minimally impacted by anthropogenic activities) associated with Land-Use Studies also were included. Tracer samples were collected between 1992 and 2005, although two networks sampled from 2006 to 2007 were included because of network-specific needs. Tracer data from other NAWQA Program components (Flow System Studies, which are assessments of processes and trends along groundwater flow paths, and various topical studies) were not compiled herein. Tracer data from NAWQA Land-Use Studies and Major-Aquifer Studies that previously had been interpreted and published are compiled herein (as piston-flow ages), but have not been reinterpreted. Tracer data that previously had not been interpreted and published are evaluated using documented methods and compiled with aqueous concentrations, equivalent atmospheric concentrations (for CFCs and SF6), estimates of tracer-based piston-flow ages, and selected ancillary data, such as redox indicators, well construction, and major dissolved gases (N2, O2, Ar, CH4, and CO2). Tracer-based piston-flow ages documented in this report are simplistic representations of the tracer data. Tracer-based piston-flow ages are a convenient means of conceptualizing groundwater age. However, the piston-flow model is based on the potentially limiting assumptions that tracer transport is advective and that no mixing occurs. Additional uncertainties can arise from tracer degradation, sorption, contamination, or fractionation; terrigenic (natural) sources of tracers; spatially variable atmospheric tracer concentrations; and incomplete understanding of mechanisms of recharge or of the conditions under which atmospheric tracers were partitioned to recharge. The effects of some of these uncertainties are considered herein. For example, degradation, contamination, or fractionation often can be identified or inferred. However, detailed analysis of the effects of such uncertainties on the tracer-based piston-flow ages is constrained by sparse data and an absence of complementary lines of evidence, such as detailed solute transport simulations. Thus, the tracer-based piston-flow ages compiled in this report represent only an initial interpretation of the tracer data.
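Under the piston-flow assumption described above, an apparent age follows from matching a sample's equivalent atmospheric concentration to the historical atmospheric input curve for the tracer. The sketch below illustrates that lookup with a made-up, monotonically increasing atmospheric history; the values, sampling year, and function names are hypothetical and are not NAWQA data or the actual CFC input curves.

    #include <iostream>
    #include <vector>

    // Hypothetical atmospheric mixing-ratio history for a CFC-like tracer
    // (year, concentration in arbitrary units); values are illustrative only.
    struct Point { double year, conc; };

    // Piston-flow apparent recharge year: linearly interpolate the year at which
    // the atmosphere matched the sample's equivalent atmospheric concentration.
    // Assumes the history is monotonically increasing.
    double recharge_year(const std::vector<Point>& hist, double sample_conc) {
        for (std::size_t i = 1; i < hist.size(); ++i) {
            if (sample_conc >= hist[i - 1].conc && sample_conc <= hist[i].conc) {
                double f = (sample_conc - hist[i - 1].conc) /
                           (hist[i].conc - hist[i - 1].conc);
                return hist[i - 1].year + f * (hist[i].year - hist[i - 1].year);
            }
        }
        return hist.back().year;   // at or above the modern value
    }

    int main() {
        std::vector<Point> hist = {{1950, 10}, {1970, 120}, {1990, 480}, {2005, 540}};
        double eq_atm = 300.0;                     // sample's equivalent atmospheric value
        double year = recharge_year(hist, eq_atm);
        std::cout << "apparent recharge year: " << year << "\n";            // 1980
        std::cout << "piston-flow age in 2000: " << 2000.0 - year << " yr\n"; // 20
    }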
Global Multi-Resolution Topography (GMRT) Synthesis - Version 2.0
NASA Astrophysics Data System (ADS)
Ferrini, V.; Coplan, J.; Carbotte, S. M.; Ryan, W. B.; O'Hara, S.; Morton, J. J.
2010-12-01
The detailed morphology of the global ocean floor is poorly known, with most areas mapped only at low resolution using satellite-based measurements. Ship-based sonars provide data at resolution sufficient to quantify seafloor features related to the active processes of erosion, sediment flow, volcanism, and faulting. To date, these data have been collected in a small fraction of the global ocean (<10%). The Global Multi-Resolution Topography (GMRT) synthesis makes use of sonar data collected by scientists and institutions worldwide, merging them into a single continuously updated compilation of high-resolution seafloor topography. Several applications, including GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org), make use of the GMRT Synthesis and provide direct access to images and underlying gridded data. Source multibeam files included in the compilation can also be accessed through custom functionality in GeoMapApp. The GMRT Synthesis began in 1992 as the Ridge Multibeam Synthesis. It was subsequently expanded to include bathymetry data from the Southern Ocean, and now includes data from throughout the global oceans. Our design strategy has been to make data available at the full native resolution of shipboard sonar systems, which historically has been ~100 m in the deep sea (Ryan et al., 2009). A new release of the GMRT Synthesis in the fall of 2010 includes several significant improvements over our initial strategy. In addition to increasing the number of cruises included in the compilation by over 25%, we have developed a new protocol for handling multibeam source data, which has improved the overall quality of the compilation. The new tileset also includes a discrete layer of sonar data in the public domain that are gridded to the full resolution of the sonar system, with data gridded at 25 m in some areas. This discrete layer of sonar data has been provided to Google for integration into Google's default ocean base map. NOAA coastal grids and numerous grids contributed by the international science community are also integrated into the GMRT Synthesis. Finally, terrestrial elevation data from NASA's ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) global DEM and the USGS National Elevation Dataset have been included in the synthesis, providing resolution of up to 10 m in some areas of the US.
Data Collection Answers - SEER Registrars
Read clarifications to existing coding rules, which should be implemented immediately. Data collection experts from American College of Surgeons Commission on Cancer, CDC National Program of Cancer Registries, and SEER Program compiled these answers.
Analysis of Auroral Data from Nasa's 1968 and 1969 Airborne Auroral Expedition
NASA Technical Reports Server (NTRS)
1973-01-01
Results of a methodical compilation, reduction, and correlated analysis of spectrophotometric data obtained by various scientific groups during NASA's 1968 and 1969 Airborne Auroral Expedition are presented.
Materials engineering data base
NASA Technical Reports Server (NTRS)
1995-01-01
The various types of materials-related data that exist at the NASA Marshall Space Flight Center and have been compiled into databases that can be accessed by all NASA centers and by other contractors are presented.
Heart Disease and Stroke Statistics
... or cases are combined. The incidence of a cardiovascular disease in the United States is estimated by multiplying ... accurate data available for assessing the impact of cardiovascular diseases and stroke. These data are compiled from death ...
2006 Oregon traffic crash summary
DOT National Transportation Integrated Search
2007-06-01
The Crash Analysis and Reporting Unit compiles data for reported motor vehicle traffic crashes occurring on city streets, county roads and state highways. The data supports various local, county and state traffic safety programs, engineering and ...
2007 Oregon traffic crash summary
DOT National Transportation Integrated Search
2008-07-01
The Crash Analysis and Reporting Unit compiles data for reported motor vehicle traffic crashes occurring on city streets, county roads and state highways. The data supports various local, county and state traffic safety programs, engineering and ...
17 CFR Appendix A to Part 145 - Compilation of Commission Records Available to the Public
Code of Federal Regulations, 2010 CFR
2010-04-01
... photographs. (10) Statistical data concerning the Commission's budget. (11) Statistical data concerning... applicant's legal status and governance structure, including governance fitness information, and any other...
New Mexico conservative ion water chemistry data and chalcedony geothermometry
Shari Kelley
2015-10-21
Compilation of boron, lithium, bromine, and silica data from wells and springs throughout New Mexico from a wide variety of sources. The chalcedony geothermometry calculation is included in this file.
2005 Oregon traffic crash summary
DOT National Transportation Integrated Search
2006-06-01
The Crash Analysis and Reporting Unit compiles data for reported motor vehicle traffic crashes occurring on city streets, county roads and state highways. The data supports various local, county and state traffic safety programs, engineering and ...
The national land use data program of the US Geological Survey
NASA Technical Reports Server (NTRS)
Anderson, J. R.; Witmer, R. E.
1975-01-01
The Land Use Data and Analysis (LUDA) Program which provides a systematic and comprehensive collection and analysis of land use and land cover data on a nationwide basis is described. Maps are compiled at about 1:125,000 scale showing present land use/cover at Level II of a land use/cover classification system developed by the U.S. Geological Survey in conjunction with other Federal and state agencies and other users. For each of the land use/cover maps produced at 1:125,000 scale, overlays are also compiled showing Federal land ownership, river basins and subbasins, counties, and census county subdivisions. The program utilizes the advanced technology of the Special Mapping Center of the U.S. Geological Survey, high altitude NASA photographs, aerial photographs acquired for the USGS Topographic Division's mapping program, and LANDSAT data in complementary ways.
Gartner, J.W.; Yost, B.T.
1988-01-01
Current meter data collected at 11 stations and water level data collected at one station in Suisun and San Pablo Bays, California, in 1986 are compiled in this report. Current-meter measurements include current speed and direction, and water temperature and salinity (computed from temperature and conductivity). For each of the 19 current-meter records, data are presented in two forms: (1) results of harmonic analysis; and (2) plots of tidal current speed and direction versus time and plots of temperature and salinity versus time. Spatial distributions of the properties of tidal currents are given in graphic form. In addition, Eulerian residual currents have been compiled by using a vector-averaging technique. Water level data are presented in the form of a time-series plot and the results of harmonic analysis. (USGS)
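As a rough illustration of the harmonic analysis mentioned above, the sketch below fits the amplitude and phase of a single tidal constituent of known frequency to a uniformly sampled synthetic current-speed series. It assumes the record spans an integral number of constituent periods so simple cosine/sine projections suffice; the numbers are invented and this is not the USGS processing code.

    #include <cmath>
    #include <iostream>
    #include <vector>

    int main() {
        // Synthetic hourly "current speed" record containing an M2-like
        // constituent (period ~12.42 h) plus a mean flow. Values are made up.
        const double PI = std::acos(-1.0);
        const double period_h = 12.42;
        const double omega = 2.0 * PI / period_h;   // rad per hour
        const int n = 2 * 1242;                     // exactly 200 periods, hourly samples
        std::vector<double> u(n);
        for (int t = 0; t < n; ++t)
            u[t] = 0.20 + 0.65 * std::cos(omega * t - 1.1);   // mean + constituent

        // Least-squares projection onto cos/sin at the known frequency.
        double mean = 0.0;
        for (double v : u) mean += v;
        mean /= n;

        double a = 0.0, b = 0.0;
        for (int t = 0; t < n; ++t) {
            a += (u[t] - mean) * std::cos(omega * t);
            b += (u[t] - mean) * std::sin(omega * t);
        }
        a *= 2.0 / n;
        b *= 2.0 / n;

        double amplitude = std::hypot(a, b);    // recovered constituent amplitude
        double phase = std::atan2(b, a);        // recovered phase lag (radians)
        std::cout << "amplitude ~ " << amplitude << " (true 0.65)\n";
        std::cout << "phase     ~ " << phase << " (true 1.1)\n";
    }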
Fortington, Lauren V; Finch, Caroline F
2016-01-01
Participation in Australian football (AF) has traditionally been male dominated and current understanding of injury and priorities for prevention are based solely on reports of injuries in male players. There is evidence in other sports that injury types differ between males and females. With increasing participation in AF by females, it is important to consider their specific injury and prevention needs. This study aimed to provide a first injury profile from existing sources for female AF. Injury data were compiled from four prospectively recorded data sets relating to female AF: (1) hospital admissions in Victoria, 2008/09-13/14, n=500 injuries; (2) emergency department (ED) presentations in Victoria, 2008/09-2012/13, n=1,879 injuries; (3) insurance claims across Australia, 2004-2013, n=522 injuries; (4) West Australian Women's Football League (WAWFL), 2014 season club data, n=49 injuries. Descriptive results are presented as injury frequencies, injury types and injury to body parts. Hospital admissions and ED presentations were dominated by upper limb injuries, representing 47% and 51% of all injuries, respectively, primarily to the wrist/hand at 32% and 40%. Most (65%) insurance claim injuries involved the lower limb, 27% of which were for knee ligament damage. A high proportion of concussions (33%) were reported in the club-collected data. The results provide the first compilation of existing data sets of women's AF injuries and highlight the need for a rigorous and systematic injury surveillance system to be instituted.
On Fusing Recursive Traversals of K-d Trees
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rajbhandari, Samyam; Kim, Jinsung; Krishnamoorthy, Sriram
Loop fusion is a key program transformation for data locality optimization that is implemented in production compilers. But optimizing compilers currently cannot exploit fusion opportunities across a set of recursive tree traversal computations with producer-consumer relationships. In this paper, we develop a compile-time approach to dependence characterization and program transformation to enable fusion across recursively specified traversals over k-ary trees. We present the FuseT source-to-source code transformation framework to automatically generate fused composite recursive operators from an input program containing a sequence of primitive recursive operators. We use our framework to implement fused operators for MADNESS, the Multiresolution Adaptive Numerical Environment for Scientific Simulation. We show that locality optimization through fusion can offer more than an order of magnitude performance improvement.
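A minimal sketch of the producer-consumer fusion idea, assuming a simple binary tree rather than FuseT's actual representation: two primitive recursive operators (a scaling producer and a summing consumer) are combined by hand into one fused traversal so each node is visited once. The types and function names are illustrative only.

    #include <iostream>
    #include <memory>

    // A minimal k-d-style tree node (hypothetical; not FuseT's representation).
    struct Node {
        double value = 0.0;
        std::unique_ptr<Node> left, right;
    };

    // Primitive operator 1 (producer): scale every node value.
    void scale(Node* n, double f) {
        if (!n) return;
        n->value *= f;
        scale(n->left.get(), f);
        scale(n->right.get(), f);
    }

    // Primitive operator 2 (consumer): sum every node value.
    double sum(const Node* n) {
        if (!n) return 0.0;
        return n->value + sum(n->left.get()) + sum(n->right.get());
    }

    // Fused composite operator: one traversal performs both steps,
    // improving locality by touching each node only once.
    double scale_and_sum(Node* n, double f) {
        if (!n) return 0.0;
        n->value *= f;
        return n->value + scale_and_sum(n->left.get(), f)
                        + scale_and_sum(n->right.get(), f);
    }

    int main() {
        auto root = std::make_unique<Node>();
        root->value = 1.0;
        root->left = std::make_unique<Node>();  root->left->value = 2.0;
        root->right = std::make_unique<Node>(); root->right->value = 3.0;

        // Unfused: two full traversals.
        scale(root.get(), 2.0);
        std::cout << "unfused sum = " << sum(root.get()) << "\n";                 // 12

        // Fused: a single traversal (values reset first for comparison).
        scale(root.get(), 0.5);                                                   // undo scaling
        std::cout << "fused sum   = " << scale_and_sum(root.get(), 2.0) << "\n";  // 12
    }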
Mixed waste focus area alternative technologies workshop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borduin, L.C.; Palmer, B.A.; Pendergrass, J.A.
1995-05-24
This report documents the Mixed Waste Focus Area (MWFA)-sponsored Alternative Technology Workshop held in Salt Lake City, Utah, from January 24-27, 1995. The primary workshop goal was identifying potential applications for emerging technologies within the Options Analysis Team (OAT) "wise" configuration. Consistent with the scope of the OAT analysis, the review was limited to the Mixed Low-Level Waste (MLLW) fraction of DOE's mixed waste inventory. The Los Alamos team prepared workshop materials (databases and compilations) to be used as bases for participant review and recommendations. These materials derived from the Mixed Waste Inventory Report (MWIR) data base (May 1994), the Draft Site Treatment Plan (DSTP) data base, and the OAT treatment facility configuration of December 7, 1994. In reviewing workshop results, the reader should note several caveats regarding data limitations. Link-up of the MWIR and DSTP data bases, while representing the most comprehensive array of mixed waste information available at the time of the workshop, requires additional data to completely characterize all waste streams. A number of changes in waste identification (new and redefined streams) occurred during the interval from compilation of the MWIR data base to compilation of the DSTP data base, with the end result that precise identification of radiological and contaminant characteristics was not possible for these streams. To a degree, these shortcomings compromise the workshop results; however, the preponderance of waste data was linked adequately, and therefore these analyses should provide useful insight into potential applications of alternative technologies to DOE MLLW treatment facilities.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... product of the Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Assessment Process is a...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
... Workshop. The product of the Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Assessment...
Urban Data Book : Volume 2. Urban Data - Milwaukee-Washington, Notes and Technical Appendixes
DOT National Transportation Integrated Search
1975-11-01
A quick reference compilation of certain population, socio-economic, employment, and modal split characteristics of the 35 largest Standard Metropolitan Statistical Areas (SMSA) in the United States is presented. The three basic groups of urban data ...
Comprehensive Data Collected from the Petroleum Refining Sector
On April 1, 2011 EPA sent a comprehensive industry-wide information collection request (ICR) to all facilities in the U.S. petroleum refining industry. EPA has received this ICR data and compiled these data into databases and spreadsheets for the web
Scientific data bases on a VAX-11/780 running VMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benkovitz, C.M.; Tichler, J.L.
At Brookhaven National Laboratory several current projects are developing and applying data management techniques to compile, analyze, and distribute scientific data sets that are the result of various multi-institutional experiments and data-gathering projects. This paper will present an overview of a few of these data management projects.
NASA Technical Reports Server (NTRS)
Rosenfeld, Arie; Hinkle, C. Ross; Epstein, Marc
2002-01-01
This STI Technical Memorandum (TM) summarizes a two-month project on feral hog management in Merritt Island National Wildlife Refuge (MINWR). For this project, feral hogs were marked and recaptured, with the help of local trappers, to estimate population size and habitat preferences. Habitat data included vegetation cover and Light Detection and Ranging (LIDAR) data for MINWR. In addition, an analysis was done of hunting records compiled by the Refuge and hog-car accidents compiled by KSC Security.
A Code Generation Approach for Auto-Vectorization in the Spade Compiler
NASA Astrophysics Data System (ADS)
Wang, Huayong; Andrade, Henrique; Gedik, Buğra; Wu, Kun-Lung
We describe an auto-vectorization approach for the Spade stream processing programming language, comprising two ideas. First, we provide support for vectors as a primitive data type. Second, we provide a C++ library with architecture-specific implementations of a large number of pre-vectorized operations as the means to support language extensions. We evaluate our approach with several stream processing operators, contrasting Spade's auto-vectorization with the native auto-vectorization provided by the GNU gcc and Intel icc compilers.
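The following sketch illustrates the general idea of a library of architecture-specific "pre-vectorized" operations with a portable scalar fallback; it is not Spade's actual library or API, and the function name is invented. The SSE path is compiled only when the compiler defines __SSE__.

    #include <cstddef>
    #include <iostream>
    #include <vector>
    #if defined(__SSE__)
    #include <xmmintrin.h>   // SSE intrinsics (architecture-specific path)
    #endif

    // Hypothetical "pre-vectorized" element-wise add: an architecture-specific
    // implementation where available, plus a portable scalar fallback.
    void vec_add(const float* a, const float* b, float* out, std::size_t n) {
        std::size_t i = 0;
    #if defined(__SSE__)
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
    #endif
        for (; i < n; ++i)      // scalar tail / fallback
            out[i] = a[i] + b[i];
    }

    int main() {
        std::vector<float> a(10, 1.5f), b(10, 2.5f), c(10);
        vec_add(a.data(), b.data(), c.data(), c.size());
        std::cout << c[0] << " " << c[9] << "\n";   // 4 4
    }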
Piping and tubing technology: A compilation
NASA Technical Reports Server (NTRS)
1971-01-01
A compilation on the devices, techniques, and methods used in piping and tubing technology is presented. Data cover the following: (1) a number of fittings, couplings, and connectors that are useful in joining tubing and piping in various systems, (2) a family of devices used where flexibility and/or vibration damping are necessary, (3) a number of devices found useful in the regulation and control of fluid flow, and (4) shop hints to aid in maintenance and repair procedures such as cleaning, flaring, and swaging of tubes.
NASA Technical Reports Server (NTRS)
Brewer, D. A.; Remsberg, E. E.; Woodbury, G. E.; Quinn, L. C.
1979-01-01
Regional tropospheric air pollution modeling and data compilation to simulate the time variation of species concentrations in and around an urban area are discussed. The methods used to compile an emissions inventory are outlined. Emission factors for vehicular travel in the urban area are presented along with an analysis of the emission gases. Emission sources other than vehicular sources, including industrial wastes, residential solid waste disposal, aircraft emissions, and emissions from the railroads, are investigated.
An Overview of the Production Quality Compiler-Compiler Project
1979-02-01
process. A parse tree is assumed, and there is a set of primitives for extracting information from it and for "walking" it: using its structure to ... not adequate for, and even preclude, techniques that involve multiple phases, or non-trivial auxiliary data structures. In recent years there have ... VALUE field of node 23 would indicate that the type of the value field was integer. As with "union mode" or "variant record" features in many
Hapke, Cheryl; Reid, David; Borrelli, Mark
2007-01-01
The U.S. Geological Survey has generated a comprehensive data clearinghouse of digital vector cliff edges and associated rates of cliff retreat along the open-ocean California coast. These data, which are presented herein, were compiled as part of the U.S. Geological Survey's National Assessment of Shoreline Change Project. Cliff erosion is a chronic problem along many coastlines of the United States. As coastal populations continue to grow and community infrastructures are threatened by erosion, there is increased demand for accurate information including rates and trends of coastal cliff retreat. There is also a critical need for these data to be consistent from one region to another. One objective of this work is to develop a standard, repeatable methodology for mapping and analyzing cliff edge retreat so that periodic, systematic, and internally consistent updates of cliff edge position and associated rates of erosion can be made at a national scale. This data compilation for open-ocean cliff edges for the California coast is a separate yet related study to Hapke and others (2006), which documents shoreline change along sandy shorelines of the California coast and is itself one in a series that includes the Gulf of Mexico and the Southeast Atlantic coast (Morton and others, 2004; Morton and Miller, 2005). Future reports and data compilations will include coverage of the Northeast U.S., the Great Lakes, Hawaii and Alaska. Cliff edge change is determined by comparing the positions of one historical cliff edge digitized from maps with a modern cliff edge derived from topographic LIDAR (light detection and ranging) surveys. Historical cliff edges for the California coast represent the 1920s-1930s time period; the most recent cliff edge was delineated using data collected between 1998 and 2002. End-point rate calculations were used to evaluate rates of erosion between the two cliff edges. Please refer to our full report on cliff edge erosion along the California coastline at http://pubs.usgs.gov/of/2007/1133/ for additional information regarding methods and results (Hapke and others, 2007). Data in this report are organized into downloadable layers by region (Northern, Central and Southern California) and are provided as vector datasets with accompanying metadata. Vector cliff edges may represent a compilation of data from one or more sources and the sources used are included in the dataset metadata. This project employs the Environmental Systems Research Institute's (ESRI) ArcGIS as its Geographic Information System (GIS) mapping tool and contains several data layers (shapefiles) that are used to create a geographic view of the California coast. The vector data form a basemap comprising polygon and line themes that include a U.S. coastline (1:80,000), U.S. cities, and state boundaries.
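The end-point rate referred to above is simply the net change in cliff-edge position divided by the time elapsed between the two surveys. A minimal sketch with hypothetical positions and dates:

    #include <iostream>

    // End-point retreat rate: net change in cliff-edge position divided by the
    // time between the two surveys. Positions and years below are hypothetical.
    int main() {
        double pos_1930 = 0.0;     // historical cliff-edge position (m, baseline)
        double pos_2000 = -35.0;   // modern (LIDAR-derived) position: 35 m landward
        double years = 2000.0 - 1930.0;

        double rate = (pos_2000 - pos_1930) / years;   // m per year
        std::cout << "end-point retreat rate: " << rate << " m/yr\n";   // -0.5
    }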
Livermore Compiler Analysis Loop Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hornung, R. D.
2013-03-01
LCALS is designed to evaluate compiler optimizations and performance of a variety of loop kernels and loop traversal software constructs. Some of the loop kernels are pulled directly from "Livermore Loops Coded in C", developed at LLNL (see item 11 below for details of earlier code versions). The older suites were used to evaluate floating-point performance of hardware platforms prior to porting larger application codes. The LCALS suite is geared toward assessing C++ compiler optimizations and platform performance related to SIMD vectorization, OpenMP threading, and advanced C++ language features. LCALS contains 20 of 24 loop kernels from the older Livermore Loop suites, plus various others representative of loops found in current production application codes at LLNL. The latter loops emphasize more diverse loop constructs and data access patterns than the others, such as multi-dimensional difference stencils. The loops are included in a configurable framework, which allows control of compilation, loop sampling for execution timing, which loops are run, and their lengths. It generates timing statistics for analysis and for comparing variants of individual loops. Also, it is easy to add loops to the suite as desired.
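As a rough sketch of the kind of structure described (a loop kernel run repeatedly under a timing harness with a configurable length and sample count), the code below times a simple daxpy-like kernel; it is not an actual LCALS kernel or the LCALS framework.

    #include <chrono>
    #include <iostream>
    #include <vector>

    // A simple stream-style loop kernel, representative of the kind of kernel
    // a loop suite might time; not taken from LCALS.
    void kernel(double* y, const double* x, double a, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] += a * x[i];
    }

    int main() {
        const std::size_t len = 1 << 20;     // configurable loop length
        const int samples = 50;              // configurable sampling count
        std::vector<double> x(len, 1.0), y(len, 0.0);

        auto t0 = std::chrono::steady_clock::now();
        for (int s = 0; s < samples; ++s)
            kernel(y.data(), x.data(), 0.5, len);
        auto t1 = std::chrono::steady_clock::now();

        std::chrono::duration<double> dt = t1 - t0;
        std::cout << "mean time per pass: " << dt.count() / samples << " s\n";
        std::cout << "checksum: " << y[0] + y[len - 1] << "\n"; // defeat dead-code elimination
    }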
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
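The benchmarks above were parallelized with native Fortran77 directives on an Origin2000; as a generic illustration of directive-based shared-memory parallelization (an analogy, not the benchmark code), the sketch below uses an OpenMP directive in C++ to parallelize a reduction loop.

    #include <cstddef>
    #include <iostream>
    #include <vector>
    #ifdef _OPENMP
    #include <omp.h>
    #endif

    int main() {
        const std::size_t n = 1000000;
        std::vector<double> a(n, 1.0), b(n, 2.0);
        double dot = 0.0;

        // A single directive turns the sequential loop into a parallel one;
        // the compiler and runtime handle thread creation and work sharing.
        #pragma omp parallel for reduction(+:dot)
        for (long i = 0; i < static_cast<long>(n); ++i)
            dot += a[i] * b[i];

        std::cout << "dot = " << dot << "\n";   // 2,000,000
    #ifdef _OPENMP
        std::cout << "max threads = " << omp_get_max_threads() << "\n";
    #endif
    }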
ZettaBricks: A Language Compiler and Runtime System for Anyscale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amarasinghe, Saman
This grant supported the ZettaBricks and OpenTuner projects. ZettaBricks is a new implicitly parallel language and compiler where defining multiple implementations of multiple algorithms to solve a problem is the natural way of programming. ZettaBricks makes algorithmic choice a first class construct of the language. Choices are provided in a way that also allows our compiler to tune at a finer granularity. The ZettaBricks compiler autotunes programs by making both fine-grained as well as algorithmic choices. Choices also include different automatic parallelization techniques, data distributions, algorithmic parameters, transformations, and blocking. Additionally, ZettaBricks introduces novel techniques to autotune algorithms for different convergence criteria. When choosing between various direct and iterative methods, the ZettaBricks compiler is able to tune a program in such a way that delivers near-optimal efficiency for any desired level of accuracy. The compiler has the flexibility of utilizing different convergence criteria for the various components within a single algorithm, providing the user with accuracy choice alongside algorithmic choice. OpenTuner is a generalization of the experience gained in building an autotuner for ZettaBricks. OpenTuner is a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests.
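A hedged sketch of the flavor of algorithmic choice plus tuning: a hybrid sort exposes the crossover between insertion sort and std::sort as a tunable parameter, and a small scan over candidate values stands in for what an autotuner would search automatically. This is illustrative only and is not ZettaBricks or OpenTuner code.

    #include <algorithm>
    #include <chrono>
    #include <iostream>
    #include <random>
    #include <vector>

    // Algorithmic choice exposed as a tunable parameter: below `cutoff` use
    // insertion sort, otherwise fall back to std::sort. An autotuner would
    // search over `cutoff` (and other choices); here we just scan a few values.
    void hybrid_sort(std::vector<int>& v, std::size_t cutoff) {
        if (v.size() <= cutoff) {
            for (std::size_t i = 1; i < v.size(); ++i)
                for (std::size_t j = i; j > 0 && v[j - 1] > v[j]; --j)
                    std::swap(v[j - 1], v[j]);
        } else {
            std::sort(v.begin(), v.end());
        }
    }

    int main() {
        std::mt19937 rng(42);
        std::vector<int> base(20000);
        for (auto& x : base) x = static_cast<int>(rng());

        for (std::size_t cutoff : {16u, 64u, 100000u}) {   // candidate configurations
            auto v = base;
            auto t0 = std::chrono::steady_clock::now();
            hybrid_sort(v, cutoff);
            auto t1 = std::chrono::steady_clock::now();
            std::cout << "cutoff " << cutoff << ": "
                      << std::chrono::duration<double>(t1 - t0).count() << " s\n";
        }
    }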
1987-04-30
ADA (TRADENAME) COMPILER VALIDATION SUMMARY REPORT / HARRIS CORPORATION ... (U) INFORMATION SYSTEMS AND TECHNOLOGY CENTER W-P AFB OH ... Ada Compiler Validation Summary Report: 30 APR 1986 to 30 APR 1987, Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H1200 and H800 ... the United States Government (Ada Joint Program Office). Ada Compiler Validation Summary Report: Compiler Name: HARRIS Ada Compiler, Version 1.0 ... Host
1986-06-28
Report: 28 JUN 1986 to 28 JUN 1987, Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H700 and H60 ... PERFORMING ORG. REPORT ... CLASSIFICATION OF THIS PAGE (When Data Entered) ... SUPPLEMENTARY NOTES: Ada® Compiler Validation Summary Report: Compiler Name: HARRIS Ada Compiler ... AVF-VSR-43.1086 Ada® COMPILER VALIDATION SUMMARY REPORT: Harris Corporation, HARRIS Ada Compiler, Version 1.0, Harris H700 and H60. Completion of
Li, Leon
2018-02-01
The data presented in this article are related to the research article entitled "Testing and comparing the performance of dynamic variance and correlation models in value-at-risk estimation" (North American Journal of Economics and Finance, 40, 116-135, doi:10.1016/j.najef.2017.02.006) (Li, 2017) [1]. Data on daily stock index returns for the Canadian, UK, and US equity markets, as compiled by Morgan Stanley Capital International, are provided in this paper. The country indices comprise at least 80% of the stock market capitalization of each country. The data cover the period from January 1, 1990, through September 8, 2016, and include 6963 observations. All stock prices are stated in dollars.
Lubrication handbook for the space industry. Part A: Solid lubricants. Part B: Liquid lubricants
NASA Technical Reports Server (NTRS)
Mcmurtrey, E. L.
1985-01-01
This handbook is intended to provide a ready reference for many of the solid and liquid lubricants used in the space industry. Lubricants and lubricant properties are arranged systematically so that designers, engineers, and maintenance personnel can conveniently locate data needed for their work. The handbook is divided into two major parts (A and B). Part A is a compilation of solid lubricant suppliers' information on the chemical and physical property data of more than 250 solid lubricants, bonded solid lubricants, dispersions, and composites. Part B is a compilation of chemical and physical property data of more than 250 liquid lubricants, greases, oils, compounds, and fluids. The listed materials cover a broad spectrum from manufacturing and ground support to hardware applications of spacecraft.
Data on development of new energy technologies
NASA Astrophysics Data System (ADS)
1994-03-01
This publication compiles data on trends in the development of new energy technologies. The renewable energy categories are solar energy, wind power generation, geothermal power generation, ocean energy, and biomass. Under fuel-form conversion, coal liquefaction/gasification, coal gasification combined-cycle power generation, and natural gas liquefaction/decarbonization are cited. The other categories are cogeneration by fuel cell and ceramic gas turbine, district heat supply systems, power load leveling technology, alternative-fuel vehicles for transportation use, and others (Stirling engine, superconducting power generator, etc.). The data are systematically compiled, covering essential principles, history of introduction, objectives of introduction, status of production, cost, development schedule, performance, etc. The publication also covers the related legislation, development organizations, and schemes for power companies to purchase surplus power.
Chapter 3: Circum-Arctic mapping project: New magnetic and gravity anomaly maps of the Arctic
Gaina, C.; Werner, S.C.; Saltus, R.; Maus, S.; Aaro, S.; Damaske, D.; Forsberg, R.; Glebovsky, V.; Johnson, Kevin; Jonberger, J.; Koren, T.; Korhonen, J.; Litvinova, T.; Oakey, G.; Olesen, O.; Petrov, O.; Pilkington, M.; Rasmussen, T.; Schreckenberger, B.; Smelror, M.
2011-01-01
New Circum-Arctic maps of magnetic and gravity anomalies have been produced by merging regional gridded data. Satellite magnetic and gravity data were used for quality control of the long wavelengths of the new compilations. The new Circum-Arctic digital compilations of magnetic and gravity data and some of their derivatives have been analyzed together with other freely available regional and global data and models in order to provide a consistent view of the tectonically complex Arctic basins and surrounding continents. Sharp, linear contrasts between deeply buried basement blocks with different magnetic properties and densities that can be identified on these maps can be used, together with other geological and geophysical information, to refine the tectonic boundaries of the Arctic domain. © 2011 The Geological Society of London.
Nesaraja, Caroline D.
2015-11-27
This paper presents available information pertaining to the nuclear structure of ground and excited states for all known nuclei with mass number A=241, which has been compiled and evaluated. The adopted level and decay schemes, as well as the detailed nuclear properties and configuration assignments based on experimental data, are presented for these nuclides. When there are insufficient data, expected values from systematics of nuclear properties and/or theoretical calculations are quoted. Unexpected or discrepant experimental results are also noted. A summary and compilation of the discovery of the various isotopes in this mass region is given in 2013Fr02 (241Np, 241Pu, 241Am, 241Cm, 241Bk, and 241Cf), 2011Me01 (241Es), and 2013Th02 (241Fm).
2003 Oregon traffic crash summary
DOT National Transportation Integrated Search
2004-10-01
The Crash Analysis and Reporting Unit compiles data for reported motor vehicle traffic crashes occurring on city streets, county roads and state highways. The data supports various local, county and state traffic safety programs, engineering and plan...
Census mapbook for transportation planning.
DOT National Transportation Integrated Search
1994-12-01
Geographic displays of Census data for transportation planning and policy decisions are compiled in a report of 49 maps, depicting use of the data in applications such as travel demand model development and model validation, population forecasting, cor...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-05
... compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment... Series Using datasets recommended from the Data Workshop, Panelists will employ assessment models to...
24 CFR 26.19 - Request for production of documents.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., photographs, sound recordings, images, and other data or data compilations stored in any medium from which... the usual course of business or shall organize and label them to correspond with the categories in the...
24 CFR 26.19 - Request for production of documents.
Code of Federal Regulations, 2011 CFR
2011-04-01
..., photographs, sound recordings, images, and other data or data compilations stored in any medium from which... the usual course of business or shall organize and label them to correspond with the categories in the...
24 CFR 26.19 - Request for production of documents.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., photographs, sound recordings, images, and other data or data compilations stored in any medium from which... the usual course of business or shall organize and label them to correspond with the categories in the...
Massachusetts Integrated Postsecondary Education Data System. Summary Report 1987.
ERIC Educational Resources Information Center
Dulac, Betty; Vasily, Jon
Data compiled from aggregate forms completed by individual Massachusetts colleges and universities as part of the Integrated Postsecondary Education Data System (IPEDS), formerly known as the Higher Education General Information Survey (HEGIS), are presented. Only data pertaining to those institutions authorized to grant degrees in Massachusetts…
Space station data system analysis/architecture study. Task 4: System definition report. Appendix
NASA Technical Reports Server (NTRS)
1985-01-01
Appendices to the systems definition study for the space station Data System are compiled. Supplemental information on external interface specification, simulation and modeling, and function design characteristics is presented along with data flow diagrams, a data dictionary, and function allocation matrices.
34 CFR 303.540 - Data collection.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 34 Education 2 2011-07-01 2010-07-01 true Data collection. 303.540 Section 303.540 Education... DISABILITIES State Administration Reporting Requirements § 303.540 Data collection. (a) Each system must include the procedures that the State uses to compile data on the statewide system. The procedures must...
34 CFR 303.540 - Data collection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 2 2010-07-01 2010-07-01 false Data collection. 303.540 Section 303.540 Education... DISABILITIES State Administration Reporting Requirements § 303.540 Data collection. (a) Each system must include the procedures that the State uses to compile data on the statewide system. The procedures must...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Seyong; Vetter, Jeffrey S
Computer architecture experts expect that non-volatile memory (NVM) hierarchies will play a more significant role in future systems including mobile, enterprise, and HPC architectures. With this expectation in mind, we present NVL-C: a novel programming system that facilitates the efficient and correct programming of NVM main memory systems. The NVL-C programming abstraction extends C with a small set of intuitive language features that target NVM main memory, and can be combined directly with traditional C memory model features for DRAM. We have designed these new features to enable compiler analyses and run-time checks that can improve performance and guard against a number of subtle programming errors, which, when left uncorrected, can corrupt NVM-stored data. Moreover, to enable recovery of data across application or system failures, these NVL-C features include a flexible directive for specifying NVM transactions. So that our implementation might be extended to other compiler front ends and languages, the majority of our compiler analyses are implemented in an extended version of LLVM's intermediate representation (LLVM IR). We evaluate NVL-C on a number of applications to show its flexibility, performance, and correctness.
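NVL-C's concrete syntax is not reproduced here; as a language-neutral illustration of why a transaction construct matters for NVM-resident data, the sketch below hand-codes an undo-log-style update in plain C++. A crash between the two field updates would leave the data inconsistent unless the pair is made atomic, which is the property a transactional directive would express and compiler-generated logging would enforce. All names and structures are hypothetical.

    #include <cstdint>
    #include <iostream>

    // A toy model of data that would live in NVM: two fields that must be
    // updated together (e.g., a balance transfer). Names are hypothetical.
    struct Account { std::int64_t a = 100, b = 0; };

    // A tiny undo log, also assumed to live in NVM. A transactional language
    // construct would generate logging like this automatically.
    struct UndoLog { bool valid = false; Account before; };

    void transfer(Account& acct, UndoLog& log, std::int64_t amount) {
        log.before = acct;          // 1. record old state
        log.valid = true;           //    (a real system would flush/fence here)
        acct.a -= amount;           // 2. perform the updates
        acct.b += amount;
        log.valid = false;          // 3. commit: invalidate the undo record
    }

    void recover(Account& acct, const UndoLog& log) {
        if (log.valid) acct = log.before;   // roll back a torn update after a crash
    }

    int main() {
        Account acct; UndoLog log;
        transfer(acct, log, 40);
        recover(acct, log);                 // no-op here: transaction committed
        std::cout << acct.a << " " << acct.b << "\n";   // 60 40
    }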
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
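A minimal sketch of the empirical-measurement idea, assuming a much simpler setting than tensor contractions: the constituent operations (here, row-major versus column-major traversal of a matrix) are timed directly, and the measured costs drive the layout/loop-order choice. This is not the Tensor Contraction Engine.

    #include <chrono>
    #include <iostream>
    #include <vector>

    // Time one traversal order of an n x n matrix stored in row-major order.
    // These measured costs stand in for the "empirically determined cost
    // components" fed to a layout-selection model.
    double time_traversal(const std::vector<double>& m, std::size_t n, bool row_major) {
        volatile double sink = 0.0;          // keep the loads from being optimized away
        auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                sink = sink + (row_major ? m[i * n + j] : m[j * n + i]);
        auto t1 = std::chrono::steady_clock::now();
        (void)sink;
        return std::chrono::duration<double>(t1 - t0).count();
    }

    int main() {
        const std::size_t n = 2000;
        std::vector<double> m(n * n, 1.0);

        double t_row = time_traversal(m, n, true);
        double t_col = time_traversal(m, n, false);
        std::cout << "row-major traversal:    " << t_row << " s\n";
        std::cout << "column-major traversal: " << t_col << " s\n";
        std::cout << "selected loop order: " << (t_row <= t_col ? "row" : "column")
                  << "-major\n";
    }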
Proving Correctness for Pointer Programs in a Verifying Compiler
NASA Technical Reports Server (NTRS)
Kulczycki, Gregory; Singh, Amrinder
2008-01-01
This research describes a component-based approach to proving the correctness of programs involving pointer behavior. The approach supports modular reasoning and is designed to be used within the larger context of a verifying compiler. The approach consists of two parts. When a system component requires the direct manipulation of pointer operations in its implementation, we implement it using a built-in component specifically designed to capture the functional and performance behavior of pointers. When a system component requires pointer behavior via a linked data structure, we ensure that the complexities of the pointer operations are encapsulated within the data structure and are hidden to the client component. In this way, programs that rely on pointers can be verified modularly, without requiring special rules for pointers. The ultimate objective of a verifying compiler is to prove, with as little human intervention as possible, that proposed program code is correct with respect to a full behavioral specification. Full verification for software is especially important for an agency like NASA that is routinely involved in the development of mission critical systems.
Review of food composition data for edible insects.
Nowak, Verena; Persijn, Diedelinde; Rittenschober, Doris; Charrondiere, U Ruth
2016-02-15
Edible insects are considered rich in protein and a variety of micronutrients, and are therefore seen as potential contributors to food security. However, the estimation of the insects' contribution to the nutrient intake is limited since data are absent in food composition tables and databases. Therefore, FAO/INFOODS collected and published analytical data from primary sources with sufficient quality in the Food Composition Database for Biodiversity (BioFoodComp). Data were compiled for 456 food entries on insects in different developmental stages. A total of 5734 data points were entered, most on minerals and trace elements (34.8%), proximates (24.5%), amino acids (15.3%) and (pro)vitamins (9.1%). Data analysis of Tenebrio molitor confirms its nutritive quality that can help to combat malnutrition. The collection of data will assist compilers to incorporate more insects into tables and databases, and to further improve nutrient intake estimations. Copyright © 2015 Food and Agriculture Organization of the United Nations. Published by Elsevier Ltd. All rights reserved.
Brown, W. M.; Nowlin, J.O.; Smith, L.H.; Flint, M.R.
1986-01-01
A study of the Truckee and Carson Rivers was begun in October 1978 to assess the cause and effect relations between human and natural actions, and the quality of water at different times and places along the rivers. This report deals with the compilation of basic hydrologic data and the presentation of some of the new data collected during the study. Topographic, flow, and chemical data, data from recent time-of-travel studies, and new data on river mileages and drainage areas that were determined using new, high-resolution maps, are included. The report is a guide to locating maps, aerial photographs, computer files, and reports that relate to the rivers and their basins. It describes methods for compiling and expressing hydrologic information for ease of reading and understanding by the many users of water-related data. Text, tabular data, and colored plates with detailed maps and hydrographs are extensively cross referenced. (USGS)
CIL: Compiler Implementation Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gries, David
1969-03-01
This report is a manual for the proposed Compiler Implementation Language, CIL. It is not an expository paper on the subject of compiler writing or compiler-compilers. The language definition may change as work progresses on the project. It is designed for writing compilers for the IBM 360 computers.
Geological and geochemical aspects of uranium deposits: a selected, annotated bibliography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, J.M.; Brock, M.L.; Garland, P.A.
1978-06-01
A compilation of 490 references is presented which is the second in a series compiled from the National Uranium Resource Evaluation (NURE) Bibliographic Data Base. This data base is one of six created by the Ecological Sciences Information Center, Oak Ridge National Laboratory, for the Grand Junction Office of the Department of Energy. Major emphasis for this volume has been placed on uranium geology, encompassing deposition, genesis of ore deposits, and ore controls; and prospecting techniques, including geochemistry and aerial reconnaissance. The following indexes are provided to aid the user in locating references of interest: author, geographic location, quadrangle name, geoformational feature, taxonomic name, and keyword.
OEM Emergency Preparedness Information
The Office of Emergency Management compiles a wide variety of information in support of Emergency Preparedness, including certain elements of the System for Risk Management Plans (SRMP), a wide variety of training and guidance materials, inventories and readiness/O&M status of equipment and response personnel. Some of the data available to EPA for this emergency preparedness includes industry trade secret information. A major component of this data asset is information compiled in the Compendium of Environmental Testing Laboratories. This information allows OEM to direct samples recovered from emergency incidents to the appropriate laboratory certified to analyze the substances in question. Also included here are all types of field readiness information, training logs, and personnel contact information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, J.M.; Garland, P.A.; White, M.B.
This bibliography, a compilation of 474 references, is the fourth in a series compiled from the National Uranium Resource Evaluation (NURE) Bibliographic Data Base. This data base was created for the Grand Junction Office of the Department of Energy's National Uranium Resource Evaluation Project by the Ecological Sciences Information Center, Oak Ridge National Laboratory. The references in the bibliography are arranged by subject category: (1) geochemistry, (2) exploration, (3) mineralogy, (4) genesis of deposits, (5) geology of deposits, (6) uranium industry, (7) geology of potential uranium-bearing areas, and (8) reserves and resources. The references are indexed by author, geographic location, quadrangle name, geoformational feature, and keyword.
This presentation is a compilation of harmful algal bloom (HAB) related field monitoring data from the 2015 bloom season, treatment plant monitoring data from the 2013 and 2014 bloom seasons, and bench-scale treatment study data from 2015.
A Geophysical Atlas for Interpretation of Satellite-derived Data
NASA Technical Reports Server (NTRS)
Lowman, P. D., Jr. (Editor); Frey, H. V. (Editor); Davis, W. M.; Greenberg, A. P.; Hutchinson, M. K.; Langel, R. A.; Lowrey, B. E.; Marsh, J. G.; Mead, G. D.; Okeefe, J. A.
1979-01-01
A compilation of maps of global geophysical and geological data plotted on a common scale and projection is presented. The maps include satellite gravity, magnetic, seismic, volcanic, tectonic activity, and mantle velocity anomaly data. Bibliographic references for all maps are included.
Neutron Data Compilation Centre, European Nuclear Energy Agency, Newsletter No. 8 Bulletin
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1969-03-15
An index to the BNL/CCDN computerized file on neutron data was made and published as CCDN-NW/4. This publication is a new presentation of that index, describing the content of the data file as of March 1969.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
The Nippon Foundation / GEBCO Indian Ocean Bathymetric Compilation Project
NASA Astrophysics Data System (ADS)
Wigley, R. A.; Hassan, N.; Chowdhury, M. Z.; Ranaweera, R.; Sy, X. L.; Runghen, H.; Arndt, J. E.
2014-12-01
The Indian Ocean Bathymetric Compilation (IOBC) project, undertaken by Nippon Foundation / GEBCO Scholars, is focused on building a regional compilation of all publicly available bathymetric data within the Indian Ocean region from 30°N to 60°S and 10°E to 147°E. One of the objectives of this project is the creation of a network of Nippon Foundation / GEBCO Scholars working together, drawn from the thirty Scholars from fourteen nations bordering the Indian Ocean who have graduated from the Postgraduate Certificate in Ocean Bathymetry (PCOB) training program at the University of New Hampshire. The IOBC project has provided students with a working example during their course work and has been used as the basis for student projects during their visits to another laboratory at the end of their academic year. This multi-national, multi-disciplinary project team will continue to build on the skills gained during the PCOB program through additional training. The IOBC is being built using the methodology developed for the International Bathymetric Chart of the Southern Ocean (IBCSO) compilation (Arndt et al., 2013). These skills were transferred, through training workshops, to further support the ongoing development within the scholars' network. This capacity-building project is envisioned to connect other personnel from within all of the participating nations and organizations, resulting in additional capacity-building in this field of multi-resolution bathymetric grid generation in their home communities. An updated regional bathymetric map and grids of the Indian Ocean will be an invaluable tool for all fields of marine scientific research and resource management. In addition, it has implications for increased public safety by offering the best and most up-to-date depth data for modeling regional-scale oceanographic processes such as tsunami-wave propagation, among others.
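The record above describes merging heterogeneous survey data into a regional grid. As a rough illustration of one common pre-gridding step in such compilations (block-median filtering of scattered soundings), the following Python sketch is offered; the cell size, coordinates, and function names are assumptions for illustration, not the IOBC or IBCSO production workflow.

    # Illustrative sketch only: block-median gridding of scattered depth soundings,
    # a common first step when combining heterogeneous bathymetric surveys into a
    # regional grid. Cell size, bounds, and names are assumptions, not the
    # IOBC/IBCSO production workflow.
    from collections import defaultdict
    from statistics import median

    def block_median(soundings, lon_min, lat_min, cell_deg):
        """soundings: iterable of (lon, lat, depth_m); returns {(i, j): median depth}."""
        cells = defaultdict(list)
        for lon, lat, depth in soundings:
            i = int((lon - lon_min) / cell_deg)   # column index
            j = int((lat - lat_min) / cell_deg)   # row index
            cells[(i, j)].append(depth)
        return {ij: median(depths) for ij, depths in cells.items()}

    # Example: three soundings, two of which fall in the same 0.01-degree cell
    grid = block_median([(75.001, -10.002, -4210.0),
                         (75.004, -10.007, -4195.0),
                         (75.103, -10.050, -4380.0)],
                        lon_min=75.0, lat_min=-10.1, cell_deg=0.01)
    print(grid)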
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cashman, Frances H.; Kulkarni, Varsha P.; Kisielius, Romas
2017-05-01
Measurements of element abundances in galaxies from astrophysical spectroscopy depend sensitively on the atomic data used. With the goal of making the latest atomic data accessible to the community, we present a compilation of selected atomic data for resonant absorption lines at wavelengths longward of 911.753 Å (the H i Lyman limit), for key heavy elements (heavier than atomic number 5) of astrophysical interest. In particular, we focus on the transitions of those ions that have been observed in the Milky Way interstellar medium (ISM), the circumgalactic medium (CGM) of the Milky Way and/or other galaxies, and the intergalactic medium (IGM). We provide wavelengths, oscillator strengths, associated accuracy grades, and references to the oscillator strength determinations. We also attempt to compare and assess the recent oscillator strength determinations. For about 22% of the lines that have updated oscillator strength values, the differences between the former values and the updated ones are ≳0.1 dex. Our compilation will be a useful resource for absorption line studies of the ISM, as well as studies of the CGM and IGM traced by sight lines to quasars and gamma-ray bursts. Studies (including those enabled by future generations of extremely large telescopes) of absorption by galaxies against the light of background galaxies will also benefit from our compilation.
The Health Impact Assessment (HIA) Resource and Tool ...
Health Impact Assessment (HIA) is a relatively new and rapidly emerging field in the U.S. An inventory of available HIA resources and tools was conducted, with a primary focus on resources developed in the U.S. The resources and tools available to HIA practitioners in the conduct of their work were identified through multiple methods and compiled into a comprehensive list. The compilation includes tools and resources related to the HIA process itself and those that can be used to collect and analyze data, establish a baseline profile, assess potential health impacts, and establish benchmarks and indicators for monitoring and evaluation. These resources include literature and evidence bases, data and statistics, guidelines, benchmarks, decision and economic analysis tools, scientific models, methods, frameworks, indices, mapping, and various data collection tools. Understanding the data, tools, models, methods, and other resources available to perform HIAs will help to advance the HIA community of practice in the U.S., improve the quality and rigor of assessments upon which stakeholder and policy decisions are based, and potentially improve the overall effectiveness of HIA to promote healthy and sustainable communities. The Health Impact Assessment (HIA) Resource and Tool Compilation is a comprehensive list of resources and tools that can be utilized by HIA practitioners with all levels of HIA experience to guide them throughout the HIA process. The HIA Resource
Thermodynamic data for biomass conversion and waste incineration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domalski, E.S.; Jobe, T.L. Jr; Milne, T.A.
1986-09-01
The general purpose of this collection of thermodynamic data of selected materials is to make property information available to the engineering community on chemical mixtures, polymers, composite materials, solid wastes, biomass, and materials not easily identifiable by a single stoichiometric formula. More than 700 materials have been compiled covering properties such as specific heat, gross heat of combustion, heat of fusion, heat of vaporization, and vapor pressure. The information was obtained from the master files of the NBS Chemical Thermodynamics Data Center, the annual issues of the Bulletin of Chemical Thermodynamics, intermittent examinations of the Chemical Abstracts subject indexes, individual articles by various authors, and other general reference sources. The compilation is organized into several broad categories; materials are listed alphabetically within each category. For each material, the physical state, information as to the composition or character of the material, the kind of thermodynamic property reported, the specific property values for the material, and citations to the reference list are given. In addition, appendix A gives an empirical formula that allows heats of combustion of carbonaceous materials to be predicted with surprising accuracy when the elemental composition is known. A spread sheet illustrates this predictability with examples from this report and elsewhere. Appendix B lists some reports containing heats of combustion not included in this publication. Appendix C contains symbols, units, conversion factors, and atomic weights used in evaluating and compiling the thermodynamic data.
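The abstract above mentions an empirical formula (appendix A of the report) for predicting heats of combustion from elemental composition. That formula is not reproduced in the abstract; as a hedged illustration of the same kind of estimate, the sketch below applies a commonly used Dulong-type correlation, whose coefficients vary by source and which is known to underestimate highly oxygenated materials such as cellulose.

    # Illustrative only: a Dulong-type correlation for gross (higher) heating value
    # from ultimate analysis. This is a textbook-style estimate, NOT the empirical
    # formula given in appendix A of the report; coefficients vary by source, and
    # Dulong-type formulas underestimate highly oxygenated fuels such as cellulose.
    def gross_heating_value_mj_per_kg(c, h, o, s):
        """c, h, o, s: mass fractions of carbon, hydrogen, oxygen, sulfur (0-1)."""
        return 33.8 * c + 144.2 * (h - o / 8.0) + 9.4 * s

    # Cellulose-like biomass, roughly C 44%, H 6%, O 49% by mass
    print(round(gross_heating_value_mj_per_kg(0.44, 0.06, 0.49, 0.0), 1), "MJ/kg")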
Portuguese food composition database quality management system.
Oliveira, L M; Castanheira, I P; Dantas, M A; Porto, A A; Calhau, M A
2010-11-01
The harmonisation of food composition databases (FCDB) has been a recognised need among users, producers and stakeholders of food composition data (FCD). To reach harmonisation of FCDBs among the national compiler partners, the European Food Information Resource (EuroFIR) Network of Excellence set up a series of guidelines and quality requirements, together with recommendations to implement quality management systems (QMS) in FCDBs. The Portuguese National Institute of Health (INSA) is the national FCDB compiler in Portugal and is also a EuroFIR partner. INSA's QMS complies with ISO/IEC (International Organization for Standardisation/International Electrotechnical Commission) 17025 requirements. The purpose of this work is to report on the strategy used and progress made for extending INSA's QMS to the Portuguese FCDB in alignment with EuroFIR guidelines. A stepwise approach was used to extend INSA's QMS to the Portuguese FCDB. The approach included selection of reference standards and guides and the collection of relevant quality documents directly or indirectly related to the compilation process; selection of the adequate quality requirements; assessment of adequacy and level of requirement implementation in the current INSA's QMS; implementation of the selected requirements; and EuroFIR's preassessment 'pilot' auditing. The strategy used to design and implement the extension of INSA's QMS to the Portuguese FCDB is reported in this paper. The QMS elements have been established by consensus. ISO/IEC 17025 management requirements (except 4.5) and 5.2 technical requirements, as well as all EuroFIR requirements (including technical guidelines, FCD compilation flowchart and standard operating procedures), have been selected for implementation. The results indicate that the quality management requirements of ISO/IEC 17025 in place in INSA fit the needs for document control, audits, contract review, non-conformity work and corrective actions, and users' (customers') comments, complaints and satisfaction, with minor adaptation. Implementation of the FCDB QMS proved to be a way of reducing the subjectivity of the compilation process and fully documenting it, and also facilitates training of new compilers. Furthermore, it has strengthened cooperation and trust among FCDB actors, as all of them were called to be involved in the process. On the basis of our practical results, we can conclude that ISO/IEC 17025 management requirements are an adequate reference for the implementation of INSA's FCDB QMS with the advantages of being well known to all members of staff and also being a common quality language among laboratories producing FCD. Combining quality systems and food composition activities endows the FCDB compilation process with flexibility, consistency and transparency, and facilitates its monitoring and assessment, providing the basis for strengthening confidence among users, data producers and compilers.
NASA Astrophysics Data System (ADS)
Kisimoto, Kiyoyuki; Tani, Shin; Iizasa, Kokichi; Ishida, Mizuho
2010-05-01
The Japanese ECS submission made in 2008 to the CLCS is heavily based on swath bathymetric data. The Japan Coast Guard and other seagoing institutions in Japan have been intensively engaged in swath mapping in and around Japanese waters for more than 25 years. As a result of the intensive survey activities for the ECS submission over the past several years, many geological and geophysical data in the region have also been accumulated and compiled. Among those data, bathymetry is the most fundamental and basic in all earth sciences. Geologically, Japan is located in one of the most tectonically active zones on Earth. To better understand and visualize the tectonic processes around Japan, the newly compiled bathymetric data have been combined with geological and geophysical data in three-dimensional images, or dioramas, of tectonic processes. Japan is a beautiful showcase of tectonic phenomena, such as subduction, collision, eruption, and earthquakes. Different types of subduction are recognized not only from seismicity but are also manifested in the detailed topography. Marine geology maps should be reinterpreted and revised with the new bathymetric data, and gravity anomaly data are recalculated as the new DEM becomes available. Our poster will visualize the greatly enhanced quality of the DEM of Japan. Specification of the DEM of Japan used for the presentation: datum, WGS84; land area, SRTM3; wet area (deep sea), quality-controlled swath data (selection of good navigation data and removal of bad/loose pings) gridded at more than one spatial resolution for users' convenience; wet area (voids, or areas with no swath data), filled with ETOPO2 (version 2); wet area (coastal to shallow), conventional methods, i.e., manual editing by experts.
NASA Technical Reports Server (NTRS)
Kemp, N. D.
1983-01-01
Engineers evaluating Space Shuttle flight data and performance results are using a massive data base of wind tunnel test data. A wind tunnel test data base of the magnitude attained is a major accomplishment. The Apollo program spawned an automated wind tunnel data analysis system called SADSAC, developed by the Chrysler Space Division. An improved version of this system, renamed DATAMAN, was used by Chrysler to document analyzed wind tunnel data and to bank the test data in standardized formats. These analysis documents, associated computer graphics, and standard-formatted data were disseminated nationwide to the Shuttle technical community. These outputs became the basis for substantiating and certifying the flight worthiness of the Space Shuttle and for improving future designs. As an aid to future programs, this paper documents the lessons learned in compiling the massive wind tunnel test data base for developing the Space Shuttle. In particular, innovative managerial and technical concepts that evolved in the course of conceiving and developing this successful DATAMAN system, and the methods and organization for applying the system, are presented.
VizieR Online Data Catalog: Variable stars in Leo I dSph (Stetson+, 2014)
NASA Astrophysics Data System (ADS)
Stetson, P. B.; Fiorentino, G.; Bono, G.; Bernard, E. J.; Monelli, M.; Iannicola, G.; Gallart, C.; Ferraro, I.
2014-11-01
The observational material for this study consists of 1884 individual CCD images obtained on 48 nights during 32 observing runs. These data are contained within a much larger data collection (~400000 images, ~500 observing runs) compiled and maintained by the first author. (5 data files).
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-28
... Workshop. The product of the Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Assessment... state and federal agencies. SEDAR 26 Assessment webinars: Using datasets recommended from the Data...
16 CFR § 1702.8 - Human experience data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 16 Commercial Practices 2 2013-01-01 2013-01-01 false Human experience data. § 1702.8 Section... AND REQUIREMENTS § 1702.8 Human experience data. Human experience data constitutes the primary... compilation of all reasonably available reports pertaining to human use of the particular substance, including...
5 CFR 890.1307 - Data collection.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Data collection. 890.1307 Section 890... Program Demonstration Project § 890.1307 Data collection. Each carrier will compile, maintain, and when requested by OPM or DoD, report data on its plan's experience necessary to produce reports containing the...
5 CFR 890.1307 - Data collection.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Data collection. 890.1307 Section 890... Program Demonstration Project § 890.1307 Data collection. Each carrier will compile, maintain, and when requested by OPM or DoD, report data on its plan's experience necessary to produce reports containing the...
34 CFR 303.124 - Data collection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 34 Education 2 2012-07-01 2012-07-01 false Data collection. 303.124 Section 303.124 Education... Statewide System § 303.124 Data collection. (a) Each statewide system must include a system for compiling and reporting timely and accurate data that meets the requirements in paragraph (b) of this section...
5 CFR 890.1307 - Data collection.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Data collection. 890.1307 Section 890... Program Demonstration Project § 890.1307 Data collection. Each carrier will compile, maintain, and when requested by OPM or DoD, report data on its plan's experience necessary to produce reports containing the...
34 CFR 303.124 - Data collection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 34 Education 2 2013-07-01 2013-07-01 false Data collection. 303.124 Section 303.124 Education... Statewide System § 303.124 Data collection. (a) Each statewide system must include a system for compiling and reporting timely and accurate data that meets the requirements in paragraph (b) of this section...
34 CFR 303.124 - Data collection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 34 Education 2 2014-07-01 2013-07-01 true Data collection. 303.124 Section 303.124 Education... Statewide System § 303.124 Data collection. (a) Each statewide system must include a system for compiling and reporting timely and accurate data that meets the requirements in paragraph (b) of this section...
5 CFR 890.1307 - Data collection.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Data collection. 890.1307 Section 890... Program Demonstration Project § 890.1307 Data collection. Each carrier will compile, maintain, and when requested by OPM or DoD, report data on its plan's experience necessary to produce reports containing the...
12 CFR 1805.804 - Data collection and reporting.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 7 2010-01-01 2010-01-01 false Data collection and reporting. 1805.804 Section... Data collection and reporting. (a) Data—General. An Awardee (and a Community Partner, if appropriate.... An Awardee (and a Community Partner, if appropriate) shall compile such data on the gender, race...
Finglas, Paul M.; Berry, Rachel; Astley, Siân
2014-01-01
Food composition databases (FCDBs) form an integral part of nutrition and health research, patient treatment, manufacturing processes, and consumer information. FCDBs have traditionally been compiled at a national level; therefore, until recently, there was limited standardization of procedures across different data sets. Digital technologies now allow FCDB users to access a variety of information from different sources, which has emphasized the need for greater harmonization. The European Food Information Resource (EuroFIR) Network of Excellence and Nexus projects (2005–2013) has been instrumental in addressing differences in FCDBs and in producing standardized protocols and quality schemes to compile and manage them. A formal, recognized European standard for food composition data has been prepared, which will further assist in the production of comparable data. Quality schemes need to address the composition data; the methods of sampling, analysis, and calculation; and the documentation of processes. The EuroFIR data exchange platform provides a wealth of resources for composition compilers and end users and continues to develop new and innovative tools and methodologies. EuroFIR is also working in collaboration with the European Food Safety Authority, and as a partner in several European projects. Through such collaborations, EuroFIR will continue to develop FCDB harmonization and to use new technologies to ensure sustainable future initiatives in the food composition activities that underpin food and health research in Europe. PMID:25469406
Technical Data and Reports on Nitrogen Dioxide Measurements and SIP Status
EPA collects data from the states and regions on their air quality and state implementation plan (SIP) progress. This information is compiled in a database, and used to create reports, trend charts, and maps.
Technical Data and Reports on Carbon Monoxide Measurements and SIP Status
EPA collects data from the states and regions on their air quality and state implementation plan (SIP) progress. This information is compiled in a database, and used to create reports, trend charts, and maps.
76 FR 76730 - Proposed Agency Information Collection Activities; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-08
..., going forward, the Federal Reserve would modify the FR 2644 instructions as needed to maintain... corporations. Data from this reporting form comprise a piece of the flow of funds data that are compiled by the...
2009 Oregon traffic crash summary
DOT National Transportation Integrated Search
2010-09-01
The Crash Analysis and Reporting Unit compiles data and publishes statistics for reported motor vehicle traffic crashes per ORS 802.050(2) and 802.220(6). The data supports various local, county and state traffic safety programs, engineering and ...
Fallon, Nevada FORGE Analogue Outcrop Samples
Blankenship, Doug; Bauer, Steve J.; Barrow, P.; Robbins, A.; Hileman, M.
2018-03-12
Compilation of results for mechanical and fluid flow properties of analogue outcrop samples - experimental data for compressional and shear wave velocities, tensile strengths, and compressive strengths. Outcrop location and sample orientation data are documented in a separate csv file.
2008 Oregon traffic crash summary
DOT National Transportation Integrated Search
2009-09-01
The Crash Analysis and Reporting Unit compiles data and publishes statistics for reported motor vehicle traffic crashes per ORS 802.050(2) and 802.220(6). The data supports various local, county and state traffic safety programs, engineering and ...
2010 Oregon traffic crash summary
DOT National Transportation Integrated Search
2011-08-01
The Crash Analysis and Reporting Unit compiles data and publishes statistics for reported motor vehicle traffic crashes per ORS 802.050(2) and 802.220(6). The data supports various local, county and state traffic safety programs, engineering and ...
Elliott, Peggy E.; Moreo, Michael T.
2011-01-01
From 1951 to 2008, groundwater withdrawals totaled more than 25,000 million gallons from wells on and directly adjacent to the Nevada National Security Site. Total annual groundwater withdrawals ranged from about 30 million gallons in 1951 to as much as 1,100 million gallons in 1989. Annual withdrawals from individual wells ranged from 0 million gallons to more than 325 million gallons. Monthly withdrawal data for the wells were compiled in a Microsoft Excel 2003 spreadsheet. Groundwater withdrawal data are a compilation of measured and estimated withdrawals obtained from published and unpublished reports, U.S. Geological Survey files, and/or data reported by other agencies. The withdrawal data were collected from 42 wells completed in 33 boreholes. A history of each well is presented in terms of its well construction, borehole lithology, withdrawals, and water levels.
ALCHEMIST (Anesthesia Log, Charge Entry, Medical Information, and Statistics)
Covey, M. Carl
1979-01-01
This paper presents an automated system for the handling of charges and information processing within the Anesthesiology department of the University of Arkansas for the Medical Sciences (UAMS). The purpose of the system is to take the place of cumbersome, manual billing procedures and in the process of automated charge generation, to compile a data base of patient data for later use. ALCHEMIST has demonstrated its value by increasing both the speed and the accuracy of generation of patient charges as well as facilitating the compilation of valuable, informative reports containing statistical summaries of all aspects of the UAMS operating wing case load. ALCHEMIST allows for the entry of fifty different sets of information (multiple items in some sets) for a total of 107 separate data elements from the original anesthetic record. All this data is entered as part of the charge entry procedure.
Reconnaissance geologic map of Kodiak Island and adjacent islands, Alaska
Wilson, Frederic H.
2013-01-01
Kodiak Island and its adjacent islands, located on the west side of the Gulf of Alaska, contain one of the largest areas of exposure of the flysch and melange of the Chugach terrane of southern Alaska. However, in the past 25 years, only detailed mapping covering small areas in the archipelago has been done. This map and its associated digital files (Wilson and others, 2005) present the best available mapping compiled in an integrated fashion. The map and associated digital files represent part of a systematic effort to release geologic map data for the United States in a uniform manner. The geologic data have been compiled from a wide variety of sources, ranging from state and regional geologic maps to large-scale field mapping. The map data are presented for use at a nominal scale of 1:500,000, although individual datasets (see Wilson and others, 2005) may contain data suitable for use at larger scales.
Buttenfield, B.P.; Stanislawski, L.V.; Brewer, C.A.
2011-01-01
This paper reports on generalization and data modeling to create reduced-scale versions of the National Hydrography Dataset (NHD) for dissemination through The National Map, the primary data delivery portal for USGS. Our approach distinguishes local differences in physiographic factors, to demonstrate that knowledge about varying terrain (mountainous, hilly or flat) and varying climate (dry or humid) can support decisions about algorithms, parameters, and processing sequences to create generalized, smaller-scale data versions which preserve distinct hydrographic patterns in these regions. We work with multiple subbasins of the NHD that provide a range of terrain and climate characteristics. Specifically tailored generalization sequences are used to create simplified versions of the high-resolution data, which were compiled for 1:24,000-scale mapping. Results are evaluated cartographically and metrically against a medium-resolution benchmark version compiled for 1:100,000-scale mapping, developing coefficients of linear and areal correspondence.
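The abstract above refers to tailored generalization sequences for creating smaller-scale hydrography. As a purely illustrative building block (not the authors' workflow or parameters), the sketch below implements Douglas-Peucker line simplification, one of the standard operators used in cartographic generalization.

    # Illustrative sketch: Douglas-Peucker line simplification, a common building
    # block in cartographic generalization. The NHD work uses specifically tailored
    # sequences of operators and parameters; this is not that workflow.
    import math

    def point_line_distance(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return math.hypot(px - ax, py - ay)
        # Perpendicular distance from p to the infinite line through a and b
        return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

    def douglas_peucker(points, tolerance):
        if len(points) < 3:
            return list(points)
        # Find the vertex farthest from the chord between the endpoints
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = point_line_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax <= tolerance:
            return [points[0], points[-1]]
        left = douglas_peucker(points[: index + 1], tolerance)
        right = douglas_peucker(points[index:], tolerance)
        return left[:-1] + right

    stream = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
    print(douglas_peucker(stream, tolerance=0.5))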
NASA Astrophysics Data System (ADS)
Caballero, J. A.; Montes, D.; Alonso-Floriano, F. J.; Cortés-Contreras, M.; González-Álvarez, E.; Hidalgo, D.; Holgado, G.; Martínez-Rodríguez, H.; Sanz-Forcada, J.; López-Santiago, J.
2015-05-01
We are compiling the most comprehensive database of M dwarfs ever built, CARMENCITA, the CARMENES Cool dwarf Information and daTa Archive, which will be the CARMENES 'input catalogue'. In addition to the science preparation with low- and high-resolution spectrographs and lucky imagers, we compile a huge pile of public data on over 2200 M dwarfs, and analyse them, mostly using virtual-observatory tools. Here we describe four specific actions carried out by master students. They mine public archives for additional high-resolution spectroscopy (UVES, FEROS and HARPS), multi-band photometry (FUV-NUV-u-B-g-V-r-R-i-J-H-Ks-W1-W2-W3-W4), X-ray data (ROSAT, XMM-Newton and Chandra), and periods, rotational velocities and Hα pseudo-equivalent widths. As described, there are many interdependences between all these data.
Benchmark Dataset for Whole Genome Sequence Compression.
C L, Biji; S Nair, Achuthsankar
2017-01-01
The research in DNA data compression lacks a standard dataset against which to test DNA-specific compression tools. This paper argues that the current state of achievement in DNA compression cannot be benchmarked in the absence of such a scientifically compiled whole-genome sequence dataset, and proposes a benchmark dataset built using a multistage sampling procedure. Considering the genome sequences of organisms available in the National Center for Biotechnology Information (NCBI) as the universe, the proposed dataset selects 1,105 prokaryotes, 200 plasmids, 164 viruses, and 65 eukaryotes. This paper reports the results of using three established tools on the newly compiled dataset and shows that their strengths and weaknesses become evident only with a comparison based on the scientifically compiled benchmark dataset. The sample dataset and the respective links are available @ https://sourceforge.net/projects/benchmarkdnacompressiondataset/.
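The abstract above proposes a benchmark assembled by multistage sampling, with per-class counts quoted (1,105 prokaryotes, 200 plasmids, 164 viruses, 65 eukaryotes). The sketch below shows a minimal stratified draw of that shape; the toy catalogue, seed, and helper names are assumptions, not the authors' actual selection procedure.

    # Minimal sketch of a stratified (multistage-style) draw of genome accessions.
    # The per-stratum sample sizes come from the abstract; the catalogue contents,
    # seed, and helper names are illustrative assumptions.
    import random

    TARGET = {"prokaryote": 1105, "plasmid": 200, "virus": 164, "eukaryote": 65}

    def stratified_sample(catalogue, targets, seed=0):
        """catalogue: {stratum: [accession ids]}; returns {stratum: sampled ids}."""
        rng = random.Random(seed)
        sample = {}
        for stratum, n in targets.items():
            pool = catalogue.get(stratum, [])
            sample[stratum] = rng.sample(pool, min(n, len(pool)))
        return sample

    # Toy catalogue (real NCBI accession lists would be far larger)
    toy = {s: [f"{s}_{i:05d}" for i in range(2000)] for s in TARGET}
    picked = stratified_sample(toy, TARGET)
    print({s: len(v) for s, v in picked.items()})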
Compiler-Directed File Layout Optimization for Hierarchical Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut
File layout of array data is a critical factor that affects the behavior of storage caches, yet it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.
Compiler-Directed File Layout Optimization for Hierarchical Storage Systems
Ding, Wei; Zhang, Yuanrui; Kandemir, Mahmut; ...
2013-01-01
File layout of array data is a critical factor that affects the behavior of storage caches, yet it has so far received little attention in the context of hierarchical storage systems. The main contribution of this paper is a compiler-driven file layout optimization scheme for hierarchical storage caches. This approach, fully automated within an optimizing compiler, analyzes a multi-threaded application code and determines a file layout for each disk-resident array referenced by the code, such that the performance of the target storage cache hierarchy is maximized. We tested our approach using 16 I/O intensive application programs and compared its performance against two previously proposed approaches under different cache space management schemes. Our experimental results show that the proposed approach improves the execution time of these parallel applications by 23.7% on average.
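The two records above describe a compiler that chooses a file layout for each disk-resident array so that storage-cache behavior improves. As a heavily simplified illustration of the underlying idea (not the paper's compiler analysis or cache model), the sketch below compares block fetches for row-major versus column-major layout under a column-walking access trace, assuming only the most recently used block stays cached.

    # Toy illustration of layout selection driven by an access pattern: count how
    # many disk-block fetches a traversal of a disk-resident 2-D array causes under
    # each candidate layout, assuming only the most recent block stays cached, and
    # keep the cheaper layout. Not the paper's compiler analysis or cache model.
    def block_fetches(access_order, rows, cols, layout, block_elems=1024):
        fetches, current = 0, None
        for i, j in access_order:
            offset = i * cols + j if layout == "row-major" else j * rows + i
            block = offset // block_elems
            if block != current:
                fetches += 1
                current = block
        return fetches

    rows, cols = 512, 512
    # Access pattern of a column-walking loop nest: for j: for i: A[i][j]
    trace = [(i, j) for j in range(cols) for i in range(rows)]
    costs = {layout: block_fetches(trace, rows, cols, layout)
             for layout in ("row-major", "column-major")}
    print(costs, "-> choose", min(costs, key=costs.get))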
Intrusive dike complexes, cumulate cores, and the extrusive growth of Hawaiian volcanoes
Flinders, Ashton F.; Ito, Garrett; Garcia, Michael O.; Sinton, John M.; Kauahikaua, Jim; Taylor, Brian
2013-01-01
The Hawaiian Islands are the most geologically studied hot-spot islands in the world, yet, surprisingly, the only large-scale compilation of marine and land gravity data is more than 45 years old. Early surveys served as reconnaissance studies only, and detailed analyses of the crustal-density structure have been limited. Here we present a new chain-wide gravity compilation that incorporates historical island surveys, recently published work on the islands of Hawai‘i, Kaua‘i, and Ni‘ihau, and >122,000 km of newly compiled marine gravity data. Positive residual gravity anomalies reflect dense intrusive bodies, allowing us to locate current and former volcanic centers, major rift zones, and a previously suggested volcano on Ka‘ena Ridge. By inverting the residual gravity data, we generate a 3-D view of the dense, intrusive complexes and olivine-rich cumulate cores within individual volcanoes and rift zones. We find that the Hāna and Ka‘ena ridges are underlain by particularly high-density intrusive material (>2.85 g/cm3) not observed beneath other Hawaiian rift zones. Contrary to previous estimates, volcanoes along the chain are shown to be composed of a small proportion of intrusive material (<30% by volume), implying that the islands are predominantly built extrusively.
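The abstract above relates positive residual gravity anomalies to dense intrusive bodies. For orientation only, the sketch below evaluates the textbook forward model for the vertical gravity anomaly of a buried sphere of excess density; the body size, depth, and density contrast are assumed round numbers, and this is not the authors' 3-D inversion.

    # Textbook forward model, for illustration only: vertical gravity anomaly of a
    # buried sphere of excess density, showing why dense intrusive cores produce
    # positive residual anomalies. Body size, depth, and density contrast are
    # assumed round numbers; this is not the paper's 3-D inversion.
    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def sphere_anomaly_mgal(x_m, depth_m, radius_m, delta_rho):
        """Vertical anomaly (mGal) at horizontal offset x_m from the sphere's center."""
        excess_mass = (4.0 / 3.0) * math.pi * radius_m ** 3 * delta_rho
        gz = G * excess_mass * depth_m / (x_m ** 2 + depth_m ** 2) ** 1.5
        return gz * 1e5  # 1 mGal = 1e-5 m/s^2

    # Cumulate-like body: 5 km radius, centered 7 km deep, +300 kg/m^3 density contrast
    for x_km in (0, 5, 10, 20):
        print(f"{x_km:3d} km offset: {sphere_anomaly_mgal(x_km * 1e3, 7e3, 5e3, 300.0):6.1f} mGal")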
ProteoWizard: open source software for rapid proteomics tools development.
Kessner, Darren; Chambers, Matt; Burke, Robert; Agus, David; Mallick, Parag
2008-11-01
The ProteoWizard software project provides a modular and extensible set of open-source, cross-platform tools and libraries. The tools perform proteomics data analyses; the libraries enable rapid tool creation by providing a robust, pluggable development framework that simplifies and unifies data file access, and performs standard proteomics and LCMS dataset computations. The library, which has been written using modern C++ techniques and design principles and supports a variety of platforms with native compilers, contains readers and writers for the mzML data format. The software has been specifically released under the Apache v2 license to ensure it can be used in both academic and commercial projects. In addition to the library, we also introduce a rapidly growing set of companion tools whose implementation helps to illustrate the simplicity of developing applications on top of the ProteoWizard library. Cross-platform software that compiles using native compilers (i.e. GCC on Linux, MSVC on Windows and XCode on OSX) is available for download free of charge at http://proteowizard.sourceforge.net. This website also provides code examples and documentation. It is our hope that the ProteoWizard project will become a standard platform for proteomics development; consequently, code use, contribution and further development are strongly encouraged.
User guide for MODPATH version 6 - A particle-tracking model for MODFLOW
Pollock, David W.
2012-01-01
MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
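The report documents a semianalytical (Pollock-type) particle-tracking scheme in which each velocity component varies linearly across a cell, so exit times and exit points have closed-form expressions. The sketch below is a simplified two-dimensional, steady-flow illustration of that cell-to-cell step, not MODPATH's actual implementation; cell geometry and face velocities are assumed example values.

    # Minimal 2-D sketch of a semianalytical (Pollock-type) particle step: inside a
    # rectangular cell each velocity component varies linearly between opposing
    # faces, so exit time and exit point are closed-form. Steady flow, no sinks,
    # 2-D only; illustrative, not MODPATH's implementation.
    import math

    def face_exit_time(vp, v1, v2, x, x1, x2):
        """Time for one coordinate to reach a cell face, or inf if it never exits."""
        A = (v2 - v1) / (x2 - x1)             # linear velocity gradient in the cell
        if abs(A) < 1e-12:                    # constant velocity in this direction
            if vp > 0:
                return (x2 - x) / vp
            if vp < 0:
                return (x1 - x) / vp
            return math.inf
        ve = v2 if vp > 0 else v1             # velocity at the face being approached
        if ve * vp <= 0:                      # flow stagnates before the face
            return math.inf
        return math.log(ve / vp) / A

    def pollock_step(x, y, cell):
        """Track a particle to the boundary of one cell; returns (exit_x, exit_y, dt)."""
        (x1, x2, vx1, vx2), (y1, y2, vy1, vy2) = cell
        Ax, Ay = (vx2 - vx1) / (x2 - x1), (vy2 - vy1) / (y2 - y1)
        vxp = vx1 + Ax * (x - x1)
        vyp = vy1 + Ay * (y - y1)
        dt = min(face_exit_time(vxp, vx1, vx2, x, x1, x2),
                 face_exit_time(vyp, vy1, vy2, y, y1, y2))
        def advance(p1, v1, A, vp, p):
            return p + vp * dt if abs(A) < 1e-12 else p1 + (vp * math.exp(A * dt) - v1) / A
        return advance(x1, vx1, Ax, vxp, x), advance(y1, vy1, Ay, vyp, y), dt

    # Cell 100 m x 100 m; x-velocity accelerates from 1.0 to 2.0 m/d, weak constant vy
    cell = ((0.0, 100.0, 1.0, 2.0), (0.0, 100.0, 0.1, 0.1))
    print(pollock_step(10.0, 50.0, cell))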
The new Inventory of Italian Glaciers: Present knowledge, applied methods and preliminary results
NASA Astrophysics Data System (ADS)
Smiraglia, Claudio; Diolaiuti, Guglielmina; D'Agata, Carlo; Maragno, Davide; Baroni, Carlo; Mortara, Gianni; Perotti, Luigi; Bondesan, Aldino; Salvatore, Cristina; Vagliasindi, Marco; Vuillermoz, Elisa
2013-04-01
A new glacier inventory is an indispensable requirement in Italy, given the importance of evaluating the present glacier coverage and the recent changes driven by climate. Furthermore, Alpine glaciers represent a non-negligible water and tourism resource; managing and promoting them requires knowledge of their distribution, size, and features. The first Italian Glacier Inventory dates back to 1959-1962. It was compiled by the Italian Glaciological Committee (CGI) in cooperation with the National Research Council (CNR); this first inventory was mainly based on field data coupled with photographs (acquired in the field) and high-resolution maps. The Italian glaciation was found to comprise 754 ice bodies, which altogether covered 525 km2. In the Eighties a new inventory was compiled to insert Italian data into the World Glacier Inventory (WGI); aerial photos taken at the end of the Seventies (in some cases affected by substantial snow cover) were used as the main source of data. No other national inventory was compiled after that period. Nevertheless, during the last decade most of the Italian Alpine regions have produced regional and local glacier inventories, which in several cases can also be accessed and queried through web sites and web GIS applications. The present need is to obtain a complete, homogeneous, and contemporary picture of the Italian glaciation that encompasses the already available regional and local data and all the new, updated information coming from new sources of data (e.g., orthophotos, satellite imagery, etc.). The challenge was accepted by the University of Milan, the EvK2CNR Committee, and the Italian Glaciological Committee, who, with the sponsorship of Levissima Spa, are presently working to compile the new, updated Italian Glacier Inventory. The first project step is to produce a single homogeneous glacier database including glacier boundary and surface area and the main fundamental glacier features (following the well-known guidelines of the World Glacier Monitoring Service summarized by Paul et al., 2010). The identification of the Italian glacier bodies and the evaluation of glacier area and main features are performed by analysing aerial orthophotos acquired in the time frame 2007-2012 (pixel size 0.5 m). The database will also be improved and updated by analysing regional data and by processing high-resolution satellite imagery acquired over the last 2 years. In Lombardy, analysis of the 2007 orthophotos yielded a glacier coverage of about 90 km2, roughly 75% of the glacier surface area reported for Lombardy glaciers in the Italian Inventory compiled by CGI-CNR in 1959-62.
12 CFR 1003.5 - Disclosure and reporting.
Code of Federal Regulations, 2013 CFR
2013-01-01
... the data available for inspection and copying during the hours the office is normally open to the... which the loan data are compiled, a financial institution shall send its complete loan/application... Financial Institutions Examination Council (FFIEC) will prepare a disclosure statement from the data each...
12 CFR 1003.5 - Disclosure and reporting.
Code of Federal Regulations, 2014 CFR
2014-01-01
... the data available for inspection and copying during the hours the office is normally open to the... which the loan data are compiled, a financial institution shall send its complete loan/application... Financial Institutions Examination Council (FFIEC) will prepare a disclosure statement from the data each...
12 CFR 1003.5 - Disclosure and reporting.
Code of Federal Regulations, 2012 CFR
2012-01-01
... the data available for inspection and copying during the hours the office is normally open to the... which the loan data are compiled, a financial institution shall send its complete loan/application... Financial Institutions Examination Council (FFIEC) will prepare a disclosure statement from the data each...
Improved trip generation data for Texas using work place and special generator survey data.
DOT National Transportation Integrated Search
2015-05-01
Travel estimates from models and manuals developed from trip attraction rates with high variances, due to few survey observations, can reduce confidence and accuracy in the estimates. This project compiled and analyzed data from more than a decade of...
Southern Salish Sea Habitat Map Series data catalog
Cochrane, Guy R.
2015-01-01
This data catalog contains much of the data used to prepare the SIMs in the Southern Salish Sea Habitat Map Series. Other data that were used to prepare the maps were compiled from previously published sources (for example, sediment samples and seismic reflection profiles) and are not included in this data series.
AQUIRE: Aquatic Toxicity Information Retrieval data base. Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, E.; Pilli, A.
The purpose of the Aquatic Toxicity Information Retrieval (AQUIRE) data base is to provide scientists and managers quick access to a comprehensive, systematic, computerized compilation of aquatic toxicity data. Scientific papers published both nationally and internationally on the toxicity of chemicals to aquatic organisms and plants are collected and reviewed for AQUIRE. Independently compiled data files that meet AQUIRE parameter and quality assurance criteria are also included. Selected toxicity-test results and related testing information for any individual chemical from laboratory and field aquatic toxicity tests are extracted and added to AQUIRE. Acute, sublethal, and bioconcentration effects are included for tests with freshwater and marine organisms. The total number of data records in AQUIRE now equals 104,500. This includes data from 6000 references, for 5200 chemicals and 2400 test species. A major data file, Acute Toxicity of Organic Chemicals (ATOC), has been incorporated into AQUIRE. The ATOC file contains laboratory acute test data on 525 organic chemicals using juvenile fathead minnows. The complete data file can be accessed by requesting review code 5 as a search parameter.
DataTri, a database of American triatomine species occurrence
NASA Astrophysics Data System (ADS)
Ceccarelli, Soledad; Balsalobre, Agustín; Medone, Paula; Cano, María Eugenia; Gurgel Gonçalves, Rodrigo; Feliciangeli, Dora; Vezzani, Darío; Wisnivesky-Colli, Cristina; Gorla, David E.; Marti, Gerardo A.; Rabinovich, Jorge E.
2018-04-01
Trypanosoma cruzi, the causative agent of Chagas disease, is transmitted to mammals - including humans - by insect vectors of the subfamily Triatominae. We present the results of a compilation of triatomine occurrence and complementary ecological data that represents the most complete, integrated and updated database (DataTri) available on triatomine species at a continental scale. This database was assembled by collecting the records of triatomine species published from 1904 to 2017, spanning all American countries with triatomine presence. A total of 21815 georeferenced records were obtained from published literature, personal fieldwork and data provided by colleagues. The data compiled includes 24 American countries, 14 genera and 135 species. From a taxonomic perspective, 67.33% of the records correspond to the genus Triatoma, 20.81% to Panstrongylus, 9.01% to Rhodnius and the remaining 2.85% are distributed among the other 11 triatomine genera. We encourage using DataTri information in various areas, especially to improve knowledge of the geographical distribution of triatomine species and its variations in time.
NASA Astrophysics Data System (ADS)
Melle, W.; Runge, J. A.; Head, E.; Plourde, S.; Castellani, C.; Licandro, P.; Pierson, J.; Jonasdottir, S. H.; Johnson, C.; Broms, C.; Debes, H.; Falkenhaug, T.; Gaard, E.; Gislason, A.; Heath, M. R.; Niehoff, B.; Nielsen, T. G.; Pepin, P.; Stenevik, E. K.; Chust, G.
2014-04-01
Here we present a new, pan-Atlantic compilation of data on key mesozooplankton species, including possibly the most important copepod, Calanus finmarchicus. Distributional data of ten representative zooplankton taxa, from recent (2000-2009) Continuous Plankton Recorder data, are presented, along with basin-scale data of the phytoplankton colour index. Then we present a compilation of data on C. finmarchicus including observations of abundance, demography, egg production and female size with accompanying data on temperature and chlorophyll. This is a contribution by Canadian, European and US scientists and their institutions. http://doi.pangaea.de/10.1594/PANGAEA.820732, http://doi.pangaea.de/10.1594/PANGAEA.824423, http://doi.pangaea.de/10.1594/PANGAEA.828393.
DataTri, a database of American triatomine species occurrence.
Ceccarelli, Soledad; Balsalobre, Agustín; Medone, Paula; Cano, María Eugenia; Gurgel Gonçalves, Rodrigo; Feliciangeli, Dora; Vezzani, Darío; Wisnivesky-Colli, Cristina; Gorla, David E; Marti, Gerardo A; Rabinovich, Jorge E
2018-04-24
Trypanosoma cruzi, the causative agent of Chagas disease, is transmitted to mammals - including humans - by insect vectors of the subfamily Triatominae. We present the results of a compilation of triatomine occurrence and complementary ecological data that represents the most complete, integrated and updated database (DataTri) available on triatomine species at a continental scale. This database was assembled by collecting the records of triatomine species published from 1904 to 2017, spanning all American countries with triatomine presence. A total of 21815 georeferenced records were obtained from published literature, personal fieldwork and data provided by colleagues. The data compiled includes 24 American countries, 14 genera and 135 species. From a taxonomic perspective, 67.33% of the records correspond to the genus Triatoma, 20.81% to Panstrongylus, 9.01% to Rhodnius and the remaining 2.85% are distributed among the other 11 triatomine genera. We encourage using DataTri information in various areas, especially to improve knowledge of the geographical distribution of triatomine species and its variations in time.
IUPAC-NIST Solubility Data Series. 95. Alkaline Earth Carbonates in Aqueous Systems. Part 2. Ca
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Visscher, Alex; Vanderdeelen, Jan; Department of Applied Analytical and Physical Chemistry, Faculty of Bioscience Engineering, Ghent University, B-9000 Ghent
The alkaline earth carbonates are an important class of minerals. This article is part of a volume in the IUPAC-NIST Solubility Data Series that compiles and critically evaluates solubility data of the alkaline earth carbonates in water and in simple aqueous electrolyte solutions. Part 1 outlined the procedure adopted in this volume, and presented the beryllium and magnesium carbonates. Part 2, the current paper, compiles and critically evaluates the solubility data of calcium carbonate. The chemical forms included are the anhydrous CaCO3 types calcite, aragonite, and vaterite, the monohydrate monohydrocalcite (CaCO3·H2O), the hexahydrate ikaite (CaCO3·6H2O), and an amorphous form. The data were analyzed with two model variants, and thermodynamic data of each form consistent with each of the models and with the CODATA key values for thermodynamics are presented.
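As a back-of-the-envelope example of how such critically evaluated solubility data are used, the sketch below estimates calcite solubility in pure water from a commonly cited solubility product (pKsp of about 8.48 at 25 °C, assumed here for illustration), ignoring activity corrections and carbonate speciation, which in real CO2-bearing waters raise dissolved calcium substantially.

    # Illustration only: calcite solubility from its solubility product alone,
    # ignoring activity corrections and carbonate speciation (CO2 uptake, HCO3-
    # formation). pKsp ~ 8.48 at 25 C is a commonly cited value, assumed here;
    # consult the evaluated compilation for recommended data.
    import math

    K_SP_CALCITE_25C = 10.0 ** -8.48      # [Ca2+][CO3 2-] at 25 C
    MOLAR_MASS_CACO3 = 100.09             # g/mol

    solubility_mol_per_l = math.sqrt(K_SP_CALCITE_25C)
    print(f"~{solubility_mol_per_l:.2e} mol/L "
          f"(~{solubility_mol_per_l * MOLAR_MASS_CACO3 * 1000:.1f} mg/L as CaCO3)")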
1987-06-03
Ada Compiler Validation Summary Report: Harris Corporation, Harris Ada Compiler, Version 1.3, Harris HCX-7 (host and target). Completion of on-site testing: 3 June 1987.
Fortington, Lauren V; Finch, Caroline F
2016-01-01
Background/aim Participation in Australian football (AF) has traditionally been male dominated and current understanding of injury and priorities for prevention are based solely on reports of injuries in male players. There is evidence in other sports that indicates that injury types differ between males and females. With increasing participation in AF by females, it is important to consider their specific injury and prevention needs. This study aimed to provide a first injury profile from existing sources for female AF. Methods Compilation of injury data from four prospectively recorded data sets relating to female AF: (1) hospital admissions in Victoria, 2008/09–13/14, n=500 injuries; (2) emergency department (ED) presentations in Victoria, 2008/09–2012/13, n=1,879 injuries; (3) insurance claims across Australia 2004–2013, n=522 injuries; (4) West Australian Women's Football League (WAWFL), 2014 season club data, n=49 injuries. Descriptive results are presented as injury frequencies, injury types and injury to body parts. Results Hospital admissions and ED presentations were dominated by upper limb injuries, representing 47% and 51% of all injuries, respectively, primarily to the wrist/hand at 32% and 40%. Most (65%) insurance claim injuries involved the lower limb, 27% of which were for knee ligament damage. A high proportion of concussions (33%) were reported in the club-collected data. Conclusions The results provide the first compilation of existing data sets of women's AF injuries and highlight the need for a rigorous and systematic injury surveillance system to be instituted. PMID:27900171
Guidelines for preparation of State water-use estimates for 2015
Bradley, Michael W.
2017-05-01
The U.S. Geological Survey (USGS) has estimated the use of water in the United States at 5-year intervals since 1950. This report describes the water-use categories and data elements used for the national water-use compilation conducted as part of the USGS National Water-Use Science Project. The report identifies sources of water-use information, provides standard methods and techniques for estimating water use at the county level, and outlines steps for preparing documentation for the United States, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands.As part of this USGS program to document water use on a national scale, estimates of water withdrawals for the categories of public supply, self-supplied domestic, industrial, irrigation, and thermoelectric power are prepared for each county in each State, District, or territory by using the guidelines in this report. County estimates of water withdrawals for aquaculture, livestock, and mining are prepared for each State by using a county-based national model, although water-use programs in each State or Water Science Center have the option of producing independent county estimates of water withdrawals for these categories. Estimates of water withdrawals and consumptive use for thermoelectric power will be aggregated to the county level for each State by the national project; additionally, irrigation consumptive use at the county level will also be provided, although study chiefs in each State have the option of producing independent county estimates of water withdrawals and consumptive use for these categories.Estimates of deliveries of water from public supplies for domestic use by county also will be prepared for each State. As a result, total domestic water use can be determined for each State by combining self-supplied domestic withdrawals and public-supplied domestic deliveries. Fresh groundwater and surface-water estimates will be prepared for all categories of use, and saline groundwater and surface-water estimates by county will be prepared for the categories of public supply, industrial, mining, and thermoelectric power. Power production for thermoelectric power and irrigated acres by irrigation system type will be compiled. If data are available, reclaimed-wastewater use will be compiled for the public-supply, industrial, mining, thermoelectric-power, and irrigation categories.Optional water-use categories are commercial, hydroelectric power, and wastewater treatment. Optional data elements are public-supply deliveries to commercial, industrial, and thermoelectric-power users; consumptive use (for categories other than thermoelectric power and irrigation); irrigation conveyance loss; and number of facilities. Aggregation of water-use data by stream basin (eight-digit hydrologic unit code) and principal aquifers also is optional.Water-use data compiled by the States will be stored in the USGS Aggregate Water-Use Data System (AWUDS). This database is a comprehensive aggregated database designed to store mandatory and optional data elements. AWUDS contains several routines that can be used for quality assurance and quality control of the data, and AWUDS produces tables of water-use data from the previous compilations.
NASA Astrophysics Data System (ADS)
Christensen, S. W.; Hook, L. A.
2011-12-01
The HIAPER Pole-to-Pole Observations (HIPPO) project is investigating the carbon cycle and greenhouse gases throughout various altitudes in the atmosphere over the Pacific Basin through the annual cycle (Wofsy and the HIPPO Science Team 2011, this session). Aircraft-based data collection occurred during 2009-2011. Data analyses, comparisons, and integration are ongoing. A permanent public archive of HIPPO data has been established at the U. S. DOE Carbon Dioxide Information Analysis Center (CDIAC). Datasets are provided primarily by the Lead Principal Investigator (PI), who draws on a comprehensive set of aircraft navigation information, meteorological measurements, and research instrument and sampling system results from multiple co-investigators to compile integrated data products and generate value-added products. A website/ftp site has been developed for HIPPO data and metadata (http://hippo.ornl.gov), in coordination with the UCAR website that presents field catalogs and other detailed information about HIPPO missions (http://www.eol.ucar.edu/projects/hippo/dm/). A data policy was adopted that balances the needs of the project investigators with the interests of the scientific user community. A data dictionary was developed to capture the basic characteristics of the hundreds of measurements. Instrument descriptions were compiled. A user's guide is presented for each dataset that also contains data file information enabling users to know when data have been updated. Data are received and provided as space-delimited ASCII files. Metadata records are compiled into a searchable CDIAC index and will be submitted to climate change research data clearinghouses. Each dataset is given a persistent identifier (DOI) to facilitate attribution. We expect that data will continue to be added to the archive for the next year or more. In the future we anticipate creating a database for HIPPO data, with a web interface to facilitate searching and customized data extraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez, L.S.; Marietta, M.G.; Jackson, D.W.
1987-04-01
The US Subseabed Disposal Program has compiled an extensive concentration factor and biological half-life data base from the international marine radioecological literature. A microcomputer-based data management system has been implemented to provide statistical and graphic summaries of these data. The data base is constructed in a manner which allows subsets to be sorted using a number of interstudy variables such as organism category, tissue/organ category, geographic location (for in situ studies), and several laboratory-related conditions (e.g., exposure time and exposure concentration). This report updates earlier reviews and provides summaries of the tabulated data. In addition to the concentration factor/biological half-life data base, we provide an outline of other published marine radioecological works. Our goal is to present these data in a form that enables those concerned with predictive assessment of radiation dose in the marine environment to make a more judicious selection of data for a given application. 555 refs., 19 figs., 7 tabs.
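The two quantities tabulated in this data base have simple, standard definitions: a concentration factor (activity concentration in the organism divided by that in sea water) and a biological half-life governing first-order loss after uptake ends. The sketch below evaluates both with made-up example values, not values taken from the data base.

    # Standard definitions illustrated with made-up numbers (not data base values):
    # concentration factor CF = activity concentration in organism / in sea water
    # (here Bq/kg over Bq/L, i.e. L/kg), and first-order retention after uptake
    # stops, governed by the biological half-life Tb.
    import math

    def concentration_factor(c_organism_bq_per_kg, c_water_bq_per_l):
        return c_organism_bq_per_kg / c_water_bq_per_l

    def retained_fraction(days_since_uptake, biological_half_life_days):
        return math.exp(-math.log(2.0) * days_since_uptake / biological_half_life_days)

    print("CF =", concentration_factor(500.0, 0.25))                    # -> 2000.0
    print("Retained after 30 d (Tb = 20 d):",
          round(retained_fraction(30.0, 20.0), 3))                      # -> 0.354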
Encounters of aircraft with volcanic ash clouds; A compilation of known incidents, 1953-2009
Guffanti, Marianne; Casadevall, Thomas J.; Budding, Karin
2010-01-01
Information about reported encounters of aircraft with volcanic ash clouds from 1953 through 2009 has been compiled to document the nature and scope of risks to aviation from volcanic activity. The information, gleaned from a variety of published and other sources, is presented in database and spreadsheet formats; the compilation will be updated as additional encounters occur and as new data and corrections come to light. The effects observed by flight crews and extent of aircraft damage vary greatly among incidents, and each incident in the compilation is rated according to a severity index. Of the 129 reported incidents, 94 incidents are confirmed ash encounters, with 79 of those having various degrees of airframe or engine damage; 20 are low-severity events that involve suspected ash or gas clouds; and 15 have data that are insufficient to assess severity. Twenty-six of the damaging encounters involved significant to very severe damage to engines and (or) airframes, including nine encounters with engine shutdown during flight. The average annual rate of damaging encounters since 1976, when reporting picked up, has been approximately 2 per year. Most of the damaging encounters occurred within 24 hours of the onset of ash production or at distances less than 1,000 kilometers from the source volcanoes. The compilation covers only events of relatively short duration for which aircraft were checked for damage soon thereafter; documenting instances of long-term repeated exposure to ash (or sulfate aerosols) will require further investigation. Of 38 source volcanoes, 8 have caused 5 or more encounters, of which the majority were damaging: Augustine (United States), Chaiten (Chile), Mount St. Helens (United States), Pacaya (Guatemala), Pinatubo (Philippines), Redoubt (United States), Sakura-jima (Japan), and Soufriere Hills (Montserrat, Lesser Antilles, United Kingdom). Aircraft have been damaged by eruptions ranging from small, recurring episodes to very large, infrequent events. Moderate-size (Volcanic Explosivity Index 3) eruptions are responsible for nearly half of the damaging encounters. Vigilance is required during the early phases of eruptive activity, when data about ash emission may be the most limited and warning capabilities the most strained, yet the risk is greatest. The risk-mitigation strategy for minimizing damaging encounters continues to rely on the combination of real-time volcano monitoring and rapid eruption reporting, detection and tracking of ash clouds in the atmosphere using satellite-based sensors, dispersion modeling to forecast expected ash-cloud movement, and global dissemination of specialized warning messages. To obtain the entire Data Series 545 report, download the text file and appendixes 1-4, which are available as separate files. We hope that publication of this compilation will encourage more reporting of encounters by the aviation industry and civil aviation authorities. We actively seek corrections and additions to the information presented here. Persons who have corrections or additional data pertaining to incidents already in the database or who have data about previously unreported incidents are urged to contact the authors.
Fast computation of close-coupling exchange integrals using polynomials in a tree representation
NASA Astrophysics Data System (ADS)
Wallerberger, Markus; Igenbergs, Katharina; Schweinzer, Josef; Aumayr, Friedrich
2011-03-01
The semi-classical atomic-orbital close-coupling method is a well-known approach for the calculation of cross sections in ion-atom collisions. It strongly relies on the fast and stable computation of exchange integrals. We present an upgrade to earlier implementations of the Fourier-transform method. For this purpose, we implement an extensive library for symbolic storage of polynomials, relying on sophisticated tree structures to allow fast manipulation and numerically stable evaluation. Using this library, we considerably speed up creation and computation of exchange integrals. This enables us to compute cross sections for more complex collision systems. Program summary: Program title: TXINT Catalogue identifier: AEHS_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHS_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 12 332 No. of bytes in distributed program, including test data, etc.: 157 086 Distribution format: tar.gz Programming language: Fortran 95 Computer: All with a Fortran 95 compiler Operating system: All with a Fortran 95 compiler RAM: Depends heavily on input, usually less than 100 MiB Classification: 16.10 Nature of problem: Analytical calculation of one- and two-center exchange matrix elements for the close-coupling method in the impact parameter model. Solution method: Similar to the code of Hansen and Dubois [1], we use the Fourier-transform method suggested by Shakeshaft [2] to compute the integrals. However, we heavily speed up the calculation using a library for symbolic manipulation of polynomials. Restrictions: We restrict ourselves to a defined collision system in the impact parameter model. Unusual features: A library for symbolic manipulation of polynomials, where polynomials are stored in a space-saving left-child right-sibling binary tree. This provides stable numerical evaluation and fast mutation while maintaining full compatibility with the original code. Additional comments: This program makes heavy use of the new features provided by the Fortran 90 standard, most prominently pointers, derived types, and allocatable structures, as well as a small portion of Fortran 95. Only newer compilers support these features. The following compilers support all features needed by the program: GNU Fortran Compiler "gfortran" from version 4.3.0; GNU Fortran 95 Compiler "g95" from version 4.2.0; Intel Fortran Compiler "ifort" from version 11.0.
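The program summary highlights a left-child right-sibling (LCRS) binary tree as the space-saving storage for polynomials. The sketch below shows the general LCRS node layout in Python (the program itself is Fortran 95); it only illustrates the data structure, and does not reproduce how TXINT actually encodes coefficients and exponents in such nodes.

```python
# Schematic left-child right-sibling tree: each node stores only two links,
# so an arbitrary-arity tree costs two pointers per node.

class LCRSNode:
    def __init__(self, label):
        self.label = label        # e.g. a monomial term or an operator
        self.child = None         # leftmost child
        self.sibling = None       # next sibling to the right

    def add_child(self, node):
        if self.child is None:
            self.child = node
        else:
            last = self.child
            while last.sibling is not None:
                last = last.sibling
            last.sibling = node

def walk(node, depth=0):
    """Depth-first traversal over the left-child right-sibling links."""
    while node is not None:
        print("  " * depth + str(node.label))
        walk(node.child, depth + 1)
        node = node.sibling

root = LCRSNode("+")
for term in ("3*x^2", "2*x*y", "5"):
    root.add_child(LCRSNode(term))
walk(root)
```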
Digital Bedrock Compilation: A Geodatabase Covering Forest Service Lands in California
NASA Astrophysics Data System (ADS)
Elder, D.; de La Fuente, J. A.; Reichert, M.
2010-12-01
This digital database contains bedrock geologic mapping for Forest Service lands within California. This compilation began in 2004 and the first version was completed in 2005. The second publication of this geodatabase was completed in 2010 and filled major gaps in the southern Sierra Nevada and Modoc/Medicine Lake/Warner Mountains areas. This digital map database was compiled from previously published and unpublished geologic mapping, with source mapping and review from the California Geological Survey, the U.S. Geological Survey and others. Much of the source data was itself compilation mapping. This geodatabase is huge, containing ~107,000 polygons and ~280,000 arcs. Mapping was compiled from more than one thousand individual sources and covers over 41,000,000 acres (~166,000 km2). It was compiled from source maps at various scales - from ~1:4,000 to 1:250,000 - and represents the best available geologic mapping at the largest scale possible. An estimated 70-80% of the source information was digitized from geologic mapping at 1:62,500 scale or better. The Forest Service ACT2 Enterprise Team compiled the bedrock mapping and developed a geodatabase to store this information. This geodatabase supports feature classes for polygons (e.g., map units), lines (e.g., contacts, boundaries, faults and structural lines) and points (e.g., orientation data, structural symbology). Lookup tables provide detailed information for feature class items. Lookup/type tables contain legal values and hierarchical groupings for geologic ages and lithologies. Type tables link coded values with descriptions for line and point attributes, such as line type, line location and point type. This digital mapping is at the core of many quantitative analyses and derivative map products. Queries of the database are used to produce maps and to quantify rock types of interest. These include the following: (1) ultramafic rocks - where hazards from naturally occurring asbestos are high, (2) granitic rocks - increased erosion hazards, (3) limestone, chert, sedimentary rocks - paleontological resources (Potential Fossil Yield Classification maps), (4) calcareous rocks (cave resources, water chemistry), and (5) lava flows - lava tubes (more caves). Map unit groupings (e.g., belts, terranes, tectonic & geomorphic provinces) can also be derived from the geodatabase. Digital geologic mapping was used in ground water modeling to predict effects of tunneling through the San Bernardino Mountains. Bedrock mapping is used in models that characterize watershed sediment regimes and quantify anthropogenic influences. When combined with digital geomorphology mapping, this geodatabase helps to assess landslide hazards.
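The abstract describes queries that select rock types of interest, such as ultramafic units where asbestos hazards are high. A toy illustration of such a query on hypothetical polygon records follows; the field names and values are invented and are not the geodatabase's actual schema.

```python
# Hypothetical polygon attribute records (invented fields and values).
polygons = [
    {"map_unit": "um1", "lithology": "ultramafic", "area_km2": 12.4},
    {"map_unit": "gr3", "lithology": "granitic",   "area_km2": 58.0},
    {"map_unit": "um7", "lithology": "ultramafic", "area_km2": 3.1},
]

# Select ultramafic polygons and total their mapped area.
ultramafic = [p for p in polygons if p["lithology"] == "ultramafic"]
print(f"{len(ultramafic)} ultramafic polygons, "
      f"{sum(p['area_km2'] for p in ultramafic):.1f} km2 total")
```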
SW New Mexico BHT geothermal gradient calculations
Shari Kelley
2015-07-24
This file contains a compilation of BHT data from oil wells in southwestern New Mexico. Surface temperature is calculated using the collar elevation. An estimate of geothermal gradient is calculated using the estimated surface temperature and the uncorrected BHT data.
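A minimal sketch of the calculation described above: estimate the surface temperature from the collar elevation with an assumed linear lapse relation, then divide the temperature difference between the uncorrected BHT and the surface by the depth. The intercept and lapse-rate coefficients below are placeholders, not the values used for this dataset.

```python
def surface_temp_c(elevation_m, t0_c=25.0, lapse_c_per_km=6.5):
    # Hypothetical sea-level intercept and lapse rate, for illustration only.
    return t0_c - lapse_c_per_km * (elevation_m / 1000.0)

def geothermal_gradient_c_per_km(bht_c, depth_m, elevation_m):
    # Gradient = (BHT - estimated surface temperature) / depth.
    return (bht_c - surface_temp_c(elevation_m)) / (depth_m / 1000.0)

print(f"{geothermal_gradient_c_per_km(bht_c=95.0, depth_m=2500.0, elevation_m=1300.0):.1f} C/km")
```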
5 CFR 890.1307 - Data collection.
Code of Federal Regulations, 2010 CFR
2010-01-01
....1307 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS... Program Demonstration Project § 890.1307 Data collection. Each carrier will compile, maintain, and when... demonstration project. (b) The number of eligible beneficiaries who elected to participate in the demonstration...
1978 Status Report on Aviation and Space Related High School Courses
ERIC Educational Resources Information Center
Journal of Aerospace Education, 1978
1978-01-01
Presents a national compilation of statistical data pertaining to secondary level aviation and aerospace education for the 1977-78 school year. Data include trends and patterns of course structure, design, and operation in table form. (SL)
Engineering Data on Selected High Speed Passenger Trucks
DOT National Transportation Integrated Search
1978-07-01
The purpose of this project is to compile a list of high speed truck engineering parameters for characterization in dynamic performance modeling activities. Data tabulations are supplied for trucks from France, Germany, Italy, England, Japan, U.S.S.R...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-18
... the Data Workshop is a report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The assessment workshop and webinars produce a report...
DOT National Transportation Integrated Search
1994-04-30
The Transportation Legislative Data Base (TLDB) is a computer-based information service containing summaries of federal, state and certain local government statutes and regulations relating to the transportation of radioactive materials in the United...
Technical Data and Reports on Sulfur Dioxide (SO2) Measurements and SIP Status
EPA collects data from the states and regions on their air quality and state implementation plan (SIP) progress. This information is compiled in a database, and used to create reports, trend charts, and maps.
Systematics of hot giant electric dipole resonance widths
NASA Astrophysics Data System (ADS)
Schiller, A.; Thoennessen, M.; McAlpine, K. M.
2007-05-01
Giant Electric Dipole Resonance (GDR) parameters for γ decay to excited states with finite spin and temperature have been compiled by two of the authors (nucl-ex/0605004). Over 100 original works have been reviewed and from some 70 of them, more than 300 sets of hot GDR parameters for different isotopes, excitation energies, and spin regions have been extracted. All parameter sets have been brought onto a common footing by calculating the equivalent Lorentzian parameters. Together with a complementary compilation by Samuel S. Dietrich and Barry L. Berman [At. Data Nucl. Data Tables 38, 199-338 (1988)] on ground-state photo-neutron and photo-absorption cross sections and their Lorentzian parameters, it is now possible by means of a comparison of the two data sets to shed light on the evolution of GDR parameters with temperature and spin.
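The "equivalent Lorentzian parameters" mentioned above refer to the standard single-Lorentzian parameterization of the GDR cross section in terms of a peak cross section, centroid energy, and width. A small sketch of that functional form, with illustrative parameter values only:

```python
def lorentzian_cross_section(E, sigma0, E0, Gamma):
    """Single-Lorentzian GDR cross section as a function of gamma-ray energy E:
    sigma(E) = sigma0 * E^2 Gamma^2 / ((E^2 - E0^2)^2 + E^2 Gamma^2)."""
    return sigma0 * (E**2 * Gamma**2) / ((E**2 - E0**2)**2 + E**2 * Gamma**2)

# Illustrative values: sigma0 in mb, E0 and Gamma in MeV.
for E in (10.0, 15.0, 20.0):
    print(E, lorentzian_cross_section(E, sigma0=300.0, E0=15.0, Gamma=5.0))
```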
The Nasa earth resources spectral information system: A data compilation, first supplement
NASA Technical Reports Server (NTRS)
Leeman, V.
1972-01-01
The NASA Earth Resources Spectral Information System and the information contained therein are described. It is intended to be used as a supplement to the NASA Earth Resources Spectral Information System: A Data Compilation, N72-28366. This supplement includes approximately 500 rock and mineral, 100 soil, and 30 vegetation bidirectional and directional reflectance, transmittance, emittance, and degree-of-polarization curves in the optical region from 0.2 to 22.0 microns. The data have been categorized by subject and each curve plotted on a single graph. For some rocks and minerals, all curves of the same type, differing only in particle size, have been plotted on one grid as a composite plot. Each graph, composite or single, is fully titled to indicate curve source and is indexed by subject to facilitate user retrieval.
Booth, J.S.; O'Leary, Dennis W.; Popenoe, Peter; Robb, James M.; McGregor, B.A.
1988-01-01
Since the initial report on the Grand Banks Slump off southern Newfoundland (Heezen and Ewing, 1952), a large body of data on submarine mass movement along the Atlantic continental margin of the United States and Canada has been published. These data were compiled to provide this distribution map (sheet 1) and tabulation (sheet 2) of "principal facts" on mass movement of the northwest Atlantic Continental Slope. Although we prepared this inventory to facilitate our study of Quaternary mass movement along the U.S. Continental Slope, we judged the compilation to be large enough and detailed enough to be published as a generally useful data source and compendium. Sheet 3 shows examples of mass movement styles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, S.A.
SABrE is a set of tools to facilitate the development of portable scientific software and to visualize scientific data. As with most constructs, SABRE has a foundation. In this case that foundation is SCORE. SCORE (SABRE CORE) has two main functions. The first and perhaps most important is to smooth over the differences between different C implementations and define the parameters which drive most of the conditional compilations in the rest of SABRE. Secondly, it contains several groups of functionality that are used extensively throughout SABRE. Although C is highly standardized now, that has not always been the case. Roughly speaking, C compilers fall into three categories: ANSI standard; derivative of the Portable C Compiler (Kernighan and Ritchie); and the rest. SABRE has been successfully ported to many ANSI and PCC systems. It has never been successfully ported to a system in the last category. The reason is mainly that the "standard" C library supplied with such implementations is so far from true ANSI or PCC standard that SABRE would have to include its own version of the standard C library in order to work at all. Even with standardized compilers life is not dead simple. The ANSI standard leaves several crucial points ambiguous as "implementation defined." Under these conditions one can find significant differences in going from one ANSI standard compiler to another. SCORE's job is to include the requisite standard headers and ensure that certain key standard library functions exist and function correctly (there are bugs in the standard library functions supplied with some compilers) so that, to applications which include the SCORE header(s) and load with SCORE, all C implementations look the same.
INDOOR AIR QUALITY DATA BASE FOR ORGANIC COMPOUNDS
The report gives results of the compilation of a data base for concentrations of organic compounds measured indoors. Based on a review of the literature from 1979 through 1990, the data base contains information on over 220 compounds ranging in molecular weight from 30 to 446. The ...
12 CFR 203.5 - Disclosure and reporting.
Code of Federal Regulations, 2010 CFR
2010-01-01
... the data available for inspection and copying during the hours the office is normally open to the... the calendar year for which the loan data are compiled, a financial institution shall send its... disclosure statement from the data each financial institution submits. (2) An institution shall make its...
12 CFR 203.5 - Disclosure and reporting.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the data available for inspection and copying during the hours the office is normally open to the... the calendar year for which the loan data are compiled, a financial institution shall send its... disclosure statement from the data each financial institution submits. (2) An institution shall make its...
12 CFR 203.5 - Disclosure and reporting.
Code of Federal Regulations, 2013 CFR
2013-01-01
... the data available for inspection and copying during the hours the office is normally open to the... the calendar year for which the loan data are compiled, a financial institution shall send its... disclosure statement from the data each financial institution submits. (2) An institution shall make its...
12 CFR 203.5 - Disclosure and reporting.
Code of Federal Regulations, 2012 CFR
2012-01-01
... the data available for inspection and copying during the hours the office is normally open to the... the calendar year for which the loan data are compiled, a financial institution shall send its... disclosure statement from the data each financial institution submits. (2) An institution shall make its...
12 CFR 203.5 - Disclosure and reporting.
Code of Federal Regulations, 2014 CFR
2014-01-01
... the data available for inspection and copying during the hours the office is normally open to the... the calendar year for which the loan data are compiled, a financial institution shall send its... disclosure statement from the data each financial institution submits. (2) An institution shall make its...
DOT National Transportation Integrated Search
2017-11-01
With the emergence of data generated from connected vehicles, connected travelers, and connected infrastructure, the capabilities of traffic management systems or centers (TMCs) will need to be improved to allow agencies to compile and benefit from u...
1987-12-01
requires much more data, but holds fast to the idea that the FV approach, or some other model, is critical if the job analysis process is to have its... Ada compiled code executes twice as fast as Microsoft's Fortran compiled code. This conclusion is at variance with the results obtained from... finish is not so important. Hence, if a design methodology produces code that will not execute fast enough on processors suitable for flight
Map showing selected surface-water data for the Alton-Kolob coal-fields area, Utah
Price, Don
1982-01-01
This is one of a series of maps that describe the geology and related natural resources of the Alton-Kolob coal-fields area, Utah. Streamflow records used to compile the map and the following table were collected by the U.S. Geological Survey in cooperation with the Utah Department of Natural Resources, Division of Water Rights, and the Utah Department of Transportation. The principal runoff-producing areas were delineated from a work map (scale 1:250,000) compiled to estimate water yields in Utah (Bagley and others, 1964).
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The purpose of this inventory of power plants is to provide a ready reference for planners whose focus is on the state, standard Federal region, and/or national level. Thus the inventory is compiled alphabetically by state within standard Federal regions. The units are listed alphabetically within electric utility systems which in turn are listed alphabetically within states. The locations are identified to county level according to the Federal Information Processing Standards Publication Counties and County Equivalents of the States of the United States. Data compiled include existing and projected electrical generation units, jointly owned units, and projected construction units.
Compilation of Information on the Transonic Attachment of Flows at the Leading Edges of Airfoils
NASA Technical Reports Server (NTRS)
Lindsey, Walter F; Landrum, Emma Jean
1958-01-01
Schlieren photographs have been compiled of the two-dimensional flow at transonic speeds past 37 airfoils. These airfoils have variously shaped profiles, and some are related in thickness and camber. The data for these airfoils were analyzed to provide basic information on the flow changes involved and to determine factors affecting transonic-flow attachment, which is a transition from separated to unseparated flow at the leading edges of two-dimensional airfoils at fixed angles as the subsonic Mach number is increased.
Continuous-time quantum Monte Carlo impurity solvers
NASA Astrophysics Data System (ADS)
Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias
2011-04-01
Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states. Program summary: Program title: dmft Catalogue identifier: AEIL_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: ALPS LIBRARY LICENSE version 1.1 No. of lines in distributed program, including test data, etc.: 899 806 No. of bytes in distributed program, including test data, etc.: 32 153 916 Distribution format: tar.gz Programming language: C++ Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher) and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher) RAM: 10 MB-1 GB Classification: 7.3 External routines: ALPS [1], BLAS/LAPACK, HDF5 Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as representations of quantum dots and molecular conductors and play an increasingly important role in the theory of "correlated electron" materials as auxiliary problems whose solution gives the "dynamical mean field" approximation to the self-energy and local correlation functions. Solution method: Quantum impurity models require a method of solution which provides access to both high and low energy scales and is effective for wide classes of physically realistic models. The continuous-time quantum Monte Carlo algorithms for which we present implementations here meet this challenge. Continuous-time quantum impurity methods are based on partition function expansions of quantum impurity models that are stochastically sampled to all orders using diagrammatic quantum Monte Carlo techniques. For a review of quantum impurity models and their applications and of continuous-time quantum Monte Carlo methods for impurity models we refer the reader to [2]. Additional comments: Use of dmft requires citation of this paper. Use of any ALPS program requires citation of the ALPS [1] paper. Running time: 60 s-8 h per iteration.
Review and history of photon cross section calculations.
Hubbell, J H
2006-07-07
Photon (x-ray, gamma-ray, bremsstrahlung) mass attenuation coefficients, mu/rho, are among the most widely used physical parameters employed in medical diagnostic and therapy computations, as well as in diverse applications in other fields such as nuclear power plant shielding, health physics and industrial irradiation and monitoring, and in x-ray crystallography. This review traces the evolution of this data base from its empirical beginnings totally derived from measurements beginning in 1907 by Barkla and Sadler and continuing up through the 1935 Allen compilation (published virtually unchanged in all editions up through 1971-1972 of the Chemical Rubber Handbook), to the 1949 semi-empirical compilation of Victoreen, as our theoretical understanding of the constituent Compton scattering, photoabsorption and pair production interactions of photons with atoms became more quantitative. The 1950s saw the advent of completely theoretical (guided by available measured data) systematic compilations such as in the works of Davisson and Evans, and by White-Grodstein under the direction of Fano, using mostly theory developed in the 1930s (pre-World War II) by Sauter, Bethe, Heitler and others. Post-World War II new theoretical activity, and the introduction of the electronic automatic computer, led to the more extensive and more accurate compilations in the 1960s and 1970s by Storm and Israel, and by Berger and Hubbell. Today's mu/rho compilations by Cullen et al, by Seltzer, Berger and Hubbell, and by others, collectively spanning the ten decades of photon energy from 10 eV to 100 GeV, for all elements Z= 1 to 100, draw heavily on the 1970s shell-by-shell photoabsorption computations of Scofield, the 1960s coherent and incoherent scattering computations of Cromer et al, and the 1980 computations of electron-positron pair and triplet computations of Hubbell, Gimm and Øverbø, these names being representative of the vast legions of other researchers whose work fed into these computations.
REVIEW: Review and history of photon cross section calculations
NASA Astrophysics Data System (ADS)
Hubbell, J. H.
2006-07-01
Photon (x-ray, gamma-ray, bremsstrahlung) mass attenuation coefficients, μ/ρ, are among the most widely used physical parameters employed in medical diagnostic and therapy computations, as well as in diverse applications in other fields such as nuclear power plant shielding, health physics and industrial irradiation and monitoring, and in x-ray crystallography. This review traces the evolution of this data base from its empirical beginnings totally derived from measurements beginning in 1907 by Barkla and Sadler and continuing up through the 1935 Allen compilation (published virtually unchanged in all editions up through 1971-1972 of the Chemical Rubber Handbook), to the 1949 semi-empirical compilation of Victoreen, as our theoretical understanding of the constituent Compton scattering, photoabsorption and pair production interactions of photons with atoms became more quantitative. The 1950s saw the advent of completely theoretical (guided by available measured data) systematic compilations such as in the works of Davisson and Evans, and by White-Grodstein under the direction of Fano, using mostly theory developed in the 1930s (pre-World War II) by Sauter, Bethe, Heitler and others. Post-World War II new theoretical activity, and the introduction of the electronic automatic computer, led to the more extensive and more accurate compilations in the 1960s and 1970s by Storm and Israel, and by Berger and Hubbell. Today's μ/ρ compilations by Cullen et al, by Seltzer, Berger and Hubbell, and by others, collectively spanning the ten decades of photon energy from 10 eV to 100 GeV, for all elements Z= 1 to 100, draw heavily on the 1970s shell-by-shell photoabsorption computations of Scofield, the 1960s coherent and incoherent scattering computations of Cromer et al, and the 1980 computations of electron-positron pair and triplet computations of Hubbell, Gimm and Øverbø, these names being representative of the vast legions of other researchers whose work fed into these computations. Work supported by the National Institute of Standards and Technology.
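As a brief illustration of how a tabulated mass attenuation coefficient μ/ρ is applied in the shielding and dosimetry computations mentioned above, the sketch below evaluates narrow-beam exponential attenuation; the coefficient, density, and thickness are illustrative numbers, not values from any of the compilations discussed.

```python
import math

def transmitted_fraction(mu_over_rho_cm2_g, rho_g_cm3, thickness_cm):
    """Narrow-beam attenuation: I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mu_over_rho_cm2_g * rho_g_cm3 * thickness_cm)

# e.g. a coefficient of 0.2 cm^2/g through 5 cm of unit-density material
print(transmitted_fraction(0.2, 1.0, 5.0))   # ~0.37
```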
Code of Federal Regulations, 2010 CFR
2010-07-01
..., magnetic impulse, mechanical or electronic recording, or other form of data compilation. “Photographs... plan, the principles and methods underlying the study, all relevant assumptions, all variables...
REPORTING OF SELECTED EPA-SPONSORED FORM EIA-767 DATA FOR 1985-1987
The report is a compilation of 1985-87 data based on the Energy Information Administration's (ElA's) Form 767, "Steam-Electric Plant Operation and Design Report," an annual data collection form whose major output to date has been an annual data tape which is released to the publi...
ERIC Educational Resources Information Center
Energy Information Administration (DOE), Washington, DC.
This booklet is a compilation of energy data providing a reference to a much broader range of domestic and international energy data. It is designed especially as a quick reference to major facts about energy. The data includes information for 1976 through 1988, except for international energy data, which is for 1977 through 1987. Graphs, charts,…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-02
... is a data report which compiles and evaluates potential datasets and recommends which datasets are... add additional data points to datasets incorporated in the original SEDAR benchmark assessment and run... Conference Call Using updated datasets adopted during the Data Webinar, participants will employ assessment...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-05
... product of the Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Stock Assessment... conducted through the SEDAR program. Update assessments add additional data points to datasets incorporated...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-22
... the Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Stock Assessment Workshop is... conducted through the SEDAR program. Update assessments add additional data points to datasets incorporated...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-22
... Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Assessment Process is a stock... agencies.SEDAR 22 Assessment webinars 1: Using datasets recommended from the Data Workshop, participants...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-14
... Workshop. The product of the Data Workshop is a data report which compiles and evaluates potential datasets and recommends which datasets are appropriate for assessment analyses. The product of the Stock....m. Using datasets provided by the Data Workshop, participants will develop population models to...
77 FR 57099 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-17
.... Project: Treatment Episode Data Set (TEDS)--New The Treatment Episode Data Set (TEDS) is a compilation of client- level substance abuse treatment admission and discharge data submitted by States on clients... approved as part of the DASIS data collection (OMB No. 0930-0106). SAMHSA is now requesting OMB approval...
77 FR 40077 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... Project: Treatment Episode Data Set (TEDS)--New The Treatment Episode Data Set (TEDS) is a compilation of client- level substance abuse treatment admission and discharge data submitted by States on clients... approved as part of the DASIS data collection (OMB No. 0930-0106). SAMHSA is now requesting OMB approval...
Wilderness visitors and recreation impacts: baseline data available for twentieth century conditions
David N. Cole; Vita Wright
2003-01-01
This report provides an assessment and compilation of recreation-related monitoring data sources across the National Wilderness Preservation System (NWPS). Telephone interviews with managers of all units of the NWPS and a literature search were conducted to locate studies that provide campsite impact data, trail impact data, and information about visitor...
Global Seismicity: Three New Maps Compiled with Geographic Information Systems
NASA Technical Reports Server (NTRS)
Lowman, Paul D., Jr.; Montgomery, Brian C.
1996-01-01
This paper presents three new maps of global seismicity compiled from NOAA digital data, covering the interval 1963-1998, with three different magnitude ranges (mb): greater than 3.5, less than 3.5, and all detectable magnitudes. A commercially available geographic information system (GIS) was used as the database manager. Epicenter locations were acquired from a CD-ROM supplied by the National Geophysical Data Center. A methodology is presented that can be followed by general users. The implications of the maps are discussed, including the limitations of conventional plate models, and the different tectonic behavior of continental vs. oceanic lithosphere. Several little-known areas of intraplate or passive margin seismicity are also discussed, possibly expressing horizontal compression generated by ridge push.
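The maps above were produced for three magnitude classes of the epicenter catalogue. A toy sketch of splitting an epicenter list into those classes; the record format here is invented for illustration and is not the NOAA/NGDC format.

```python
# Hypothetical epicenter records (invented fields and values).
epicenters = [
    {"lat": 36.1, "lon": -120.4, "mb": 4.2},
    {"lat": -5.3, "lon": 151.9,  "mb": 3.1},
    {"lat": 61.0, "lon": -147.7, "mb": 5.6},
]

above = [e for e in epicenters if e["mb"] > 3.5]
below = [e for e in epicenters if e["mb"] <= 3.5]
print(f"mb > 3.5: {len(above)}; mb <= 3.5: {len(below)}; all: {len(epicenters)}")
```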
Partial compilation and revision of basic data in the WATEQ programs
Nordstrom, D. Kirk; Valentine, S.D.; Ball, J.W.; Plummer, Niel; Jones, B.F.
1984-01-01
Several portions of the basic data in the WATEQ series of computer programs (WATEQ, WATEQF, WATEQ2, WATEQ3, and PHREEQE) are compiled. The density and dielectric constant of water and their temperature dependence are evaluated for the purpose of updating the Debye-Hückel solvent parameters in the activity coefficient equations. The standard state thermodynamic properties of the Fe2+ and Fe3+ aqueous ions are refined. The main portion of this report is a comprehensive listing of aluminum hydrolysis constants, aluminum fluoride, aluminum sulfate, calcium chloride, magnesium chloride, potassium sulfate and sodium sulfate stability constants, solubility product constants for gibbsite and amorphous aluminum hydroxide, and the standard electrode potentials for Fe(s)/Fe2+(aq) and Fe2+(aq)/Fe3+(aq). (USGS)
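The Debye-Hückel solvent parameters mentioned above enter the extended Debye-Hückel activity-coefficient equation. A minimal sketch of that equation, using the commonly quoted 25 °C values of A and B purely for illustration (the report itself evaluates their temperature dependence from the density and dielectric constant of water):

```python
import math

def log10_gamma(z, ionic_strength, a_angstrom, A=0.509, B=0.328):
    """Extended Debye-Hückel: log10(gamma) = -A z^2 sqrt(I) / (1 + B a sqrt(I)).
    A and B here are approximate 25 degC values, given only as an example."""
    sqrt_I = math.sqrt(ionic_strength)
    return -A * z**2 * sqrt_I / (1.0 + B * a_angstrom * sqrt_I)

# e.g. Ca2+ (z = 2, ion-size parameter ~6 Angstrom) at I = 0.1 mol/kg
print(10 ** log10_gamma(z=2, ionic_strength=0.1, a_angstrom=6.0))
```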
A catalogue of photometric sequences (suppl. 3). [for astronomical photograph calibration
NASA Technical Reports Server (NTRS)
Argue, A. N.; Miller, E. W.; Warren, W. H., Jr.
1983-01-01
In stellar photometry studies, certain difficulties have arisen because of the lack of suitable photometric sequences for calibrating astronomical photographs. In order to eliminate these difficulties, active observers were contacted with a view to drawing up lists of suitable sequences. Replies from 63 authors offering data on 412 sequences were received. Most data were in the UBV system and had been obtained between 1968 and 1973. These were included in the original catalogue. The Catalogue represents a continuation of the earlier Photometric Catalogue compiled by Sharov and Jakimova (1970). A small supplement containing 69 sequences was issued in 1973. Supplement 2 was produced in 1976 and contained 320 sequences. Supplement 3 has now been compiled. It contains 1271 sequences.
Winterstein, T.A.
1982-01-01
This inventory of reports and data concerning the Mississippi and Minnesota Rivers in the Twin Cities metropolitan area was compiled from November 1981 through January 1982 for a planned river-quality assessment to be conducted cooperatively by the U.S. Geological Survey and the Metropolitan Waste Control Commission. There are 260 annotated citations: 176 citations of reports; 8 citations of computer models that have been used to model either or both rivers; and 76 citations of data in reports, in field notes, lab sheets, or handwritten tabulations, and in computer data bases. Citations of all the reports and data located that might conceivably be useful in understanding and interpreting the biological and chemical quality of the Mississippi and Minnesota Rivers in the past, present, or future were included. The accuracy of the citations was not verified and secondary sources, such as other annotated bibliographies, were used in the compilation of this inventory.
A large-scale solar dynamics observatory image dataset for computer vision applications.
Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A
2017-01-01
The National Aeronautics and Space Administration (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and largest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate wider adoption and interest from both the computer vision and solar physics communities.
Final Technical Report for Award DE-FG02-98ER41080
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Alan
The prime motivation of the project at McMaster University was to carry out the critical evaluation and compilation of Nuclear Structure and Decay data, and of nuclear astrophysics data, with continued participation in the United States Nuclear Data Program (US-NDP). A large body of evaluated and compiled structure data was supplied for databases such as ENSDF, XUNDL, and NSR residing on the webpage of the National Nuclear Data Center at Brookhaven National Laboratory, Upton, New York, USA. Thermonuclear reaction rates of importance to stellar explosions, such as novae, x-ray bursts and supernovae, were evaluated as well. This effort was closely coupled to our ongoing experimental effort, which took advantage of radioactive ion beam and stable beam facilities worldwide to study these key reaction rates. This report contains brief descriptions of the various activities together with references to all the publications in peer-reviewed journals which were the result of work carried out under award DE-FG02-98ER41080 during 1998-2013.
Schalk, C.W.; Tertuliani, J.S.; Darner, R.A.
1999-01-01
Potential wetlands in training areas on Ravenna Army Ammunition Plant, Ohio, were mapped by use of geographic information system (GIS) data layers and field inspection. The GIS data layers were compiled from existing sources and interpretation of aerial photography. Data layers used in the GIS analysis were wetland-plant communities, hydric soils, National Wetlands Inventory designated areas, and wet areas based on photogrammetry. According to review of these data layers, potential wetlands constitute almost one-third of the land in the training areas. A composite map of these four data layers was compiled for use during inspection of the training areas. Field inspection focused on the presence of hydrophytic vegetation and macroscopic evidences of wetland hydrology. Results of the field inspection were in general agreement with those predicted by the GIS analysis, except that some wet areas were more extensive than predicted because of high amounts of precipitation during critical periods of 1995 and 1996. Guidelines for managing wetlands in the training areas are presented.
Ten years in the library: new data confirm paleontological patterns
NASA Technical Reports Server (NTRS)
Sepkoski, J. J. Jr; Sepkoski JJ, J. r. (Principal Investigator)
1993-01-01
A comparison is made between compilations of times of origination and extinction of fossil marine animal families published in 1982 and 1992. As a result of ten years of library research, half of the information in the compendia has changed: families have been added and deleted, low-resolution stratigraphic data have been improved, and intervals of origination and extinction have been altered. Despite these changes, apparent macroevolutionary patterns for the entire marine fauna have remained constant. Diversity curves compiled from the two data bases are very similar, with a goodness-of-fit of 99%; the principal difference is that the 1992 curve averages 13% higher than the older curve. Both numbers and percentages of origination and extinction also match well, with fits ranging from 83% to 95%. All major events of radiation and extinction are identical. Therefore, errors in large paleontological data bases and arbitrariness of included taxa are not necessarily impediments to the analysis of pattern in the fossil record, so long as the data are sufficiently numerous.
Energy Use and Other Comparisons Between Diesel and Gasoline Trucks
DOT National Transportation Integrated Search
1977-02-01
This report presents fuel consumption and other data on comparable diesel and gasoline trucks. The data was compiled from actual, operational records of the Maine Department of Transportation for trucks of about 24,000 pounds gross vehicle weight and...
ERIC Educational Resources Information Center
Health Resources and Services Administration (DHHS/PHS), Washington, DC. Maternal and Child Health Bureau.
Intended to inform policymaking in the public and private sectors, this booklet compiles secondary data for 55 health status indicators. The book provides both graphical and textual summaries of data, and addresses long-term trends where applicable. Data are presented for the target populations of Title V funding: infants, children, adolescents,…
ERIC Educational Resources Information Center
Health Resources and Services Administration (DHHS/PHS), Washington, DC. Maternal and Child Health Bureau.
Intended to inform policymaking in the public and private sectors, this booklet compiles secondary data for 54 health status indicators. The book provides both graphical and textual summaries of data, and addresses long-term trends where applicable. Data are presented for the target populations of Title V funding: infants, children, adolescents,…
ERIC Educational Resources Information Center
Health Resources and Services Administration (DHHS/PHS), Washington, DC. Maternal and Child Health Bureau.
Intended to inform policymaking in the public and private sectors, this booklet compiles secondary data for 59 health status indicators. The book provides both graphical and textual summaries of data and addresses long-term trends where applicable. Data are presented for the target populations of Title V funding: infants, children, adolescents,…
Detailed requirements for a next generation nuclear data structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, D.
2016-07-05
This document attempts to compile the requirements for the top levels of a hierarchical arrangement of nuclear data such as that found in the ENDF format. This set of requirements will be used to guide the development of a new data structure to replace the legacy ENDF format.
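Purely as an illustration of what a "hierarchical arrangement of nuclear data" can look like, the sketch below nests materials, reactions, and cross-section data; it is an invented toy layout, not the requirements document's proposed structure or the GNDS/ENDF layout.

```python
# Toy hierarchy: evaluation -> material -> reaction -> cross-section data.
# All names and values are hypothetical.
evaluation = {
    "library": "exampleLib",
    "materials": {
        "Fe56": {
            "reactions": {
                "n,gamma": {
                    "cross_section": {
                        "energies_eV": [1.0e3, 1.0e4, 1.0e5],
                        "values_b":    [0.9,   0.3,   0.05],
                    }
                }
            }
        }
    },
}

xs = evaluation["materials"]["Fe56"]["reactions"]["n,gamma"]["cross_section"]
print(len(xs["energies_eV"]), "cross-section points")
```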
Aerothermal modeling, phase 1. Volume 2: Experimental data
NASA Technical Reports Server (NTRS)
Kenworthy, M. J.; Correa, S. M.; Burrus, D. L.
1983-01-01
The experimental test effort is discussed. The test data are presented. The compilation is divided into sets representing each of the 18 experimental configurations tested. A detailed description of each configuration, and plots of the temperature difference ratio parameter or pattern factor parameter calculated from the test data are also provided.
Code of Federal Regulations, 2011 CFR
2011-04-01
... integrity of the data can be verified. (6) Electronic record means any combination of text, graphics, data... computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an... name or mark. (9) Open system means an environment in which system access is not controlled by persons...
Code of Federal Regulations, 2013 CFR
2013-04-01
... integrity of the data can be verified. (6) Electronic record means any combination of text, graphics, data... computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an... name or mark. (9) Open system means an environment in which system access is not controlled by persons...
Code of Federal Regulations, 2010 CFR
2010-04-01
... integrity of the data can be verified. (6) Electronic record means any combination of text, graphics, data... computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an... name or mark. (9) Open system means an environment in which system access is not controlled by persons...
Scientific Manpower: Volume Compiles Data, Maps Trends.
ERIC Educational Resources Information Center
Chemical and Engineering News, 1985
1985-01-01
Presents highlights from "The Technological Marketplace: Supply and Demand for Scientists and Engineers," a report which provides a synthesis of data found in 50 other reports. In addition, these data are analyzed and trends pointed out for such fields as chemistry, chemical engineering, and other science/engineering fields. (JN)
ERIC Educational Resources Information Center
Health Resources and Services Administration (DHHS/PHS), Washington, DC. Maternal and Child Health Bureau.
Published to provide reliable and current data for public health professionals and other individuals in the public and private sector to inform policymaking, this book compiles secondary data for 50 health status indicators and service needs of America's children. The book provides both a graphic and textual summary of the data and addresses…
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2011 CFR
2011-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2012 CFR
2012-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2014 CFR
2014-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2010 CFR
2010-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
42 CFR 82.10 - Overview of the dose reconstruction process.
Code of Federal Regulations, 2013 CFR
2013-10-01
... doses using techniques discussed in § 82.16. Once the resulting data set is complete, NIOSH will.... Additionally, NIOSH may compile data, and information from NIOSH records that may contribute to the dose... which dose and exposure monitoring data is incomplete or insufficient for dose reconstruction. (h) NIOSH...
Interspecies correlation estimation (ICE) models were developed for 30 nonpolar aromatic compounds to allow comparison of prediction accuracy between 2 data compilation approaches. Type 1 models used data combined across studies, and type 2 models used data combined only within s...
VizieR Online Data Catalog: VIc photometry of IR-excess stars in NGC6611 (De Marchi+ 2013)
NASA Astrophysics Data System (ADS)
de Marchi, G.; Panagia, N.; Guarcello, M. G.; Bonito, R.
2014-10-01
The data analysed in this work were extracted from the multiband photometric catalogue of NGC 6611 and of the surrounding M 16 cloud compiled by Guarcello et al. (2010, Cat. J/A+A/521/A61). (1 data file).
Vital Statistics for Ohio Appalachian School Districts, Fiscal Year 1999.
ERIC Educational Resources Information Center
Ohio Univ., Athens. Coalition of Rural and Appalachian Schools.
This document compiles school district data on 18 factors for the 29 southeastern Ohio counties designated as "Appalachian." Data tables present state means, Appalachian means and ranges, and individual district data for fall enrollment; percentage of minority students; percentage of Aid to Dependent Children; average income; property…
Buckovich, S A; Rippen, H E; Rozen, M J
1999-01-01
As health care moves from paper to electronic data collection, providing easier access and dissemination of health information, the development of guiding privacy, confidentiality, and security principles is necessary to help balance the protection of patients' privacy interests against appropriate information access. A comparative review and analysis was done, based on a compilation of privacy, confidentiality, and security principles from many sources. Principles derived from ten identified sources were compared with each of the compiled principles to assess support level, uniformity, and inconsistencies. Of 28 compiled principles, 23 were supported by at least 50 percent of the sources. Technology could address at least 12 of the principles. Notable consistencies among the principles could provide a basis for consensus for further legislative and organizational work. It is imperative that all participants in our health care system work actively toward a viable resolution of this information privacy debate.
Driving Toward Guiding Principles
Buckovich, Suzy A.; Rippen, Helga E.; Rozen, Michael J.
1999-01-01
As health care moves from paper to electronic data collection, providing easier access and dissemination of health information, the development of guiding privacy, confidentiality, and security principles is necessary to help balance the protection of patients' privacy interests against appropriate information access. A comparative review and analysis was done, based on a compilation of privacy, confidentiality, and security principles from many sources. Principles derived from ten identified sources were compared with each of the compiled principles to assess support level, uniformity, and inconsistencies. Of 28 compiled principles, 23 were supported by at least 50 percent of the sources. Technology could address at least 12 of the principles. Notable consistencies among the principles could provide a basis for consensus for further legislative and organizational work. It is imperative that all participants in our health care system work actively toward a viable resolution of this information privacy debate. PMID:10094065
Testing-Based Compiler Validation for Synchronous Languages
NASA Technical Reports Server (NTRS)
Garoche, Pierre-Loic; Howar, Falk; Kahsai, Temesghen; Thirioux, Xavier
2014-01-01
In this paper we present a novel lightweight approach to validate compilers for synchronous languages. Instead of verifying a compiler for all input programs or providing a fixed suite of regression tests, we extend the compiler to generate a test-suite with high behavioral coverage and geared towards discovery of faults for every compiled artifact. We have implemented and evaluated our approach using a compiler from Lustre to C.
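A loose sketch, in Python rather than Lustre/C, of the differential idea behind testing-based validation: run generated inputs through a reference evaluation of the source program and through the compiled artifact, then compare the outputs step by step. The functions below are stand-ins and do not reflect the paper's actual test-generation or coverage strategy.

```python
def reference_step(state, inp):
    """Stand-in for the source-level (reference) semantics of one node step."""
    acc = state + inp
    return acc, acc

def compiled_step(state, inp):
    """Stand-in for the behaviour of the compiled artifact."""
    acc = state + inp
    return acc, acc

def validate(test_inputs):
    """Run both behaviours on the same input stream and compare outputs."""
    ref_state = comp_state = 0
    for inp in test_inputs:
        ref_state, ref_out = reference_step(ref_state, inp)
        comp_state, comp_out = compiled_step(comp_state, inp)
        if ref_out != comp_out:
            return False
    return True

print(validate([1, 2, 3, 5, 8]))   # True when the two behaviours agree
```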
NASA Astrophysics Data System (ADS)
van Heteren, Sytze; Moses, Cherith; van der Ven, Tamara
2017-04-01
EMODnet has changed the face of the European marine data landscape and is developing tools to connect national data and information resources to make them easily available for multiple users, for multiple purposes. Building on the results of EUROSION, an EU-project completed some ten years ago, EMODnet-Geology has been compiling coastal erosion and sedimentation data and information for all European shorelines. Coverage is being expanded, and data and information are being updated. Challenges faced during this compilation phase are posed by a) differences between parameters used as indicators of shoreline migration, b) restricted access to third-party data, and c) data gaps. There are many indicators of coastal behaviour, with inherent incompatibilities and variations between low-lying sediment and cliffed rock shorelines. Regionally, low data availability and limited access result in poor coverage. With Sentinel data expected to become increasingly available, it is time to invest in automated methods to derive coastal-erosion data from satellite monitoring. Even so, consistency of data and derived information on coastal erosion and accretion does not necessarily translate into usability in pan-European coastal-zone management. Indicators of shoreline change need to be assessed and weighted regionally in light of other parameters in order to be of value in assessing coastal resilience or vulnerability. There is no single way to portray coastal vulnerability for all of Europe in a meaningful way. A common legend, however attractive intuitively, results in data products that work well for one region but show insufficient or excessive detail elsewhere. For decision making, uniform products are often not very helpful. The ability to zoom in on different spatial levels is not a solution either. It is better to compile and visualize vulnerability studies with different legends, and to provide each map with a confidence assessment and other relevant metadata.
Geologic map of the Priest Rapids 1:100,000 quadrangle, Washington
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reidel, S.P.; Fecht, K.R.
1993-09-01
This map of the Priest Rapids 1:100,000-scale quadrangle, Washington, shows the geology of one of fifteen complete or partial 1:100,000-scale quadrangles that cover the southeast quadrant of Washington. Geologic maps of these quadrangles have been compiled by geologists with the Washington Division of Geology and Earth Resources (DGER) and Washington State University and are the principal data sources for a 1:250,000-scale geologic map of the southeast quadrant of Washington, which is in preparation. Eleven of those quadrangles are being released as DGER open-file reports (listed below). The map of the Wenatchee quadrangle has been published by the US Geological Survey (Tabor and others, 1982), and the Moses Lake (Gulick, 1990a), Ritzville (Gulick, 1990b), and Rosalia (Waggoner, 1990) quadrangles have already been released. The geology of the Priest Rapids quadrangle has not previously been compiled at 1:100,000 scale. Furthermore, this is the first 1:100,000 or smaller scale geologic map of the area to incorporate both bedrock and surficial geology. This map was compiled in 1992, using published and unpublished geologic maps as sources of data.
Compiled records of carbon isotopes in atmospheric CO2 for historical simulations in CMIP6
NASA Astrophysics Data System (ADS)
Graven, Heather; Allison, Colin E.; Etheridge, David M.; Hammer, Samuel; Keeling, Ralph F.; Levin, Ingeborg; Meijer, Harro A. J.; Rubino, Mauro; Tans, Pieter P.; Trudinger, Cathy M.; Vaughn, Bruce H.; White, James W. C.
2017-12-01
The isotopic composition of carbon (Δ14C and δ13C) in atmospheric CO2 and in oceanic and terrestrial carbon reservoirs is influenced by anthropogenic emissions and by natural carbon exchanges, which can respond to and drive changes in climate. Simulations of 14C and 13C in the ocean and terrestrial components of Earth system models (ESMs) present opportunities for model evaluation and for investigation of carbon cycling, including anthropogenic CO2 emissions and uptake. The use of carbon isotopes in novel evaluation of the ESMs' component ocean and terrestrial biosphere models and in new analyses of historical changes may improve predictions of future changes in the carbon cycle and climate system. We compile existing data to produce records of Δ14C and δ13C in atmospheric CO2 for the historical period 1850-2015. The primary motivation for this compilation is to provide the atmospheric boundary condition for historical simulations in the Coupled Model Intercomparison Project 6 (CMIP6) for models simulating carbon isotopes in the ocean or terrestrial biosphere. The data may also be useful for other carbon cycle modelling activities.
Interpretation, compilation and field verification procedures in the CARETS project
Alexander, Robert H.; De Forth, Peter W.; Fitzpatrick, Katherine A.; Lins, Harry F.; McGinty, Herbert K.
1975-01-01
The production of the CARETS map data base involved the development of a series of procedures for interpreting, compiling, and verifying data obtained from remote sensor sources. Level II land use mapping from high-altitude aircraft photography at a scale of 1:100,000 required production of a photomosaic mapping base for each of the 48 sheets (50 x 50 km each), and the interpretation and coding of land use polygons on drafting film overlays. CARETS researchers also produced a series of 1970 to 1972 land use change overlays, using the 1970 land use maps and 1972 high-altitude aircraft photography. To enhance the value of the land use sheets, researchers compiled a series of overlays showing cultural features, county boundaries and census tracts, surface geology, and drainage basins. In producing Level I land use maps from Landsat imagery, at a scale of 1:250,000, interpreters overlaid drafting film directly on Landsat color composite transparencies and interpreted on the film. They found that such interpretation involves pattern and spectral signature recognition. In studies using Landsat imagery, interpreters identified numerous areas of change but also identified extensive areas of "false change," where Landsat spectral signatures but not land use had changed.
Sweetkind, Donald S.
2017-09-08
As part of a U.S. Geological Survey study in cooperation with the Bureau of Reclamation, a digital three-dimensional hydrogeologic framework model was constructed for the Rio Grande transboundary region of New Mexico and Texas, USA, and northern Chihuahua, Mexico. This model was constructed to define the aquifer system geometry and subsurface lithologic characteristics and distribution for use in a regional numerical hydrologic model. The model includes five hydrostratigraphic units: river channel alluvium, three informal subdivisions of Santa Fe Group basin fill, and an undivided pre-Santa Fe Group bedrock unit. Model input data were compiled from published cross sections, well data, structure contour maps, selected geophysical data, and contiguous compilations of surficial geology and structural features in the study area. These data were used to construct faulted surfaces that represent the upper and lower subsurface hydrostratigraphic unit boundaries. The digital three-dimensional hydrogeologic framework model is constructed through combining faults, the elevation of the tops of each hydrostratigraphic unit, and boundary lines depicting the subsurface extent of each hydrostratigraphic unit. The framework also compiles a digital representation of the distribution of sedimentary facies within each hydrostratigraphic unit. The digital three-dimensional hydrogeologic model reproduces with reasonable accuracy the previously published subsurface hydrogeologic conceptualization of the aquifer system and represents the large-scale geometry of the subsurface aquifers. The model is at a scale and resolution appropriate for use as the foundation for a numerical hydrologic model of the study area.
The HITRAN2016 molecular spectroscopic database
NASA Astrophysics Data System (ADS)
Gordon, I. E.; Rothman, L. S.; Hill, C.; Kochanov, R. V.; Tan, Y.; Bernath, P. F.; Birk, M.; Boudon, V.; Campargue, A.; Chance, K. V.; Drouin, B. J.; Flaud, J.-M.; Gamache, R. R.; Hodges, J. T.; Jacquemart, D.; Perevalov, V. I.; Perrin, A.; Shine, K. P.; Smith, M.-A. H.; Tennyson, J.; Toon, G. C.; Tran, H.; Tyuterev, V. G.; Barbe, A.; Császár, A. G.; Devi, V. M.; Furtenbacher, T.; Harrison, J. J.; Hartmann, J.-M.; Jolly, A.; Johnson, T. J.; Karman, T.; Kleiner, I.; Kyuberis, A. A.; Loos, J.; Lyulin, O. M.; Massie, S. T.; Mikhailenko, S. N.; Moazzen-Ahmadi, N.; Müller, H. S. P.; Naumenko, O. V.; Nikitin, A. V.; Polyansky, O. L.; Rey, M.; Rotger, M.; Sharpe, S. W.; Sung, K.; Starikova, E.; Tashkun, S. A.; Auwera, J. Vander; Wagner, G.; Wilzewski, J.; Wcisło, P.; Yu, S.; Zak, E. J.
2017-12-01
This paper describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation is composed of five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 300 additional molecules important in different areas of atmospheric science have been added to the database. The compilation can be accessed through www.hitran.org. Most of the HITRAN data have now been cast into an underlying relational database structure that offers many advantages over the long-standing sequential text-based structure. The new structure empowers the user in many ways. It enables the incorporation of an extended set of fundamental parameters per transition, sophisticated line-shape formalisms, easy user-defined output formats, and very convenient searching, filtering, and plotting of data. A powerful application programming interface making use of structured query language (SQL) features for higher-level applications of HITRAN is also provided.
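The abstract notes that the compilation is accessible through www.hitran.org and through an application programming interface with SQL-like features. A minimal access sketch, assuming the publicly distributed HAPI Python package (hapi.py) and its db_begin/fetch/getColumn calls, is shown below; the molecule, isotopologue, and wavenumber window are illustrative choices, not recommendations from the paper.

```python
# Minimal sketch of pulling line-by-line data from HITRAN via the HAPI
# Python package.  Requires network access to hitran.org on first run;
# downloaded tables are cached in the local folder passed to db_begin.
from hapi import db_begin, fetch, getColumn

db_begin('hitran_data')            # local cache folder (illustrative name)
# Molecule 1 (H2O), isotopologue 1, lines between 3400 and 4100 cm^-1.
fetch('H2O', 1, 1, 3400, 4100)

nu = getColumn('H2O', 'nu')        # transition wavenumbers
sw = getColumn('H2O', 'sw')        # line intensities
print(f"retrieved {len(nu)} H2O lines; strongest intensity = {max(sw):.3e}")
```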
Water withdrawals reduce native fish diversity across the sunbelt of the US
NASA Astrophysics Data System (ADS)
Sabo, J. L.; Bowling, L. C.; Roath, J.; Sinha, T.; Kominoski, J.; Fuller, P.
2012-12-01
Water withdrawals for urban, industrial and agricultural uses are known to have negative effects on freshwater biodiversity, but this conclusion is based largely on a small number of place-based studies. In this talk we will present results from a continental-scale analysis of water withdrawals on the species richness of native and non-native fishes in the coterminous US. To do this we compiled data from the USGS on water withdrawals and the species richness of non-native fishes. We obtained data on native fish species richness from NatureServe's native fish database. We also compiled spatial data on cropland area and urban impervious surfaces. Finally, we used gridded estimates of streamflow from the Variable Infiltration Capacity model and a routing model to estimate streamflow (less upstream water withdrawal). We estimate the water stress index (WSI) as withdrawals standardized by streamflow (local and upstream deliveries) and use this as a metric of the sustainability of human water use. All data were compiled at the spatial resolution of 8-digit hydrologic unit code hydrologic accounting units. Our key finding is that human water use (WSI), rather than impervious surfaces or cropland area, has a strong negative effect on native, but not non-native, biodiversity in rivers. This result was robust across the US sunbelt but weaker across the coterminous US. Our result suggests that the effects of cities and farms on native freshwater fauna are outweighed by the upstream and cross-basin extraction of water to support these land uses.
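The abstract defines the water stress index as withdrawals standardized by streamflow, compiled at 8-digit HUC resolution. The sketch below shows only that arithmetic; the column names and the two example accounting units are hypothetical and are not the authors' datasets.

```python
# Illustrative water stress index (WSI) arithmetic: withdrawals divided by
# streamflow (local flow plus upstream deliveries), per 8-digit HUC.
import pandas as pd

withdrawals = pd.DataFrame({
    "huc8": ["10030101", "10030102"],
    "withdrawal_m3_per_yr": [4.2e8, 9.0e7],        # hypothetical values
})
streamflow = pd.DataFrame({
    "huc8": ["10030101", "10030102"],
    "streamflow_m3_per_yr": [2.1e9, 3.5e8],        # local plus upstream deliveries
})

wsi = withdrawals.merge(streamflow, on="huc8")
wsi["WSI"] = wsi["withdrawal_m3_per_yr"] / wsi["streamflow_m3_per_yr"]
print(wsi[["huc8", "WSI"]])
```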
NASA Astrophysics Data System (ADS)
Kitt, S.; Grothkopf, U.
2010-10-01
This paper explains the procedures involved in creating a database of scientific papers that use observational data and linking the records to the observations residing in a data archive. Based on our experiences with the ESO Telescope Bibliography, we describe the workflow we apply in order to retrieve relevant articles, assign tags to describe the observing facilities that generated the data, and identify the correct program identification numbers (IDs). These program identifiers are particularly important as they link the published papers and the underlying data and enable scientists to access the data for new studies. With the understanding that the difficulty of compiling correct and complete data varies, depending on the information readily provided in the published literature, this paper proposes an evolution of search options for finding appropriate ID numbers. To explore the process and its various stages, we use the analogy of the "cookbook." These search methodologies might be labeled fast, medium, and slow heat recipes within our culinary theme. We provide a step-by-step guide in order to assist other bibliography compilers, in particular those who are new to the field.
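The "fast heat" end of the recipes described here corresponds to papers in which authors quote a program identifier verbatim in the text. A toy sketch of that case is given below; the regular expression assumes ESO-style identifiers of the form 097.C-0390(A), which is an assumption made for illustration rather than a rule taken from this paper.

```python
# Toy sketch of scanning article text for program identifiers quoted
# verbatim by the authors (the easiest case in the workflow above).
import re

PROGRAM_ID = re.compile(r"\b\d{2,4}\.[A-Z]-\d{4}\([A-Z]\)")  # assumed ID format

text = ("Observations were collected under ESO programme 097.C-0390(A) "
        "with the VLT.")
print(PROGRAM_ID.findall(text))   # ['097.C-0390(A)']
```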
HAL/S-FC compiler system specifications
NASA Technical Reports Server (NTRS)
1976-01-01
This document specifies the informational interfaces within the HAL/S-FC compiler, and between the compiler and the external environment. This Compiler System Specification is for the HAL/S-FC compiler and its associated run time facilities which implement the full HAL/S language. The HAL/S-FC compiler is designed to operate stand-alone on any compatible IBM 360/370 computer and within the Software Development Laboratory (SDL) at NASA/JSC, Houston, Texas.
Compiling quantum circuits to realistic hardware architectures using temporal planners
NASA Astrophysics Data System (ADS)
Venturelli, Davide; Do, Minh; Rieffel, Eleanor; Frank, Jeremy
2018-04-01
To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest-neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits, whose high number of commuting gates allows great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We mapped this quantum circuit compilation problem to a temporal planning problem and generated a test suite of compilation problems for QAOA circuits of various sizes targeting a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.
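The mapping sketched in this abstract turns qubit adjacency on the chip into static facts, SWAPs and two-qubit gates into durative actions, and circuit completion into the planning goal, with makespan as the objective. The fragment below is a schematic illustration of that encoding for a hypothetical linear four-qubit device; it is not the planning domain or encoding used in the paper's experiments.

```python
# Schematic sketch: emit a PDDL-style temporal planning problem for circuit
# compilation.  Adjacency becomes init facts, applying every required gate
# becomes the goal, and total-time (makespan) is minimized.  The (not shown)
# domain would define durative SWAP and gate actions with assumed durations.
NEIGHBORS = [(0, 1), (1, 2), (2, 3)]                 # assumed linear 4-qubit chip
GATES = [("g1", "qA", "qB"), ("g2", "qB", "qC")]     # required 2-qubit gates

def emit_problem(neighbors, gates):
    nodes = sorted({n for pair in neighbors for n in pair})
    qubits = sorted({q for _, a, b in gates for q in (a, b)})
    objects = (" ".join(f"n{n}" for n in nodes) + " - node "
               + " ".join(qubits) + " - qubit")
    init = " ".join(f"(adjacent n{a} n{b})" for a, b in neighbors)
    goal = " ".join(f"(applied {g})" for g, _, _ in gates)
    return ("(define (problem compile-circuit) (:domain circuit-compilation)\n"
            f"  (:objects {objects})\n"
            f"  (:init {init})\n"
            f"  (:goal (and {goal}))\n"
            "  (:metric minimize (total-time)))")

print(emit_problem(NEIGHBORS, GATES))
```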
77 FR 8082 - Regulatory Flexibility Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-13
... which the Commission's staff completed compilation of the data. To the extent possible, rulemaking... indicated that preparation of a Regulatory Flexibility Act analysis is required. The Commission's complete...
78 FR 1708 - Regulatory Flexibility Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-08
... which the Commission's staff completed compilation of the data. To the extent possible, rulemaking... indicated that preparation of a Regulatory Flexibility Act analysis is required. The Commission's complete...
78 FR 44407 - Regulatory Flexibility Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-23
... the Commission's staff completed compilation of the data. To the extent possible, rulemaking actions... indicated that preparation of a Regulatory Flexibility Act analysis is required. The Commission's complete...
Compilation of Water-Resources Data for Montana, Water Year 2006
Ladd, P. B.; Berkas, W.R.; White, M.K.; Dodge, K.A.; Bailey, F.A.
2007-01-01
The U.S. Geological Survey, Montana Water Science Center, in cooperation with other Federal, State, and local agencies, and Tribal governments, collects a large amount of data pertaining to the water resources of Montana each water year. This report is a compilation of Montana site-data sheets for the 2006 water year, which consists of records of stage and discharge of streams; water quality of streams and ground water; stage and contents of lakes and reservoirs; water levels in wells; and precipitation data. Site-data sheets for selected stations in Canada and Wyoming also are included in this report. The data for Montana, along with data from various parts of the Nation, are included in 'Water-Resources Data for the United States, Water Year 2006', which is published as U.S. Geological Survey Water-Data Report WDR-US-2006 and is available at http://pubs.water.usgs.gov/wdr2006. Additional water year 2006 data from crest-stage gage and miscellaneous-measurement stations were collected but not published. These data are stored in files of the U.S. Geological Survey Montana Water Science Center in Helena, Montana, and are available on request.
Residual Time to 25 Bee Mortality (RT25) Data
Values in this table were compiled from data for the Honey Bee (Apis mellifera) Toxicity of Residues on Foliage study: a lab test designed to determine the length of time over which field-weathered foliar residues remain toxic to honey bees.
Code of Federal Regulations, 2014 CFR
2014-07-01
... methods employed in statistical compilations. The principal title of each exhibit should state what it... furnished: (i) Market research. (a) The following data and information shall be provided: (1) A clear and detailed description of the sample, observational, and data preparation designs, including definitions of...
Code of Federal Regulations, 2013 CFR
2013-07-01
... item of information used and the methods employed in statistical compilations. The principal title of... furnished: (i) Market research. (a) The following data and information shall be provided: (1) A clear and detailed description of the sample, observational, and data preparation designs, including definitions of...
Code of Federal Regulations, 2012 CFR
2012-07-01
... item of information used and the methods employed in statistical compilations. The principal title of... should be furnished: (i) Market research. (a) The following data and information shall be provided: (1) A clear and detailed description of the sample, observational, and data preparation designs, including...
Code of Federal Regulations, 2011 CFR
2011-07-01
... item of information used and the methods employed in statistical compilations. The principal title of... should be furnished: (i) Market research. (a) The following data and information shall be provided: (1) A clear and detailed description of the sample, observational, and data preparation designs, including...
Code of Federal Regulations, 2010 CFR
2010-07-01
... item of information used and the methods employed in statistical compilations. The principal title of... should be furnished: (i) Market research. (a) The following data and information shall be provided: (1) A clear and detailed description of the sample, observational, and data preparation designs, including...
Minefields Associated with Mining Data from Peer-reviewed Literature
The USEPA’s ECOTOX database is the largest compilation of ecotoxicity study results, providing information on the adverse effects of single chemical stressors to ecologically relevant aquatic and terrestrial species. The primary source of data included in the ECOTOX database is t...
Code of Federal Regulations, 2014 CFR
2014-10-01
... data or data compilations, stored in any medium from which information can be obtained either directly... documents as they are kept in the usual course of business or must organize and label them to correspond to...
FGC Webinar: Making it Easier to Buy Greener Products and Services, and Final FGC Data Reporting
Presentation on changes in the Sustainable Facility Tool, Green Product Compilation, and Environmental Programs pages that clarify requirements for purchasing products and services, as well as reporting your FY2017 FGC Data using Re-TRAC Connect.