Science.gov

Sample records for area database rogad

  1. User's guide to the Geothermal Resource Areas Database

    SciTech Connect

    Lawrence, J.D.; Leung, K.; Yen, W.

    1981-10-01

The National Geothermal Information Resource project at the Lawrence Berkeley Laboratory is developing a Geothermal Resource Areas Database, called GRAD, designed to answer questions about the progress of geothermal energy development. This database will contain extensive information on geothermal energy resources for selected areas, covering development from initial exploratory surveys to plant construction and operation. The database is available for on-line interactive query by anyone with an account number on the computer, a computer terminal with an acoustic coupler, and a telephone. This report helps users make use of the database: it provides information on obtaining access to the computer system being used, instructions for obtaining standard reports, and some aids to using the query language.

  2. Wide-area-distributed storage system for a multimedia database

    NASA Astrophysics Data System (ADS)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device that includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in the node are connected to a computer using fiber optic cables and communicate using fiber-channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe the proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.

  3. Teaching Database Modeling and Design: Areas of Confusion and Helpful Hints

    ERIC Educational Resources Information Center

    Philip, George C.

    2007-01-01

This paper identifies several areas of database modeling and design that have been problematic for students and are likely to confuse even faculty. Major contributing factors are the lack of clarity and the inaccuracies that persist in the presentation of some basic database concepts in textbooks. The paper analyzes the problems and discusses ways to…

  4. Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington

    USGS Publications Warehouse

    Uhrich, M.A.; McGrath, T.S.

    1997-01-01

Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers from sources other than the Puget Sound Environmental Atlas include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.

  5. Geologic map database of the El Mirage Lake area, San Bernardino and Los Angeles Counties, California

    USGS Publications Warehouse

    Miller, David M.; Bedford, David R.

    2000-01-01

This geologic map database for the El Mirage Lake area describes geologic materials for the dry lake, parts of the adjacent Shadow Mountains and Adobe Mountain, and much of the piedmont extending south from the lake upward toward the San Gabriel Mountains. This area lies within the western Mojave Desert of San Bernardino and Los Angeles Counties, southeastern California. The area is traversed by a few paved highways that service the community of El Mirage, and by numerous dirt roads that lead to outlying properties. An off-highway vehicle area established by the Bureau of Land Management encompasses the dry lake and much of the land north and east of the lake. The physiography of the area consists of the dry lake, flanking mud and sand flats and alluvial piedmonts, and a few sharp craggy mountains. This digital geologic map database, intended for use at 1:24,000-scale, describes and portrays the rock units and surficial deposits of the El Mirage Lake area. The map database was prepared to aid in a water-resource assessment of the area by providing surface geologic information with which deeper groundwater-bearing units may be understood. The area mapped covers the Shadow Mountains SE and parts of the Shadow Mountains, Adobe Mountain, and El Mirage 7.5-minute quadrangles. The map includes detailed geology of surface and bedrock deposits, which represent a significant update from previous bedrock geologic maps by Dibblee (1960) and Troxel and Gunderson (1970), and the surficial geologic map of Ponti and Burke (1980); it incorporates a fringe of the detailed bedrock mapping in the Shadow Mountains by Martin (1992). The map data were assembled as a digital database using ARC/INFO to enable wider applications than traditional paper-product geologic maps and to provide for efficient meshing with other digital databases prepared by the U.S. Geological Survey's Southern California Areal Mapping Project.

  6. Geothermal resource areas database for monitoring the progress of development in the United States

    NASA Astrophysics Data System (ADS)

    Lawrence, J. D.; Lepman, S. R.; Leung, K. N.; Phillips, S. L.

    1981-01-01

The Geothermal Resource Areas Database (GRAD) and associated data system provide broad coverage of information on the development of geothermal resources in the United States. The system is designed to serve the information requirements of the National Progress Monitoring System. GRAD covers development from the initial exploratory phase through plant construction and operation. Emphasis is on actual facts or events rather than projections and scenarios. The selection and organization of data are based on a model of geothermal development. Subjects in GRAD include: names and addresses, leases, area descriptions, geothermal wells, power plants, direct use facilities, and environmental and regulatory aspects of development. Data collected in the various subject areas are critically evaluated, and then entered into an on-line interactive computer system. The system is publicly available for retrieval and use. The background of the project, conceptual development, software development, and data collection are described as well as the structure of the database.

  7. Geothermal resource areas database for monitoring the progress of development in the United States

    SciTech Connect

    Lawrence, J.D.; Lepman, S.R.; Leung, K.; Phillips, S.L.

    1981-01-01

The Geothermal Resource Areas Database (GRAD) and associated data system provide broad coverage of information on the development of geothermal resources in the United States. The system is designed to serve the information requirements of the National Progress Monitoring System. GRAD covers development from the initial exploratory phase through plant construction and operation. Emphasis is on actual facts or events rather than projections and scenarios. The selection and organization of data are based on a model of geothermal development. Subjects in GRAD include: names and addresses, leases, area descriptions, geothermal wells, power plants, direct use facilities, and environmental and regulatory aspects of development. Data collected in the various subject areas are critically evaluated, and then entered into an on-line interactive computer system. The system is publicly available for retrieval and use. The background of the project, conceptual development, software development, and data collection are described here. Appendices describe the structure of the database in detail.

  8. Analysis on the flood vulnerability in the Seoul and Busan metropolitan area, Korea using spatial database

    NASA Astrophysics Data System (ADS)

    Lee, Mung-Jin

    2015-04-01

In the future, temperature rises and precipitation increases are expected from climate change due to global warming. Concentrated heavy rain, typhoons, flooding, and other weather phenomena bring hydrologic variations. In this study, the flood susceptibility of the Seoul and Busan metropolitan areas was analyzed and validated using a GIS based on a frequency ratio model and a logistic regression model with training and validation datasets of the flooded area. The flooded area in 2010 was used to train the models, and the flooded area in 2011 was used to validate them. Topographic, geological, and soil data from the study areas were collected, processed, and digitized for use in a GIS. Maps relevant to the specific capacity were assembled in a spatial database, and flood susceptibility maps were then created. Finally, the flood susceptibility maps were validated using the flooded area in 2011, which was not used for training. To represent the flood-susceptible areas, this study used the probability-frequency ratio. The frequency ratio is the probability of occurrence of a certain attribute. Logistic regression allows for investigation of multivariate regression relations between one dependent and several independent variables. Logistic regression has the limitation that the calculation process cannot be traced, because it repeats calculations to find the optimized regression equation for determining the probability that the dependent variable will occur. In the case of Seoul, the frequency ratio and logistic regression models showed 79.61% and 79.05% accuracy, respectively; in the case of Busan, the logistic regression model showed 82.30% accuracy. This information and the maps generated from it could be applied to flood prevention and management. In addition, the susceptibility maps provide meaningful information for decision-makers regarding priority areas for implementing flood mitigation policies.
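The frequency-ratio idea described in this abstract can be sketched with synthetic data: the ratio for a factor class is the share of flooded cells falling in that class divided by the share of all cells in that class, and a cell's susceptibility index sums the ratios of its classes. The data and class structure below are hypothetical, for illustration only.

```python
import numpy as np

# Synthetic factor map: 10,000 cells binned into 5 classes (e.g., slope bins),
# where lower classes are made more flood-prone on purpose.
rng = np.random.default_rng(0)
n_cells = 10_000
slope_class = rng.integers(0, 5, n_cells)
flooded = rng.random(n_cells) < (0.3 - 0.05 * slope_class)

def frequency_ratio(factor_class, flooded):
    """FR per class = (% of flooded cells in class) / (% of all cells in class)."""
    ratios = {}
    for c in np.unique(factor_class):
        in_class = factor_class == c
        pct_flooded = flooded[in_class].sum() / flooded.sum()
        pct_area = in_class.sum() / len(factor_class)
        ratios[int(c)] = pct_flooded / pct_area
    return ratios

fr = frequency_ratio(slope_class, flooded)
# With several factor maps, the susceptibility index of a cell would sum the
# FRs across maps; here there is a single map.
susceptibility = np.array([fr[c] for c in slope_class])
```

By construction, the area-weighted mean of the frequency ratios equals 1, so values above 1 mark classes over-represented in the flooded area.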

  9. Database of well and areal data, South San Francisco Bay and Peninsula area, California

    USGS Publications Warehouse

    Leighton, D.A.; Fio, J.L.; Metzger, L.F.

    1995-01-01

A database was developed to organize and manage data compiled for a regional assessment of geohydrologic and water-quality conditions in the south San Francisco Bay and Peninsula area in California. Available data provided by local, State, and Federal agencies and private consultants were utilized in the assessment. The database consists of geographic information system data layers and related tables and American Standard Code for Information Interchange files. Documentation of the database is necessary to avoid misinterpretation of the data and to make users aware of potential errors and limitations. Most of the data compiled were collected from wells and boreholes (collectively referred to as wells in this report). These point-specific data, including construction, water-level, water-quality, pumping test, and lithologic data, are contained in tables and files that are related to a geographic information system data layer that contains the locations of the wells. There are 1,014 wells in the data layer, and the related tables contain 35,845 water-level measurements (from 293 of the wells) and 9,292 water-quality samples (from 394 of the wells). Calculation of hydraulic heads and gradients from the water levels can be affected adversely by errors in the determination of the altitude of land surface at the well. Cation and anion balance computations performed on 396 of the water-quality samples indicate high cation and anion balance errors for 51 (13 percent) of the samples. Well drillers' reports were interpreted for 762 of the wells, and digital representations of the lithology of the formations are contained in files following the American Standard Code for Information Interchange. The usefulness of drillers' descriptions of the formation lithology is affected by the detail and thoroughness of the descriptions, as well as the knowledge, experience, and vocabulary of the individual who described the drill cuttings.
Additional data layers were created that contain political, geohydrologic, and other geographic data. These layers contain features represented by areas and lines rather than discrete points. The layers consist of data representing the thickness of alluvium, surficial geology, physiographic subareas, watershed boundaries, land use, water-supply districts, wastewater treatment districts, and recharge basins. The layers were created by manually digitizing paper maps, acquiring data already in digital form, or deriving new layers from available layers. The scale of the source data affects how accurately real-world features are represented in the data layer, and, therefore, the scale of the source data must be considered when the data are analyzed and plotted.
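The cation-anion balance check mentioned in this record is a standard quality screen for water-chemistry analyses: concentrations are converted to milliequivalents per liter and the charge imbalance is expressed as a percentage. The sketch below uses illustrative ion values; the equivalent weights are approximate and the thresholds are conventional, not taken from the report.

```python
# Approximate equivalent weights, mg per meq, for common major ions.
EQ_WEIGHT = {
    "Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10,
    "HCO3": 61.02, "SO4": 48.03, "Cl": 35.45,
}

def charge_balance_error(cations_mg_l, anions_mg_l):
    """Percent charge-balance error: (cations - anions) / (cations + anions) * 100."""
    cat = sum(v / EQ_WEIGHT[k] for k, v in cations_mg_l.items())
    an = sum(v / EQ_WEIGHT[k] for k, v in anions_mg_l.items())
    return 100.0 * (cat - an) / (cat + an)

# Hypothetical sample; a small |err| (a few percent) suggests an internally
# consistent analysis, while roughly |err| > 5-10% would flag it as suspect.
err = charge_balance_error(
    {"Ca": 40.0, "Mg": 12.0, "Na": 23.0, "K": 3.9},
    {"HCO3": 122.0, "SO4": 48.0, "Cl": 35.5},
)
```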

  10. Spring Database for the Basin and Range Carbonate-Rock Aquifer System, White Pine County, Nevada, and Adjacent Areas in Nevada and Utah

    USGS Publications Warehouse

    Pavelko, Michael T.

    2007-01-01

    A database containing nearly 3,400 springs was developed for the Basin and Range carbonate-rock aquifer system study area in White Pine County, Nevada, and adjacent areas in Nevada and Utah. The spring database provides a foundation for field verification of springs in the study area. Attributes in the database include location, geographic and general geologic settings, and available discharge and temperature data for each spring.

  11. Photogrammetric and GIS techniques for the development of vegetation databases of mountainous areas: Great Smoky Mountains National Park

    NASA Astrophysics Data System (ADS)

    Welch, Roy; Madden, Marguerite; Jordan, Thomas

Detailed vegetation databases and associated maps of the Great Smoky Mountains National Park, a rugged, forested area of more than 2000 km², were constructed to support resource management activities of the U.S. National Park Service (NPS). The Park has a terrain relief exceeding 1700 m and continuous forest cover over 95% of its area. The requirement to use 1:12,000- and 1:40,000-scale color infrared aerial photographs as the primary data source for mapping overstory and understory vegetation, respectively, necessitated the integration of analog photointerpretation with both digital softcopy photogrammetry and geographic information system (GIS) procedures to overcome problems associated with excessive terrain relief and a lack of ground control. Applications of the vegetation database and associated large-scale maps include assessments of vegetation patterns related to management activities and quantification of forest fire fuels.

  12. Database of groundwater levels and hydrograph descriptions for the Nevada Test Site area, Nye County, Nevada

    USGS Publications Warehouse

    Elliott, Peggy E.; Fenelon, Joseph M.

    2010-01-01

Water levels in the database were quality-assured and analyzed. Multiple conditions were assigned to each water-level measurement to describe the hydrologic conditions at the time of measurement; general quality, temporal variability, regional significance, and hydrologic conditions are attributed to each measurement.

  13. Measuring impact of protected area management interventions: current and future use of the Global Database of Protected Area Management Effectiveness

    PubMed Central

    Coad, Lauren; Leverington, Fiona; Knights, Kathryn; Geldmann, Jonas; Eassom, April; Kapos, Valerie; Kingston, Naomi; de Lima, Marcelo; Zamora, Camilo; Cuardros, Ivon; Nolte, Christoph; Burgess, Neil D.; Hockings, Marc

    2015-01-01

    Protected areas (PAs) are at the forefront of conservation efforts, and yet despite considerable progress towards the global target of having 17% of the world's land area within protected areas by 2020, biodiversity continues to decline. The discrepancy between increasing PA coverage and negative biodiversity trends has resulted in renewed efforts to enhance PA effectiveness. The global conservation community has conducted thousands of assessments of protected area management effectiveness (PAME), and interest in the use of these data to help measure the conservation impact of PA management interventions is high. Here, we summarize the status of PAME assessment, review the published evidence for a link between PAME assessment results and the conservation impacts of PAs, and discuss the limitations and future use of PAME data in measuring the impact of PA management interventions on conservation outcomes. We conclude that PAME data, while designed as a tool for local adaptive management, may also help to provide insights into the impact of PA management interventions from the local-to-global scale. However, the subjective and ordinal characteristics of the data present significant limitations for their application in rigorous scientific impact evaluations, a problem that should be recognized and mitigated where possible. PMID:26460133

  14. Measuring impact of protected area management interventions: current and future use of the Global Database of Protected Area Management Effectiveness.

    PubMed

    Coad, Lauren; Leverington, Fiona; Knights, Kathryn; Geldmann, Jonas; Eassom, April; Kapos, Valerie; Kingston, Naomi; de Lima, Marcelo; Zamora, Camilo; Cuardros, Ivon; Nolte, Christoph; Burgess, Neil D; Hockings, Marc

    2015-11-01

    Protected areas (PAs) are at the forefront of conservation efforts, and yet despite considerable progress towards the global target of having 17% of the world's land area within protected areas by 2020, biodiversity continues to decline. The discrepancy between increasing PA coverage and negative biodiversity trends has resulted in renewed efforts to enhance PA effectiveness. The global conservation community has conducted thousands of assessments of protected area management effectiveness (PAME), and interest in the use of these data to help measure the conservation impact of PA management interventions is high. Here, we summarize the status of PAME assessment, review the published evidence for a link between PAME assessment results and the conservation impacts of PAs, and discuss the limitations and future use of PAME data in measuring the impact of PA management interventions on conservation outcomes. We conclude that PAME data, while designed as a tool for local adaptive management, may also help to provide insights into the impact of PA management interventions from the local-to-global scale. However, the subjective and ordinal characteristics of the data present significant limitations for their application in rigorous scientific impact evaluations, a problem that should be recognized and mitigated where possible. PMID:26460133

  15. The construction and periodicity analysis of natural disaster database of Alxa area based on Chinese local records

    NASA Astrophysics Data System (ADS)

    Yan, Zheng; Mingzhong, Tian; Hengli, Wang

    2010-05-01

Chinese hand-written local records originated in the first century. Generally, these local records cover the geography, history, customs, education, products, people, historical sites, and writings of an area. Thanks to such endeavors, the record of China's natural history has had virtually no "dark ages" over the 5000 years of its civilization. A compilation of all meaningful historical data on natural disasters that took place in Alxa, Inner Mongolia, the second-largest desert area in China, is used here to construct a 500-year, high-resolution database. The database is divided into subsets according to the type of natural disaster, such as sand-dust storms, drought events, and cold waves. By applying trend, correlation, wavelet, and spectral analysis to these data, we can estimate the statistical periodicity of different natural disasters, detect and quantify similarities and patterns among the periodicities of these records, and finally take these results in aggregate to find a strong and coherent cyclicity through the last 500 years, which serves as the driving mechanism of these geological hazards. Based on the periodicity obtained from the above analysis, the paper discusses the possibility of forecasting natural disasters and suitable measures to reduce disaster losses through historical records. Keywords: Chinese local records; Alxa; natural disasters; database; periodicity analysis
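The spectral side of the periodicity analysis described above can be illustrated with a synthetic yearly disaster-count series: a power spectrum of the detrended counts recovers the dominant cycle. The 50-year period and noise level below are invented for the sketch, not values from the study.

```python
import numpy as np

# Synthetic 500-year record with an injected ~50-year cycle plus noise.
years = np.arange(1500, 2000)
rng = np.random.default_rng(1)
counts = 5 + 2 * np.sin(2 * np.pi * years / 50.0) + rng.normal(0, 0.5, years.size)

# Remove the mean, take the power spectrum, and locate the dominant period.
detrended = counts - counts.mean()
spectrum = np.abs(np.fft.rfft(detrended)) ** 2      # power at each frequency
freqs = np.fft.rfftfreq(years.size, d=1.0)          # cycles per year

dominant_period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]  # skip zero frequency
```

In a real analysis, wavelet transforms would additionally show whether such a cycle is stationary or drifts over the 500 years.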

  16. Planting the SEED: Towards a Spatial Economic Ecological Database for a shared understanding of the Dutch Wadden area

    NASA Astrophysics Data System (ADS)

    Daams, Michiel N.; Sijtsma, Frans J.

    2013-09-01

In this paper we address the characteristics of a publicly accessible Spatial Economic Ecological Database (SEED) and its ability to support a shared understanding among planners and experts of the economy and ecology of the Dutch Wadden area. Theoretical building blocks for a Wadden SEED are discussed. Our SEED contains a comprehensive set of stakeholder-validated, spatially explicit data on key economic and ecological indicators. These data extend over various spatial scales. Spatial issues relevant to the specification of a Wadden SEED and its data needs are explored in this paper and illustrated using empirical data for the Dutch Wadden area. The purpose of the SEED is to integrate basic economic and ecological information in order to support the resolution of specific (policy) questions and to facilitate connections between the project level and the strategic level in the spatial planning process. Although its ambitions are modest, we argue that a Wadden SEED can serve as a valuable element in the much-debated science-policy interface: it is a consensus-based common knowledge base on the economy and ecology of an area rife with ecological-economic conflict, in which scientific information is often challenged and disputed.

  17. Geologic Map and Map Database of the Oakland Metropolitan Area, Alameda, Contra Costa, and San Francisco Counties, California

    USGS Publications Warehouse

    Graymer, R.W.

    2000-01-01

Introduction This report contains a new geologic map at 1:50,000 scale, derived from a set of geologic map databases containing information at a resolution associated with 1:24,000 scale, and a new description of geologic map units and structural relationships in the mapped area. The map database represents the integration of previously published reports and new geologic mapping and field checking by the author (see Sources of Data index map on the map sheet or the Arc-Info coverage pi-so and the textfile pi-so.txt). The descriptive text (below) contains new ideas about the Hayward fault and other faults in the East Bay fault system, as well as new ideas about the geologic units and their relations. These new data are released in digital form in conjunction with the Federal Emergency Management Agency Project Impact in Oakland. The goal of Project Impact is to use geologic information in land-use and emergency services planning to reduce the losses occurring during earthquakes, landslides, and other hazardous geologic events. The USGS, California Division of Mines and Geology, FEMA, California Office of Emergency Services, and City of Oakland participated in the cooperative project. The geologic data in this report were provided in pre-release form to other Project Impact scientists, and served as one of the basic data layers for the analysis of hazards related to earthquake shaking, liquefaction, earthquake-induced landsliding, and rainfall-induced landsliding. The publication of these data provides an opportunity for regional planners; local, state, and federal agencies; teachers; consultants; and others outside Project Impact who are interested in geologic data to have the new data long before a traditional paper map could be published.
Because the database contains information about both the bedrock and surficial deposits, it has practical applications in the study of groundwater and engineering of hillside materials, as well as the study of geologic hazards and the academic research on the geologic history and development of the region.

  18. Cortical thinning in cognitively normal elderly cohort of 60 to 89 year old from AIBL database and vulnerable brain areas

    NASA Astrophysics Data System (ADS)

    Lin, Zhongmin S.; Avinash, Gopal; Yan, Litao; McMillan, Kathryn

    2014-03-01

Age-related cortical thinning has been studied by many researchers using quantitative MR images for the past three decades, and vastly differing results have been reported. Although results have shown statistical age-related cortical thickening in elderly cohorts in some brain regions under certain conditions, cortical thinning in elderly cohorts requires further systematic investigation. This paper leverages our previously reported brain surface intensity model (BSIM)1 based technique for measuring cortical thickness to study cortical changes due to normal aging. We measured cortical thickness of cognitively normal persons from 60 to 89 years old using Australian Imaging Biomarkers and Lifestyle Study (AIBL) data. MRI brain scans of 56 healthy people, including 29 women and 27 men, were selected. We measured the average cortical thickness of each individual in eight brain regions: parietal, frontal, temporal, occipital, visual, sensorimotor, medial frontal and medial parietal. Unlike previously published studies, our results showed consistent age-related thinning of the cerebral cortex in all brain regions. The parietal, medial frontal and medial parietal regions showed the fastest thinning rates of 0.14, 0.12 and 0.10 mm/decade respectively, while the visual region showed the slowest thinning rate of 0.05 mm/decade. In the sensorimotor and parietal areas, women showed higher thinning (0.09 and 0.16 mm/decade) than men, while in all other regions men showed higher thinning than women. We also created high-resolution cortical thinning rate maps of the cohort and compared them to typical patterns of PET metabolic reduction in moderate AD and frontotemporal dementia (FTD). The results seem to indicate vulnerable areas of cortical deterioration that may lead to brain dementia. These results validate our cortical thickness measurement technique by demonstrating the consistency of the cortical thinning and predicting the trend of cortical deterioration with the AIBL database.
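A regional thinning rate in mm/decade, as reported above, is conceptually the (negated) slope of a thickness-versus-age regression across the cohort. The sketch below uses a synthetic cohort of 56 subjects with an assumed rate and noise level, purely to show the computation.

```python
import numpy as np

# Synthetic cohort: 56 subjects aged 60-89 with an injected thinning rate.
rng = np.random.default_rng(2)
age = rng.uniform(60, 89, 56)
true_rate = -0.014                                  # mm/year, i.e. -0.14 mm/decade
thickness = 2.5 + true_rate * (age - 60) + rng.normal(0, 0.05, age.size)

# Ordinary least-squares fit of thickness on age; the slope is mm/year.
slope, intercept = np.polyfit(age, thickness, 1)
thinning_per_decade = -slope * 10                   # positive value = thinning
```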

  19. Digital database architecture and delineation methodology for deriving drainage basins, and a comparison of digitally and non-digitally derived numeric drainage areas

    USGS Publications Warehouse

    Dupree, Jean A.; Crowfoot, Richard M.

    2012-01-01

    The drainage basin is a fundamental hydrologic entity used for studies of surface-water resources and during planning of water-related projects. Numeric drainage areas published by the U.S. Geological Survey water science centers in Annual Water Data Reports and on the National Water Information Systems (NWIS) Web site are still primarily derived from hard-copy sources and by manual delineation of polygonal basin areas on paper topographic map sheets. To expedite numeric drainage area determinations, the Colorado Water Science Center developed a digital database structure and a delineation methodology based on the hydrologic unit boundaries in the National Watershed Boundary Dataset. This report describes the digital database architecture and delineation methodology and also presents the results of a comparison of the numeric drainage areas derived using this digital methodology with those derived using traditional, non-digital methods. (Please see report for full Abstract)
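The report's comparison of digitally and non-digitally derived drainage areas amounts to a percent-difference computation per station. The pairs below are hypothetical square-mile values, used only to show the form of such a comparison.

```python
def percent_difference(digital_mi2, published_mi2):
    """Percent difference of a digitally derived area relative to the published value."""
    return 100.0 * (digital_mi2 - published_mi2) / published_mi2

# Hypothetical (digital, published) drainage areas in square miles.
pairs = [(152.3, 150.0), (48.7, 49.1), (1203.5, 1200.0)]
diffs = [percent_difference(d, p) for d, p in pairs]
```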

  20. Esophageal eosinophilia is increased in rural areas with low population density: Results from a national pathology database

    PubMed Central

    Jensen, Elizabeth T.; Hoffman, Kate; Shaheen, Nicholas J.; Genta, Robert M.; Dellon, Evan S.

    2015-01-01

Objectives: Eosinophilic esophagitis (EoE) is an increasingly prevalent chronic disease arising from an allergy/immune-mediated process. Generally, the risk of atopic disease differs in rural and urban environments. The relationship between population density and EoE is unknown. Our aim was to assess the relationship between EoE and population density. Methods: We conducted a cross-sectional, case-control study of patients with esophageal biopsies in a U.S. national pathology database between January 2009 and June 2012 to assess the relationship between population density and EoE. Using Geographic Information Systems (GIS), the population density (individuals/mile²) was determined for each patient zip code. The odds of esophageal eosinophilia and EoE were estimated for each quintile of population density and adjusted for potential confounders. Sensitivity analyses were conducted with varying case definitions and to evaluate the potential for bias from endoscopy volume and patient factors. Results: Of 292,621 unique patients in the source population, 89,754 had normal esophageal biopsies and 14,381 had esophageal eosinophilia with ≥15 eosinophils per high-power field (eos/hpf). The odds of esophageal eosinophilia increased with decreasing population density (p for trend < 0.001). Compared to those in the highest quintile of population density, the odds of esophageal eosinophilia were significantly higher among those in the lowest quintile (aOR 1.27, 95% CI: 1.18-1.36). A similar dose-response trend was observed across case definitions, with the odds of EoE increased in the lowest population density quintile (aOR 1.59, 95% CI: 1.45-1.76). Estimates were robust to sensitivity analyses. Conclusions: Population density is strongly and inversely associated with esophageal eosinophilia and EoE. This association is robust to varying case definitions and adjustment factors.
Environmental exposures more prominent in rural areas may be relevant to the pathogenesis of EoE. PMID:24667575

  1. Earthquake Mechanisms of the Mediterranean Area (EMMA) Database 3.0: First-Motion Focal Mechanisms and Their Ability to Characterize the Tectonic Deformation Style

    NASA Astrophysics Data System (ADS)

    Vannucci, G.; Gasperini, P.

    2006-12-01

We present a new version (3.0) of the database of Earthquake Mechanisms of the Mediterranean Area (EMMA) of "checked" first-motion focal solutions. The database, developed in MS-ACCESS, unifies the different formats and notations of the data available in the literature and attempts to resolve the misprints, inaccuracies and inconsistencies that would otherwise make them almost useless to other users (e.g., it tests the perpendicularity of the nodal planes and/or the P and T axes of all solutions and, when both axes and planes are given, their mutual consistency). An automatic procedure, based on several criteria, chooses the most "representative" (best) solution when more than one is available for the same earthquake. The database allows users to make selections on the earthquake data and to export data files suitable for handling by graphics software and user-written procedures. For the Mediterranean region, the first-motion focal mechanisms available from the literature extend back in time, and to a lower magnitude threshold, the data coverage of the Centroid Moment Tensor (CMT) focal solutions of available catalogs (Harvard University, Istituto Nazionale di Geofisica e Vulcanologia, Eidgenössische Technische Hochschule, Instituto Andaluz de Geofisica, USGS). With respect to the previously available version (2.1), we increase the number of data by 20% (about 7700 focal solutions at present), add geographic information to the display of the focal solution plot, and permit display of both the best solution and the discarded (i.e., non-best) ones. To remove some biases and inconsistencies in the collected original data, we also add to each mechanism the hypocentral parameters and the magnitude taken from the International Seismological Centre (ISC) Catalog.
We verify the ability of EMMA database to characterize the tectonic deformation style, computing the cumulative moment tensor in the Mediterranean area on a regular grid with different seismogenic thickness, using the EMMA and CMT data separately. Then we use the rotational angle that should be applied to a cumulative focal mechanism to make it coincide with another one to verify the main differences between these patterns and the ability of EMMA database to reproduce the CMT Catalogs. To verify the quality of EMMA database we take advantage from some recent analyses that evidenced a relation between the Gutenberg- Richter b-value and the tectonic style of seismic release (in particular extensional mechanisms would correspond to higher b-values than compressive ones). We correlate the tectonic style as deduced from the cumulative moment tensor, previously detected, with the b-value computed by the ISC Catalog. We verify a good correlation between b-value and tectonic style using focal mechanisms taken from EMMA database.
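
The perpendicularity test mentioned in the abstract (between P and T axes, or between nodal planes) amounts to checking that two axes are orthogonal within a tolerance. A minimal sketch, assuming trend/plunge input in degrees and a hypothetical 5° tolerance (EMMA's actual checks, conventions, and thresholds may differ):

```python
import numpy as np

def axes_perpendicular(axis_a, axis_b, tol_deg=5.0):
    """Check that two focal-mechanism axes, given as (trend, plunge) in
    degrees, are mutually perpendicular within tol_deg.
    Illustrative sketch only; not EMMA's actual implementation."""
    def to_unit(trend, plunge):
        t, p = np.radians(trend), np.radians(plunge)
        # North, East, Down components of the unit vector.
        return np.array([np.cos(p) * np.cos(t),
                         np.cos(p) * np.sin(t),
                         np.sin(p)])
    v1, v2 = to_unit(*axis_a), to_unit(*axis_b)
    # Angle between the axes, folded into [0, 90] via |dot|.
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(v1, v2)), 0.0, 1.0)))
    return abs(90.0 - angle) <= tol_deg

# Horizontal axes trending N45E and N135E are perpendicular.
print(axes_perpendicular((45.0, 0.0), (135.0, 0.0)))  # True
```

The same check, applied to nodal-plane normals, covers the plane/axis consistency tests the abstract describes.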

  2. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    SciTech Connect

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W.; Senkpeil, Ryan R.; Tlatov, Andrey G.; Nagovitsyn, Yury A.; Pevtsov, Alexei A.; Chapman, Gary A.; Cookson, Angela M.; Yeates, Anthony R.; Watson, Fraser T.; Balmaceda, Laura A.; DeLuca, Edward E.; Martens, Petrus C. H.

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10²¹ Mx (10²² Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
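
The composite distribution described above can be illustrated as a weighted sum of the two component PDFs. A sketch with placeholder parameters (the mixing weight and the shape, scale, and log-normal parameters below are assumptions for illustration, not the paper's fitted values):

```python
import numpy as np

def weibull_pdf(x, k, lam):
    """Weibull PDF with shape k and scale lam."""
    return (k / lam) * (x / lam) ** (k - 1) * np.exp(-((x / lam) ** k))

def lognormal_pdf(x, mu, sigma):
    """Log-normal PDF with log-mean mu and log-std sigma."""
    return (np.exp(-((np.log(x) - mu) ** 2) / (2 * sigma ** 2))
            / (x * sigma * np.sqrt(2 * np.pi)))

def composite_pdf(x, w, k, lam, mu, sigma):
    """Linear combination of Weibull and log-normal PDFs with weight w.
    The Weibull term dominates at small x, the log-normal at large x,
    mirroring the paper's composite picture (parameters are placeholders)."""
    return w * weibull_pdf(x, k, lam) + (1 - w) * lognormal_pdf(x, mu, sigma)

# Evaluate over several decades of flux (arbitrary units).
x = np.logspace(-2, 2, 5)
print(composite_pdf(x, 0.5, 0.7, 0.5, 1.0, 0.8))
```

Because each component integrates to one, the mixture is itself a properly normalized PDF for any weight in [0, 1].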

  3. Hydrology of lakes in the Minneapolis-Saint Paul Metropolitan Area: a summary of available data stored using a data-base management system

    USGS Publications Warehouse

    McBride, Mark S.

    1976-01-01

    Data were collected and summarized on the hydrology and hydrogeology of 949 lakes, 10 acres or larger, in the Minneapolis-St. Paul metropolitan area, Minnesota. Eight tables totaling over 100 pages present data on location, depth, area, lake level, ecological and game-management classification, inflowing and outflowing streams, soils, bedrock type, water added to or taken from lake, and reported lake-related problems. SYSTEM 2000, a generalized computer data-base management system, was used to organize the data and prepare the tables. SYSTEM 2000 provides powerful capabilities for future retrieval and analysis of the data. The data base is available to potential users so that questions not implicitly anticipated in the preparation of the published tables can be answered readily, and the user can retrieve data in tabular or other forms to meet his particular needs. (Woodard-USGS)

  4. Analytical results, database management and quality assurance for analysis of soil and groundwater samples collected by cone penetrometer from the F and H Area seepage basins

    SciTech Connect

    Boltz, D.R.; Johnson, W.H.; Serkiz, S.M.

    1994-10-01

    The Quantification of Soil Source Terms and Determination of the Geochemistry Controlling Distribution Coefficients (K{sub d} values) of Contaminants at the F- and H-Area Seepage Basins (FHSB) study was designed to generate site-specific contaminant transport factors for contaminated groundwater downgradient of the Basins. The experimental approach employed in this study was to collect soil and its associated porewater from contaminated areas downgradient of the FHSB. Samples were collected over a wide range of geochemical conditions (e.g., pH, conductivity, and contaminant concentration) and were used to describe the partitioning of contaminants between the aqueous phase and soil surfaces at the site. The partitioning behavior may be used to develop site-specific transport factors. This report summarizes the analytical procedures and results for both soil and porewater samples collected as part of this study and the database management of these data.

  5. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  7. 75 FR 61553 - National Transit Database: Amendments to the Urbanized Area Annual Reporting Manual and to the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-05

    ... Formula Grants to provide an annual report to the Secretary of Transportation via the NTD reporting system according to a uniform system of accounts (USOA). Other transit systems in urbanized areas report to the NTD... system and be responsive to the needs of the transit systems reporting to the NTD, FTA annually...

  8. Map and map database of susceptibility to slope failure by sliding and earthflow in the Oakland area, California

    USGS Publications Warehouse

    Pike, R.J.; Graymer, R.W.; Roberts, Sebastian; Kalman, N.B.; Sobieszczyk, Steven

    2001-01-01

    Map data that predict the varying likelihood of landsliding can help public agencies make informed decisions on land use and zoning. This map, prepared in a geographic information system from a statistical model, estimates the relative likelihood of local slopes to fail by two processes common to an area of diverse geology, terrain, and land use centered on metropolitan Oakland. The model combines the following spatial data: (1) 120 bedrock and surficial geologic-map units, (2) ground slope calculated from a 30-m digital elevation model, (3) an inventory of 6,714 old landslide deposits (not distinguished by age or type of movement and excluding debris flows), and (4) the locations of 1,192 post-1970 landslides that damaged the built environment. The resulting index of likelihood, or susceptibility, plotted as a 1:50,000-scale map, is computed as a continuous variable over a large area (872 km2) at a comparatively fine (30 m) resolution. This new model complements landslide inventories by estimating susceptibility between existing landslide deposits, and improves upon prior susceptibility maps by quantifying the degree of susceptibility within those deposits. Susceptibility is defined for each geologic-map unit as the spatial frequency (areal percentage) of terrain occupied by old landslide deposits, adjusted locally by steepness of the topography. Susceptibility of terrain between the old landslide deposits is read directly from a slope histogram for each geologic-map unit, as the percentage (0.00 to 0.90) of 30-m cells in each one-degree slope interval that coincides with the deposits. Susceptibility within landslide deposits (0.00 to 1.33) is this same percentage raised by a multiplier (1.33) derived from the comparative frequency of recent failures within and outside the old deposits. Positive results from two evaluations of the model encourage its extension to the 10-county San Francisco Bay region and elsewhere. 
A similar map could be prepared for any area where the three basic constituents, a geologic map, a landslide inventory, and a slope map, are available in digital form. Added predictive power of the new susceptibility model may reside in attributes that remain to be explored, among them seismic shaking, distance to nearest road, and terrain elevation, aspect, relief, and curvature.
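
The susceptibility recipe above (read the areal percentage from the geologic unit's slope histogram, raise it by 1.33 inside old landslide deposits) can be sketched per 30-m cell. The histogram values below are hypothetical, not the published model's per-unit data:

```python
import numpy as np

def susceptibility(slope_deg, in_old_deposit, unit_histogram, multiplier=1.33):
    """Susceptibility for one 30-m cell, following the map's recipe.
    unit_histogram[i] is the fraction (0.00-0.90) of this geologic unit's
    cells in the i-th one-degree slope bin that coincide with old landslide
    deposits. Illustrative sketch; the published model uses measured
    histograms for each of its 120 map units."""
    bin_index = min(int(slope_deg), len(unit_histogram) - 1)
    s = unit_histogram[bin_index]
    # Inside old deposits the value is raised by the empirical multiplier.
    return s * multiplier if in_old_deposit else s

# Hypothetical histogram for one unit: susceptibility rises with slope.
hist = np.linspace(0.0, 0.9, 46)  # one-degree bins, 0 deg .. 45 deg

print(susceptibility(30.0, False, hist))  # read directly from the 30-deg bin
print(susceptibility(30.0, True, hist))   # same bin, raised by 1.33
```

This reproduces the 0.00-0.90 range between deposits and the 0.00-1.33 range within them that the abstract describes.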

  9. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Gaylord, A. G.; Tweedie, C. E.

    2013-12-01

    In 2013, the Barrow Area Information Database (BAID, www.baid.utep.edu) project resumed field operations in Barrow, AK. The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken in the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience is diverse and includes research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 11,000 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, and save or print maps and query results. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL), where non-proprietary BAID data can be freely downloaded. Highlights for the 2013 season include the addition of more than 2,000 research sites, providing differential global positioning system (dGPS) support to visiting scientists, surveying over 80 miles of coastline to document rates of erosion, training of local GIS personnel, deployment of a wireless sensor network, and substantial upgrades to the BAID website and web mapping applications.

  10. A spatial database of bedding attitudes to accompany Geologic map of the greater Denver area, Front Range Urban Corridor, Colorado

    USGS Publications Warehouse

    Trimble, Donald E.; Machette, Michael N.; Brandt, Theodore R.; Moore, David W.; Murray, Kyle E.

    2003-01-01

    This digital map shows bedding attitude symbols displayed over the geographic extent of surficial deposits and rock stratigraphic units (formations) as compiled by Trimble and Machette in 1973-1977 and published in 1979 (U.S. Geological Survey Map I-856-H) under the Front Range Urban Corridor Geology Program. Trimble and Machette compiled their geologic map from published geologic maps and unpublished geologic mapping having varied map unit schemes. A convenient feature of the compiled map is its uniform classification of geologic units, which mostly matches those of companion maps to the north (USGS I-855-G) and to the south (USGS I-857-F). Published as a color paper map, the Trimble and Machette map was intended for land-use planning in the Front Range Urban Corridor. The map was recently (1997-1999) digitized under the USGS Front Range Infrastructure Resources Project (see cross-reference). In general, the mountainous areas in the west part of the map exhibit various igneous and metamorphic bedrock units of Precambrian age, major faults, and fault brecciation zones at the east margin (5-20 km wide) of the Front Range. The eastern and central parts of the map (Colorado Piedmont) depict a mantle of unconsolidated deposits of Quaternary age and interspersed outcroppings of Cretaceous or Tertiary-Cretaceous sedimentary bedrock. The Quaternary mantle is composed of eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and a few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone, shale, and limestone bedrock formations form hogbacks and intervening valleys.

  11. A spatial database of bedding attitudes to accompany Geologic Map of Boulder-Fort Collins-Greeley Area, Colorado

    USGS Publications Warehouse

    Colton, Roger B.; Brandt, Theodore R.; Moore, David W.; Murray, Kyle E.

    2003-01-01

    This digital map shows bedding attitude data displayed over the geographic extent of rock stratigraphic units (formations) as compiled by Colton in 1976 (U.S. Geological Survey Map I-855-G) under the Front Range Urban Corridor Geology Program. Colton used his own mapping and published geologic maps having varied map unit schemes to compile one map with a uniform classification of geologic units. The resulting published color paper map was intended for land-use planning in the Front Range Urban Corridor. In 1997-1999, under the USGS Front Range Infrastructure Resources Project, Colton's map was digitized to provide data at 1:100,000 scale to address urban growth issues (see cross-reference). In general, the west part of the map shows a variety of Precambrian igneous and metamorphic rocks, with major faults and brecciated zones along an eastern strip (5-20 km wide) of the Front Range. The eastern and central part of the map (Colorado Piedmont) depicts a mantle of Quaternary unconsolidated deposits and interspersed Cretaceous or Tertiary-Cretaceous sedimentary rock outcrops. The Quaternary mantle is composed of eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and a few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone and shale formations (and sparse limestone) form hogbacks, intervening valleys, and range-front folds, anticlines, and fault blocks. Localized dikes and sills of Tertiary rhyodacite and basalt intrude rocks near the range front, mostly in the Boulder area.

  12. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska.

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Kofoed, K. B.; Copenhaver, W.; Laney, C. M.; Gaylord, A. G.; Collins, J. A.; Tweedie, C. E.

    2014-12-01

    The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic, and the Barrow Area Information Database (BAID, www.barrowmapped.org) tracks and facilitates a gamut of research, management, and educational activities in the area. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken in the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience is diverse and includes research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 12,000 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, save or print maps and query results, and filter or view information by space, time, and/or other tags. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL), where non-proprietary BAID data can be freely downloaded. Recent advances include the addition of more than 2,000 new research sites; provision of differential global positioning system (dGPS) and Unmanned Aerial Vehicle (UAV) support to visiting scientists; surveying over 80 miles of coastline to document rates of erosion; training of local GIS personnel to better use science in local decision making; deployment of, and near-real-time connectivity to, a wireless micrometeorological sensor network; links to Barrow area datasets housed at national data archives; and substantial upgrades to the BAID website and web mapping applications.

  13. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  14. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the

  15. Electronic Databases.

    ERIC Educational Resources Information Center

    Williams, Martha E.

    1985-01-01

    Presents examples of bibliographic, full-text, and numeric databases. Also discusses how to access these databases online, aids to online retrieval, and several issues and trends (including copyright and downloading, transborder data flow, use of optical disc/videodisc technology, and changing roles in database generation and processing). (JN)

  17. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Gaylord, A.; Brown, J.; Tweedie, C. E.

    2012-12-01

    The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic. The Barrow Area Information Database (BAID, www.baidims.org) is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken in the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience is diverse and includes research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 9,600 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, and save or print maps and query results. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL), where non-proprietary BAID data can be freely downloaded. BAID has been used to: optimize research site choice; reduce duplication of science effort; discover complementary and potentially detrimental research activities in an area of scientific interest; re-establish historical research sites for resampling efforts assessing change in ecosystem structure and function over time; exchange knowledge across disciplines and generations; facilitate communication between western science and traditional ecological knowledge; provide local residents access to science data that facilitates adaptation to arctic change; and educate the next generation of environmental and computer scientists. This poster describes key activities that will be undertaken over the next three years to provide BAID users with novel software tools to interact with a current and diverse selection of information and data about the Barrow area. Key activities include: 1. 
Collecting data on research activities, generating geospatial data, and providing mapping support. 2. Maintaining, updating, and innovating the existing suite of BAID geobrowsers. 3. Maintaining and updating aging server hardware supporting BAID. 4. Adding interoperability with other CI using workflows, controlled vocabularies, and web services. 5. Linking BAID to data archives at the National Snow and Ice Data Center (NSIDC). 6. Developing a wireless sensor network that provides web-based interaction with near-real-time climate and other data. 7. Training the next generation of environmental and computer scientists and conducting outreach.

  18. Analysis of expressed sequence tags from Actinidia: applications of a cross species EST database for gene discovery in the areas of flavor, health, color and ripening

    PubMed Central

    Crowhurst, Ross N; Gleave, Andrew P; MacRae, Elspeth A; Ampomah-Dwamena, Charles; Atkinson, Ross G; Beuning, Lesley L; Bulley, Sean M; Chagne, David; Marsh, Ken B; Matich, Adam J; Montefiori, Mirco; Newcomb, Richard D; Schaffer, Robert J; Usadel, Björn; Allan, Andrew C; Boldingh, Helen L; Bowen, Judith H; Davy, Marcus W; Eckloff, Rheinhart; Ferguson, A Ross; Fraser, Lena G; Gera, Emma; Hellens, Roger P; Janssen, Bart J; Klages, Karin; Lo, Kim R; MacDiarmid, Robin M; Nain, Bhawana; McNeilage, Mark A; Rassam, Maysoon; Richardson, Annette C; Rikkerink, Erik HA; Ross, Gavin S; Schröder, Roswitha; Snowden, Kimberley C; Souleyre, Edwige JF; Templeton, Matt D; Walton, Eric F; Wang, Daisy; Wang, Mindy Y; Wang, Yanming Y; Wood, Marion; Wu, Rongmei; Yauk, Yar-Khing; Laing, William A

    2008-01-01

    Background Kiwifruit (Actinidia spp.) are a relatively new, but economically important crop grown in many different parts of the world. Commercial success is driven by the development of new cultivars with novel consumer traits including flavor, appearance, healthful components and convenience. To increase our understanding of the genetic diversity and gene-based control of these key traits in Actinidia, we have produced a collection of 132,577 expressed sequence tags (ESTs). Results The ESTs were derived mainly from four Actinidia species (A. chinensis, A. deliciosa, A. arguta and A. eriantha) and fell into 41,858 non redundant clusters (18,070 tentative consensus sequences and 23,788 EST singletons). Analysis of flavor and fragrance-related gene families (acyltransferases and carboxylesterases) and pathways (terpenoid biosynthesis) is presented in comparison with a chemical analysis of the compounds present in Actinidia including esters, acids, alcohols and terpenes. ESTs are identified for most genes in color pathways controlling chlorophyll degradation and carotenoid biosynthesis. In the health area, data are presented on the ESTs involved in ascorbic acid and quinic acid biosynthesis showing not only that genes for many of the steps in these pathways are represented in the database, but that genes encoding some critical steps are absent. In the convenience area, genes related to different stages of fruit softening are identified. Conclusion This large EST resource will allow researchers to undertake the tremendous challenge of understanding the molecular basis of genetic diversity in the Actinidia genus as well as provide an EST resource for comparative fruit genomics. The various bioinformatics analyses we have undertaken demonstrates the extent of coverage of ESTs for genes encoding different biochemical pathways in Actinidia. PMID:18655731

  19. Database Manager

    ERIC Educational Resources Information Center

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  20. Maize databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  1. Genome databases

    SciTech Connect

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  2. BIOMARKERS DATABASE

    EPA Science Inventory

    This database was developed by assembling and evaluating the literature relevant to human biomarkers. It catalogues and evaluates the usefulness of biomarkers of exposure, susceptibility and effect which may be relevant for a longitudinal cohort study. In addition to describing ...

  3. Database filters

    SciTech Connect

    Pramanik, S.

    1982-01-01

    Several hardware database searchers for a large number of patterns or keys are presented. These searchers can be implemented with random access memory and are suitable for VLSI implementation. Application of these searchers as database filters is described; a filter detects all the matched records in the database, as well as a few others. The percentage of unmatched records can be reduced to any arbitrary minimum value by using several filters together, or by passing the output records repeatedly through the same filters. The performance of the filters using the iterative approach depends heavily on the regrouping algorithms for the patterns/keys. Several such algorithms are presented and their performances compared. A single pass is required if they are pipelined. Hardware organisations for the different pipelined approaches are also studied. Experiments are performed for all the hardware organisations mentioned above on an employee-name database. 25 references.
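
The filtering idea translates to software as well: each filter passes every true match plus some false positives, and chaining independent filters (or repeated passes) shrinks the false-positive set toward zero. A minimal sketch using hash-based bit arrays, as an analogy to the paper's RAM-based hardware searchers (not their actual design):

```python
def make_filter(keys, num_bits, seed):
    """Build a one-hash bit-array filter over the given keys.
    Illustrative software analogue of a hardware database filter:
    it never misses a true match but may pass false positives."""
    bits = [False] * num_bits
    for key in keys:
        bits[hash((seed, key)) % num_bits] = True

    def maybe_matches(record):
        return bits[hash((seed, record)) % num_bits]
    return maybe_matches

keys = {"alice", "bob"}
records = ["alice", "bob", "carol", "dave", "erin", "frank"]

# One filter: all true matches survive, possibly with extras.
f1 = make_filter(keys, 8, seed=1)
candidates = [r for r in records if f1(r)]

# A second, independently seeded filter prunes most false positives,
# mirroring the paper's use of several filters together.
f2 = make_filter(keys, 8, seed=2)
candidates = [r for r in candidates if f2(r)]
print(candidates)  # always contains "alice" and "bob"
```

Each added pass multiplies down the false-positive rate while never discarding a genuine match, which is exactly the property the abstract attributes to the hardware filters.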

  4. Solubility Database

    National Institute of Standards and Technology Data Gateway

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes of the International Union of Pure and Applied Chemistry (IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters, and nitrogen compounds. There are over 67,500 solubility measurements and over 1,800 references.

  5. GIS for the Gulf: A reference database for hurricane-affected areas: Chapter 4C in Science and the storms-the USGS response to the hurricanes of 2005

    USGS Publications Warehouse

    Greenlee, Dave

    2007-01-01

    A week after Hurricane Katrina made landfall in Louisiana, a collaboration among multiple organizations began building a database called the Geographic Information System for the Gulf, shortened to "GIS for the Gulf," to support the geospatial data needs of people in the hurricane-affected area. Data were gathered from diverse sources and entered into a consistent and standardized data model in a manner that is Web accessible.

  6. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases

  7. Physical database support for scientific and statistical database management

    SciTech Connect

    Olken, F.

    1986-05-01

    Various physical database techniques that can be used to implement scientific and statistical database management systems are surveyed. Techniques for storing the data, and algorithms for query processing are considered. File structures, access methods, compression methods, buffering strategies, and algorithms for aggregation, transposition, and sampling are discussed. Areas for future research are mentioned. 75 refs. (DWL)
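
One of the techniques the survey covers, transposition (storing data column-wise so statistical aggregates scan one contiguous array rather than every record), can be illustrated in a few lines. A toy sketch, not drawn from the survey itself:

```python
# Toy illustration of transposition for statistical queries: row-oriented
# records are flattened into per-attribute columns, so an aggregate touches
# a single array. (Illustrative only; the surveyed systems are far more
# elaborate and also cover compression, buffering, and sampling.)

rows = [
    {"station": "A", "temp": 11.2, "rain": 0.0},
    {"station": "B", "temp": 13.5, "rain": 2.1},
    {"station": "C", "temp": 12.4, "rain": 0.7},
]

# Transpose the row store into a column store.
columns = {key: [row[key] for row in rows] for key in rows[0]}

# Aggregation now scans one column instead of all records.
mean_temp = sum(columns["temp"]) / len(columns["temp"])
print(round(mean_temp, 2))  # 12.37
```

The same layout also makes per-column compression and sampling, two other techniques the survey discusses, straightforward to apply.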

  8. Alternative Databases for Anthropology Searching.

    ERIC Educational Resources Information Center

    Brody, Fern; Lambert, Maureen

    1984-01-01

    Examines online search results of sample questions in several databases covering linguistics, cultural anthropology, and physical anthropology in order to determine if and where any overlap in results might occur, and which files have greatest number of relevant hits. Search results by database are given for each subject area. (EJS)

  9. Biological Databases for Behavioral Neurobiology

    PubMed Central

    Baker, Erich J.

    2014-01-01

    Databases are, at their core, abstractions of data and their intentionally derived relationships. They serve as a central organizing metaphor and repository, supporting or augmenting nearly all bioinformatics. Behavioral domains provide a unique stage for contemporary databases, as research in this area spans diverse data types, locations, and data relationships. This chapter provides foundational information on the diversity and prevalence of databases and on how data structures support the various needs of behavioral neuroscience analysis and interpretation. The focus is on the classes of databases, data curation, and advanced applications in bioinformatics, using examples largely drawn from research efforts in behavioral neuroscience. PMID:23195119

  10. Medical database security evaluation.

    PubMed

    Pangalos, G J

    1993-01-01

    Users of medical information systems need confidence in the security of the system they are using. They also need a method to evaluate and compare its security capabilities. Every system has its own requirements for maintaining confidentiality, integrity and availability. In order to meet these requirements a number of security functions must be specified covering areas such as access control, auditing, error recovery, etc. Appropriate confidence in these functions is also required. The 'trust' in trusted computer systems rests on their ability to prove that their secure mechanisms work as advertised and cannot be disabled or diverted. The general framework and requirements for medical database security and a number of parameters of the evaluation problem are presented and discussed. The problem of database security evaluation is then discussed, and a number of specific proposals are presented, based on a number of existing medical database security systems. PMID:8072337

  11. Database tomography for commercial application

    NASA Technical Reports Server (NTRS)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, or papers (any text information that can be stored on a computer). Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities, it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships, and associations. The database tomography process would also be a powerful component in competitive intelligence, national security intelligence, and patent analysis. User interests and involvement cannot be overemphasized.
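The word-frequency and word-proximity analysis the record describes can be sketched in a few lines of Python. This is a toy illustration of the general technique, not the actual Database Tomography code:

```python
from collections import Counter

def frequency_and_proximity(text, window=3):
    """Count word frequencies and co-occurrences within a sliding window."""
    words = text.lower().split()
    freq = Counter(words)
    pairs = Counter()
    for i, w in enumerate(words):
        # Pair each word with its neighbors up to `window - 1` positions ahead.
        for j in range(i + 1, min(i + window, len(words))):
            pairs[tuple(sorted((w, words[j])))] += 1
    return freq, pairs

freq, pairs = frequency_and_proximity(
    "the database stores data and the database indexes data")
```

High-frequency words and strongly co-occurring pairs then serve as candidate themes and theme relationships, the building blocks the abstract mentions.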

  12. Stackfile Database

    NASA Technical Reports Server (NTRS)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with improved flexibility and documentation. It offers flexibility in the type of data that can be stored, and retrieval is efficient across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as TOPEX and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.
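Retrieval across either the time domain or the spatial domain, as described above, could look like the following toy Python store. The class and field names are illustrative assumptions, not the actual Stackfile API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Measurement:
    time: datetime
    lat: float
    lon: float
    value: float

class MeasurementStore:
    """Toy store supporting retrieval across the time or spatial domain."""
    def __init__(self):
        self._rows = []

    def add(self, m: Measurement):
        self._rows.append(m)

    def by_time(self, start: datetime, end: datetime):
        return [m for m in self._rows if start <= m.time <= end]

    def by_region(self, lat_min, lat_max, lon_min, lon_max):
        return [m for m in self._rows
                if lat_min <= m.lat <= lat_max and lon_min <= m.lon <= lon_max]

store = MeasurementStore()
store.add(Measurement(datetime(2002, 1, 15), 10.0, -140.0, 0.31))
store.add(Measurement(datetime(2002, 2, 15), 45.0, 12.0, 0.28))
in_jan = store.by_time(datetime(2002, 1, 1), datetime(2002, 1, 31))
```

A production system would back both query paths with indexes rather than linear scans, but the two-domain interface is the point here.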

  13. The CEBAF Element Database

    SciTech Connect

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists took a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access not only to present, but also to future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on the fly with no changes to the table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from the original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
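The introspective, no-schema-change design described above resembles an entity-attribute-value layout, where new properties are just new rows. A minimal sketch in Python with SQLite follows; the element and property names are hypothetical, and the actual CED runs on Oracle:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE element  (id INTEGER PRIMARY KEY, name TEXT, type TEXT);
CREATE TABLE property (element_id INTEGER REFERENCES element(id),
                       name TEXT, value TEXT);
""")

conn.execute("INSERT INTO element VALUES (1, 'QUAD_A01', 'Quadrupole')")
# Defining a new property requires no ALTER TABLE -- only new rows.
conn.execute("INSERT INTO property VALUES (1, 'length_m', '0.3')")
conn.execute("INSERT INTO property VALUES (1, 'max_current_A', '10')")

props = dict(conn.execute(
    "SELECT name, value FROM property WHERE element_id = 1"))
```

The trade-off of such a schema is that integrity constraints and typing move from the table definitions into application code or an API layer, which is consistent with the CED exposing all access through its documented API.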

  14. Database Marketplace 2002: The Database Universe.

    ERIC Educational Resources Information Center

    Tenopir, Carol; Baker, Gayle; Robinson, William

    2002-01-01

    Reviews the database industry over the past year, including new companies and services, company closures, popular database formats, popular access methods, and changes in existing products and services. Lists 33 firms and their database services; 33 firms and their database products; and 61 company profiles. (LRW)

  15. Open Geoscience Database

    NASA Astrophysics Data System (ADS)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of geoscience databases. Unfortunately, the only users of most of them are their developers. There are several reasons for this: incompatibility, the specificity of tasks and objects, and so on. The main obstacles to wide use of geoscience databases, however, are their complexity for developers and their complication for users: complex architecture leads to high costs that block public access, while complication prevents users from understanding when and how to use a database. Only databases associated with Google Maps avoid these drawbacks, but they can hardly be called "geoscience" databases. Nevertheless, an open and simple geoscience database is necessary, at least for educational purposes (see our abstract for ESSI20/EOS12). We developed such a database and a web interface for working with it, now accessible at maps.sch192.ru. In this database, a result is the value of a parameter (of any kind) at a station with a certain position, associated with metadata: the date the result was obtained, the type of station (lake, soil, etc.), and the contributor who submitted it. Each contributor has a profile, which makes it possible to estimate the reliability of the data. Results can be displayed on a Google Maps satellite image as points at their positions, coloured according to the parameter value; default colour scales are provided, and each registered user can create their own. Results can also be extracted as a *.csv file. For both forms of representation, data can be selected by date, object type, parameter type, area, and contributor. Data are uploaded in *.csv format: name of the station; latitude (dd.dddddd); longitude (ddd.dddddd); station type; parameter type; parameter value; date (yyyy-mm-dd). The contributor is recognized at login. This is the minimal set of features required to connect a parameter value with a position and view the results.
All complicated data processing can be carried out in other programs after extracting the filtered data into a *.csv file, which keeps the database understandable for non-experts. The database uses an open data format (*.csv) and widespread tools: PHP as the programming language, MySQL as the database management system, JavaScript for interaction with Google Maps, and jQuery UI for the user interface. The database is multilingual: association tables link translations with elements of the database. In total, development required about 150 hours. Some problems have already been solved. The "same station" problem (sometimes the distance between stations is smaller than the positional error) is handled when a new station is added: the application automatically finds existing stations near that place. The problem of object and parameter types (e.g., treating "EC" and "electrical conductivity" as the same parameter) has also been solved, using associative tables. Several problems remain. The main one is the reliability of the data; properly estimating reliability would require an expert system, but building such a system would take more resources than the database itself. The second is stream selection: how to select stations that are connected with one another (for example, belonging to one water stream) and indicate their sequence. The interface is currently available in English and Russian, but it can easily be translated into other languages: contact us and we will send you the list of terms and phrases for translation. The main advantage of the database is that it is totally open: anyone can view and extract the data and use them for non-commercial purposes free of charge. Registered users can contribute to the database without payment.
We hope that it will be widely used, first of all for educational purposes, but professional scientists could also use it.
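The upload format quoted in the abstract (station name, coordinates, station type, parameter type, value, ISO date) can be handled by a short parser. This sketch assumes a semicolon delimiter and the stated field order; it is an illustration, not the project's actual code:

```python
import csv
import io
from datetime import date

FIELDS = ("station", "lat", "lon", "station_type", "parameter", "value", "date")

def parse_upload(text):
    """Parse upload rows, converting coordinates, values, and dates."""
    rows = []
    for rec in csv.reader(io.StringIO(text), delimiter=";"):
        row = dict(zip(FIELDS, rec))
        row["lat"] = float(row["lat"])      # dd.dddddd
        row["lon"] = float(row["lon"])      # ddd.dddddd
        row["value"] = float(row["value"])
        row["date"] = date.fromisoformat(row["date"])  # yyyy-mm-dd
        rows.append(row)
    return rows

# "Lake-01" is a hypothetical station name used only for illustration.
rows = parse_upload("Lake-01;55.751244;037.618423;lake;EC;420;2011-08-15\n")
```

Rejecting rows that fail these conversions would give the database a first, cheap layer of the data-reliability checking the abstract says is still missing.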

  16. Draft secure medical database standard.

    PubMed

    Pangalos, George

    2002-01-01

    Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assuring the quality of care, supporting effective management of health services institutions, monitoring and containing the cost of care, implementing technology into care without violating social values, ensuring the equity and availability of care, and preserving humanity despite the proliferation of technology. In this context, medical database security aims primarily to support high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and the protection of the patient's privacy. These properties, though technical in nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security, and addresses the problems of medical database security policies, secure design methodologies, and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is examined in detail, and current national and international efforts in the area are studied, together with an overview of research work in the area. The document also presents in detail the most complete set of security guidelines known to us for the development and operation of medical database systems. PMID:15458163

  17. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  18. Overlap in Bibliographic Databases.

    ERIC Educational Resources Information Center

    Hood, William W.; Wilson, Concepcion S.

    2003-01-01

    Examines the topic of Fuzzy Set Theory to determine the overlap of coverage in bibliographic databases. Highlights include examples of comparisons of database coverage; frequency distribution of the degree of overlap; records with maximum overlap; records unique to one database; intra-database duplicates; and overlap in the top ten databases.…

  19. Database Access Systems.

    ERIC Educational Resources Information Center

    Dalrymple, Prudence W.; Roderer, Nancy K.

    1994-01-01

    Highlights the changes that have occurred from 1987-93 in database access systems. Topics addressed include types of databases, including CD-ROMs; enduser interface; database selection; database access management, including library instruction and use of primary literature; economic issues; database users; the search process; and improving…

  20. Global Cropland Area Database (GCAD) derived from Remote Sensing in Support of Food Security in the Twenty-first Century: Current Achievements and Future Possibilities

    USGS Publications Warehouse

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Giri, Chandra; Milesi, Cristina; Ozdogan, Mutlu; Congalton, Russ; Tilton, James; Sankey, Temuulen Tsagaan; Massey, Richard; Phalke, Aparna; Yadav, Kamini

    2015-01-01

    The precise estimation of global agricultural cropland extents, areas, geographic locations, crop types, cropping intensities, and their watering methods (irrigated or rainfed; type of irrigation) provides a critical scientific basis for the development of water and food security policies (Thenkabail et al., 2012, 2011, 2010). By the year 2100, the global human population is expected to grow to 10.4 billion under median fertility variants, or higher under constant or higher fertility variants (Table 1), with over three quarters living in developing countries, in regions that already lack the capacity to produce enough food. With current agricultural practices, the increased demand for food and nutrition would require about 2 billion hectares of additional cropland, roughly twice the land area of the United States, and would lead to significant increases in greenhouse gas emissions (Tillman et al., 2011). For example, during 1960-2010 the world population more than doubled, from 3 billion to 7 billion. The nutritional demand of the population also grew swiftly during this period, from an average of about 2000 calories per day per person in 1960 to nearly 3000 calories per day per person in 2010. The food demand of the increased population, along with its increased nutritional demand, was met by the "green revolution", which more than tripled food production even though cropland decreased from about 0.43 ha/capita to 0.26 ha/capita (FAO, 2009). 
The increase in food production during the green revolution was the result of factors such as: (a) expansion in irrigated areas, which increased from 130 Mha in the 1960s to 278.4 Mha in the year 2000 (Siebert et al., 2006), or 399 Mha without considering cropping intensity (Thenkabail et al., 2009a, 2009b, 2009c), or 467 Mha when cropping intensity is considered (Thenkabail et al., 2009a; Thenkabail et al., 2009c); (b) increases in yield and per capita food production (e.g., cereal production from 280 kg/person to 380 kg/person and meat from 22 kg/person to 34 kg/person (McIntyre, 2008)); (c) new cultivar types (e.g., hybrid varieties of wheat and rice, biotechnology); and (d) modern agronomic and crop management practices (e.g., fertilizer, herbicide, and pesticide applications). However, some of the factors that led to the green revolution have stressed the environment to its limits, leading to salinization and decreasing water quality. For example, from 1960 to 2000, phosphorus use doubled from 10 to 20 million tons, pesticide use tripled from near zero to 3 million tons, and nitrogen use as fertilizer increased from just 10 to a staggering 80 million tons (Foley et al., 2007; Khan and Hanjra, 2008). Further, diversion of croplands to biofuels is already taking water away from food production; the economic, carbon sequestration, environmental, and food security impacts of biofuel production are net negative (Lal and Pimentel, 2009), leaving us with a carbon debt (Gibbs et al., 2008; Searchinger et al., 2008). Climate models predict that in most regions of the world the hottest seasons on record will become the norm by the end of the century, an outcome that bodes ill for feeding the world (Kumar and Singh, 2005). Also, the crop yield increases of the green revolution era have now stagnated (Hossain et al., 2005). 
Thereby, further increases in food production through increases in cropland area and/or increased allocations of water to croplands are widely considered unsustainable and/or infeasible. Indeed, cropland areas have even begun to decrease in many parts of the world due to factors such as urbanization, industrialization, and salinization. Furthermore, ecological and environmental imperatives such as biodiversity conservation and atmospheric carbon sequestration have put a cap on the possible expansion of croplands into other lands such as forests and rangelands. Other important factors also limit food security: diversion of croplands to biofuels (Bindraban et al., 2009), limited water resources for irrigation expansion (Turral et al., 2009), limits on agricultural intensification, loss of croplands to urbanization (Khan and Hanjra, 2008), increasing meat consumption and its associated demands on land and water (Vinnari and Tapio, 2009), environmental infeasibility of cropland expansion (Gordon et al., 2009), and a changing climate have all put pressure on our continued ability to sustain global food security in the twenty-first century. So, how does the world continue to meet its food and nutrition needs? Solutions may come from biotechnology and precision farming; however, developments in these fields are not currently moving at rates that will ensure global food security over the next few decades. Further, careful consideration of the possible harmful effects of biotechnology is needed. We should not find ourselves looking back 30-50 years from now at mistakes, as we now look back at the many mistakes made during the green revolution. During the green revolution, the focus was only on getting more yield per unit area. 
Little thought was given to the serious damage done to our natural environments, water resources, and human health by detrimental factors such as the uncontrolled use of herbicides, pesticides, and nutrients, drastic groundwater mining, and the salinization of fertile soils through over-irrigation. Currently there is talk of a "second green revolution" or even an "evergreen revolution", but clear ideas of what these terms actually mean are still debated and evolving. One of the biggest issues not given adequate focus is the use of large quantities of water for food production. Indeed, an overwhelming proportion (60-90%) of all human water use in India goes to producing food (Falkenmark and Rockström, 2006). But such intensive water use for food production is no longer tenable due to increasing pressure from water-use alternatives such as urbanization, industrialization, environmental flows, biofuels, and recreation. This has brought into sharp focus the need to grow more food per drop of water, leading to a "blue revolution".

  1. Secure medical databases: design and operation.

    PubMed

    Pangalos, G J

    1996-10-01

    Medical database security plays an important role in the overall security of medical information systems. The development of appropriate secure database design and operation methodologies is an important problem in the area and a necessary prerequisite for the successful development of such systems. The general framework for medical database security and a number of parameters of the secure medical database design and operation problem are presented and discussed. A secure medical database development methodology is also presented which could help overcome some of the problems currently encountered. PMID:8960922

  2. Development of a system which automatically acquires optimal discrete-valued attributes by dividing and grouping continuous-valued attributes to assist clinical decision making in radiotherapy.

    PubMed

    Kou, Hiroko; Harauchi, Hajime; Numasaki, Hodaka; Kumazaki, Yu; Okura, Yasuhiro; Takemura, Akihiro; Kondou, Takashi; Ishibashi, Masatoshi; Hidaka, Kuniyuki; Umeda, Tokuo; Haneda, Kiyofumi; Inamura, Kiyonari

    2003-01-01

    The purposes of this study were, first, to develop a system that statistically tests the results of radiotherapy and automatically acquires an optimal discrete-valued attribute by dividing and grouping continuous-valued attributes, and second, to find the optimal range of values of attributes such as tumor dose by taking account of the conditions and statistics in ROGAD (Radiation Oncology Greater Area Database), a multi-institutional database in Japan. Our ultimate goal is to assist clinical decision making for every patient. In this research, two algorithms for acquiring a boundary value were developed that avoid detecting false boundaries and accidental errors in the acquired boundaries. The resolution of the detected discrete-valued attributes and the speed of convergence were confirmed to be practical. The optimal range of given tumor dose, with the best reaction and the fewest complications, is expected to be clarified. PMID:14617847
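The core idea of acquiring a discrete boundary from a continuous attribute can be illustrated with a simple exhaustive scan. The sketch below (with hypothetical dose and outcome data) maximizes the outcome-rate difference between the two groups; the actual ROGAD system applies statistical tests and guards against false boundaries, which this toy version does not:

```python
def best_boundary(values, outcomes):
    """Return the split point of a continuous attribute that best
    separates binary outcomes, scanning all candidate boundaries."""
    pairs = sorted(zip(values, outcomes))
    rate = lambda group: sum(o for _, o in group) / len(group)
    best, best_gap = None, -1.0
    for i in range(1, len(pairs)):
        gap = abs(rate(pairs[:i]) - rate(pairs[i:]))
        if gap > best_gap:
            best_gap = gap
            best = (pairs[i - 1][0] + pairs[i][0]) / 2  # midpoint boundary
    return best

doses    = [50, 52, 55, 60, 62, 66]   # hypothetical tumor doses (Gy)
outcomes = [0,  0,  0,  1,  1,  1]    # hypothetical reaction flags
boundary = best_boundary(doses, outcomes)  # splits at 57.5
```

Turning a continuous dose into a two-valued attribute at such a boundary is the "dividing" step; "grouping" then merges adjacent intervals whose outcome statistics do not differ significantly.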

  3. The 2012 Nucleic Acids Research Database Issue and the online Molecular Biology Database Collection.

    PubMed

    Galperin, Michael Y; Fernández-Suárez, Xosé M

    2012-01-01

    The 19th annual Database Issue of Nucleic Acids Research features descriptions of 92 new online databases covering various areas of molecular biology and 100 papers describing recent updates to the databases previously described in NAR and other journals. The highlights of this issue include, among others, a description of neXtProt, a knowledgebase on human proteins; a detailed explanation of the principles behind the NCBI Taxonomy Database; NCBI and EBI papers on the recently launched BioSample databases that store sample information for a variety of database resources; descriptions of the recent developments in the Gene Ontology and UniProt Gene Ontology Annotation projects; updates on Pfam, SMART and InterPro domain databases; update papers on KEGG and TAIR, two universally acclaimed databases that face an uncertain future; and a separate section with 10 wiki-based databases, introduced in an accompanying editorial. The NAR online Molecular Biology Database Collection, available at http://www.oxfordjournals.org/nar/database/a/, has been updated and now lists 1380 databases. Brief machine-readable descriptions of the databases featured in this issue, according to the BioDBcore standards, will be provided at the http://biosharing.org/biodbcore web site. The full content of the Database Issue is freely available online on the Nucleic Acids Research web site (http://nar.oxfordjournals.org/). PMID:22144685

  4. Online Databases in Physics.

    ERIC Educational Resources Information Center

    Sievert, MaryEllen C.; Verbeck, Alison F.

    1984-01-01

    This overview of 47 online sources for physics information available in the United States--including sub-field databases, transdisciplinary databases, and multidisciplinary databases-- notes content, print source, language, time coverage, and databank. Two discipline-specific databases (SPIN and PHYSICS BRIEFS) are also discussed. (EJS)

  5. Databases: Beyond the Basics.

    ERIC Educational Resources Information Center

    Whittaker, Robert

    This paper offers an elementary description of database characteristics and then provides a survey of databases that may be useful to the teacher and researcher in Slavic and East European languages and literatures. The survey focuses on commercial databases that are available, usable, and needed. Individual databases discussed include:…

  6. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…
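The core idea, a privilege that is itself a query evaluated against current database state rather than a static ACL entry, can be sketched as follows. This is a toy Python illustration under assumed table and policy names, not the paper's formalism:

```python
employees = [
    {"name": "alice", "dept": "radiology", "manager": "carol"},
    {"name": "bob",   "dept": "oncology",  "manager": "alice"},
]

# A reflective policy is a predicate over database state, re-evaluated on
# every access: here, users may read the rows of employees they manage.
policies = {
    "employees.read": lambda user, row: row["manager"] == user,
}

def readable_rows(user, table, privilege):
    check = policies[privilege]
    return [row for row in table if check(user, row)]

visible = readable_rows("alice", employees, "employees.read")
```

Because the policy consults live data, updating an employee's manager immediately changes who may read that row, with no ACL edits; the interactions between such policies are exactly what the dissertation's abstract flags as the new difficulty.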

  8. Human Mitochondrial Protein Database

    National Institute of Standards and Technology Data Gateway

    SRD 131 Human Mitochondrial Protein Database (Web, free access)   The Human Mitochondrial Protein Database (HMPDb) provides comprehensive data on mitochondrial and human nuclear encoded proteins involved in mitochondrial biogenesis and function. This database consolidates information from SwissProt, LocusLink, Protein Data Bank (PDB), GenBank, Genome Database (GDB), Online Mendelian Inheritance in Man (OMIM), Human Mitochondrial Genome Database (mtDB), MITOMAP, Neuromuscular Disease Center and Human 2-D PAGE Databases. This database is intended as a tool to aid not only in studying the mitochondrion but also in studying the associated diseases.

  9. Database transfers between several systems. [FRAMIS and DATATRIEVE

    SciTech Connect

    Doll, M.

    1983-01-01

    The ability to transfer databases between systems allows the user to exploit the best features of either system. This paper is addressed to beginning DATATRIEVE users and deals with the issues involved in transferring a database from a central computing area to a PDP-11 at the Los Alamos National Laboratory. FRAMIS was used to clean the original database; DATATRIEVE was used to establish the new database. The new database residing on the PDP-11 was subject to structural change at any time.

  10. An Evaluation of Online Business Databases.

    ERIC Educational Resources Information Center

    van der Heyde, Angela J.

    The purpose of this study was to evaluate the credibility and timeliness of online business databases. The areas of evaluation were the currency, reliability, and extent of financial information in the databases. These were measured by performing an online search for financial information on five U.S. companies. The method of selection for the…

  11. Database Software for the 1990s.

    ERIC Educational Resources Information Center

    Beiser, Karl

    1990-01-01

    Examines trends in the design of database management systems for microcomputers and predicts developments that may occur in the next decade. Possible developments are discussed in the areas of user interfaces, database programing, library systems, the use of MARC data, CD-ROM applications, artificial intelligence features, HyperCard, and…

  12. Electronic Reference Library: Silverplatter's Database Networking Solution.

    ERIC Educational Resources Information Center

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  13. Developing Database Files for Student Use.

    ERIC Educational Resources Information Center

    Warner, Michael

    1988-01-01

    Presents guidelines for creating student database files that supplement classroom teaching. Highlights include determining educational objectives, planning the database with computer specialists and subject area specialists, data entry, and creating student worksheets. Specific examples concerning elements of the periodic table and…

  14. The Status of Statewide Subscription Databases

    ERIC Educational Resources Information Center

    Krueger, Karla S.

    2012-01-01

    This qualitative content analysis presents subscription databases available to school libraries through statewide purchases. The results may help school librarians evaluate grade and subject-area coverage, make comparisons to recommended databases, and note potential suggestions for their states to include in future contracts or for local…

  16. UGTA Photograph Database

    SciTech Connect

    NSTec Environmental Restoration

    2009-04-20

    One of the advantages of the Nevada Test Site (NTS) is that most of the geologic and hydrologic features such as hydrogeologic units (HGUs), hydrostratigraphic units (HSUs), and faults, which are important aspects of flow and transport modeling, are exposed at the surface somewhere in the vicinity of the NTS and thus are available for direct observation. However, due to access restrictions and the remote locations of many of the features, most Underground Test Area (UGTA) participants cannot observe these features directly in the field. Fortunately, National Security Technologies, LLC, geologists and their predecessors have photographed many of these features through the years. During fiscal year 2009, work was done to develop an online photograph database for use by the UGTA community. Photographs were organized, compiled, and imported into Adobe® Photoshop® Elements 7. The photographs were then assigned keyword tags such as alteration type, HGU, HSU, location, rock feature, rock type, and stratigraphic unit. Some fully tagged photographs were then selected and uploaded to the UGTA website. This online photograph database provides easy access for all UGTA participants and can help “ground truth” their analytical and modeling tasks. It also provides new participants a resource to more quickly learn the geology and hydrogeology of the NTS.
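Keyword-tag retrieval of the kind described for the UGTA photographs can be sketched as a small inverted index. The file names and tags below are hypothetical examples, not actual UGTA records:

```python
from collections import defaultdict

class PhotoIndex:
    """Inverted index from keyword tags to photographs."""
    def __init__(self):
        self._by_tag = defaultdict(set)

    def tag(self, photo, *tags):
        for t in tags:
            self._by_tag[t.lower()].add(photo)

    def search(self, *tags):
        """Return the photos carrying all of the given tags."""
        sets = [self._by_tag[t.lower()] for t in tags]
        return set.intersection(*sets) if sets else set()

idx = PhotoIndex()
idx.tag("DSC_0041.jpg", "fault", "Yucca Flat", "tuff")
idx.tag("DSC_0107.jpg", "fault", "Rainier Mesa")
hits = idx.search("fault", "yucca flat")
```

Tag categories such as alteration type, HGU, HSU, location, rock feature, rock type, and stratigraphic unit would simply be entries in the same index, which is essentially what keyword tagging in a photo manager provides.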

  17. Medical database security policies.

    PubMed

    Pangalos, G J

    1993-11-01

    Database security plays an important role in the overall security of medical information systems. Security does not only involve fundamental ethical principles such as privacy and confidentiality, but is also an essential prerequisite for effective medical care. The general framework and the requirements for medical database security are presented. The three prominent proposals for medical database security are discussed in some detail, together with specific proposals for medical database security. A number of parameters for a secure medical database development are presented and discussed, and guidelines are given for the development of secure medical database systems. PMID:8295541

  18. Petrophysical database of Uganda

    NASA Astrophysics Data System (ADS)

    Ruotoistenmäki, Tapio; Birungi, Nelson R.

    2015-06-01

    The petrophysical database of Uganda contains data on ca. 5800 rock samples collected and analyzed during 2009-2012 in international geological and geophysical projects covering the main part of the land area of Uganda. The parameters included are the susceptibilities and densities of all available field samples. Susceptibilities were measured from each sample in three directions. Using these parameters, we also calculated the ratios of susceptibility maxima/minima, reflecting the directional homogeneity of magnetic minerals, and estimated the iron content of paramagnetic samples and the magnetite content of ferrimagnetic samples. Statistical and visual analysis of the petrophysical data of Uganda demonstrated their wide variation, thus emphasizing their importance in analyzing bedrock variations in three dimensions. Using the density-susceptibility diagram, the data can be classified into six main groups: 1. A low-density, low-susceptibility group consisting of sedimentary and altered rocks. 2. Low-susceptibility felsic rocks (e.g. quartzites and metasandstones). 3. Paramagnetic felsic rocks (e.g. granites). 4. Ferrimagnetic, magnetite-bearing felsic rocks (e.g. granites). 5. Paramagnetic mafic rocks (e.g. amphibolites and dolerites). 6. Ferrimagnetic mafic rocks containing magnetite and high-density mafic minerals (mainly dolerites). Moreover, the analysis revealed that the parameter distributions of even a single rock type (e.g. granites) can vary widely, forming separate clusters. This demonstrates that simply calculating density or susceptibility averages for rock types can be highly misleading. For example, the average can lie between two groups where few, if any, samples exist. Therefore, representative densities and susceptibilities must be verified visually from these diagrams.
The areal distribution of the parameters and their calculated derivatives generally correlates well with the regional distribution of lithological and geophysical blocks. However, there are also several areas where, for instance, the low susceptibility of samples correlates poorly with high-amplitude airborne magnetic anomaly data. This points to high remanence, or to anomaly sources buried beneath a less magnetic sedimentary cover. The petrophysical database will be a necessity when modeling the bedrock of Uganda in three dimensions at any scale. The lithological and petrophysical databases, as well as the samples collected, will further serve as a valuable basis and set of tools for future studies of the bedrock of Uganda. They can be used, for example, for bedrock mapping, prospecting for mineral deposits and dimension stone, and for environmental studies. The samples could also serve as the basis for a lithogeochemical database of Uganda. The data and samples are already commercially valuable for the numerous prospecting companies working in Uganda, so it is important that the samples and databases are carefully, safely and permanently archived for future use.
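
    The six-group classification described above can be sketched as a simple decision rule over density and susceptibility. The threshold values below are hypothetical placeholders; the real group boundaries must be read off the published density-susceptibility diagrams.

```python
# Illustrative classifier for the six density-susceptibility groups.
# All threshold values are assumptions, not the published boundaries.
FERRIMAGNETIC_SI = 1e-3   # SI susceptibility above which magnetite dominates (assumed)
PARAMAG_SI = 1e-4         # lower bound of the paramagnetic range (assumed)
MAFIC_DENSITY = 2900.0    # kg/m^3 felsic/mafic boundary (assumed)
LOW_DENSITY = 2600.0      # kg/m^3 upper bound for sedimentary/altered rocks (assumed)

def classify(density, susceptibility):
    """Return the group number (1-6) for a sample, per the scheme above."""
    if density < LOW_DENSITY and susceptibility < PARAMAG_SI:
        return 1  # sedimentary and altered rocks
    if density < MAFIC_DENSITY:
        if susceptibility < PARAMAG_SI:
            return 2  # low-susceptibility felsic (quartzites, metasandstones)
        if susceptibility < FERRIMAGNETIC_SI:
            return 3  # paramagnetic felsic (granites)
        return 4      # ferrimagnetic, magnetite-bearing felsic (granites)
    if susceptibility < FERRIMAGNETIC_SI:
        return 5      # paramagnetic mafic (amphibolites, dolerites)
    return 6          # ferrimagnetic mafic (mainly dolerites)
```

    Classifying each sample this way, rather than averaging per rock type, avoids the problem noted above of averages falling between clusters.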

  19. THE ECOTOX DATABASE

    EPA Science Inventory

    The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high quality models needed to estimate population...

  20. Household Products Database: Pesticides

    MedlinePlus

    Information is extracted from Consumer Product Information Database 2001-2015 by DeLima Associates. All rights reserved.

  1. Physiological Information Database (PID)

    EPA Science Inventory

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  2. Network II Database

    Energy Science and Technology Software Center (ESTSC)

    1994-11-07

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network II Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Rail Administration (FRA) rail database.

  3. ECOTOX DATABASE SYSTEM

    EPA Science Inventory

    The ECOTOXicology database is a source for locating single chemical toxicity data for aquatic life, terrestrial plants and wildlife. ECOTOX integrates three toxicology effects databases: AQUIRE (aquatic life), PHYTOTOX (terrestrial plants), and TERRETOX (terrestrial wildlife). Th...

  4. Aviation Safety Issues Database

    NASA Technical Reports Server (NTRS)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders such as the Commercial Aviation Safety Team (CAST) have already used the database. This broader interest was the genesis for making the database publicly accessible and writing this report.

  5. Scopus database: a review

    PubMed Central

    Burnham, Judy F

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, they complement each other. If a library can only afford one, the choice must be based on institutional needs. PMID:16522216

  6. Plant and Crop Databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Databases have become an integral part of all aspects of biological research, including basic and applied plant biology. The importance of databases continues to increase as the volume of data from direct and indirect genomics approaches expands. What is not always obvious to users of databases is t...

  7. Mission and Assets Database

    NASA Technical Reports Server (NTRS)

    Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang

    2009-01-01

    Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.

  8. The 2011 Nucleic Acids Research Database Issue and the online Molecular Biology Database Collection.

    PubMed

    Galperin, Michael Y; Cochrane, Guy R

    2011-01-01

    The current 18th Database Issue of Nucleic Acids Research features descriptions of 96 new and 83 updated online databases covering various areas of molecular biology. It includes two editorials, one that discusses COMBREX, a new exciting project aimed at figuring out the functions of the 'conserved hypothetical' proteins, and one concerning BioDBcore, a proposed description of the 'minimal information about a biological database'. Papers from the members of the International Nucleotide Sequence Database collaboration (INSDC) describe each of the participating databases, DDBJ, ENA and GenBank, principles of data exchange within the collaboration, and the recently established Sequence Read Archive. A testament to the longevity of databases, this issue includes updates on the RNA modification database, Definition of Secondary Structure of Proteins (DSSP) and Homology-derived Secondary Structure of Proteins (HSSP) databases, which have not been featured here in >12 years. There is also a block of papers describing recent progress in protein structure databases, such as Protein DataBank (PDB), PDB in Europe (PDBe), CATH, SUPERFAMILY and others, as well as databases on protein structure modeling, protein-protein interactions and the organization of inter-protein contact sites. Other highlights include updates of the popular gene expression databases, GEO and ArrayExpress, several cancer gene databases and a detailed description of the UK PubMed Central project. The Nucleic Acids Research online Database Collection, available at: http://www.oxfordjournals.org/nar/database/a/, now lists 1330 carefully selected molecular biology databases. The full content of the Database Issue is freely available online at the Nucleic Acids Research web site (http://nar.oxfordjournals.org/). PMID:21177655

  9. Environmental databases and other computerized information tools

    NASA Technical Reports Server (NTRS)

    Clark-Ingram, Marceia

    1995-01-01

    Increasing environmental legislation has brought about the development of many new environmental databases and software application packages to aid in the quest for environmental compliance. These databases and software packages are useful tools and applicable to a wide range of environmental areas from atmospheric modeling to materials replacement technology. The great abundance of such products and services can be very overwhelming when trying to identify the tools which best meet specific needs. This paper will discuss the types of environmental databases and software packages available. This discussion will also encompass the affected environmental areas of concern, product capabilities, and hardware requirements for product utilization.

  10. NCSL National Measurement Interlaboratory Comparison Database requirements

    SciTech Connect

    WHEELER,JAMES C.; PETTIT,RICHARD B.

    2000-04-20

    With the recent development of an International Comparisons Database which provides worldwide access to measurement comparison data between National Measurement Institutes, there is currently renewed interest in developing a database of comparisons for calibration laboratories within a country. For many years, the National Conference of Standards Laboratories (NCSL), through the Measurement Comparison Programs Committee, has sponsored interlaboratory comparisons in a variety of measurement areas. This paper will discuss the need for such a national database that catalogues and maintains interlaboratory comparison data. The paper will also discuss future requirements in this area.

  11. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are

  13. IDPredictor: predict database links in biomedical database.

    PubMed

    Mehlhorn, Hendrik; Lange, Matthias; Scholz, Uwe; Schreiber, Falk

    2012-01-01

    Knowledge found in biomedical databases, in particular in Web information systems, is a major bioinformatics resource. In general, this biological knowledge is represented worldwide in a network of databases. These data are spread among thousands of databases, which overlap in content but differ substantially with respect to content detail, interface, formats and data structure. To support functional annotation of lab data, such as protein sequences, metabolites or DNA sequences, as well as semi-automated data exploration in information retrieval environments, an integrated view of databases is essential. Search engines have the potential to assist in data retrieval from these structured sources, but fall short of providing comprehensive knowledge beyond the individual interlinked databases. A prerequisite for supporting the concept of an integrated data view is to acquire insights into cross-references among database entities. This is hampered by the fact that only a fraction of all possible cross-references are explicitly tagged in the particular biomedical information systems. In this work, we investigate to what extent an automated construction of an integrated data network is possible. We propose a method that predicts and extracts cross-references from multiple life science databases and possible referenced data targets. We study the retrieval quality of our method and report first, promising results. The method is implemented as the tool IDPredictor, which is published under the DOI 10.5447/IPK/2012/4 and is freely available at http://dx.doi.org/10.5447/IPK/2012/4. PMID:22736059

  14. The CATDAT damaging earthquakes database

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies from, and greatly expand upon existing global databases, and to better understand trends in vulnerability, exposure, and the possible future impacts of such historic earthquakes. In the authors' view, the lack of consistency and the errors in other frequently cited earthquake loss databases were major shortcomings that needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars), compared with the 2011 Tohoku (>300 billion USD at the time of writing), 2008 Sichuan and 1995 Kobe earthquakes, shows the increasing concern for economic loss in urban areas, and the trend should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to enable comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.
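
    The adjustment of historic losses to present-day dollars can be sketched generically: convert the nominal loss to USD at the event-date exchange rate, then scale by a price index. CATDAT uses the authors' own HNDECI index; the function below is a generic sketch, and all numeric values in the example are placeholders, not real data.

```python
# Generic sketch of adjusting a historic nominal loss to present-day USD.
# CATDAT's actual HNDECI methodology differs; index values here are fake.
def adjust_loss(nominal_local, fx_to_usd_at_event, index_event_year, index_now):
    """Convert a loss in historic local currency units to today's USD."""
    usd_at_event = nominal_local * fx_to_usd_at_event
    return usd_at_event * (index_now / index_event_year)

# Hypothetical example: a 5.5e9 local-currency loss, 0.49 USD per unit at
# the event date, price index growing from 17 to 660 since then.
loss_now = adjust_loss(5.5e9, 0.49, 17.0, 660.0)
```

    Keeping the index separate from the exchange rate makes it easy to swap in a different deflator when comparing loss databases.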

  15. Tank Characterization Database (TCD) Data Dictionary: Version 4.0

    SciTech Connect

    1996-04-01

    This document is the data dictionary for the Tank Characterization Database (TCD) system and contains information on the data model and SYBASE{reg_sign} database structure. The first two parts of this document cover the two subject areas of the TCD database: sample analysis and waste inventory. Within each subject area is an alphabetical list of all the database tables contained in that subject area. Within each table definition is a brief description of the table and a list of field names and attributes. The third part, Field Descriptions, lists all field names in the database alphabetically.
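
    The two subject areas can be pictured as relational tables. The sketch below uses SQLite for illustration; the table and field names are hypothetical, since the real SYBASE schema is exactly what the data dictionary itself defines.

```python
import sqlite3

# Illustrative schema for the two TCD subject areas. All table and field
# names are assumptions for the sketch, not the real TCD schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample_analysis (
    sample_id TEXT PRIMARY KEY,
    tank_id   TEXT NOT NULL,
    analyte   TEXT NOT NULL,
    result    REAL,
    units     TEXT
);
CREATE TABLE waste_inventory (
    tank_id    TEXT PRIMARY KEY,
    waste_type TEXT,
    volume_kl  REAL
);
""")
conn.execute(
    "INSERT INTO waste_inventory VALUES ('241-AN-102', 'supernate', 4000.0)")
rows = conn.execute(
    "SELECT tank_id, volume_kl FROM waste_inventory").fetchall()
```

    The sample_analysis table would join to waste_inventory on tank_id, mirroring how the two subject areas relate in the data model.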

  16. 2010 Worldwide Gasification Database

    DOE Data Explorer

    The 2010 Worldwide Gasification Database describes the current world gasification industry and identifies near-term planned capacity additions. The database lists gasification projects and includes information (e.g., plant location, number and type of gasifiers, syngas capacity, feedstock, and products). The database reveals that the worldwide gasification capacity has continued to grow for the past several decades and is now at 70,817 megawatts thermal (MWth) of syngas output at 144 operating plants with a total of 412 gasifiers.

  17. Indexing in temporal databases

    SciTech Connect

    Novikov, B.A.

    1995-03-01

    The concepts of temporal databases and the physical structures that support them are discussed. Most of the known access methods for temporal databases are variations of ordinary one-dimensional access methods and largely ignore the time dimension. An exception is the TSB-tree, which supports queries of different types. A modification of TSB-trees based on a trie hashing scheme, which outperforms TSB-trees in search speed over the current part of a database, is described.
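
    The core idea of a temporal access method can be sketched in a few lines: each key keeps versions indexed by start time, and a query finds the version valid at a given instant. This toy store illustrates only the time dimension, not the TSB-tree or trie hashing structures the abstract discusses.

```python
import bisect

class TemporalStore:
    """Toy versioned key-value store: each key holds (start_time, value)
    versions; get() returns the version valid at the queried instant."""

    def __init__(self):
        self._versions = {}   # key -> list of (start_time, value), sorted

    def put(self, key, start_time, value):
        versions = self._versions.setdefault(key, [])
        versions.append((start_time, value))
        versions.sort(key=lambda pair: pair[0])

    def get(self, key, at_time):
        """Return the value of `key` valid at `at_time`, or None."""
        versions = self._versions.get(key, [])
        starts = [t for t, _ in versions]
        i = bisect.bisect_right(starts, at_time)
        return versions[i - 1][1] if i else None
```

    A real temporal index replaces the per-key sorted list with a disk-friendly structure (such as a TSB-tree) so both current and historical versions can be found in logarithmic time.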

  18. IPSec Database Query Acceleration

    NASA Astrophysics Data System (ADS)

    Ferrante, Alberto; Chandra, Satish; Piuri, Vincenzo

    IPSec is a suite of protocols that adds security to communications at the IP level. Protocols within IPSec make extensive use of two databases, namely the Security Policy Database (SPD) and the Security Association Database (SAD). The ability to query the SPD quickly is fundamental, as this operation needs to be done for each incoming or outgoing IP packet, even if no IPSec processing needs to be applied to it. This may easily result in millions of queries per second in gigabit networks.
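
    The per-packet SPD lookup can be sketched as an ordered list of policies matched on traffic selectors, first match wins. This is a simplified model of the concept, not the full RFC 4301 data model (which also matches ports and protocol, and caches lookups for speed).

```python
import ipaddress

# Toy SPD: ordered policies matched on source/destination selectors.
# The addresses and actions are illustrative.
SPD = [
    {"src": "10.0.0.0/8", "dst": "192.168.1.0/24", "action": "PROTECT"},
    {"src": "0.0.0.0/0",  "dst": "0.0.0.0/0",      "action": "BYPASS"},
]

def spd_lookup(src_ip, dst_ip):
    """Return the action of the first policy matching the packet."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    for policy in SPD:
        if (src in ipaddress.ip_network(policy["src"]) and
                dst in ipaddress.ip_network(policy["dst"])):
            return policy["action"]
    return "DISCARD"   # no matching policy
```

    Because this linear scan runs once per packet, real implementations replace it with hash- or trie-based structures; that is exactly the acceleration problem the paper addresses.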

  19. ITS-90 Thermocouple Database

    National Institute of Standards and Technology Data Gateway

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).
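
    The reference functions are polynomials giving emf as a function of temperature over each range. A minimal sketch of evaluating one follows; the coefficients here are made-up placeholders, since the real ITS-90 coefficients are tabulated in SRD 60 / NIST Monograph 175.

```python
# Sketch of evaluating a thermocouple reference function E(t) = sum(c_i * t^i).
# The coefficient values below are hypothetical, NOT the ITS-90 values.
def emf_mv(t_celsius, coeffs):
    """Evaluate the polynomial via Horner's method; coeffs = [c0, c1, ...]."""
    e = 0.0
    for c in reversed(coeffs):
        e = e * t_celsius + c
    return e

fake_coeffs = [0.0, 4.0e-2, 3.0e-5]   # placeholder c0, c1, c2 (mV, mV/degC, ...)
```

    In practice one selects the coefficient set for the thermocouple type and temperature range, then inverts the polynomial numerically to go from measured emf back to temperature.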

  20. Databases for Microbiologists

    PubMed Central

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  1. Nuclear Science References Database

    NASA Astrophysics Data System (ADS)

    Pritychenko, B.; Běták, E.; Singh, B.; Totans, J.

    2014-06-01

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the website of the National Nuclear Data Center http://www.nndc.bnl.gov/nsr.

  2. Backing up DMF Databases

    NASA Technical Reports Server (NTRS)

    Cardo, Nicholas P.; Woodrow, Thomas (Technical Monitor)

    1994-01-01

    A complete backup of the Cray Data Migration Facility (DMF) databases should include the data migration databases, all media-specific processes' (MSPs') databases, and the journal file. The backup should be accomplished without impacting users or stopping DMF. The High Speed Processors group at the Numerical Aerodynamic Simulation (NAS) Facility at NASA Ames Research Center undertook the task of finding an effective and efficient way to back up all DMF databases. This was accomplished by taking advantage of new features introduced in DMF 2.0 and adding a minor modification to the dmdaemon. This paper discusses the investigation and the changes necessary to implement these enhancements.

  3. Databases for LDEF results

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and recording future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the FileMaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved in a format that is readable on a personal computer as well. Further, the data can be exported to more powerful relational databases. This paper describes the capabilities and use of the LDEF databases and how to obtain copies of the databases for your own research.

  4. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  5. Common hyperspectral image database design

    NASA Astrophysics Data System (ADS)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces the Common Hyperspectral Image Database (CHIDB), designed with a demand-oriented database design method, which brings together ground-based spectra, standardized hyperspectral cubes, and spectral analysis to meet a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining ideas and functions were incorporated into CHIDB to make it more suitable for service in the agricultural, geological and environmental areas. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: data importer/exporter, image/spectrum viewer, data processor, parameter extractor, and on-line analyzer. The original data are all stored in SQL Server 2008 for efficient search, query and update, and advanced spectral image processing techniques are used, such as parallel processing in C#. Finally, an application case in agricultural disease detection is presented.

  6. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report in this article on the preparation for, and later upgrades of, remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience of running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between the online and offline databases of the LHC experiments.

  7. The world bacterial biogeography and biodiversity through databases: a case study of NCBI Nucleotide Database and GBIF Database.

    PubMed

    Selama, Okba; James, Phillip; Nateche, Farida; Wellington, Elizabeth M H; Hacène, Hocine

    2013-01-01

    Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area makes to each database. The basis for data analysis in this study was the metadata provided by both databases, mainly the taxonomy and the geographic origin of isolation of each microorganism (record). These were obtained directly from GBIF through the online interface, while E-utilities and Python were used, in combination with programmatic web service access, to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global-scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases. PMID:24228241
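
    The NCBI retrieval step mentioned above (E-utilities with Python) can be sketched as: build an esearch query and read the record count from the XML reply. The query term below is illustrative, and the snippet parses a canned response rather than calling the live NCBI service.

```python
import urllib.parse
import xml.etree.ElementTree as ET

# Sketch of an E-utilities esearch call: the URL asks how many nucleotide
# records match a term; the <Count> field of the XML reply is the answer.
# The search term is an illustrative example, not the study's actual query.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(db, term):
    return BASE + "?" + urllib.parse.urlencode({"db": db, "term": term})

url = esearch_url("nucleotide", "Algeria[Country] AND bacteria[filter]")

# A canned reply stands in for the live service here.
canned_reply = "<eSearchResult><Count>1234</Count></eSearchResult>"
count = int(ET.fromstring(canned_reply).findtext("Count"))
```

    Iterating such counts over a list of countries or taxa is one simple way to build the kind of per-region contribution figures the study reports.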

  9. Biological Macromolecule Crystallization Database

    National Institute of Standards and Technology Data Gateway

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  10. HIV Structural Database

    National Institute of Standards and Technology Data Gateway

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  11. First Look: TRADEMARKSCAN Database.

    ERIC Educational Resources Information Center

    Fernald, Anne Conway; Davidson, Alan B.

    1984-01-01

    Describes database produced by Thomson and Thomson and available on Dialog which contains over 700,000 records representing all active federal trademark registrations and applications for registrations filed in United States Patent and Trademark Office. A typical record, special features, database applications, learning to use TRADEMARKSCAN, and

  12. Dictionary as Database.

    ERIC Educational Resources Information Center

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  13. Assignment to database industry

    NASA Astrophysics Data System (ADS)

    Abe, Kohichiroh

    Various kinds of databases are considered an essential part of future large-scale systems. Information provision by databases alone is also expected to grow as the market matures. This paper discusses how these circumstances came about and how they will develop from now on.

  14. BioImaging Database

    SciTech Connect

    David Nix, Lisa Simirenko

    2006-10-25

    The BioImaging Database (BID) is a relational database developed to store the data and meta-data for 3D gene expression in early Drosophila embryo development at the cellular level. The schema was written to be used with the MySQL DBMS but, with minor modifications, can be used on any SQL-compliant relational DBMS.

  15. Build Your Own Database.

    ERIC Educational Resources Information Center

    Jacso, Peter; Lancaster, F. W.

    This book is intended to help librarians and others to produce databases of better value and quality, especially if they have had little previous experience in database construction. Drawing upon almost 40 years of experience in the field of information retrieval, this book emphasizes basic principles and approaches rather than in-depth and…

  16. The intelligent database machine

    NASA Technical Reports Server (NTRS)

    Yancey, K. E.

    1985-01-01

    The IDM 500 database machine was compared with the Oracle database to determine whether the IDM 500 would better serve the needs of the MSFC database management system than Oracle. The two were compared and the performance of the IDM was studied. The implementations that work best on each database are indicated; the choice is left to the database administrator.

  17. Database Reviews: Legal Information.

    ERIC Educational Resources Information Center

    Seiser, Virginia

    Detailed reviews of two legal information databases--"Laborlaw I" and "Legal Resource Index"--are presented in this paper. Each database review begins with a bibliographic entry listing the title; producer; vendor; cost per hour contact time; offline print cost per citation; time period covered; frequency of updates; and size of file. A detailed…

  18. Atomic Spectra Database (ASD)

    National Institute of Standards and Technology Data Gateway

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  19. Ionic Liquids Database- (ILThermo)

    National Institute of Standards and Technology Data Gateway

    SRD 147 Ionic Liquids Database- (ILThermo) (Web, free access)   IUPAC Ionic Liquids Database, ILThermo, is a free web research tool that allows users worldwide to access an up-to-date data collection from publications on experimental investigations of the thermodynamic and transport properties of ionic liquids, as well as binary and ternary mixtures containing ionic liquids.

  20. Structural Ceramics Database

    National Institute of Standards and Technology Data Gateway

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  1. Knowledge Discovery in Databases.

    ERIC Educational Resources Information Center

    Norton, M. Jay

    1999-01-01

    Knowledge discovery in databases (KDD) revolves around the investigation and creation of knowledge, processes, algorithms, and mechanisms for retrieving knowledge from data collections. The article is an introductory overview of KDD. The rationale and environment of its development and applications are discussed. Issues related to database design…

  3. Database Searching by Managers.

    ERIC Educational Resources Information Center

    Arnold, Stephen E.

    Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…

  4. A Quality System Database

    NASA Technical Reports Server (NTRS)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  5. National Vulnerability Database (NVD)

    National Institute of Standards and Technology Data Gateway

    National Vulnerability Database (NVD) (Web, free access)   NVD is a comprehensive cyber security vulnerability database that integrates all publicly available U.S. Government vulnerability resources and provides references to industry resources. It is based on and synchronized with the CVE vulnerability naming standard.

  6. Morchella MLST database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Welcome to the Morchella MLST database. This dedicated database was set up at the CBS-KNAW Biodiversity Center by Vincent Robert in February 2012, using BioloMICS software (Robert et al., 2011), to facilitate DNA sequence-based identifications of Morchella species via the Internet. The current datab...

  7. The NMT-5 criticality database

    SciTech Connect

    Cort, B.; Perkins, B.; Cort, G.

    1995-05-01

    The NMT-5 Criticality Database maintains criticality-related data and documentation to ensure the safety of workers handling special nuclear materials at the Plutonium Facility (TA-55) at Los Alamos National Laboratory. The database contains pertinent criticality safety limit information for more than 150 separate locations at which special nuclear materials are handled. Written in 4th Dimension for the Macintosh, it facilitates the production of signs for posting at these areas, tracks the history of postings and related authorizing documentation, and generates in Microsoft Word a current, comprehensive representation of all signs and supporting documentation, such as standard operating procedures and signature approvals. It facilitates the auditing process and is crucial to full and effective compliance with Department of Energy regulations. It has been recommended for installation throughout the Nuclear Materials Technology Division at Los Alamos.

  8. Protein sequence databases.

    PubMed

    Apweiler, Rolf; Bairoch, Amos; Wu, Cathy H

    2004-02-01

    A variety of protein sequence databases exist, ranging from simple sequence repositories, which store data with little or no manual intervention in the creation of the records, to expertly curated universal databases that cover all species and in which the original sequence data are enhanced by the manual addition of further information in each sequence record. As the focus of researchers moves from the genome to the proteins encoded by it, these databases will play an even more important role as central comprehensive resources of protein information. Several of the leading protein sequence databases are discussed here, with special emphasis on the databases now provided by the Universal Protein Knowledgebase (UniProt) consortium. PMID:15036160

  9. [Glaucoma Service Database].

    PubMed

    Jamrozy-Witkowska, Agnieszka M; Witkowski, Tomasz; Krzyzanowska, Patrycja

    2003-01-01

    We present the common problems related to clinical databases. The Glaucoma Service Database created in our clinic is an attempt to develop an optimal medical database. The system organizes our repository of clinical data. It consists of 3 modules: 1) the list of users with predefined privileges and rights, 2) lists of coded data for further use, which facilitate filling in the fields, and 3) clinical details of all patients. The user interface of our database is very simple, so it is easy to use even for unskilled staff. The accuracy of the data is protected by the system's internal algorithms. The database could be used to investigate clinical epidemiology, risk assessment, post-marketing surveillance of drugs, practice variation, and decision analysis. Data from the Glaucoma Service Database can also help in the management of health services. PMID:14969171

  10. Cascadia Tsunami Deposit Database

    USGS Publications Warehouse

    Peters, Robert; Jaffe, Bruce; Gelfenbaum, Guy; Peterson, Curt

    2003-01-01

    The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have been compiled from 52 studies, documenting 59 sites from northern California to Vancouver Island, British Columbia that contain known or potential tsunami deposits. Bibliographical references are provided for all sites included in the database. Cascadia tsunami deposits are usually seen as anomalous sand layers in coastal marsh or lake sediments. The studies cited in the database use numerous criteria based on sedimentary characteristics to distinguish tsunami deposits from sand layers deposited by other processes, such as river flooding and storm surges. Several studies cited in the database contain evidence for more than one tsunami at a site. Data categories include age, thickness, layering, grainsize, and other sedimentological characteristics of Cascadia tsunami deposits. The database documents the variability observed in tsunami deposits found along the Cascadia margin.

  11. Hazard Analysis Database Report

    SciTech Connect

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database--Data from the results of the hazard evaluations; and (2) Hazard Topography Database--Data from the system familiarization and hazard identification.

  12. Database similarity searches.

    PubMed

    Plewniak, Frédéric

    2008-01-01

    With genome sequencing projects producing huge amounts of sequence data, database sequence similarity search has become a central tool in bioinformatics to identify potentially homologous sequences. It is thus widely used as an initial step for sequence characterization and annotation, phylogeny, genomics, transcriptomics, and proteomics studies. Database similarity search is based upon sequence alignment methods also used in pairwise sequence comparison. Sequence alignment can be global (whole sequence alignment) or local (partial sequence alignment) and there are algorithms to find the optimal alignment given particular comparison criteria. However, as database searches require the comparison of the query sequence with every single sequence in the database, heuristic algorithms have been designed to reduce the time required to build an alignment that has a reasonable chance to be the best one. Such algorithms have been implemented as fast and efficient programs (Blast, FastA) available in different types to address different kinds of problems. After searching the appropriate database, similarity search programs produce a list of similar sequences and local alignments. These results should be carefully examined before coming to any conclusion, as many traps await the similarity seeker: paralogues, multidomain proteins, pseudogenes, etc. This chapter presents points that should always be kept in mind when performing database similarity searches for various goals. It ends with a practical example of sequence characterization from a single protein database search using Blast. PMID:18592192
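The local alignment that underlies the tools this abstract names can be made concrete with the exact dynamic-programming recurrence (Smith-Waterman); Blast and FastA approximate it with heuristic seeding to stay fast on whole databases. The scoring parameters below are illustrative defaults, not any tool's actual scheme.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Optimal local-alignment score between sequences a and b.

    Each cell H[i][j] holds the best score of any alignment ending at
    a[i-1], b[j-1]; clamping at 0 is what makes the alignment local.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                     # restart: local alignment
                          H[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

A database search conceptually scores the query against every entry this way and ranks the hits; the heuristics exist only because this exact computation is too slow at database scale.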

  13. ResPlan Database

    NASA Technical Reports Server (NTRS)

    Zellers, Michael L.

    2003-01-01

    The main project I was involved in was new application development for the existing CIS0 Database (ResPlan). This database application was developed in Microsoft Access. Initial meetings with Greg Follen, Linda McMillen, Griselle LaFontaine and others identified a few key weaknesses in the existing database. The weaknesses centered on the fact that, while the database correctly modeled the structure of Programs, Projects, and Tasks, once the data was entered the database did not capture any dynamic status information, and as such it was of limited usefulness. After the initial meetings, my goals were identified as follows: enhance the ResPlan database to include qualitative and quantitative status information about the Programs, Projects, and Tasks; train staff members on the ResPlan database from both the user perspective and the developer perspective; and give consideration to a Web interface for reporting. Although initially the thought was that there would not be adequate time to actually develop the Web interface, Greg wanted it understood that this was an eventual goal and, as such, should be a consideration throughout the development process.

  14. Hazard Analysis Database Report

    SciTech Connect

    GRAMS, W.H.

    2000-12-28

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: Data from the results of the hazard evaluations, and (2) Hazard Topography Database: Data from the system familiarization and hazard identification.

  15. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
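The query-feedback idea in this abstract can be sketched in a few lines: after each executed query, the optimizer records the observed selectivity and regresses a curve over the log so future estimates track real data. This is only an illustrative sketch of least-squares selectivity estimation, not the authors' implementation; all names and numbers are invented.

```python
import numpy as np

feedback = []  # (predicate value, observed selectivity) pairs from executed queries

def record(value, rows_returned, table_size):
    """Query feedback: log what a finished query actually observed."""
    feedback.append((value, rows_returned / table_size))

def estimate(value, degree=1):
    """Least-squares fit over the feedback log, clamped to a valid selectivity."""
    xs, ys = zip(*feedback)
    coeffs = np.polyfit(xs, ys, degree)          # regress selectivity vs. value
    return float(min(1.0, max(0.0, np.polyval(coeffs, value))))

# Feedback from three executed range queries on a hypothetical 1000-row table
record(10, 100, 1000)
record(20, 200, 1000)
record(30, 300, 1000)
est = estimate(25)  # interpolates between observed points
```

Splines, which the abstract also mentions, would replace the single polynomial with piecewise fits for distributions that a straight line cannot capture.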

  16. Database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1991-01-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks such as the selection of the computer software, the hardware, and the writing of the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location generating different data. Thus the users of this data have to spend a considerable portion of their time learning how to implement the computer hardware and the software towards the desired end. This situation may be improved considerably if an easily accessible propagation database is created that has all the accepted (standardized) propagation phenomena models approved by the propagation research community. Also, the handling of data will become easier for the user. Such a database can only stimulate the growth of propagation research if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another, without different hardware and software being used. The database may be made flexible so that researchers need not be confined only to the contents of the database. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will know the database already. The following sections show a possible database construction, as well as properties of the database for propagation research.

  17. The Gaia Parameter Database

    NASA Astrophysics Data System (ADS)

    de Bruijne, J. H. J.; Lammers, U.; Perryman, M. A. C.

    2005-01-01

    The parallel development of many aspects of a complex mission like Gaia, which includes numerous participants in ESA, industrial companies, and a large and active scientific collaboration throughout Europe, makes keeping track of the many design changes, instrument and operational complexities, and numerical values for the data analysis a very challenging problem. A comprehensive, easily-accessible, up-to-date, and definitive compilation of a large range of numerical quantities is required, and the Gaia parameter database has been established to satisfy these needs. The database is a centralised repository containing, besides mathematical, physical, and astronomical constants, many satellite and subsystem design parameters. At the end of 2004, more than 1600 parameters had been included. Version control has been implemented, providing, next to a `live' version with the most recent parameters, well-defined reference versions of the full database contents. The database can be queried or browsed using a regular Web browser (http://www.rssd.esa.int/Gaia/paramdb). Query results are formatted by default in HTML. Data can also be retrieved as Fortran-77, Fortran-90, Java, ANSI C, C++, or XML structures for direct inclusion into software codes in these languages. The idea is that all collaborating scientists can use the database parameters and values which, once retrieved, are directly linked to computational routines. An off-line access mode is also available, enabling users to automatically download the contents of the database. The database will be maintained actively, and significant extensions of the contents are planned. Consistent use in the future of the database by the Gaia community at large, including all industrial teams, will ensure correct numerical values throughout the complex software systems being built up as details of the Gaia design develop. The database is already being used for the telemetry simulation chain in ESTEC, and in the data simulations for GDAAS2.
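As an illustration of the XML retrieval mode the abstract mentions, a consumer might parse one retrieved parameter as below. The element layout, attribute names, and value are invented for this sketch; the database's actual export schema may differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML export of a single parameter (layout invented for illustration).
xml_snippet = """
<parameter name="ObliquityOfEcliptic" unit="deg">
  <value>23.43929111</value>
</parameter>
"""

elem = ET.fromstring(xml_snippet)
name = elem.get("name")                 # parameter identifier
unit = elem.get("unit")                 # physical unit
value = float(elem.findtext("value"))   # numeric value, ready for computation
```

The point of the multi-format export (Fortran, Java, C, C++, XML) is exactly this: each collaborating team pulls the same definitive numbers into its own code rather than hard-coding them.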

  18. Numeric Databases in the Sciences.

    ERIC Educational Resources Information Center

    Meschel, S. V.

    1984-01-01

    Provides exploration into types of numeric databases available (also known as source databases, nonbibliographic databases, data-files, data-banks, fact banks); examines differences and similarities between bibliographic and numeric databases; identifies disciplines that utilize numeric databases; and surveys representative examples in the…

  19. Databases for materials selection

    SciTech Connect

    1996-06-01

    The Cambridge Materials Selector (CMS2.0) materials database was developed by the Engineering Dept. at Cambridge University in the United Kingdom. This database makes it possible to select a material for a specific application from essentially all classes of materials. Genera, Predict, and Socrates software programs from CLI International, Houston, Texas, automate materials selection and corrosion problem-solving tasks. They are said to significantly reduce the time necessary to select a suitable material and/or to assess a corrosion problem and reach cost-effective solutions. This article describes both databases and tells how to use them.

  20. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  1. International Comparisons Database

    National Institute of Standards and Technology Data Gateway

    International Comparisons Database (Web, free access)   The International Comparisons Database (ICDB) serves the U.S. and the Inter-American System of Metrology (SIM) with information based on Appendices B (International Comparisons), C (Calibration and Measurement Capabilities) and D (List of Participating Countries) of the Comité International des Poids et Mesures (CIPM) Mutual Recognition Arrangement (MRA). The official source of the data is the BIPM key comparison database. The ICDB provides access to results of comparisons of measurements and standards organized by the consultative committees of the CIPM and the Regional Metrology Organizations.

  2. JICST Factual Database(2)

    NASA Astrophysics Data System (ADS)

    Araki, Keisuke

    A computer program that builds atom-bond connection tables from nomenclature has been developed. Chemical substances are input with their nomenclature and a variety of trivial names or experimental code numbers. The chemical structures in the database are stored stereospecifically and can be searched and displayed according to stereochemistry. Source data are drawn from the laws and regulations of Japan, the RTECS of the US, and so on. The database plays a central role within JICST's integrated fact-database service and makes interrelational retrieval possible.

  3. NCCDPHP PUBLICATION DATABASE

    EPA Science Inventory

    This database provides bibliographic citations and abstracts of publications produced by the CDC's National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) including journal articles, monographs, book chapters, reports, policy documents, and fact sheets. Full...

  4. TREATABILITY DATABASE DESCRIPTION

    EPA Science Inventory

    The Drinking Water Treatability Database (TDB) presents referenced information on the control of contaminants in drinking water. It allows drinking water utilities, first responders to spills or emergencies, treatment process designers, research organizations, academics, regulato...

  5. THE CTEPP DATABASE

    EPA Science Inventory

    The CTEPP (Children's Total Exposure to Persistent Pesticides and Other Persistent Organic Pollutants) database contains a wealth of data on children's aggregate exposures to pollutants in their everyday surroundings. Chemical analysis data for the environmental media and ques...

  6. Nuclear Science References Database

    SciTech Connect

    Pritychenko, B.; Běták, E.; Singh, B.; Totans, J.

    2014-06-15

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  7. Requirements Management Database

    Energy Science and Technology Software Center (ESTSC)

    2009-08-13

    This application is a simplified and customized version of the RBA and CTS databases to capture federal, site, and facility requirements, link to actions that must be performed to maintain compliance with their contractual and other requirements.

  8. Chemical Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 17 NIST Chemical Kinetics Database (Web, free access)   The NIST Chemical Kinetics Database includes essentially all reported kinetics results for thermal gas-phase chemical reactions. The database is designed to be searched for kinetics data based on the specific reactants involved, for reactions resulting in specified products, for all the reactions of a particular species, or for various combinations of these. In addition, the bibliography can be searched by author name or combination of names. The database contains in excess of 38,000 separate reaction records for over 11,700 distinct reactant pairs. These data have been abstracted from over 12,000 papers with literature coverage through early 2000.
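The search modes this entry describes (by reactant pair, or by all reactions of one species) amount to simple predicates over reaction records. The sketch below is a toy in-memory illustration of those queries; the records, species, and reference labels are invented, not actual NIST data.

```python
# Hypothetical reaction records; each stores reactants, products, and a citation.
records = [
    {"reactants": {"OH", "CH4"}, "products": {"CH3", "H2O"}, "ref": "paper-1"},
    {"reactants": {"OH", "CO"},  "products": {"H", "CO2"},   "ref": "paper-2"},
    {"reactants": {"O", "CH4"},  "products": {"CH3", "OH"},  "ref": "paper-3"},
]

def by_reactant_pair(a, b):
    """All records for the specific reactant pair, in either order."""
    return [r for r in records if {a, b} == r["reactants"]]

def by_species(s):
    """All reactions in which the species appears as a reactant."""
    return [r for r in records if s in r["reactants"]]

pair_hits = by_reactant_pair("CH4", "OH")
oh_hits = by_species("OH")
```

Using sets for the reactant side makes the pair query order-independent, which mirrors how a kinetics search should treat OH + CH4 and CH4 + OH as the same reaction.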

  9. Enhancing medical database security.

    PubMed

    Pangalos, G; Khair, M; Bozios, L

    1994-08-01

    A methodology for the enhancement of database security in a hospital environment is presented in this paper, based on both the discretionary and the mandatory database security policies. In this way the advantages of both approaches are combined to enhance medical database security. An appropriate classification of the different types of users according to their different needs and roles, together with a User Role Definition Hierarchy, has been used. The experience obtained from the experimental implementation of the proposed methodology in a major general hospital is briefly discussed. The implementation has shown that the combined discretionary and mandatory security enforcement effectively limits unauthorized access to the medical database without severely restricting the capabilities of the system. PMID:7829977
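The combined policy the abstract describes can be sketched as two independent checks that must both pass: a mandatory check comparing a role's clearance level against a table's label, and a discretionary check against per-table grants. The roles, levels, and tables below are invented for illustration, not the hospital system's actual configuration.

```python
# Mandatory policy: ordered sensitivity labels (illustrative).
LEVELS = {"public": 0, "confidential": 1, "secret": 2}

# A toy role hierarchy mapping each role to its clearance.
ROLE_CLEARANCE = {"clerk": "public", "nurse": "confidential", "doctor": "secret"}

# Discretionary policy: which roles have been granted read access to each table.
GRANTS = {
    "appointments":      {"clerk", "nurse", "doctor"},
    "diagnoses":         {"nurse", "doctor"},
    "psychiatric_notes": {"doctor"},
}

# Sensitivity label attached to each table.
TABLE_LABEL = {
    "appointments": "public",
    "diagnoses": "confidential",
    "psychiatric_notes": "secret",
}

def can_read(role, table):
    """Access is allowed only when BOTH policies permit it."""
    mandatory_ok = LEVELS[ROLE_CLEARANCE[role]] >= LEVELS[TABLE_LABEL[table]]
    discretionary_ok = role in GRANTS[table]
    return mandatory_ok and discretionary_ok
```

Requiring both checks is what gives the combined approach its strength: a grant alone cannot override a clearance shortfall, and a high clearance alone does not confer access that was never granted.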

  10. The PHARMSEARCH database.

    PubMed

    O'Hara, M P; Pagis, C

    1991-02-01

    PHARMSEARCH, a database produced by the French Patent and Trademark Office (INPI), covers pharmaceutical patents issued by the Europeans, French, and United States patent offices from November 1986 onward. PHARMSEARCH is composed of MPHARM, a structure file searchable using Markush DARC software, and PHARM, the companion bibliographic file. Markush structures claimed in the patent documents are entered into the database as variable generic structures. Specific structures are also included in the database, when they are not part of a Markush structure in the patent document. Chemical index terms describe all moieties of the structure. Indexing also describes the therapeutic activities and preparation processes for the compounds. The indexing policies used in the production of this database are described. PMID:2026662

  11. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1995-06-01

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  12. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1995-02-01

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase-out of chemical compounds of environmental concern.

  13. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  14. Database computing in HEP

    NASA Technical Reports Server (NTRS)

    Day, C. T.; Loken, S.; Macfarlane, J. F.; May, E.; Lifka, D.; Lusk, E.; Price, L. E.; Baden, A.; Grossman, R.; Qin, X.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  15. Database computing in HEP

    SciTech Connect

    Day, C.T.; Loken, S.; MacFarlane, J.F.; May, E.; Lifka, D.; Lusk, E.; Price, L.E.; Baden, A. (Dept. of Physics); Grossman, R.; Qin, X. (Dept. of Mathematics, Statistics and Computer Science); Cormell, L.; Leibold, P.; Liu, D

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  16. Human mapping databases.

    PubMed

    Talbot, C; Cuticchia, A J

    2001-05-01

    This unit concentrates on the data contained within two human genome databases, GDB (Genome Database) and OMIM (Online Mendelian Inheritance in Man), and includes discussion of different methods for submitting and accessing data. An understanding of electronic mail, FTP, and the use of a World Wide Web (WWW) navigational tool such as Netscape or Internet Explorer is a prerequisite for utilizing the information in this unit. PMID:18428234

  17. Querying genomic databases

    SciTech Connect

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.
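A restricted-English interface of the kind this abstract describes can be approximated with a small set of sentence patterns mapped onto database lookups. The grammar, gene table, and coordinates below are invented for illustration; the actual interface and E. coli data are not reproduced here.

```python
import re

# Toy gene table standing in for a genomic database (values invented).
GENES = {
    "lacZ": {"start": 366, "end": 3452, "product": "beta-galactosidase"},
    "recA": {"start": 2822708, "end": 2823769, "product": "recombinase A"},
}

def answer(question):
    """Answer a tiny English subset: 'what is the product of <gene>'
    and 'where does <gene> start'. Returns None for unrecognized queries."""
    q = question.strip()
    m = re.fullmatch(r"what is the product of (\w+)\??", q, re.IGNORECASE)
    if m:
        return GENES[m.group(1)]["product"]
    m = re.fullmatch(r"where does (\w+) start\??", q, re.IGNORECASE)
    if m:
        return GENES[m.group(1)]["start"]
    return None
```

The appeal, as the abstract notes, is that the biologist asks in plain English and never sees the underlying query language; extending coverage means adding patterns, not retraining users.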

  18. Steam Properties Database

    National Institute of Standards and Technology Data Gateway

    SRD 10 NIST/ASME Steam Properties Database (PC database for purchase)   Based upon the International Association for the Properties of Water and Steam (IAPWS) 1995 formulation for the thermodynamic properties of water and the most recent IAPWS formulations for transport and other properties, this updated version provides water properties over a wide range of conditions according to the accepted international standards.

  19. SSME environment database development

    NASA Technical Reports Server (NTRS)

    Reardon, John

    1987-01-01

    The internal environment of the Space Shuttle Main Engine (SSME) is being determined from hot firings of the prototype engines and from model tests using either air or water as the test fluid. The objectives are to develop a database system to facilitate management and analysis of test measurements and results, to enter available data into the database, and to analyze available data to establish conventions and procedures to provide consistency in data normalization and configuration geometry references.

  20. Open systems and databases

    SciTech Connect

    Martire, G.S. ); Nuttall, D.J.H. )

    1993-05-01

    This paper is part of a series of papers invited by the IEEE Power Control Center Working Group concerning the changing designs of modern control centers. Papers invited by the Working Group discuss the following issues: Benefits of Openness, Criteria for Evaluating Open EMS Systems, Hardware Design, Configuration Management, Security, Project Management, Databases, SCADA, Inter- and Intra-System Communications, and Man-Machine Interfaces. The goal of this paper is to provide an introduction to the issues pertaining to open systems and databases. The intent is to assist understanding of some of the underlying factors that affect choices that must be made when selecting a database system for use in a control room environment. This paper describes and compares the major database information models which are in common use for database systems and provides an overview of SQL. A case for the control center community to follow the workings of the non-formal standards bodies is presented, along with possible uses and the benefits of commercially available databases within the control center. The reasons behind the emergence of industry-supported standards organizations such as the Open Software Foundation (OSF) and SQL Access are presented.

  1. Crude Oil Analysis Database

    DOE Data Explorer

    Shay, Johanna Y.

    The composition and physical properties of crude oil vary widely from one reservoir to another within an oil field, as well as from one field or region to another. Although all oils consist of hydrocarbons and their derivatives, the proportions of various types of compounds differ greatly. This makes some oils more suitable than others for specific refining processes and uses. To take advantage of this diversity, one needs access to information in a large database of crude oil analyses. The Crude Oil Analysis Database (COADB) currently satisfies this need by offering 9,056 crude oil analyses. Of these, 8,500 are United States domestic oils. The database contains results of analysis of the general properties and chemical composition, as well as the field, formation, and geographic location of the crude oil sample. [Taken from the Introduction to COAMDATA_DESC.pdf, part of the zipped software and database file at http://www.netl.doe.gov/technologies/oil-gas/Software/database.html] Save the zipped file to your PC. When opened, it will contain PDF documents and a large Excel spreadsheet. It will also contain the database in Microsoft Access 2002.

  2. The Halophile Protein Database

    PubMed Central

    Sharma, Naveen; Farooqi, Mohammad Samir; Chaturvedi, Krishna Kumar; Lal, Shashi Bhushan; Grover, Monendra; Rai, Anil; Pandey, Pankaj

    2014-01-01

    Halophilic archaea/bacteria adapt to different salt concentrations, namely extreme, moderate and low. These types of adaptation may occur as a result of modification of protein structure and other changes in different cell organelles. Thus proteins may play an important role in the adaptation of halophilic archaea/bacteria to saline conditions. The Halophile protein database (HProtDB) is a systematic attempt to document the biochemical and biophysical properties of proteins from halophilic archaea/bacteria which may be involved in adaptation of these organisms to saline conditions. In this database, various physicochemical properties such as molecular weight, theoretical pI, amino acid composition, atomic composition, estimated half-life, instability index, aliphatic index and grand average of hydropathicity (Gravy) have been listed. These physicochemical properties play an important role in identifying the protein structure, bonding pattern and function of the specific proteins. This database is a comprehensive, manually curated, non-redundant catalogue of proteins. The database currently contains 59,897 protein properties extracted from 21 different strains of halophilic archaea/bacteria. The database can be accessed through the following link. Database URL: http://webapp.cabgrid.res.in/protein/ PMID:25468930
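The grand average of hydropathicity (GRAVY) listed above has a simple definition: the mean Kyte-Doolittle hydropathy value over all residues of a sequence. A minimal sketch of that calculation (this is an illustration of the standard formula, not code from HProtDB):

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KD = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def gravy(seq: str) -> float:
    """Grand average of hydropathicity: sum of hydropathy values
    divided by sequence length."""
    return sum(KD[aa] for aa in seq.upper()) / len(seq)
```

Positive scores indicate overall hydrophobicity; halophilic proteins typically score lower (more acidic, more hydrophilic) than their non-halophilic counterparts.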

  3. Specialist Bibliographic Databases

    PubMed Central

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  4. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  5. Drinking Water Database

    NASA Technical Reports Server (NTRS)

    Murray, ShaTerea R.

    2004-01-01

    This summer I had the opportunity to work in the Environmental Management Office (EMO) under the Chemical Sampling and Analysis Team, or CS&AT. This team's mission is to support Glenn Research Center (GRC) and EMO by providing chemical sampling and analysis services and expert consulting. Services include sampling and chemical analysis of water, soil, fuels, oils, paint, insulation materials, etc. One of this team's major projects is the Drinking Water Project. This is a project that is done on Glenn's water coolers and ten percent of its sinks every two years. For the past two summers an intern had been putting together a database for this team to record the tests they had performed. She had successfully created a database but hadn't worked out all the quirks. So this summer William Wilder (an intern from Cleveland State University) and I worked together to perfect her database. We began by finding out exactly what every member of the team thought about the database and what they would change, if anything. After collecting this data we both had to take some courses in Microsoft Access in order to fix the problems. Next we looked at exactly how the database worked from the outside inward. Then we began trying to change the database, but we quickly found out that this would be virtually impossible.

  6. ADANS database specification

    SciTech Connect

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  7. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James L.; Christiansen, Eric L.; Lear, Dana M.

    2011-01-01

    With three missions outstanding, the Shuttle Hypervelocity Impact Database has nearly 3000 entries. The data is divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. Details and insights on the contents of the database including examples of descriptive statistics will be provided. Post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will also be discussed. Potential enhancements to the database structure and availability of the data for other researchers will be addressed in the Future Work section. A related database of returned surfaces from the International Space Station will also be introduced.

  8. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James I.; Christiansen, Eric I.; Lear, Dana M.

    2011-01-01

    With three flights remaining on the manifest, the Shuttle Hypervelocity Impact Database has over 2800 entries. The data is currently divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. The paper will provide details and insights on the contents of the database including examples of descriptive statistics using the impact data. A discussion of post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will be presented. Future work to be discussed will be possible enhancements to the database structure and availability of the data for other researchers. A related database of ISS returned surfaces that is under development will also be introduced.

  9. Indian genetic disease database.

    PubMed

    Pradhan, Sanchari; Sengupta, Mainak; Dutta, Anirban; Bhattacharyya, Kausik; Bag, Sumit K; Dutta, Chitra; Ray, Kunal

    2011-01-01

    Indians, representing about one-sixth of the world population, consist of several thousands of endogamous groups with strong potential for excess of recessive diseases. However, no database is available on the Indian population with comprehensive information on the diseases common in the country. To address this issue, we present Indian Genetic Disease Database (IGDD) release 1.0 (http://www.igdd.iicb.res.in)--an integrated and curated repository of a growing number of mutation data on common genetic diseases afflicting the Indian populations. Currently the database covers 52 diseases with information on 5760 individuals carrying the mutant alleles of causal genes. Information on locus heterogeneity, type of mutation, clinical and biochemical data, geographical location and common mutations are furnished based on published literature. The database is currently designed to work best with Internet Explorer 8 (optimal resolution 1440 × 900) and it can be searched based on disease of interest, causal gene, type of mutation and geographical location of the patients or carriers. Provisions have been made for deposition of new data and logistics for regular updating of the database. The IGDD web portal, planned to be made freely available, contains user-friendly interfaces and is expected to be highly useful to the geneticists, clinicians, biologists and patient support groups of various genetic diseases. PMID:21037256

  10. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  11. NASA Records Database

    NASA Technical Reports Server (NTRS)

    Callac, Christopher; Lunsford, Michelle

    2005-01-01

    The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is "smart": it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user's boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.
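The location-assignment step described above (find unused storage slots, assign incoming boxes to them, record the assignments) can be sketched as follows. This is a hypothetical illustration of the logic, not the Stennis system's actual code; the names `assign_boxes`, `locations`, and `assignments` are invented for the example:

```python
def assign_boxes(boxes, locations, assignments):
    """Assign each incoming box to an unused storage slot.

    boxes: list of box ids to shelve
    locations: list of all slot ids in the archive
    assignments: dict mapping slot id -> box id (the stored state)
    """
    # Determine which storage locations are not in use
    free = [slot for slot in locations if slot not in assignments]
    if len(free) < len(boxes):
        raise RuntimeError("not enough free storage locations")
    # Enter the new assignments so the boxes can be tracked later
    for box, slot in zip(boxes, free):
        assignments[slot] = box
    return assignments
```

A lookup by box id is then a reverse search over `assignments`, which is how the tracking and search features could build on the same state.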

  12. VIEWCACHE: An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nick; Sellis, Timoleon

    1991-01-01

    The objective is to illustrate the concept of incremental access to distributed databases. An experimental database management system, ADMS, which has been developed at the University of Maryland, in College Park, uses VIEWCACHE, a database access method based on incremental search. VIEWCACHE is a pointer-based access method that provides a uniform interface for accessing distributed databases and catalogues. The compactness of the pointer structures formed during database browsing and the incremental access method allow the user to search and do inter-database cross-referencing with no actual data movement between database sites. Once the search is complete, the set of collected pointers pointing to the desired data are dereferenced.
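The key idea in the abstract above is that browsing collects compact pointers rather than copying data between sites, and actual data movement is deferred until the final pointer set is dereferenced. A toy sketch of that pattern (an illustration of the concept, not the ADMS/VIEWCACHE implementation; the class and method names are invented):

```python
class PointerCache:
    """Collect (site, key) pointers while browsing distributed data;
    move data only when the final result set is dereferenced."""

    def __init__(self, sites):
        self.sites = sites        # site name -> {key: record}
        self.pointers = []        # compact pointer structure

    def browse(self, site, predicate):
        # Browsing appends pointers only; no records are copied
        for key, record in self.sites[site].items():
            if predicate(record):
                self.pointers.append((site, key))

    def dereference(self):
        # Data movement happens once, at the end of the search
        return [self.sites[s][k] for s, k in self.pointers]
```

Cross-referencing two sites would intersect or join their pointer lists before the single dereference step, which is where the I/O saving comes from.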

  13. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
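The scalability mechanism the paper attributes to Frontier is a caching layer in front of a traditional SQL backend: repeated reads of the same query are served from cache, so the database itself sees only one request per distinct query. A minimal read-through-cache sketch of that idea (purely illustrative; not Frontier's actual code):

```python
class ReadThroughCache:
    """Serve repeated reads from cache; hit the backend once per key."""

    def __init__(self, backend_query):
        self.backend_query = backend_query  # e.g. a function wrapping a SQL call
        self.cache = {}
        self.backend_hits = 0               # counts real backend reads

    def get(self, query):
        if query not in self.cache:
            self.backend_hits += 1
            self.cache[query] = self.backend_query(query)
        return self.cache[query]
```

In a deployed system the cache tier would be distributed (e.g. HTTP proxy caches near each site) and entries would carry expiration times, but the read pattern is the same.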

  14. A Computational Chemistry Database for Semiconductor Processing

    NASA Technical Reports Server (NTRS)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere if codes do not come with a reliable database of chemical and physical properties of gases involved in semiconductor processing. Commercial code vendors have no capabilities to generate such a database, instead leaving the task of finding whatever is needed to the user. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area. Our effort focuses on the following five areas: (1) thermal CVD reaction mechanisms and rate constants; (2) thermochemical properties; (3) transport properties; (4) electron-molecule collision cross sections; and (5) gas-surface interactions.

  15. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1992-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134a, R-141b, R-142b, R-143a, R-152a, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses polyalkylene glycol (PAG), ester, and other lubricants. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits.

  16. National Ambient Radiation Database

    SciTech Connect

    Dziuban, J.; Sears, R.

    2003-02-25

    The U.S. Environmental Protection Agency (EPA) recently developed a searchable database and website for the Environmental Radiation Ambient Monitoring System (ERAMS) data. This site contains nationwide radiation monitoring data for air particulates, precipitation, drinking water, surface water and pasteurized milk. This site provides location-specific as well as national information on environmental radioactivity across several media. It provides high quality data for assessing public exposure and environmental impacts resulting from nuclear emergencies and provides baseline data during routine conditions. The database and website are accessible at www.epa.gov/enviro/. This site contains (1) a query for the general public, which is easy to use and limits the amount of information provided but includes the ability to graph the data with risk benchmarks; (2) a query for a more technical user, which allows access to all of the data in the database; and (3) background information on ERAMS.

  17. The PROSITE database.

    PubMed

    Hulo, Nicolas; Bairoch, Amos; Bulliard, Virginie; Cerutti, Lorenzo; De Castro, Edouard; Langendijk-Genevaux, Petra S; Pagni, Marco; Sigrist, Christian J A

    2006-01-01

    The PROSITE database consists of a large collection of biologically meaningful signatures that are described as patterns or profiles. Each signature is linked to a documentation that provides useful biological information on the protein family, domain or functional site identified by the signature. The PROSITE database is now complemented by a series of rules that can give more precise information about specific residues. During the last 2 years, the documentation and the ScanProsite web pages were redesigned to add more functionalities. The latest version of PROSITE (release 19.11 of September 27, 2005) contains 1329 patterns and 552 profile entries. Over the past 2 years more than 200 domains have been added, and now 52% of UniProtKB/Swiss-Prot entries (release 48.1 of September 27, 2005) have a cross-reference to a PROSITE entry. The database is accessible at http://www.expasy.org/prosite/. PMID:16381852

  18. The PROSITE database

    PubMed Central

    Hulo, Nicolas; Bairoch, Amos; Bulliard, Virginie; Cerutti, Lorenzo; De Castro, Edouard; Langendijk-Genevaux, Petra S.; Pagni, Marco; Sigrist, Christian J. A.

    2006-01-01

    The PROSITE database consists of a large collection of biologically meaningful signatures that are described as patterns or profiles. Each signature is linked to a documentation that provides useful biological information on the protein family, domain or functional site identified by the signature. The PROSITE database is now complemented by a series of rules that can give more precise information about specific residues. During the last 2 years, the documentation and the ScanProsite web pages were redesigned to add more functionalities. The latest version of PROSITE (release 19.11 of September 27, 2005) contains 1329 patterns and 552 profile entries. Over the past 2 years more than 200 domains have been added, and now 52% of UniProtKB/Swiss-Prot entries (release 48.1 of September 27, 2005) have a cross-reference to a PROSITE entry. The database is accessible at http://www.expasy.org/prosite/. PMID:16381852

  19. Mouse genome database 2016

    PubMed Central

    Bult, Carol J.; Eppig, Janan T.; Blake, Judith A.; Kadin, James A.; Richardson, Joel E.

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data. PMID:26578600

  20. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Ber, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
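The anomaly-detection approach mentioned above typically profiles normal database activity per user or role and flags queries that fall outside the learned profile. A deliberately simple sketch of that idea (a toy model for illustration, not any surveyed product's algorithm; the class name and feature choice are invented):

```python
from collections import defaultdict

class QueryProfiler:
    """Learn which tables each role normally accesses; flag queries
    that touch tables outside the learned profile."""

    def __init__(self):
        self.profile = defaultdict(set)  # role -> set of tables seen in training

    def train(self, role, tables):
        # Build the normal-behavior profile from audited, benign traffic
        self.profile[role].update(tables)

    def is_anomalous(self, role, tables):
        # A query is anomalous if it touches any table not in the profile
        return not set(tables) <= self.profile[role]
```

Real systems extend the feature set well beyond table names (columns, predicates, result sizes, time of day) and add a response policy for flagged queries, which is the subject of the chapter's closing primer.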

  1. The Genopolis Microarray Database

    PubMed Central

    Splendiani, Andrea; Brandizi, Marco; Even, Gael; Beretta, Ottavio; Pavelka, Norman; Pelizzola, Mattia; Mayhaus, Manuel; Foti, Maria; Mauri, Giancarlo; Ricciardi-Castagnoli, Paola

    2007-01-01

    Background Gene expression databases are key resources for microarray data management and analysis and the importance of a proper annotation of their content is well understood. Public repositories as well as microarray database systems that can be implemented by single laboratories exist. However, there is not yet a tool that can easily support a collaborative environment where different users with different rights of access to data can interact to define a common highly coherent content. The scope of the Genopolis database is to provide a resource that allows different groups performing microarray experiments related to a common subject to create a common coherent knowledge base and to analyse it. The Genopolis database has been implemented as a dedicated system for the scientific community studying dendritic cell and macrophage functions and host-parasite interactions. Results The Genopolis Database system allows the community to build an object-based MIAME-compliant annotation of their experiments and to store images, raw and processed data from the Affymetrix GeneChip® platform. It supports dynamic definition of controlled vocabularies and provides automated and supervised steps to control the coherence of data and annotations. It allows a precise control of the visibility of the database content to different subgroups in the community and facilitates exports of its content to public repositories. It provides an interactive user interface for data analysis: this allows users to visualize data matrices based on functional lists and sample characterization, and to navigate to other data matrices defined by similarity of expression values as well as functional characterizations of genes involved. A collaborative environment is also provided for the definition and sharing of functional annotation by users. Conclusion The Genopolis Database supports a community in building a common coherent knowledge base and in analysing it. This fills a gap between a local database and a public repository, where the development of a common coherent annotation is important. In its current implementation, it provides a uniform coherently annotated dataset on dendritic cells and macrophage differentiation. PMID:17430566

  2. Database Management System

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc., a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  3. DataBase on Demand

    NASA Astrophysics Data System (ADS)

    Gaspar Aparicio, R.; Gomez, D.; Coterillo Coz, I.; Wojcik, D.

    2012-12-01

    At CERN a number of key database applications are running on user-managed MySQL database services. The Database on Demand project was born out of an idea to provide the CERN user community with an environment to develop and run database services outside of the actual centralised Oracle-based database services. The Database on Demand (DBoD) service empowers the user to perform certain actions that had traditionally been done by database administrators (DBAs), providing an enterprise platform for database applications. It also allows the CERN user community to run different database engines, e.g., presently the open community version of MySQL and single-instance Oracle database servers. This article describes a technology approach to face this challenge, the service level agreement (SLA) that the project provides, and an evolution of possible scenarios.

  4. Proteomics: Protein Identification Using Online Databases

    ERIC Educational Resources Information Center

    Eurich, Chris; Fields, Peter A.; Rice, Elizabeth

    2012-01-01

    Proteomics is an emerging area of systems biology that allows simultaneous study of thousands of proteins expressed in cells, tissues, or whole organisms. We have developed this activity to enable high school or college students to explore proteomic databases using mass spectrometry data files generated from yeast proteins in a college laboratory

  5. First Look--The Biobusiness Database.

    ERIC Educational Resources Information Center

    Cunningham, Ann Marie

    1986-01-01

    Presents overview prepared by producer of database newly available in 1985 that covers six broad subject areas: genetic engineering and bioprocessing, pharmaceuticals, medical technology and instrumentation, agriculture, energy and environment, and food and beverages. Background, indexing, record format, use of BioBusiness, and 1986 enhancements

  6. Future Enhancements for Full Text Databases.

    ERIC Educational Resources Information Center

    Detemple, Wendelin

    1989-01-01

    Discusses expected future developments in full text databases (FTDs) in three main areas: host-based menu-driven systems; menu and front end interfaces on gateway and mailbox services; and microcomputer or front end computer-based systems which already have good retrieval interactions and input output processing features. (24 references)…

  8. Bibliographic Databases Outside of the United States.

    ERIC Educational Resources Information Center

    McGinn, Thomas P.; And Others

    1988-01-01

    Eight articles describe the development, content, and structure of databases outside of the United States. Features discussed include library involvement, authority control, shared cataloging services, union catalogs, thesauri, abstracts, and distribution methods. Countries and areas represented are Latin America, Australia, the United Kingdom,…

  10. LBL Perspective on statistical database management

    SciTech Connect

    Wong, H.K.T.

    1982-12-01

    The purpose of this document is to present a collective view of our research to outside researchers in statistical database management to facilitate communication and exchange of ideas in this new and exciting area. Two papers are in the General category, Arie Shoshani's survey of statistical database management research problems and John L. McCarthy et al.'s description of SEEDIS, a statistical database management system developed in CSAM. In User Interface, we selected Paul Chan and Arie Shoshani's paper on SUBJECT, a system that offers a directory driven interface to statistical data, and Harry K.T. Wong and Ivy Kuo's paper on GUIDE, a system using graphical user interface to complex databases for non-expert users. The SUBJECT paper describes some powerful modelling primitives for statistical summary data. In addition to the description of the system, the GUIDE paper also discusses the reasons why the current query systems fail to provide satisfactory interface to complex databases. The paper by Fred Gey describes the problems of data definitions of large statistical databases, citing real situations from the 1980 US Census data as examples. Deane Merrill's paper discusses more specific problems of handling statistical summary data, with emphasis on the problem and his solution of summary data aggregation and disaggregation. Peter Krep's paper describes a data model called semantic core model which contains useful modelling primitives for representing other semantic constructs. Modelling statistical databases is one of the major motivations of this work. The requirements of modelling metadata of statistical databases are given a thorough treatment in John L. McCarthy's paper.

  11. A skeletal gene database.

    PubMed

    Ho, N C; Jia, L; Driscoll, C C; Gutter, E M; Francomano, C A

    2000-11-01

    Systematic organization of documented data coupled with ready accessibility is of great value to research. Catalogs and databases are created specifically to meet this purpose. The Skeletal Gene Database evolves as part of the Skeletal Genome Anatomy Project (SGAP), an ongoing multi-institute collaborative effort to study the functional genome of bone and other skeletal tissues. The primary objective of the Skeletal Gene Database is to create a contemporary list of skeletal-related genes, offering the following information for each gene: gene name, protein name, cellular function, disease(s) caused by mutation of the corresponding gene, chromosomal location, LocusLink number, gene size, exon/intron numbers, messenger RNA (mRNA) coding region size, protein size/molecular weight, Online Mendelian Inheritance in Man (OMIM) number of the gene, UniGene assignment, and PubMed reference. The database includes genes already known and published in the literature as well as novel genes not yet characterized but known to be expressed in skeletal tissue. It will be posted on the web for easy access and swift referencing. The data will be updated in tempo with current and future research, thereby providing an invaluable service to the scientific community interested in obtaining information on bone-related genes. PMID:11092392

  12. NATIONAL NUTRIENTS DATABASE

    EPA Science Inventory

    Resource Purpose:The Nutrient Criteria Program has initiated development of a National relational database application that will be used to store and analyze nutrient data. The ultimate use of these data will be to derive ecoregion- and waterbody-specific numeric nutrient...

  13. The ADAMS database language

    SciTech Connect

    Pfaltz, J.L.; French, J.C.; Grimshaw, A.; Son, Sang H.; Baron, P.; Janet, S.; Kim, A.; Klumpp, C.; Lin, Yi; Lloyd, L.

    1989-02-28

    ADAMS provides a mechanism for applications programs, written in many languages, to define and access common persistent databases. The basic constructs are element, class, set, map, attribute and codomain. From these the user may define new data structures and new data classes belonging to a semantic hierarchy that supports multiple inheritance. 7 refs., 2 figs.
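
    ADAMS defines its own language bindings, but the semantic hierarchy with multiple inheritance that the abstract describes can be pictured with a short sketch. The class names, attributes, and values below are invented for illustration and are not ADAMS syntax:

    ```python
    # Illustrative sketch (not ADAMS syntax): a semantic hierarchy with
    # multiple inheritance, mirroring the element/class/set/map/attribute
    # constructs named in the abstract.

    class Element:
        """Base construct: every persistent object is an element."""
        def __init__(self, **attrs):
            self.attrs = dict(attrs)  # attribute -> value (a codomain member)

    class Person(Element): pass
    class Employee(Person): pass
    class Student(Person): pass

    # Multiple inheritance: a TA is both an Employee and a Student.
    class TeachingAssistant(Employee, Student): pass

    ta = TeachingAssistant(name="Ada", salary=1000, gpa=3.9)
    # Class membership is inherited along both branches of the hierarchy.
    assert isinstance(ta, Employee) and isinstance(ta, Student)

    # A "set" of elements and a "map" from elements into a codomain:
    people = {ta}
    salary_of = {e: e.attrs.get("salary") for e in people}
    print(salary_of[ta])  # 1000
    ```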

  14. AIDSinfo Drug Database

    MedlinePlus

    ... related drugs for health care providers and patients. The database can be searched by drug name, matching common names of the drug, or by drug class.

  15. Diatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 114 Diatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 121 diatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty, and reference are given for each transition reported.

  16. WHITHER BIOLOGICAL DATABASE RESEARCH?

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We consider how the landscape of biological databases may evolve in the future, and what research is needed to realize this evolution. We suggest that today's dispersal of diverse resources will only increase along with the number and size of those resources, driving the need for semantic interoperability even ...

  17. NATIONAL ASSESSMENT DATABASE (NAD)

    EPA Science Inventory

    Resource Purpose:The National Assessment Database stores State water quality assessments that are reported under Section 305(b) of the Clean Water Act. The data are stored by individual water quality assessments. Threatened, partially and not supporting waters also have da...

  18. GENERAL PERMITS DATABASE

    EPA Science Inventory

    Resource Purpose:This database was used to provide permit writers with a library of examples for writing general permits. It has not been maintained and is outdated and will be removed. Water Permits Division is trying to determine whether or not to recreate this databas...

  19. Databases and data mining

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Over the course of the past decade, the breadth of information made available through online resources for plant biology has increased astronomically, as has the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  20. ENVIRONMENTAL FATE DATABASE (ENVIROFATE)

    EPA Science Inventory

    The Environmental Fate Database contains more than 13,000 records of information on the environmental fate or behavior (i.e., transport and degradation) of approximately 800 chemicals released into the environment. Chemicals selected for inclusion are produced in quantities exceed...

  1. "Prostate Cancer Proteomics" Database

    PubMed Central

    Shishkin, S.S.; Kovalyov, L.I.; Kovalyova, M.A.; Lisitskaya, K.V.; Eremina, L.S.; Ivanov, A.V.; Gerasimov, E.V.; Sadykhov, E.G.; Ulasova, N.Y.; Sokolova, O.S.; Toropygin, I.Y.; Popov, V.O.

    2010-01-01

    A database of Prostate Cancer Proteomics has been created by using the results of a proteomic study of human prostate carcinoma and benign hyperplasia tissues, and of some human-cultured cell lines (PCP, http://ef.inbi.ras.ru). PCP consists of 7 interrelated modules, each containing four levels of proteomic and biomedical data on the proteins in corresponding tissues or cells. The first data level, on which each module is based, is a 2DE proteomic reference map where proteins separated by 2D electrophoresis and subsequently identified by mass spectrometry are marked. The results of proteomic experiments form the second data level. The third level contains protein data from published articles and existing databases. The fourth level is formed with direct Internet links to the information on corresponding proteins in the NCBI and UniProt databases. PCP contains data on 359 proteins in total, including 17 potential biomarkers of prostate cancer, particularly AGR2, annexins, S100 proteins, PRO2675, and PRO2044. The database will be useful in a wide range of applications, including studies of molecular mechanisms of the aetiology and pathogenesis of prostate diseases, finding new diagnostic markers, etc. PMID:22649669

  2. PESTICIDE USE REPORT DATABASE

    EPA Science Inventory

    This dataset summarizes pesticide use in California for year 1990-96 as extracted from the Pesticide Use Report (PUR) by county. The PUR is a comprehensive database of Pesticide Use in the state of California supplied by the DPR (California Department of Pesticide Regulation).

  3. ECOREGION SPATIAL DATABASE

    EPA Science Inventory

    This spatial database contains boundaries and attributes describing Level III ecoregions in EPA Region 8. The ecoregions shown here have been derived from Omernik (1987) and from refinements of Omernik's framework that have been made for other projects. These ongoing or re...

  4. The Radiation Hybrid Database.

    PubMed Central

    Lijnzaad, P; Helgesen, C; Rodriguez-Tomé, P

    1998-01-01

    Since July 1995, the European Bioinformatics Institute (EBI) has maintained RHdb (http://www.ebi.ac.uk/RHdb/RHdb.html ), a public database for radiation hybrid data. Radiation hybrid mapping is an important technique for determining high resolution maps. Recently, CORBA access has been added to Rhdb. The EBI is an Outstation of the European Molecular Biology Laboratory (EMBL). PMID:9399810

  5. Redis database administration tool

    Energy Science and Technology Software Center (ESTSC)

    2013-02-13

    MyRedis is a product of the Lorenz subproject under the ASC Scientific Data Management effort. MyRedis is a web-based utility designed to allow easy administration of instances of Redis databases. It can be used to view and manipulate data as well as run commands directly against a variety of different Redis hosts.

  6. WETLANDS TREATMENT DATABASE

    EPA Science Inventory

    The U.S. EPA sponsored a project to collect and catalog information from wastewater treatment wetlands into a computer database. EPA has also written a user-friendly, stand-alone, menu-driven computer program to allow anyone with DOS 3.3 or higher to access the information in the ...

  7. Weathering Database Technology

    ERIC Educational Resources Information Center

    Snyder, Robert

    2005-01-01

    Collecting weather data is a traditional part of a meteorology unit at the middle level. However, making connections between the data and weather conditions can be a challenge. One way to make these connections clearer is to enter the data into a database. This allows students to quickly compare different fields of data and recognize which…

  8. NATIONAL CONTAMINANT OCCURRENCE DATABASE

    EPA Science Inventory

    Resource Purpose:Under the 1996 Safe Drinking Water Act Amendments, EPA is to assemble a National Drinking Water Occurrence Database (NCOD) by August 1999. The NCOD is a collection of data of documented quality on unregulated and regulated chemical, radiological, microbia...

  9. Triatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 117 Triatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 55 triatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  10. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  11. Relational database telemanagement.

    PubMed

    Swinney, A R

    1988-05-01

    Dallas-based Baylor Health Care System recognized the need for a way to control and track responses to their marketing programs. To meet the demands of data management and analysis, and build a useful database of current customers and future prospects, the marketing department developed a system to capture, store and manage these responses. PMID:10286759

  13. LQTS gene LOVD database.

    PubMed

    Zhang, Tao; Moss, Arthur; Cong, Peikuan; Pan, Min; Chang, Bingxi; Zheng, Liangrong; Fang, Quan; Zareba, Wojciech; Robinson, Jennifer; Lin, Changsong; Li, Zhongxiang; Wei, Junfang; Zeng, Qiang; Qi, Ming

    2010-11-01

    The Long QT Syndrome (LQTS) is a group of genetically heterogeneous disorders that predisposes young individuals to ventricular arrhythmias and sudden death. LQTS is mainly caused by mutations in genes encoding subunits of cardiac ion channels (KCNQ1, KCNH2, SCN5A, KCNE1, and KCNE2). Many other genes involved in LQTS have been described recently (KCNJ2, AKAP9, ANK2, CACNA1C, SCNA4B, SNTA1, and CAV3). We created an online database (http://www.genomed.org/LOVD/introduction.html) that provides information on variants in LQTS-associated genes. As of February 2010, the database contains 1738 unique variants in 12 genes. A total of 950 variants are considered pathogenic, 265 are possible pathogenic, 131 are unknown/unclassified, and 292 have no known pathogenicity. In addition to these mutations collected from published literature, we also submitted information on gene variants, including one possible novel pathogenic mutation in the KCNH2 splice site found in ten Chinese families with documented arrhythmias. The remote user is able to search the data and is encouraged to submit new mutations into the database. The LQTS database will become a powerful tool for both researchers and clinicians. PMID:20809527

  15. LHCb distributed conditions database

    NASA Astrophysics Data System (ADS)

    Clemencic, M.

    2008-07-01

    The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The main users of conditions are reconstruction and analysis processes, which are running on the Grid. To allow efficient access to the data, we need to use a synchronized replica of the content of the database located at the same site as the event data file, i.e. the LHCb Tier1. The replica to be accessed is selected from information stored on LFC (LCG File Catalog) and managed with the interface provided by the LCG-developed library CORAL. The plan to limit the submission of jobs to those sites where the required conditions are available will also be presented. LHCb applications have been using the Conditions Database framework in production since March 2007. We have been able to collect statistics on the performance and effectiveness of both the LCG library COOL (the library providing conditions handling functionalities) and the distribution framework itself. Stress tests on the CNAF-hosted replica of the Conditions Database have been performed and the results will be summarized here.

  16. The Indra Simulation Database

    NASA Astrophysics Data System (ADS)

    Falck, Bridget; Budavari, T.; Cole, S.; Crankshaw, D.; Dobos, L.; Lemson, G.; Neyrinck, M.; Szalay, A.; Wang, J.

    2011-05-01

    We present the Indra suite of cosmological N-body simulations and the design of its companion database. Indra consists of 512 different instances of a 1 Gpc/h-sided box, each with 100 million dark matter particles and the same input cosmology, enabling a characterization of very large-scale modes of the matter power spectrum with galaxy-scale mass resolution and an excellent handle on cosmic variance. Each simulation outputs 64 snapshots, giving over 100 TB of data for the full set of simulations, all of which will be loaded into a SQL database. We discuss the database design for the particle data, consisting of the positions and velocities of each particle; the FOF halos, with links to the particle data so that halo properties can be calculated within the database; and the density field on a power-of-two grid, which can be easily linked to each particle's Peano-Hilbert index. Initial performance tests and example queries will be given. The authors are grateful for support from the Gordon and Betty Moore and the W.M. Keck Foundations.
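
    The halo-particle linkage described above can be pictured with a toy relational schema. The table and column names below are illustrative assumptions, not Indra's actual schema:

    ```python
    # Toy sketch of a particle/halo schema of the kind described, where
    # halo properties are computed inside the database via particle links.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE particles (id INTEGER PRIMARY KEY, x REAL, y REAL, z REAL,
                            vx REAL, vy REAL, vz REAL, ph_index INTEGER);
    CREATE TABLE fof_halos (halo_id INTEGER, particle_id INTEGER
                            REFERENCES particles(id));
    """)
    # Two equal-mass particles, each carrying a Peano-Hilbert grid index.
    db.executemany("INSERT INTO particles VALUES (?,?,?,?,?,?,?,?)",
                   [(1, 0.1, 0.2, 0.3, 10.0, 0.0, 0.0, 42),
                    (2, 0.1, 0.2, 0.4, -5.0, 0.0, 0.0, 43)])
    db.executemany("INSERT INTO fof_halos VALUES (?,?)", [(7, 1), (7, 2)])

    # Example halo property computed in the database: a centre-of-mass
    # coordinate for halo 7 (equal particle masses, so a plain average).
    (cx,) = db.execute("""
        SELECT AVG(p.x) FROM fof_halos h
        JOIN particles p ON p.id = h.particle_id
        WHERE h.halo_id = 7""").fetchone()
    print(cx)  # 0.1
    ```

    The same join pattern extends to velocity dispersions or to linking the density grid through each particle's Peano-Hilbert index.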

  17. Hydrocarbon Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 115 Hydrocarbon Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  18. The AMMA database

    NASA Astrophysics Data System (ADS)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations. The outputs are processed in the same way as the satellite products. Before accessing the data, every user has to sign the AMMA data and publication policy. This chart only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and mention of the AMMA project in any publication, is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access the data of both data centres through a single web portal. This website is composed of several modules: - Registration: forms to register, and to read and sign the data use chart when a user visits for the first time; - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria like location, time, parameters... 
    The request can concern local, satellite and model data. - Documentation: a catalogue of all the available data and their metadata. These tools have been developed using standard and free languages and software: - a Linux system with an Apache web server and a Tomcat application server; - J2EE tools: JSF and Struts frameworks, Hibernate; - relational database management systems: PostgreSQL and MySQL; - an OpenLDAP directory. In order to facilitate access to the data by African scientists, the complete system has been mirrored at the AGRHYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through either of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.

  19. JDD, Inc. Database

    NASA Technical Reports Server (NTRS)

    Miller, David A., Jr.

    2004-01-01

    JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). The company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, the safety manager for JDD, Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and the contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance for all JDD, Inc. employees and handles all other issues relating to job safety (Environmental Protection Agency issues, workers' compensation, safety and health training). My summer assignment was not considered "groundbreaking research" like that of many other summer interns in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using Microsoft Excel to classify and categorize data pertaining to the numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data from the training certification courses (the training field index, and which employees were present at or absent from these courses). Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. 
    Throughout the last seven weeks, I have been compiling more data from day-to-day operations and adding the information to the database. It now consists of seven different categories of data (carpet cleaning, forms, NASA event schedules, training certifications, wall and vent cleaning, work schedules, and miscellaneous). I also did some field inspecting with the supervisors around the site and was present at all of the training certification courses that have been scheduled since June 2004. My future outlook for the JDD, Inc. database is to have all of the company's information, from future contract proposals and weekly inventory to employee timesheets, in this same database.

  20. Tautomerism in large databases

    PubMed Central

    Sitzmann, Markus; Ihlenfeldt, Wolf-Dietrich

    2010-01-01

    We have used the Chemical Structure DataBase (CSDB) of the NCI CADD Group, an aggregated collection of over 150 small-molecule databases totaling 103.5 million structure records, to conduct tautomerism analyses on one of the largest currently existing sets of real (i.e. not computer-generated) compounds. This analysis was carried out using calculable chemical structure identifiers developed by the NCI CADD Group, based on hash codes available in the chemoinformatics toolkit CACTVS and a newly developed scoring scheme to define a canonical tautomer for any encountered structure. CACTVS's tautomerism definition, a set of 21 transform rules expressed in SMIRKS line notation, was used, which takes a comprehensive stance as to the possible types of tautomeric interconversion included. Tautomerism was found to be possible for more than 2/3 of the unique structures in the CSDB. A total of 680 million tautomers were calculated from, and including, the original structure records. Tautomeric overlap within the same individual database (i.e. at least one other entry was present that was really only a different tautomeric representation of the same compound) was found at an average rate of 0.3% of the original structure records, with values as high as nearly 2% for some of the databases in CSDB. Projected onto the set of unique structures (by FICuS identifier), this still occurred in about 1.5% of the cases. Tautomeric overlap across all constituent databases in CSDB was found for nearly 10% of the records in the collection. PMID:20512400
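
    The overlap bookkeeping can be sketched as follows. The `canonical()` function here is a crude stand-in for the CACTVS hash-code and SMIRKS-based scoring machinery, which is far more involved; two records "overlap" when they share a canonical-tautomer key:

    ```python
    # Sketch of within-database tautomeric-overlap counting: group records
    # by a canonical-tautomer key and count how many are duplicates of an
    # earlier record with the same key.
    from collections import Counter

    def canonical(structure: str) -> str:
        # Stand-in: a real implementation would apply the 21 SMIRKS
        # transform rules and pick the top-scoring tautomer. Here we just
        # normalise case so that trivially equivalent strings collide.
        return structure.lower()

    def overlap_rate(records):
        """Fraction of records that duplicate an earlier record's
        canonical tautomer within the same database."""
        counts = Counter(canonical(r) for r in records)
        duplicates = sum(n - 1 for n in counts.values())
        return duplicates / len(records)

    db_records = ["C1=CC=CC=C1O", "c1=cc=cc=c1o", "CCO", "CCN"]
    print(overlap_rate(db_records))  # 0.25
    ```

    Running the same grouping over the union of all constituent databases, rather than one at a time, gives the cross-database overlap figure.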

  1. Building the GEM Faulted Earth database

    NASA Astrophysics Data System (ADS)

    Litchfield, N. J.; Berryman, K. R.; Christophersen, A.; Thomas, R. F.; Wyss, B.; Tarter, J.; Pagani, M.; Stein, R. S.; Costa, C. H.; Sieh, K. E.

    2011-12-01

    The GEM Faulted Earth project is aiming to build a global active fault and seismic source database with a common set of strategies, standards, and formats, to be placed in the public domain. Faulted Earth is one of five hazard global components of the Global Earthquake Model (GEM) project. A key early phase of the GEM Faulted Earth project is to build a database which is flexible enough to capture existing and variable (e.g., from slow interplate faults to fast subduction interfaces) global data, and yet is not too onerous for entering new data from areas where existing databases are not available. The purpose of this talk is to give an update on progress building the GEM Faulted Earth database. The database design conceptually has two layers, (1) active faults and folds, and (2) fault sources, and automated processes are being defined to generate fault sources. These include the calculation of moment magnitude using a user-selected magnitude-length or magnitude-area scaling relation, and the calculation of recurrence interval from displacement divided by slip rate, where displacement is calculated from moment and moment magnitude. The fault-based earthquake sources defined by the Faulted Earth project will then be rationalised with those defined by the other GEM global components. A web-based tool is being developed for entering individual faults and folds, and fault sources, and includes capture of additional information collected at individual sites, as well as descriptions of the data sources. GIS shapefiles of individual faults and folds, and fault sources will also be able to be uploaded. A data dictionary explaining the database design rationale, definitions of the attributes and formats, and a tool user guide is also being developed. Existing national databases will be uploaded outside of the fault compilation tool, through a process of mapping common attributes between the databases. 
Regional workshops are planned for compilation in areas where existing databases are not available, or require further population, and will include training on using the fault compilation tool. The tool is also envisaged as an important legacy of the GEM Faulted Earth project, to be available for use beyond the end of the 2 year project.
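The automated fault-source calculations described above (moment magnitude from a scaling relation; recurrence interval from displacement divided by slip rate, with displacement derived from the moment) can be sketched as follows. This is a minimal illustration, not the project's actual code: the scaling coefficients (in the style of a Wells and Coppersmith magnitude-length relation), the shear modulus, and all function names are assumptions for the example.

```python
import math

MU = 3.0e10  # assumed crustal shear modulus, Pa

def moment_magnitude_from_length(length_km, a=5.08, b=1.16):
    """Magnitude-length scaling, Mw = a + b*log10(L).
    Coefficients are illustrative; the database lets the user
    select the scaling relation."""
    return a + b * math.log10(length_km)

def seismic_moment(mw):
    """Hanks-Kanamori relation: log10(M0) = 1.5*Mw + 9.05, M0 in N*m."""
    return 10 ** (1.5 * mw + 9.05)

def recurrence_interval(mw, area_km2, slip_rate_mm_per_yr):
    """Single-event displacement from moment and rupture area
    (D = M0 / (mu * A)), divided by slip rate, gives a mean
    recurrence interval in years."""
    area_m2 = area_km2 * 1.0e6
    displacement_m = seismic_moment(mw) / (MU * area_m2)
    slip_rate_m_per_yr = slip_rate_mm_per_yr / 1000.0
    return displacement_m / slip_rate_m_per_yr

# Example: a 100 km-long, 15 km-deep fault slipping at 5 mm/yr
mw = moment_magnitude_from_length(100.0)
print(round(mw, 2))                                   # -> 7.4
print(round(recurrence_interval(mw, 100.0 * 15.0, 5.0)))  # -> 628
```

The chain mirrors the abstract's description: scaling relation to magnitude, magnitude to moment, moment to displacement, displacement over slip rate to recurrence.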

  2. JICST Factual DatabaseJICST Chemical Substance Safety Regulation Database

    NASA Astrophysics Data System (ADS)

    Abe, Atsushi; Sohma, Tohru

    The JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST has modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service-Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, its data items, files, and search commands. An example of an online session is presented.

  3. Subject Retrieval from Full-Text Databases in the Humanities

    ERIC Educational Resources Information Center

    East, John W.

    2007-01-01

    This paper examines the problems involved in subject retrieval from full-text databases of secondary materials in the humanities. Ten such databases were studied and their search functionality evaluated, focusing on factors such as Boolean operators, document surrogates, limiting by subject area, proximity operators, phrase searching, wildcards,…

  4. Evaluation of Database Coverage: A Comparison of Two Methodologies.

    ERIC Educational Resources Information Center

    Tenopir, Carol

    1982-01-01

    Describes experiment which compared two techniques used for evaluating and comparing database coverage of a subject area, e.g., "bibliography" and "subject profile." Differences in time, cost, and results achieved are compared by applying techniques to field of volcanology using two databases, Geological Reference File and GeoArchive. Twenty…

  5. NASA aerospace database subject scope: An overview

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Outlined here is the subject scope of the NASA Aerospace Database, a publicly available subset of the NASA Scientific and Technical Information (STI) Database. Topics of interest to NASA are outlined and placed within the framework of the following broad aerospace subject categories: aeronautics, astronautics, chemistry and materials, engineering, geosciences, life sciences, mathematical and computer sciences, physics, social sciences, space sciences, and general. A brief discussion of the subject scope is given for each broad area, followed by a similar explanation of each of its narrower subject fields. The subject category code is listed for each entry.

  6. ITER solid breeder blanket materials database

    SciTech Connect

    Billone, M.C.; Dienst, W.; Flament, T.; Lorenzetto, P.; Noda, K.; Roux, N.

    1993-11-01

    The databases for solid breeder ceramics (Li{sub 2}O, Li{sub 4}SiO{sub 4}, Li{sub 2}ZrO{sub 3} and LiAlO{sub 2}) and beryllium multiplier material are critically reviewed and evaluated. Emphasis is placed on physical, thermal, mechanical, chemical stability/compatibility, tritium, and radiation stability properties which are needed to assess the performance of these materials in a fusion reactor environment. Correlations are selected for design analysis and compared to the database. Areas for future research and development in blanket materials technology are highlighted and prioritized.

  7. Open geochemical database

    NASA Astrophysics Data System (ADS)

    Zhilin, Denis; Ilyin, Vladimir; Bashev, Anton

    2010-05-01

    We regard "geochemical data" as data on chemical parameters of the environment, linked with the geographical position of the corresponding point. The rapid development of the global positioning system (GPS) and of measuring instruments allows fast collection of huge amounts of geochemical data. Presently these data are published in scientific journals in text format, which hampers searching for information about particular places and meta-analysis of data collected by different researchers. Part of the information is never published. To make the data available and easy to find, it seems reasonable to build an open database of geochemical information, accessible via the Internet. It also seems reasonable to link the data with maps or space images, for example from the GoogleEarth service. For this purpose an open geochemical database is being developed (http://maps.sch192.ru). Any user, after registration, can upload geochemical data (position, type of parameter, and value of the parameter) and edit them. Every user (including unregistered ones) can (a) extract the values of parameters fulfilling desired conditions and (b) see the points, linked to the GoogleEarth space image, colored according to the value of a selected parameter. They can then treat the extracted values any way they like. There are the following data types in the database: authors, points, seasons, and parameters. An author is a person who publishes the data. Every author can declare his own profile. A point is characterized by its geographical position and the type of the object (i.e., river, lake, etc.). Values of parameters are linked to a point, an author, and the season when they were obtained. A user can choose a parameter to place on the GoogleEarth space image and a scale to color the points on the image according to the value of that parameter.
Currently (December 2009) the database is under construction, but several functions (uploading data on pH and electrical conductivity and placing colored points onto the GoogleEarth space image) are already available. We hope that the open database will help in exchanging geochemical information, and we call on everybody to share their geochemical data. We also ask for feedback on the structure, interface, and operation of the database.
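The data model described above, with values tied to a point, an author, and a season, plus the "extract values fulfilling desired conditions" query, can be sketched in a few lines. All field names and sample records here are hypothetical, not the database's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """One geochemical observation: a parameter value tied to a
    point (position + object type), an author, and a season."""
    lat: float
    lon: float
    object_type: str   # e.g. "river", "lake"
    author: str
    season: str
    parameter: str     # e.g. "pH", "conductivity"
    value: float

def select(records, parameter, predicate):
    """Extract the values of one parameter fulfilling a desired
    condition, mirroring the database's extraction query."""
    return [r for r in records
            if r.parameter == parameter and predicate(r.value)]

data = [
    Measurement(55.75, 37.62, "river", "ivanov", "2009-summer", "pH", 7.8),
    Measurement(55.76, 37.60, "lake",  "ivanov", "2009-summer", "pH", 6.4),
    Measurement(55.75, 37.62, "river", "petrov", "2009-autumn", "conductivity", 420.0),
]

acidic = select(data, "pH", lambda v: v < 7.0)
print([(m.lat, m.lon, m.value) for m in acidic])  # points with pH below 7
```

The extracted points could then be colored by value and placed on a map, as the database does with GoogleEarth imagery.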

  8. Regulatory administrative databases in FDA's Center for Biologics Evaluation and Research: convergence toward a unified database.

    PubMed

    Smith, Jeffrey K

    2013-04-01

    Regulatory administrative database systems within the Food and Drug Administration's (FDA) Center for Biologics Evaluation and Research (CBER) are essential to supporting its core mission as a regulatory agency. Such systems are used within FDA to manage information and processes surrounding the processing, review, and tracking of investigational and marketed product submissions. This is an area of increasing interest in the pharmaceutical industry and has been a topic at trade association conferences (Buckley 2012). Such databases in CBER are complex, not because of the type or relevance of the data to any particular scientific discipline, but because of the variety of regulatory submission types and processes the systems support using the data. Commonalities among different data domains of CBER's regulatory administrative databases are discussed. These commonalities have evolved enough to constitute real database convergence and provide a valuable asset for business process intelligence. Balancing review workload across staff, exploring areas of risk in review capacity, process improvement, and presenting a clear and comprehensive landscape of review obligations are just some of the opportunities of such intelligence. This convergence has occurred in the presence of the usual forces that tend to drive information technology (IT) systems development toward separate stovepipes and data silos. CBER has achieved a significant level of convergence through a gradual process, using a clear goal, agreed-upon development practices, and transparency of database objects, rather than through a single, discrete project or IT vendor solution. This approach offers a path forward for FDA systems toward a unified database. PMID:23269527

  9. The EMBL Nucleotide Sequence Database.

    PubMed Central

    Stoesser, G; Sterk, P; Tuli, M A; Stoehr, P J; Cameron, G N

    1997-01-01

    The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences directly submitted from researchers and genome sequencing groups and collected from the scientific literature and patent applications. In collaboration with DDBJ and GenBank the database is produced, maintained and distributed at the European Bioinformatics Institute (EBI) and constitutes Europe's primary nucleotide sequence resource. Database releases are produced quarterly and are distributed on CD-ROM. EBI's network services allow access to the most up-to-date data collection via Internet and World Wide Web interface, providing database searching and sequence similarity facilities plus access to a large number of additional databases. PMID:9016493

  10. NATIVE HEALTH DATABASES: NATIVE HEALTH RESEARCH DATABASE (NHRD)

    EPA Science Inventory

    The Native Health Databases contain bibliographic information and abstracts of health-related articles, reports, surveys, and other resource documents pertaining to the health and health care of American Indians, Alaska Natives, and Canadian First Nations. The databases provide i...

  11. NATIVE HEALTH DATABASES: NATIVE HEALTH HISTORY DATABASE (NHHD)

    EPA Science Inventory

    The Native Health Databases contain bibliographic information and abstracts of health-related articles, reports, surveys, and other resource documents pertaining to the health and health care of American Indians, Alaska Natives, and Canadian First Nations. The databases provide i...

  12. Molecular interaction databases.

    PubMed

    Orchard, Sandra

    2012-05-01

    Molecular interaction databases are playing an ever more important role in our understanding of the biology of the cell. An increasing number of resources exist to provide these data, and many of these have adopted the controlled vocabularies and agreed-upon standardised data formats produced by the Molecular Interaction workgroup of the Human Proteome Organization Proteomics Standards Initiative (HUPO PSI-MI). Use of these standards allows each resource to establish a PSI Common QUery InterfaCe (PSICQUIC) service, making data from multiple resources available to the user in response to a single query. This cooperation between databases has been taken a stage further with the establishment of the International Molecular Exchange (IMEx) consortium, which aims to maximise the curation power of numerous data resources and provide the user with a non-redundant, consistently annotated set of interaction data. PMID:22611057
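The non-redundant merging that IMEx aims for can be illustrated with a toy example: the same interaction reported by two services collapses to one record that remembers both sources. The record layout, field names, and accession numbers here are assumptions for illustration, not the actual PSI-MI format:

```python
def nonredundant(interactions):
    """Merge interaction records from several databases into a
    non-redundant set, keyed by the unordered interactor pair
    (a simplified stand-in for IMEx-style redundancy handling)."""
    merged = {}
    for rec in interactions:
        key = frozenset((rec["a"], rec["b"]))  # A-B equals B-A
        merged.setdefault(key, {"a": rec["a"], "b": rec["b"], "sources": set()})
        merged[key]["sources"].add(rec["source"])
    return list(merged.values())

# The same interaction reported by two PSICQUIC-style services:
hits = [
    {"a": "P04637", "b": "Q00987", "source": "IntAct"},
    {"a": "Q00987", "b": "P04637", "source": "MINT"},
    {"a": "P04637", "b": "P38936", "source": "IntAct"},
]
merged = nonredundant(hits)
print(len(merged))  # -> 2 distinct interactions
```

In practice, redundancy detection in IMEx also involves evidence-level curation, which this sketch deliberately omits.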

  13. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  14. Facial plastic surgery database.

    PubMed

    Mendelsohn, M; Conrad, K

    1994-02-01

    Every facial plastic surgeon accumulates a vast library of professional slides and photographs that document his work. Manual cataloguing of the clinical and operative documentation is time consuming and provides limited analysis capabilities. The facial plastic surgery database is a state-of-the-art computer programme that allows the surgeon to sort and locate slides and photographs. Designed for the computer novice, it utilises a simple coding system to permit rapid data input. The codes can be tailored to allow for new procedures or alternative practice styles. There are sophisticated searching routines to quickly find slides and photographs based on any combination of patients and operative criteria. The database also includes an online colour atlas and workspace for recording of presentations. There are automated routines to analyse patients' clinical features, operative trends, and surgical results. Ultimately, examination of this data can be used to facilitate peer review, research, and self-education. PMID:8170012

  15. Observational Mishaps - a Database

    NASA Astrophysics Data System (ADS)

    von Braun, K.; Chiboucas, K.; Hurley-Keller, D.

    1999-05-01

    We present a World-Wide-Web-accessible database of astronomical images that suffer from a variety of observational problems. These problems range from common phenomena, such as dust grains on filters and/or the dewar window, to more exotic cases such as deflated support airbags underneath the primary mirror. The purpose of this database is to enable astronomers at telescopes to save telescope time by discovering the nature of the trouble they might be experiencing with the help of this online catalog. Every observational mishap contained in this collection is presented in the form of a GIF image, a brief explanation of the problem, and, to the extent possible, a suggestion of what might be done to solve the problem and improve the image quality.

  16. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  17. Real Time Baseball Database

    NASA Astrophysics Data System (ADS)

    Fukue, Yasuhiro

    The author describes the system outline, features, and operations of the "Nikkan Sports Realtime Baseball Database," which was developed and is operated by Nikkan Sports Shimbun, K.K. The system enables numerical data from professional baseball games to be entered as the games proceed and updates the data in real time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and to general users through NTT Dial Q2 and other services.

  18. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  19. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  20. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-01-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  1. Curcumin Resource Database

    PubMed Central

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, Curcuma longa being its principal producer. In addition, a class of promising curcumin analogs, aptly named curcuminoids, has been generated in laboratories and is showing huge potential in the fields of medicine, food technology, etc. The lack of a universal source of data on curcumin as well as curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository for access to all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents, and 176 varieties of C. longa obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information that is hyperlinked to external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs. Database URL: http://www.crdb.in PMID:26220923

  2. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M. (Great Falls, VA)

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  3. The Cambridge Structural Database.

    PubMed

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719

  4. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  5. The Cambridge Structural Database

    PubMed Central

    Groom, Colin R.; Bruno, Ian J.; Lightfoot, Matthew P.; Ward, Suzanna C.

    2016-01-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal–organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719

  6. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-04-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.

  7. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission s lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.
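The central State Analysis idea, that a state is a momentary condition of an evolving system and a model describes how it evolves under the influence of other states, can be sketched in miniature. This is a toy illustration under assumed names, not the actual JPL tool:

```python
class State:
    """A momentary condition of an evolving system."""
    def __init__(self, name, value):
        self.name, self.value = name, value

def step(states, models):
    """Advance each state using its model. A model reads a snapshot
    of all states, capturing 'a state evolves and is affected by
    other states'."""
    snapshot = {s.name: s.value for s in states.values()}
    for name, model in models.items():
        states[name].value = model(snapshot)
    return states

# Hypothetical example: room temperature governed by a thermostat.
states = {"temp": State("temp", 20.0), "heater": State("heater", 1.0)}
models = {
    "temp":   lambda s: s["temp"] + 0.5 * s["heater"],      # heater warms room
    "heater": lambda s: 0.0 if s["temp"] >= 21.0 else 1.0,  # thermostat cutoff
}
for _ in range(3):
    step(states, models)
print(round(states["temp"].value, 1))  # -> 21.5
```

In the real methodology, such models are captured alongside requirements and goal-based operational plans in the State Analysis Database, so that all three stay consistent.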

  8. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  9. Generalized Database Management System Support for Numeric Database Environments.

    ERIC Educational Resources Information Center

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and

  11. Statistical correlation between meteorological and rockfall databases

    NASA Astrophysics Data System (ADS)

    Delonca, A.; Gunzburger, Y.; Verdel, T.

    2014-08-01

    Rockfalls are major and essentially unpredictable sources of danger, particularly along transportation routes (roads and railways). Thus, the assessment of their probability of occurrence is a major challenge for risk management. From a qualitative perspective, it is known that rockfalls occur mainly during periods of rain, snowmelt, or freeze-thaw. Nevertheless, from a quantitative perspective, these generally assumed correlations between rockfalls and their possible meteorological triggering events are often difficult to identify because (i) rockfalls are too rare for the use of classical statistical analysis techniques and (ii) not all intensities of triggering factors have the same probability. In this study, we propose a new approach for investigating the correlation of rockfalls with rain, freezing periods, and strong temperature variations. This approach is tested on three French rockfall databases, the first of which exhibits a high frequency of rockfalls (approximately 950 events over 11 years), whereas the other two databases are more typical (approximately 140 events over 11 years). These databases come from (1) national highway RN1 on Réunion, (2) a railway in Burgundy, and (3) a railway in Auvergne. Whereas a basic correlation analysis is only able to highlight an already obvious correlation in the case of the "rich" database, the newly suggested method appears to detect correlations even in the "poor" databases. Indeed, the use of this method confirms the positive correlation between rainfall and rockfalls in the Réunion database. This method highlights a correlation between cumulative rainfall and rockfalls in Burgundy, and it detects a correlation between the daily minimum temperature and rockfalls in the Auvergne database. This new approach is easy to use and also serves to determine the conditional probability of rockfall according to a given meteorological factor.
The approach will help to optimize risk management in the studied areas based on their meteorological conditions.
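
    The conditional probability the authors estimate can be illustrated with a toy frequency count (the daily data and threshold below are hypothetical; the paper's actual method also accounts for the unequal probabilities of the triggering intensities):

```python
# Sketch of a conditional rockfall probability given a meteorological factor.
# Hypothetical daily records: (rainfall_mm, rockfall_occurred).
days = [
    (0.0, False), (12.5, True), (3.1, False), (25.0, True),
    (0.0, False), (8.4, False), (30.2, True), (1.0, False),
]

def p_rockfall_given_rain(days, threshold_mm):
    """P(rockfall | daily rainfall >= threshold), estimated by counting."""
    wet = [fell for rain, fell in days if rain >= threshold_mm]
    return sum(wet) / len(wet) if wet else 0.0

# Days with rainfall >= 10 mm: 12.5, 25.0, 30.2 -> all saw rockfalls.
print(p_rockfall_given_rain(days, 10.0))  # -> 1.0
print(p_rockfall_given_rain(days, 0.0))   # -> 0.375 (3 rockfalls in 8 days)
```

    Comparing such conditional probabilities across thresholds is what lets the method surface a correlation even when rockfall events are rare.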

  12. ThermoData Engine Database

    National Institute of Standards and Technology Data Gateway

    SRD 103 NIST ThermoData Engine Database (PC database for purchase)   ThermoData Engine is the first product fully implementing all major principles of the concept of dynamic data evaluation formulated at NIST/TRC.

  13. A Case for Database Filesystems

    SciTech Connect

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to those of traditional filesystems while retaining the advantages of the Oracle database.
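
    SecureFiles itself is Oracle-specific, but the database-filesystem pattern it implements, file bytes and scalar metadata living in one transactional, queryable store, can be illustrated in miniature with SQLite (the table layout and payload here are purely illustrative):

```python
import sqlite3

# One transactional store holds both the raw file bytes and queryable metadata.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE files (
    path TEXT PRIMARY KEY, size_bytes INTEGER, content BLOB)""")

payload = b"\x89PNG...raw instrument image..."  # stand-in for file content
con.execute("INSERT INTO files VALUES (?, ?, ?)",
            ("/shots/0001/optic.png", len(payload), payload))

# Files can now be selected like any other row, by metadata predicates.
(path, size) = con.execute(
    "SELECT path, size_bytes FROM files WHERE size_bytes > 10").fetchone()
print(path, size)
```

    The point of the pattern is that the unstructured content participates in the same transactions, backups, and queries as the scalar data.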

  14. High Temperature Superconducting Materials Database

    National Institute of Standards and Technology Data Gateway

    SRD 149 NIST High Temperature Superconducting Materials Database (Web, free access)   The NIST High Temperature Superconducting Materials Database (WebHTS) provides evaluated thermal, mechanical, and superconducting property data for oxides and other nonconventional superconductors.

  15. NEUSE RIVER WATER QUALITY DATABASE

    EPA Science Inventory

    The Neuse River water quality database is a Microsoft Access application that includes multiple data tables and some associated queries. The database was developed by Prof. Jim Bowen's research group.

  16. Design and implementation of secure medical database systems.

    PubMed

    Pangalos, G J

    1995-01-01

    Medical database security plays an important role in the overall security of medical information systems and networks. This is both because of the nature of this technology and its widespread use today. Database security not only involves fundamental ethical principles, but also essential prerequisites for effective medical care. The development of appropriate secure medical database design and implementation methodologies is an important research problem in the area and a necessary prerequisite for the successful development of such systems. The general framework and requirements for medical database security are given and a number of parameters of the secure medical database design and implementation problem are presented and discussed in this paper. A secure medical database development methodology is also presented which could help overcome some of the problems currently encountered. PMID:8882564

  17. SmallSat Database

    NASA Technical Reports Server (NTRS)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has an unrealized potential in both the private industry and in the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then Smart Phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites because multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One is that the smallSat database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. 
It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data required. When completed it will interface with the SCENIC environment to allow modeling of smallSats. The SmallSat Relational Database can also be integrated with the SCENIC simulation modeling system that is currently in development. The SmallSat Relational Database simulation will be of great significance in assisting the NASA SCaN group to understand the impact of the smallSats that have populated low Earth orbit. What I created and worked on during the 2015 summer session is the basis for a tool that will be of value to the NASA SCaN SCENIC Simulation Environment for years to come.
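
    The built-to-order idea described above amounts to assembling a satellite model from interchangeable hardware rows; a minimal relational sketch (the table and column names are hypothetical, not the actual SCaN schema):

```python
import sqlite3

# Hypothetical schema: a smallSat is assembled from interchangeable parts.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE smallsat (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE hardware (id INTEGER PRIMARY KEY, kind TEXT, model TEXT, mass_kg REAL);
CREATE TABLE config   (sat_id INTEGER REFERENCES smallsat(id),
                       hw_id  INTEGER REFERENCES hardware(id));
""")
con.execute("INSERT INTO smallsat VALUES (1, 'DemoSat')")
con.executemany("INSERT INTO hardware VALUES (?, ?, ?, ?)",
                [(1, 'radio', 'UHF-A', 0.3), (2, 'imager', 'Cam-1', 1.2)])
con.executemany("INSERT INTO config VALUES (?, ?)", [(1, 1), (1, 2)])

# Total mass of this built-to-order configuration.
(total,) = con.execute("""
    SELECT SUM(h.mass_kg) FROM config c JOIN hardware h ON h.id = c.hw_id
    WHERE c.sat_id = 1""").fetchone()
print(total)  # -> 1.5
```

    Swapping rows in the junction table yields a different modeled spacecraft without changing the schema, which is the appeal of the built-to-order design.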

  18. Corruption of genomic databases with anomalous sequence.

    PubMed Central

    Lamperti, E D; Kittelberger, J M; Smith, T F; Villa-Komaroff, L

    1992-01-01

    We describe evidence that DNA sequences from vectors used for cloning and sequencing have been incorporated accidentally into eukaryotic entries in the GenBank database. These incorporations were not restricted to one type of vector or to a single mechanism. Many minor instances may have been the result of simple editing errors, but some entries contained large blocks of vector sequence that had been incorporated by contamination or other accidents during cloning. Some cases involved unusual rearrangements and areas of vector distant from the normal insertion sites. Matches to vector were found in 0.23% of 20,000 sequences analyzed in GenBank Release 63. Although the possibility of anomalous sequence incorporation has been recognized since the inception of GenBank and should be easy to avoid, recent evidence suggests that this problem is increasing more quickly than the database itself. The presence of anomalous sequence may have serious consequences for the interpretation and use of database entries, and will have an impact on issues of database management. The incorporated vector fragments described here may also be useful for a crude estimate of the fidelity of sequence information in the database. In alignments with well-defined ends, the matching sequences showed 96.8% identity to vector; when poorer matches with arbitrary limits were included, the aggregate identity to vector sequence was 94.8%. PMID:1614861
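
    A crude version of such a vector screen can be sketched as an exact k-mer match; production screens use local alignment against a vector library (e.g., BLAST), and the sequences below are toy examples:

```python
# Naive screen for vector contamination: flag database entries that share a
# long exact substring (k-mer) with a known cloning-vector sequence.
def shares_vector_fragment(entry, vector, k=12):
    """True if `entry` contains any k-mer that also occurs in `vector`."""
    kmers = {vector[i:i + k] for i in range(len(vector) - k + 1)}
    return any(entry[i:i + k] in kmers for i in range(len(entry) - k + 1))

vector  = "GAATTCGAGCTCGGTACCCGGG"        # toy polylinker-like fragment
clean   = "ATGGCCATTGTAATGGGCCGC"
tainted = "ATG" + vector + "TAAGGGCAT"    # entry with incorporated vector

print(shares_vector_fragment(clean, vector))    # -> False
print(shares_vector_fragment(tainted, vector))  # -> True
```

    Allowing mismatches, as the authors' ~95-97% identity figures require, is what pushes real screens from exact matching to alignment.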

  19. GMDD: a database of GMO detection methods

    PubMed Central

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-01-01

    Background Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information for the harmonization and standardization of GMO analysis methods at the global level is needed. Results The GMO Detection method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, which have been grouped by different strategies (screen-, gene-, construct-, and event-specific), and it also provides a user-friendly search service over the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which will facilitate PCR primer and probe design. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to this database, and newly submitted information will be released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier. PMID:18522755

  20. Hydrogen Leak Detection Sensor Database

    NASA Technical Reports Server (NTRS)

    Baker, Barton D.

    2010-01-01

    This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.

  1. A Forest Vegetation Database for Western Oregon

    USGS Publications Warehouse

    Busing, Richard T.

    2004-01-01

    Data on forest vegetation in western Oregon were assembled for 2323 ecological survey plots. All data were from fixed-radius plots with the standardized design of the Current Vegetation Survey (CVS) initiated in the early 1990s. For each site, the database includes: 1) live tree density and basal area of common tree species; 2) total live tree density, basal area, estimated biomass, and estimated leaf area; 3) age of the oldest overstory tree examined; 4) geographic coordinates; 5) elevation; 6) interpolated climate variables; and 7) other site variables. The data are ideal for ecoregional analyses of existing vegetation.

  2. Scientific and Technical Document Database

    National Institute of Standards and Technology Data Gateway

    NIST Scientific and Technical Document Database (PC database for purchase)   The images in NIST Special Database 20 contain a very rich set of graphic elements from scientific and technical documents, such as graphs, tables, equations, two column text, maps, pictures, footnotes, annotations, and arrays of such elements.

  3. Microbial Properties Database Editor Tutorial

    EPA Science Inventory

    A Microbial Properties Database Editor (MPDBE) has been developed to help consolidate microbial-relevant data to populate a microbial database and support a database editor by which an authorized user can modify physico-microbial properties related to microbial indicators and pat...

  4. DOE technology information management system database study report

    SciTech Connect

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  5. High-integrity databases for helicopter operations

    NASA Astrophysics Data System (ADS)

    Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg

    2009-05-01

    Helicopter Emergency Medical Service (HEMS) missions impose a high workload on pilots due to short preparation time, operations in low level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, and the Universities of Darmstadt and Munich, funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system, however, are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high quality databases.

  6. EMU Lessons Learned Database

    NASA Technical Reports Server (NTRS)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool which will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs) which have information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but someone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is to not only compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. 
FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as the Pistol Grip Tool (PGT) and the Battery Charger Module (BCM), while adding any recently closed EMU-related FIARs.
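
    The pivot-chart style analysis described above amounts to grouping FIAR records by a code field and counting; a minimal sketch (the record layout and failure codes below are hypothetical, not actual PRACA codes):

```python
from collections import Counter

# Hypothetical FIAR records with an illustrative failure-code field.
fiars = [
    {"id": "FIAR-001", "code": "GLOVE-ABRASION"},
    {"id": "FIAR-002", "code": "BEARING-WEAR"},
    {"id": "FIAR-003", "code": "GLOVE-ABRASION"},
]

# Tally failures by code, the kind of summary a pivot chart presents.
by_code = Counter(r["code"] for r in fiars)
print(by_code.most_common(1))  # -> [('GLOVE-ABRASION', 2)]
```

    The same aggregation translated to SQL or HTML reports is what the web-based version of the LLD would expose to users unfamiliar with the code system.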

  7. The HITRAN molecular database

    NASA Astrophysics Data System (ADS)

    Rothman, Laurence S.; Gordon, Iouli E.

    2013-07-01

    This presentation provides an overview of the updates and extensions of the HITRAN molecular spectroscopic absorption database. The new significantly improved parameters for the major atmospheric absorbers (for instance H2O and O2) have been given particular attention. For most of the molecules, spectral parameters have been revised and updated. The new edition also features many new spectral bands and new isotopic species. The cross-section part of the database has also been significantly extended by adding new species as well as more temperature-pressure sets for existing species. In addition, HITRAN now provides the collision-induced absorption parameters, including those relevant to the terrestrial atmosphere: N2-N2, N2-O2, O2-O2. The study of the spectroscopic signatures of planetary atmospheres is a powerful tool for extracting detailed information concerning their constituents and thermodynamic properties. The HITRAN molecular spectroscopic database has traditionally served researchers involved with terrestrial atmospheric problems, such as remote sensing of constituents in the atmosphere, pollution monitoring at the surface, and numerous environmental issues. In collaboration with laboratories across the globe, an extensive effort is currently underway to extend the HITRAN database to have capabilities for investigating a variety of planetary atmospheres. Spectroscopic parameters for gases and spectral bands of molecules that are germane to the studies of planetary atmospheres are being assembled. These parameters include the types of data that have already been considered for transmission and radiance algorithms, such as line position, intensity, broadening coefficients, lower-state energies, and temperature dependence values. A number of new molecules, including H2, CS, C4H2, HC3N, and C2N2, are being incorporated into HITRAN, while several other molecules are pending. 
For some of the molecules, additional parameters, beyond those currently considered for the terrestrial atmosphere, are being archived. Examples are collision-broadened half widths due to various foreign partners, collision-induced absorption, and temperature dependence factors. Collision-induced absorption data for H2-H2, H2-N2, H2-He, H2-CH4, CH4-CH4, O2-CO2 and N2-CH4 were recently released. Partition sums that are necessary for applications at a wide range of temperatures have also been calculated for a variety of molecules of planetary interest, and form an integral part of the HITRAN compilation.
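
    The role of the partition sums and lower-state energies mentioned above can be made concrete with the standard relation used to scale a HITRAN line intensity from the 296 K reference temperature to another temperature; the numeric inputs below are illustrative, not tabulated HITRAN values:

```python
import math

C2 = 1.4387769  # second radiation constant hc/k, in cm*K

def line_intensity(s_ref, nu0, e_lower, q_ref, q_t, t, t_ref=296.0):
    """Scale a reference line intensity s_ref (at t_ref) to temperature t.

    nu0: transition wavenumber (cm^-1); e_lower: lower-state energy (cm^-1);
    q_ref, q_t: total internal partition sums at t_ref and t.
    """
    boltz = math.exp(-C2 * e_lower / t) / math.exp(-C2 * e_lower / t_ref)
    stim  = (1 - math.exp(-C2 * nu0 / t)) / (1 - math.exp(-C2 * nu0 / t_ref))
    return s_ref * (q_ref / q_t) * boltz * stim

# Sanity check: at the reference temperature the scaling is the identity.
print(line_intensity(1.0e-20, 1500.0, 100.0, 174.6, 174.6, 296.0))  # -> 1e-20
```

    Extending the tabulated partition sums to the wide temperature ranges of planetary atmospheres is precisely why they form an integral part of the compilation.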

  8. The PEDANT genome database.

    PubMed

    Frishman, Dmitrij; Mokrejs, Martin; Kosykh, Denis; Kastenmüller, Gabi; Kolesov, Grigory; Zubrzycki, Igor; Gruber, Christian; Geier, Birgitta; Kaps, Andreas; Albermann, Kaj; Volz, Andreas; Wagner, Christian; Fellenberg, Matthias; Heumann, Klaus; Mewes, Hans-Werner

    2003-01-01

    The PEDANT genome database (http://pedant.gsf.de) provides exhaustive automatic analysis of genomic sequences by a large variety of established bioinformatics tools through a comprehensive Web-based user interface. One hundred and seventy seven completely sequenced and unfinished genomes have been processed so far, including large eukaryotic genomes (mouse, human) published recently. In this contribution, we describe the current status of the PEDANT database and novel analytical features added to the PEDANT server in 2002. Those include: (i) integration with the BioRS data retrieval system which allows fast text queries, (ii) pre-computed sequence clusters in each complete genome, (iii) a comprehensive set of tools for genome comparison, including genome comparison tables and protein function prediction based on genomic context, and (iv) computation and visualization of protein-protein interaction (PPI) networks based on experimental data. The availability of functional and structural predictions for 650 000 genomic proteins in well organized form makes PEDANT a useful resource for both functional and structural genomics. PMID:12519983

  10. The Molecule Pages database

    PubMed Central

    Saunders, Brian; Lyon, Stephen; Day, Matthew; Riley, Brenda; Chenette, Emily; Subramaniam, Shankar

    2008-01-01

    The UCSD-Nature Signaling Gateway Molecule Pages (http://www.signaling-gateway.org/molecule) provides essential information on more than 3800 mammalian proteins involved in cellular signaling. The Molecule Pages contain expert-authored and peer-reviewed information based on the published literature, complemented by regularly updated information derived from public data source references and sequence analysis. The expert-authored data includes both a full-text review about the molecule, with citations, and highly structured data for bioinformatics interrogation, including information on protein interactions and states, transitions between states and protein function. The expert-authored pages are anonymously peer reviewed by the Nature Publishing Group. The Molecule Pages data is present in an object-relational database format and is freely accessible to the authors, the reviewers and the public from a web browser that serves as a presentation layer. The Molecule Pages are supported by several applications that along with the database and the interfaces form a multi-tier architecture. The Molecule Pages and the Signaling Gateway are routinely accessed by a very large research community. PMID:17965093

  11. PHENIX RPC Production Database

    NASA Astrophysics Data System (ADS)

    Jones, Timothy

    2008-10-01

    The Pioneering High Energy Nuclear Interaction eXperiment (PHENIX) is located on the Relativistic Heavy Ion Collider (RHIC) ring at Brookhaven National Laboratory. A primary physics goal that can be studied by PHENIX is the origin of the proton spin. One of the types of rare events looked for in the muon arms at PHENIX is single high transverse momentum muons, which tend to result from the decay of a W boson. Resistive Plate Chambers (RPCs) will be used as a level 1 trigger to select these events from a large background of low transverse momentum muons. As these RPCs are assembled it is necessary to keep track of the individual parts of each RPC as well as data from various quality assurance tests in a way that will allow the information to be easily accessible for years to come as the RPCs are being used. This is done through the use of a database and web page interface that can be used to enter data about the RPCs or to look up information from tests. I will present how we keep track of the RPCs, their parts, and data from quality assurance tests as they are being assembled, as well as how we can retrieve this data after it has been stored in the database.

  12. The ITPA disruption database

    NASA Astrophysics Data System (ADS)

    Eidietis, N. W.; Gerhardt, S. P.; Granetz, R. S.; Kawano, Y.; Lehnen, M.; Lister, J. B.; Pautasso, G.; Riccardo, V.; Tanna, R. L.; Thornton, A. J.; ITPA Disruption Database Participants, The

    2015-06-01

    A multi-device database of disruption characteristics has been developed under the auspices of the International Tokamak Physics Activity magneto-hydrodynamics topical group. The purpose of this ITPA disruption database (IDDB) is to find the commonalities between the disruption and disruption mitigation characteristics in a wide variety of tokamaks in order to elucidate the physics underlying tokamak disruptions and to extrapolate toward much larger devices, such as ITER and future burning plasma devices. In contrast to previous smaller disruption data collation efforts, the IDDB aims to provide significant context for each shot provided, allowing exploration of a wide array of relationships between pre-disruption and disruption parameters. The IDDB presently includes contributions from nine tokamaks, including both conventional aspect ratio and spherical tokamaks. An initial parametric analysis of the available data is presented. This analysis includes current quench rates, halo current fraction and peaking, and the effectiveness of massive impurity injection. The IDDB is publicly available, with instruction for access provided herein.

  13. World Ocean Database (Invited)

    NASA Astrophysics Data System (ADS)

    Levitus, S.

    2009-12-01

    The World Ocean Database (WOD) is the largest collection of ocean profile data available internationally without restriction. WOD is produced by the NOAA National Oceanographic Data Center and its co-located World Data Center for Oceanography. The database contains data for temperature, salinity, oxygen, nutrients, and tracers, among other variables. The WOD can be considered a collection of CDRs that are all in one common format, with systematic quality control applied and with all available metadata and documentation made available online and on DVD to make WOD useful to users. The amount of data in the WOD has grown substantially since WOD was started in 1994, for two reasons. One is the increase of data available from in situ observing systems such as moored buoys, from ship-of-opportunity programs collecting XBT data, from the Argo profiling float project, and from other observing system projects. The second is the success of the Global Oceanographic Data Archaeology and Rescue (GODAR) project sponsored by the Intergovernmental Oceanographic Commission. For example, this project located and rescued approximately 3 million temperature profiles for the pre-1991 period, which have been added to WOD. The data in WOD have been used to make estimates of the interannual to interdecadal variability of temperature, salinity, and oxygen for the past 50 years. This talk will describe WOD and results from the GODAR project.

  14. Asbestos Exposure Assessment Database

    NASA Technical Reports Server (NTRS)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data include Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phase Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of the industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data have been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.
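
    The kind of demonstration such a database supports can be sketched as an 8-hour time-weighted average (TWA) from personal samples, checked against the OSHA permissible exposure limit for asbestos (0.1 fibers/cc as an 8-hour TWA); the sample values below are hypothetical:

```python
# Hypothetical personal air samples: (concentration in fibers/cc, minutes).
samples = [(0.04, 120), (0.02, 240), (0.00, 120)]

def eight_hour_twa(samples):
    """TWA over an 8-hour (480-minute) shift; unsampled time counts as zero."""
    return sum(conc * minutes for conc, minutes in samples) / 480.0

OSHA_PEL = 0.1  # fibers/cc, 8-hour TWA (OSHA asbestos standard)
twa = eight_hour_twa(samples)
print(round(twa, 4), twa <= OSHA_PEL)  # -> 0.02 True
```

    Storing durations and PCM/TEM results per sample is what makes this computation, and the comparison to the PEL, a simple query over the database.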

  15. Curcumin Resource Database.

    PubMed

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, with Curcuma longa being its principal producer. Beyond this, a class of promising curcumin analogs, aptly named curcuminoids, has been generated in laboratories and is showing huge potential in the fields of medicine, food technology, etc. The lack of a universal source of data on curcumin and curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository for all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents, and 176 varieties of C. longa, obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information that is hyperlinked with external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs. PMID:26220923

  16. Instruction manual for the Wahoo computerized database

    SciTech Connect

    Lasota, D.; Watts, K.

    1995-05-01

    As part of our research on the Lisburne Group, we have developed a powerful relational computerized database to accommodate the huge amounts of data generated by our multi-disciplinary research project. The Wahoo database has data files on petrographic data, conodont analyses, locality and sample data, well logs, and diagenetic (cement) studies. Chapter 5 is essentially an instruction manual that summarizes some of the unique attributes and operating procedures of the Wahoo database. The main purpose of a database is to allow users to manipulate their data and produce reports and graphs for presentation. We present a variety of data tables in appendices at the end of this report, each encapsulating a small part of the data contained in the Wahoo database. All the data are sorted and listed by map index number and stratigraphic position (depth). The Locality data table (Appendix A) lists the stratigraphic sections examined in our study. It gives names of study areas, stratigraphic units studied, locality information, and researchers. Most localities are keyed to a geologic map that shows the distribution of the Lisburne Group and the location of our sections in ANWR. Petrographic reports (Appendix B) are detailed summaries of data on the composition and texture of the Lisburne Group carbonates. The relative abundance of different carbonate grains (allochems) and carbonate texture are listed using symbols that portray data in a format similar to stratigraphic columns. This enables researchers to recognize trends in the evolution of the Lisburne carbonate platform and to check their paleoenvironmental interpretations in a stratigraphic context. Some of the figures in Chapter 1 were made using the Wahoo database.

  17. Oracle Database DBFS Hierarchical Storage Overview

    SciTech Connect

    Rivenes, A

    2011-07-25

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory creates large numbers of images during each shot cycle for the analysis of optics, target inspection and target diagnostics. These images must be readily accessible once they are created and available for the 30-year lifetime of the facility. The Livermore Computing Center (LC) runs a High Performance Storage System (HPSS) that is capable of storing NIF's estimated 1 petabyte of diagnostic images at a fraction of what it would cost NIF to operate its own automated tape library. With the Oracle 11g Release 2 database, it is now possible to create an application-transparent, hierarchical storage system using the LC's HPSS. Using the Oracle DBMS_LOB and DBMS_DBFS_HS packages, a SecureFile LOB can now be archived to storage outside of the database and accessed seamlessly through a DBFS 'link'. NIF has chosen to use this technology to implement a hierarchical store for its image-based SecureFile LOBs. Using a modified external store and DBFS links, files are written to and read from a disk 'staging area' using Oracle's backup utility. Database external procedure calls invoke OS-based scripts to manage the staging area and the transfer of the backup files between the staging area and the Lab's HPSS.

  18. Information Management Tools for Classrooms: Exploring Database Management Systems. Technical Report No. 28.

    ERIC Educational Resources Information Center

    Freeman, Carla; And Others

    In order to understand how the database software or online database functioned in the overall curricula, the use of database management (DBMs) systems was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…

  19. MetaBase—the wiki-database of biological databases

    PubMed Central

    Bolser, Dan M.; Chibon, Pierre-Yves; Palopoli, Nicolas; Gong, Sungsam; Jacob, Daniel; Angel, Victoria Dominguez Del; Swan, Dan; Bassi, Sebastian; González, Virginia; Suravajhala, Prashanth; Hwang, Seungwoo; Romano, Paolo; Edwards, Rob; Bishop, Bryan; Eargle, John; Shtatland, Timur; Provart, Nicholas J.; Clements, Dave; Renfro, Daniel P.; Bhak, Daeui; Bhak, Jong

    2012-01-01

    Biology is generating more data than ever. As a result, there is an ever increasing number of publicly available databases that analyse, integrate and summarize the available data, providing an invaluable resource for the biological community. As this trend continues, there is a pressing need to organize, catalogue and rate these resources, so that the information they contain can be most effectively exploited. MetaBase (MB) (http://MetaDatabase.Org) is a community-curated database containing more than 2000 commonly used biological databases. Each entry is structured using templates and can carry various user comments and annotations. Entries can be searched, listed, browsed or queried. The database was created using the same MediaWiki technology that powers Wikipedia, allowing users to contribute on many different levels. The initial release of MB was derived from the content of the 2007 Nucleic Acids Research (NAR) Database Issue. Since then, approximately 100 databases have been manually collected from the literature, and users have added information for over 240 databases. MB is synchronized annually with the static Molecular Biology Database Collection provided by NAR. To date, there have been 19 significant contributors to the project; each one is listed as an author here to highlight the community aspect of the project. PMID:22139927

  20. View generated database

    NASA Technical Reports Server (NTRS)

    Downward, James G.

    1992-01-01

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

  1. Mouse Phenome Database

    PubMed Central

    Grubb, Stephen C.; Bult, Carol J.; Bogue, Molly A.

    2014-01-01

    The Mouse Phenome Database (MPD; phenome.jax.org) was launched in 2001 as the data coordination center for the international Mouse Phenome Project. MPD integrates quantitative phenotype, gene expression and genotype data into a common annotated framework to facilitate query and analysis. MPD contains >3500 phenotype measurements or traits relevant to human health, including cancer, aging, cardiovascular disorders, obesity, infectious disease susceptibility, blood disorders, neurosensory disorders, drug addiction and toxicity. Since our 2012 NAR report, we have added >70 new data sets, including data from Collaborative Cross lines and Diversity Outbred mice. During this time we have completely revamped our homepage, improved search and navigational aspects of the MPD application, developed several web-enabled data analysis and visualization tools, annotated phenotype data to public ontologies, developed an ontology browser and released new single nucleotide polymorphism query functionality with much higher density coverage than before. Here, we summarize recent data acquisitions and describe our latest improvements. PMID:24243846

  2. The CHIANTI atomic database

    NASA Astrophysics Data System (ADS)

    Young, P. R.; Dere, K. P.; Landi, E.; Del Zanna, G.; Mason, H. E.

    2016-04-01

    The freely available CHIANTI atomic database was first released in 1996 and has had a huge impact on the analysis and modeling of emissions from astrophysical plasmas. It contains data and software for modeling optically thin atom and positive ion emission from low density (≲1013 cm-3) plasmas from x-ray to infrared wavelengths. A key feature is that the data are assessed and regularly updated, with version 8 released in 2015. Atomic data for modeling the emissivities of 246 ions and neutrals are contained in CHIANTI, together with data for deriving the ionization fractions of all elements up to zinc. The different types of atomic data are summarized here and their formats discussed. Statistics on the impact of CHIANTI to the astrophysical community are given and examples of the diverse range of applications are presented.

  3. Ribosomal Database Project II

    DOE Data Explorer

    The Ribosomal Database Project (RDP) provides ribosome-related data and services to the scientific community, including online data analysis and aligned and annotated bacterial small-subunit 16S rRNA sequences. As of March 2008, RDP Release 10 is available and currently (August 2009) contains 1,074,075 aligned 16S rRNA sequences. Data that can be downloaded include zipped GenBank and FASTA alignment files, a histogram (in Excel) of the number of RDP sequences spanning each base position, data in the Functional Gene Pipeline Repository, and various user-submitted data. The RDP-II website also provides numerous analysis tools. [From the RDP-II home page at http://rdp.cme.msu.edu/index.jsp]
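
The per-position histogram mentioned above can be recomputed from any aligned FASTA download. The sketch below is plain Python with toy sequences standing in for a real RDP alignment file; it counts non-gap residues at each alignment column.

```python
from collections import Counter
from io import StringIO

def parse_fasta(handle):
    """Yield (header, sequence) pairs from a FASTA stream."""
    header, chunks = None, []
    for line in handle:
        line = line.strip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(chunks)
            header, chunks = line[1:], []
        elif line:
            chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)

def coverage_histogram(alignment):
    """Count how many sequences carry a residue (not a gap) at each column."""
    counts = Counter()
    for _, seq in alignment:
        for i, ch in enumerate(seq):
            if ch not in "-.":  # '-' and '.' are common gap characters
                counts[i] += 1
    return [counts[i] for i in range(max(counts) + 1)]

# Toy aligned sequences standing in for a downloaded RDP alignment file
toy = StringIO(">a\nAC-G\n>b\nA--G\n>c\nACUG\n")
print(coverage_histogram(list(parse_fasta(toy))))  # [3, 2, 1, 3]
```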

  4. Database and Related Activities in Japan

    SciTech Connect

    Murakami, Izumi; Kato, Daiji; Kato, Masatoshi; Sakaue, Hiroyuki A.; Kato, Takako; Ding, Xiaobin; Morita, Shigeru; Kitajima, Masashi; Koike, Fumihiro; Nakamura, Nobuyuki; Sakamoto, Naoki; Sasaki, Akira; Skobelev, Igor; Ulantsev, Artemiy; Watanabe, Tetsuya; Yamamoto, Norimasa

    2011-05-11

    We have constructed and made available atomic and molecular (AM) numerical databases on collision processes such as electron-impact excitation and ionization, recombination and charge transfer of atoms and molecules relevant to plasma physics, fusion research, astrophysics, applied plasma science, and other related areas. The retrievable data are freely accessible via the internet. We also work on atomic data evaluation and on constructing collisional-radiative models for spectroscopic plasma diagnostics. Recently we have worked on Fe ions and W ions theoretically and experimentally. The atomic data and collisional-radiative models for these ions are examined and applied to laboratory plasmas. A visible M1 transition of the W²⁶⁺ ion is identified at 389.41 nm by EBIT experiments and theoretical calculations. We have small non-retrievable databases in addition to our main database. Recently we evaluated photo-absorption cross sections for 9 atoms and 23 molecules, and we present them as a new database. We established a new association, the "Forum of Atomic and Molecular Data and Their Applications", to exchange information among AM data producers, data providers and data users in Japan, and we hope this will help to encourage AM data activities in Japan.

  5. BGDB: a database of bivalent genes.

    PubMed

    Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua

    2013-01-01

    A bivalent gene is a gene marked with both the H3K4me3 and H3K27me3 epigenetic modifications in the same region, and such genes are proposed to play a pivotal role in pluripotency in embryonic stem (ES) cells. Identifying these bivalent genes and understanding their functions are important for further research on lineage specification and embryo development. So far, a large amount of genome-wide histone modification data has been generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repository or analysis tool for bivalent genes is currently available. In this work, we developed BGDB, a database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, manually collected from the scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of the BGDB database were implemented with PHP + MySQL + JavaScript and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/ PMID:23894186

  6. Generative engineering databases - Toward expert systems

    NASA Technical Reports Server (NTRS)

    Rasdorf, W. J.; Salley, G. C.

    1985-01-01

    Engineering data management, incorporating concepts of optimization with data representation, is receiving increasing attention as the amount and complexity of information necessary for performing engineering operations grows and the need to coordinate its representation and use increases. Research in this area promises advantages for a wide variety of engineering applications, particularly those which seek to use data in innovative ways in the engineering process. This paper presents a framework for a comprehensive, relational database management system that combines a knowledge base of design constraints with a database of engineering data items in order to achieve a 'generative database' - one which automatically generates new engineering design data according to the design constraints stored in the knowledge base. The representation requires a database that is able to store all of the data normally associated with engineering design and to accurately represent the interactions between constraints and the stored data while guaranteeing its integrity. The representation also requires a knowledge base that is able to store all the constraints imposed upon the engineering design process.
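
As a loose illustration of the 'generative database' idea described above (not the paper's actual relational design), the following sketch stores constraints alongside data and regenerates derived engineering values whenever inputs change. The beam-sizing constraint and all field names are invented for the example.

```python
class GenerativeStore:
    """Tiny illustration: a record store that applies stored constraints
    (here plain functions) to generate derived design values."""
    def __init__(self):
        self.data = {}
        self.constraints = {}   # derived field -> function of the data

    def add_constraint(self, field, fn):
        self.constraints[field] = fn

    def put(self, **values):
        self.data.update(values)
        # Re-apply every constraint so derived data stays consistent
        for field, fn in self.constraints.items():
            self.data[field] = fn(self.data)

store = GenerativeStore()
# Invented constraint: required cross-section area from load and allowable stress
store.add_constraint("area_mm2", lambda d: d["load_n"] / d["stress_n_mm2"])
store.put(load_n=50000.0, stress_n_mm2=200.0)
print(store.data["area_mm2"])  # 250.0
```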

  7. Inorganic Crystal Structure Database (ICSD)

    National Institute of Standards and Technology Data Gateway

    SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe(FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  8. Intrusion Detection in Database Systems

    NASA Astrophysics Data System (ADS)

    Javidi, Mohammad M.; Sohrabi, Mina; Rafsanjani, Marjan Kuchaki

    Data represent today a valuable asset for organizations and companies and must be protected. Ensuring the security and privacy of data assets is a crucial and very difficult problem in our modern networked world. Despite the necessity of protecting information stored in database systems (DBS), existing security models are insufficient to prevent misuse, especially insider abuse by legitimate users. One mechanism to safeguard the information in these databases is to use an intrusion detection system (IDS). The purpose of intrusion detection in database systems is to detect transactions that access data without permission. In this paper, several database intrusion detection approaches are evaluated.
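
One simple flavor of the misuse detection discussed above can be sketched as a profile check: transactions touching tables outside an account's authorized set are flagged. The profiles and account names below are invented for illustration; real systems typically learn such profiles from query logs.

```python
# Invented access profiles: account -> set of tables it may touch
AUTHORIZED = {
    "clerk":   {"orders", "customers"},
    "analyst": {"orders", "reports"},
}

def audit(transactions):
    """Return (user, offending tables) for transactions outside the profile."""
    alerts = []
    for user, tables in transactions:
        allowed = AUTHORIZED.get(user, set())
        if not set(tables) <= allowed:
            alerts.append((user, sorted(set(tables) - allowed)))
    return alerts

log = [
    ("clerk", ["orders"]),
    ("clerk", ["customers", "salaries"]),   # touches an unauthorized table
    ("intruder", ["salaries"]),             # unknown account
]
print(audit(log))
```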

  9. Relativistic quantum private database queries

    NASA Astrophysics Data System (ADS)

    Sun, Si-Jia; Yang, Yu-Guang; Zhang, Ming-Ou

    2015-04-01

    Recently, Jakobi et al. (Phys Rev A 83, 022301, 2011) suggested the first practical private database query protocol (J-protocol) based on the Scarani et al. (Phys Rev Lett 92, 057901, 2004) quantum key distribution protocol. Unfortunately, the J-protocol is just a cheat-sensitive private database query protocol. In this paper, we present an idealized relativistic quantum private database query protocol based on Minkowski causality and the properties of quantum information. Also, we prove that the protocol is secure in terms of the user security and the database security.

  10. Database usage and performance for the Fermilab Run II experiments

    SciTech Connect

    Bonham, D.; Box, D.; Gallas, E.; Guo, Y.; Jetton, R.; Kovich, S.; Kowalkowski, J.; Kumar, A.; Litvintsev, D.; Lueking, L.; Stanfield, N.; Trumbo, J.; Vittone-Wiersma, M.; White, S.P.; Wicklund, E.; Yasuda, T.; Maksimovic, P.; /Johns Hopkins U.

    2004-12-01

    The Run II experiments at Fermilab, CDF and D0, have extensive database needs covering many areas of their online and offline operations. Delivering data to users and processing farms worldwide has presented major challenges to both experiments. The range of applications employing databases includes calibration (conditions), trigger information, run configuration, run quality, luminosity, data management, and others. Oracle is the primary database product being used for these applications at Fermilab, and some of its advanced features have been employed, such as table partitioning and replication. There is also experience with open-source database products such as MySQL for secondary databases used, for example, in monitoring. Tools employed for monitoring operations and diagnosing problems are also described.

  11. Issues in object-oriented data-base schemas

    SciTech Connect

    Kim, H.J.

    1988-01-01

    The successful use of data-base management systems in data-processing applications has created a substantial amount of interest in applying data-base techniques to such areas as knowledge bases and artificial intelligence (AI), computer-aided design (CAD), and office information systems (OIS). The practical applications of object-oriented data bases, such as CAD, AI, and OIS require the ability to dynamically make a wide variety of changes to the data-base schema. This process is called schema evolution, for which the author establishes a consistent and complete framework. Based on his framework, the MCC ODBS group implemented a schema manager within the prototype object-oriented data-base system, ORION. On top of the schema manager of ORION, a graphical editor PSYCHO was implemented. A technique is presented that enables users to manipulate schema versions explicitly and maintain schema-evolution histories in object-oriented data-base environments.

  12. The German Landslide Database: A Tool to Analyze Infrastructure Exposure

    NASA Astrophysics Data System (ADS)

    Damm, Bodo; Klose, Martin

    2015-04-01

    The Federal Republic of Germany has long been among the few European countries that lack a national landslide database. Systematic collection and inventory of landslide data over broad geographic areas and for different types of critical infrastructure was thus widely exceptional until today. This has changed in recent years with the launch of a database initiative aimed at closing the data gap existing at the national level. The present contribution reports on this database project, which is focused on the development of a comprehensive pool of landslide data for systematic analysis of landslide hazard impacts in Germany. The major purpose of the database is to store and provide detailed scientific data on all types of landslides affecting critical infrastructure (transportation systems, industrial facilities, etc.) and urban areas. Over the last 15 years the database has evolved to cover large parts of Germany and offers data sets for more than 4,200 landslides with over 13,000 individual data files. Data collection is based on a bottom-up approach that involves in-depth archive work and acquisition of data through close collaboration with infrastructure agencies and municipal offices. This makes it possible to develop a database that stores geospatial landslide information and detailed data sets on landslide causes and impacts as well as hazard mitigation. The database is currently being migrated to a spatial database system in PostgreSQL/PostGIS. This contribution gives an overview of the database content and its application in landslide impact research. It deals with the underlying strategy of data collection and presents the types of data and their quality for performing damage statistics and analyses of infrastructure exposure. The contribution refers to different case studies and regional investigations in the German Central Uplands.
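
The layout described above, a main record per landslide with linked child feature tables, can be sketched relationally. The snippet uses SQLite purely for illustration (the actual system is PostgreSQL/PostGIS with geospatial columns), and all table and column names are hypothetical.

```python
import sqlite3

# Illustrative schema only: a "main point" table with a linked child table,
# mirroring the hierarchical organization described in the abstract.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE landslide (
    id INTEGER PRIMARY KEY,
    site_name TEXT,
    mechanism TEXT,
    volume_m3 REAL
);
CREATE TABLE observation (
    id INTEGER PRIMARY KEY,
    landslide_id INTEGER REFERENCES landslide(id),
    observed_on TEXT,
    note TEXT
);
""")
con.execute("INSERT INTO landslide VALUES (1, 'Example slope', 'rockslide', 2.5e5)")
con.execute("INSERT INTO observation VALUES (1, 1, '2014-05-01', 'new tension crack')")
rows = con.execute("""
    SELECT l.site_name, o.note
    FROM landslide l JOIN observation o ON o.landslide_id = l.id
""").fetchall()
print(rows)  # [('Example slope', 'new tension crack')]
```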

  13. The Protein Ensemble Database.

    PubMed

    Varadi, Mihaly; Tompa, Peter

    2015-01-01

    The scientific community's major conceptual notion of structural biology has recently shifted in emphasis from the classical structure-function paradigm due to the emergence of intrinsically disordered proteins (IDPs). As opposed to their folded cousins, these proteins are defined by the lack of a stable 3D fold and a high degree of inherent structural heterogeneity that is closely tied to their function. Due to their flexible nature, solution techniques such as small-angle X-ray scattering (SAXS), nuclear magnetic resonance (NMR) spectroscopy and fluorescence resonance energy transfer (FRET) are particularly well-suited for characterizing their biophysical properties. Computationally derived structural ensembles based on such experimental measurements provide models of the conformational sampling displayed by these proteins, and they may offer valuable insights into the functional consequences of inherent flexibility. The Protein Ensemble Database (http://pedb.vib.be) is the first openly accessible, manually curated online resource storing the ensemble models, protocols used during the calculation procedure, and underlying primary experimental data derived from SAXS and/or NMR measurements. By making this previously inaccessible data freely available to researchers, this novel resource is expected to promote the development of more advanced modelling methodologies, facilitate the design of standardized calculation protocols, and consequently lead to a better understanding of how function arises from the disordered state. PMID:26387108

  14. The Mars Observer database

    NASA Technical Reports Server (NTRS)

    Albee, Arden L.

    1988-01-01

    Mars Observer will study the surface, atmosphere, and climate of Mars in a systematic way over an entire Martian year. The observations of the surface will provide a database that will be invaluable to the planning of a future Mars sample return mission. Mars Observer is planned for a September 1992 launch from the Space Shuttle, using an upper stage. After the one-year transit, the spacecraft is injected into orbit about Mars and the orbit is adjusted to a near-circular, sun-synchronous, low-altitude polar orbit. During the Martian year in this mapping orbit the instruments gather both geoscience data and climatological data by repetitive global mapping. The scientific objectives of the mission are to: (1) determine the global elemental and mineralogical character of the surface material; (2) define globally the topography and gravitational field; (3) establish the nature of the magnetic field; (4) determine the time and space distribution, abundance, sources, and sinks of volatile material and dust over a seasonal cycle; and (5) explore the structure and aspects of the circulation of the atmosphere. The science investigations and instruments for Mars Observer have been chosen with these objectives in mind. These instruments, the principal investigator or team leader, and the objectives are discussed.

  15. Natural-language access to databases-theoretical/technical issues

    SciTech Connect

    Moore, R.C.

    1982-01-01

    Although there have been many experimental systems for natural-language access to databases, with some now going into actual use, many problems in this area remain to be solved. The author describes five problem areas that, in his view, are not adequately handled by any existing system.

  16. CERN database services for the LHC computing grid

    NASA Astrophysics Data System (ADS)

    Girone, M.

    2008-07-01

    Physics meta-data stored in relational databases play a crucial role in the Large Hadron Collider (LHC) experiments and also in the operation of the Worldwide LHC Computing Grid (WLCG) services. A large proportion of non-event data such as detector conditions, calibration, geometry and production bookkeeping relies heavily on databases. Also, the core Grid services that catalogue and distribute LHC data cannot operate without a reliable database infrastructure at CERN and elsewhere. The Physics Services and Support group at CERN provides database services for the physics community. With an installed base of several TB-sized database clusters, the service is designed to accommodate growth for data processing generated by the LHC experiments and LCG services. During the last year, the physics database services went through a major preparation phase for LHC start-up and are now fully based on Oracle clusters on Intel/Linux. Over 100 database server nodes are deployed today in some 15 clusters serving almost 2 million database sessions per week. This paper will detail the architecture currently deployed in production and the results achieved in the areas of high availability, consolidation and scalability. Service evolution plans for the LHC start-up will also be discussed.

  17. Database Quality: Label or Liable.

    ERIC Educational Resources Information Center

    Armstrong, C. J.

    The Centre for Information Quality Management (CIQM) was set up by the Library Association and UK (United Kingdom) Online User Group to act as a clearinghouse to which database users may report problems relating to the quality of any aspect of a database being used. CIQM acts as an intermediary between the user and information provider in…

  18. Content Independence in Multimedia Databases.

    ERIC Educational Resources Information Center

    de Vries, Arjen P.

    2001-01-01

    Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content

  19. SPINAL CORD INJURY (SCI) DATABASE

    EPA Science Inventory

    The National Spinal Cord Injury Database has been in existence since 1973 and captures data from SCI cases in the United States. Since its inception, 24 federally funded Model SCI Care Systems have contributed data to the National SCI Database. Statistics are derived from this da...

  20. The ENZYME database in 2000.

    PubMed

    Bairoch, A

    2000-01-01

    The ENZYME database is a repository of information related to the nomenclature of enzymes. In recent years it has become an indispensable resource for the development of metabolic databases. The current version contains information on 3705 enzymes. It is available through the ExPASy WWW server (http://www.expasy.ch/enzyme/). PMID:10592255

  1. The ENZYME database in 2000

    PubMed Central

    Bairoch, Amos

    2000-01-01

    The ENZYME database is a repository of information related to the nomenclature of enzymes. In recent years it has become an indispensable resource for the development of metabolic databases. The current version contains information on 3705 enzymes. It is available through the ExPASy WWW server (http://www.expasy.ch/enzyme/). PMID:10592255

  2. XCOM: Photon Cross Sections Database

    National Institute of Standards and Technology Data Gateway

    SRD 8 XCOM: Photon Cross Sections Database (Web, free access)   A web database is provided which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element, compound or mixture (Z <= 100) at energies from 1 keV to 100 GeV.
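
Total attenuation coefficients of the kind XCOM provides are typically applied through the Beer-Lambert relation I/I0 = exp(-(mu/rho) * rho * x). The sketch below uses an illustrative mu/rho value rather than one queried from the database.

```python
import math

def transmitted_fraction(mu_rho_cm2_g, density_g_cm3, thickness_cm):
    """Beer-Lambert attenuation: I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mu_rho_cm2_g * density_g_cm3 * thickness_cm)

# Illustrative number only -- in practice the total mass attenuation
# coefficient mu/rho is looked up in XCOM for the element, compound or
# mixture and photon energy of interest.
mu_rho = 0.0707   # cm^2/g, roughly water at 1 MeV (verify against XCOM)
print(round(transmitted_fraction(mu_rho, 1.0, 10.0), 3))  # 0.493
```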

  3. The EUVE satellite survey database

    NASA Technical Reports Server (NTRS)

    Craig, N.; Chen, T.; Hawkins, I.; Fruscione, A.

    1993-01-01

    The EUVE survey database contains fundamental science data for 9000 potential source locations (pigeonholes) in the sky. The first release of the Bright Source List is now available to the public through an interface with the NASA Astrophysical Data System. We describe the database schema design and the EUVE source categorization algorithm that compares sources to the ROSAT Wide Field Camera source list.

  4. Hanford Site technical baseline database

    SciTech Connect

    Porter, P.E., Westinghouse Hanford

    1996-05-10

    This document includes a cassette tape that contains the Hanford specific files that make up the Hanford Site Technical Baseline Database as of May 10, 1996. The cassette tape also includes the delta files that delineate the differences between this revision and revision 3 (April 10, 1996) of the Hanford Site Technical Baseline Database.

  5. Database Proliferation: Implications for Librarians.

    ERIC Educational Resources Information Center

    Nichol, Kathleen M.

    1983-01-01

    Discusses problems of increasing numbers of databases (standardization, vendor contracts, training, restricted access, duplicate citations), noting implications for change in librarian's role as supplier of decision making data (education, users groups, search aids, information brokers, collection evaluation, reference and source databases,

  6. The Student-Designed Database.

    ERIC Educational Resources Information Center

    Thomas, Rick

    1988-01-01

    This discussion of the design of data files for databases to be created by secondary school students uses AppleWorks software as an example. Steps needed to create and use a database are explained, the benefits of group activity are described, and other possible projects are listed. (LRW)

  7. DRUG ENFORCEMENT ADMINISTRATION REGISTRATION DATABASE

    EPA Science Inventory

    The Drug Enforcement Administration (DEA), as part of its efforts to control the abuse and misuse of controlled substances and chemicals used in producing some over-the-counter drugs, maintains databases of individuals registered to handle these substances. These databases are av...

  8. Database Licensing: A Future View.

    ERIC Educational Resources Information Center

    Flanagan, Michael

    1993-01-01

    Access to database information in libraries will increase as licenses for tape loading of data onto public access catalogs becomes more widespread. Institutions with adequate storage capacity will have full text databases, and the adoption of the Z39.50 standard, which allows differing computer systems to interface with each other, will increase…

  9. Wind turbine reliability database update.

    SciTech Connect

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  11. Rocky Mountain Basins Produced Water Database

    DOE Data Explorer

    Historical records for produced water data were collected from multiple sources, including Amoco, British Petroleum, Anadarko Petroleum Corporation, United States Geological Survey (USGS), Wyoming Oil and Gas Commission (WOGC), Denver Earth Resources Library (DERL), Bill Barrett Corporation, Stone Energy, and other operators. In addition, 86 new samples were collected during the summers of 2003 and 2004 from the following areas: Waltman-Cave Gulch, Pinedale, Tablerock and Wild Rose. Samples were tested for standard seven-component "Stiff analyses", and strontium and oxygen isotopes. 16,035 analyses were winnowed to 8028 unique records for 3276 wells after a data screening process was completed. [Copied from the Readme document in the zipped file available at http://www.netl.doe.gov/technologies/oil-gas/Software/database.html] Save the Zipped file to your PC. When opened, it will contain four versions of the database: ACCESS, EXCEL, DBF, and CSV formats. The information consists of detailed water analyses from basins in the Rocky Mountain region.
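
The exact screening criteria behind the winnowing above are not given in the record; the sketch below merely illustrates one typical step of such a process, collapsing repeated analyses to unique (well, sample date) records. The field names are hypothetical.

```python
def winnow(analyses):
    """Keep the first analysis per (well, sample_date), dropping repeats."""
    seen, unique = set(), []
    for rec in analyses:
        key = (rec["well"], rec["sample_date"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

raw = [
    {"well": "W-1", "sample_date": "2003-07-01", "tds_mg_l": 12000},
    {"well": "W-1", "sample_date": "2003-07-01", "tds_mg_l": 12000},  # repeat
    {"well": "W-2", "sample_date": "2004-06-15", "tds_mg_l": 8000},
]
print(len(winnow(raw)))  # 2
```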

  12. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani S.

    1992-01-01

    In June 1991, a paper outlining the development of a database for propagation models was presented at the fifteenth NASA Propagation Experimenters Meeting (NAPEX 15). The database is designed to allow scientists and experimenters in the propagation field to process their data through any known and accepted propagation model. The architecture of the database also incorporates the possibility of changing the standard models in the database to fit the scientist's or the experimenter's needs. The database not only provides powerful software to process the data generated by the experiments, but is also a time- and energy-saving tool for plotting results, generating tables, and producing impressive and crisp hard copy for presentation and filing.

  13. The magnet components database system

    SciTech Connect

Baggett, M.J.; Leedy, R.; Saltmarsh, C.; Tompkins, J.C.

    1990-01-01

The philosophy, structure, and usage of MagCom, the SSC magnet components database, are described. The database has been implemented in Sybase (a powerful relational database management system) on a UNIX-based workstation at the Superconducting Super Collider Laboratory (SSCL); magnet project collaborators can access the database via network connections. The database was designed to contain the specifications and measured values of important properties for major materials, plus configuration information (specifying which individual items were used in each cable, coil, and magnet) and the test results on completed magnets. These data will facilitate the tracking and control of the production process as well as the correlation of magnet performance with the properties of its constituents. 3 refs., 10 figs.

  14. CONNECTICUT NATURAL DIVERSITY DATABASE

    EPA Science Inventory

This is a statewide datalayer at 1:24,000 scale of general areas of concern with regard to state and federally listed Endangered, Threatened, and Special Concern species and significant natural communities. Locations of species and natural communities are based on data collecte...

  15. Database on unstable rock slopes in Norway

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Nordahl, Bo; Bunkholt, Halvor; Nicolaisen, Magnus; Hermanns, Reginald L.; Böhme, Martina; Yugsi Molina, Freddy X.

    2014-05-01

Several large rockslides have occurred in historic times in Norway, causing many casualties. Most of these casualties are due to displacement waves triggered by a rock avalanche, affecting the coastlines of entire lakes and fjords. The Geological Survey of Norway performs systematic mapping of unstable rock slopes in Norway and has so far detected more than 230 unstable slopes with significant postglacial deformation. This systematic mapping aims to detect future rock avalanches before they occur. The registered unstable rock slopes are stored in a database on unstable rock slopes developed and maintained by the Geological Survey of Norway. The main aims of this database are (1) to serve as a national archive for unstable rock slopes in Norway; (2) to support data collection and storage during field mapping; (3) to provide decision-makers with hazard zones and other necessary information on unstable rock slopes for land-use planning and mitigation; and (4) to inform the public through an online map service. The database is organized hierarchically, with a main point for each unstable rock slope to which several feature classes and tables are linked. This main point feature class includes several general attributes of the unstable rock slopes, such as site name, general and geological descriptions, executed works, recommendations, technical parameters (volume, lithology, mechanism, and others), displacement rates, possible consequences, hazard and risk classification, and so on. Feature classes and tables linked to the main feature class include the run-out area, the area affected by secondary effects, the hazard and risk classification, subareas and scenarios of an unstable rock slope, field observation points, displacement measurement stations, URL links for further documentation, and references. The database on unstable rock slopes in Norway will be publicly consultable through the online map service on www.skrednett.no in 2014.
Only publicly relevant parts of the database will be shown in the online map service (e.g. processed results of displacement measurements), while more detailed data will not (e.g. raw data of displacement measurements). Factsheets with key information on unstable rock slopes can be automatically generated and downloaded for each site, a municipality, a county or the entire country. Selected data will also be downloadable free of charge. The present database on unstable rock slopes in Norway will further evolve in the coming years as the systematic mapping conducted by the Geological Survey of Norway progresses and as available techniques and tools evolve.
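
The hierarchical layout described above (one main point per slope, with linked feature classes and tables) might be sketched roughly as follows. All class and field names here are illustrative assumptions, not the Geological Survey of Norway's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical child record: one hazard/risk scenario linked to a slope.
@dataclass
class Scenario:
    name: str
    volume_m3: float
    hazard_class: str

# Hypothetical main point feature class, carrying the general attributes
# and lists of linked records (scenarios, field observation points, ...).
@dataclass
class UnstableSlope:
    site_name: str
    lithology: str
    displacement_mm_per_yr: float
    scenarios: List[Scenario] = field(default_factory=list)
    observation_points: List[str] = field(default_factory=list)

# One slope with one linked scenario, mirroring the one-to-many links.
slope = UnstableSlope("Example site", "gneiss", 4.2)
slope.scenarios.append(Scenario("worst case", 5.0e6, "high"))
```

In the real system these would be GIS feature classes and relational tables rather than in-memory objects, but the one-to-many structure is the same.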

  16. GOTTCHA Database, Version 1

    SciTech Connect

    Freitas, Tracey; Chain, Patrick; Lo, Chien-Chi; Li, Po-E

    2015-08-03

One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before-seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase, the most computationally intensive phase, an all-vs-all comparison is made; the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, potential overlap of k-mer suffixes that would result in a non-empty set intersection is screened for.
If such an overlap exists, the intersecting region is first reduced by performing a binary search of the boundary suffixes of the smaller set into the larger set, which defines the limits of the zipper-based intersection process. Rather than recording the actual k-mers of the intersection, another data structure of identical "shape" is created, consisting only of bit vectors, so that just a 1 or 0 is stored in the location of each k-mer suffix found in the intersection. This considerably reduces the amount of data generated and stored. During the Subtraction phase, the relevant intersection bitmasks are first combined by union into a single bitmask, which is then applied over the original genome to reveal only those regions of the genome that are unique. These regions are then exported to disk in FASTA format and used to determine the constituents of an unknown metagenomic community. The database provided is the result of the algorithm described.
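
The three phases can be illustrated with a toy sketch that substitutes plain Python sets for the prefix-indexed bit vectors of the actual tool, and a small k in place of GOTTCHA's 24-mers so that short example sequences still yield signatures:

```python
def kmers(genome, k):
    # Decomposition (simplified): overlapping k-mers of the sequence.
    # The real tool masks ambiguous bases and stores prefix-indexed
    # bit vectors; a set of strings stands in for that here.
    return {genome[i:i + k] for i in range(len(genome) - k + 1)}

def signatures(target, others, k=4):
    target_set = kmers(target, k)
    # Intersection: collect k-mers shared with any other genome.
    shared = set()
    for g in others:
        shared |= target_set & kmers(g, k)
    # Subtraction: what remains is unique to the target -- its signature.
    return target_set - shared

# k-mers found only in the first genome, absent from both others.
unique = signatures("ACGTACGGTT", ["ACGTACGA", "TTACGGAA"], k=4)
```

The set difference here plays the role of the bitmask subtraction: in the real algorithm the surviving positions are projected back onto the genome and exported as FASTA signature regions.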

  17. GOTTCHA Database, Version 1

    Energy Science and Technology Software Center (ESTSC)

    2015-08-03

One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before-seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase, the most computationally intensive phase, an all-vs-all comparison is made; the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, potential overlap of k-mer suffixes that would result in a non-empty set intersection is screened for.
If such an overlap exists, the intersecting region is first reduced by performing a binary search of the boundary suffixes of the smaller set into the larger set, which defines the limits of the zipper-based intersection process. Rather than recording the actual k-mers of the intersection, another data structure of identical "shape" is created, consisting only of bit vectors, so that just a 1 or 0 is stored in the location of each k-mer suffix found in the intersection. This considerably reduces the amount of data generated and stored. During the Subtraction phase, the relevant intersection bitmasks are first combined by union into a single bitmask, which is then applied over the original genome to reveal only those regions of the genome that are unique. These regions are then exported to disk in FASTA format and used to determine the constituents of an unknown metagenomic community. The database provided is the result of the algorithm described.

  18. The EMBL Nucleotide Sequence Database.

    PubMed

    Stoesser, Guenter; Baker, Wendy; van den Broek, Alexandra; Camon, Evelyn; Garcia-Pastor, Maria; Kanz, Carola; Kulikova, Tamara; Leinonen, Rasko; Lin, Quan; Lombard, Vincent; Lopez, Rodrigo; Redaschi, Nicole; Stoehr, Peter; Tuli, Mary Ann; Tzouvara, Katerina; Vaughan, Robert

    2002-01-01

    The EMBL Nucleotide Sequence Database (aka EMBL-Bank; http://www.ebi.ac.uk/embl/) incorporates, organises and distributes nucleotide sequences from all available public sources. EMBL-Bank is located and maintained at the European Bioinformatics Institute (EBI) near Cambridge, UK. In an international collaboration with DDBJ (Japan) and GenBank (USA), data are exchanged amongst the collaborating databases on a daily basis. Major contributors to the EMBL database are individual scientists and genome project groups. Webin is the preferred web-based submission system for individual submitters, whilst automatic procedures allow incorporation of sequence data from large-scale genome sequencing centres and from the European Patent Office (EPO). Database releases are produced quarterly. Network services allow free access to the most up-to-date data collection via FTP, email and World Wide Web interfaces. EBI's Sequence Retrieval System (SRS), a network browser for databanks in molecular biology, integrates and links the main nucleotide and protein databases plus many other specialized databases. For sequence similarity searching, a variety of tools (e.g. Blitz, Fasta, BLAST) are available which allow external users to compare their own sequences against the latest data in the EMBL Nucleotide Sequence Database and SWISS-PROT. All resources can be accessed via the EBI home page at http://www.ebi.ac.uk. PMID:11752244

  19. Population databases in development analysis.

    PubMed

    Chamie, J

    1994-01-01

    Population databases are very important in formulating analyses of social and economic change and development. Since such analyses are often the basis for policy making and program formulation, it is important to have a sound understanding of their strengths and limitations. This paper focuses upon databases which deal with population size, life expectancy at birth, and infant mortality. Considerable progress has been made in producing population databases over the last several decades, but many problems remain with regard to their comparability, completeness of coverage, and accuracy. Governmental and political circumstances greatly influence the availability and quality of population databases. Globally, the comparability of data remains a serious concern due to deviations from standard definitions. The completeness of coverage of databases among less developed countries varies widely by region, while the data for preparing estimates and assessing demographic trends are deficient and problematic. Technological advances and the repackaging of population databases have greatly advanced their production and availability, but confusion and ignorance have become widespread regarding the original source and nature of the data. Database users therefore too often undertake faulty analyses which lead to false conclusions. PMID:12290678

  20. Database of Properties of Meteors

    NASA Technical Reports Server (NTRS)

Suggs, Rob; Coster, Anthea

    2006-01-01

    A database of properties of meteors, and software that provides access to the database, are being developed as a contribution to continuing efforts to model the characteristics of meteors with increasing accuracy. Such modeling is necessary for evaluation of the risk of penetration of spacecraft by meteors. For each meteor in the database, the record will include an identification, date and time, radiant properties, ballistic coefficient, radar cross section, size, density, and orbital elements. The property of primary interest in the present case is density, and one of the primary goals in this case is to derive densities of meteors from their atmospheric decelerations. The database and software are expected to be valid anywhere in the solar system. The database will incorporate new data plus results of meteoroid analyses that, heretofore, have not been readily available to the aerospace community. Taken together, the database and software constitute a model that is expected to provide improved estimates of densities and to result in improved risk analyses for interplanetary spacecraft. It is planned to distribute the database and software on a compact disk.
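
The per-meteor record described above could be modeled along these lines. The abstract names the properties but not the storage format, so the field names, types, and units below are illustrative guesses only.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Tuple

# Hypothetical layout of one record in the meteor properties database;
# the field list follows the abstract (identification, date and time,
# radiant, ballistic coefficient, radar cross section, size, density,
# orbital elements), but names and units are assumptions.
@dataclass
class MeteorRecord:
    ident: str
    observed: datetime
    radiant_ra_deg: float          # radiant right ascension
    radiant_dec_deg: float         # radiant declination
    ballistic_coefficient: float
    radar_cross_section_m2: float
    size_m: float
    density_kg_m3: float           # the property of primary interest
    orbital_elements: Tuple[float, ...]  # e.g. a, e, i, node, peri, M
```

Density sits alongside deceleration-derived quantities because, per the abstract, deriving densities from atmospheric decelerations is a primary goal of the database.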

  1. Unifying Memory and Database Transactions

    NASA Astrophysics Data System (ADS)

    Dias, Ricardo J.; Lourenço, João M.

Software Transactional Memory is a concurrency control technique gaining increasing popularity, as it provides high-level concurrency control constructs and eases the development of highly multi-threaded applications. But this ease comes at the expense of restricting the operations that can be executed within a memory transaction, and operations such as terminal and file I/O are either not allowed or incur serious performance penalties. Database I/O is another example of an operation that is usually not allowed within a memory transaction. This paper proposes to combine memory and database transactions in a single unified model, benefiting from the ACID properties of the database transactions and from the speed of main memory data processing. The new unified model covers, without differentiating, both memory and database operations. Thus, users are allowed to freely intertwine memory and database accesses within the same transaction, knowing that the memory and database contents will always remain consistent and that the transaction will atomically abort or commit the operations in both memory and database. This approach allows the granularity of in-memory atomic actions to be increased and hence simplifies reasoning about them.
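
The commit behavior of such a unified model (memory and database abort or commit together) can be sketched as follows. This is a simplified single-threaded illustration using SQLite, not the system proposed in the paper: memory writes are buffered in a write set and published only if the database transaction commits.

```python
import sqlite3

class UnifiedTransaction:
    """Sketch: couple buffered memory writes to a database transaction."""

    def __init__(self, conn, memory):
        self.conn = conn      # open sqlite3 connection
        self.memory = memory  # shared in-memory dict
        self.write_set = {}   # memory updates buffered until commit

    def mem_write(self, key, value):
        # Memory writes go to the write set, not to shared memory yet.
        self.write_set[key] = value

    def db_execute(self, sql, params=()):
        # Database writes join the connection's open transaction.
        self.conn.execute(sql, params)

    def commit(self):
        try:
            self.conn.commit()          # database side commits first...
        except sqlite3.Error:
            self.conn.rollback()
            self.write_set.clear()      # ...and on failure both sides abort
            raise
        self.memory.update(self.write_set)  # then publish memory writes
```

A real implementation would also need conflict detection between concurrent memory transactions; the point here is only the all-or-nothing coupling of the two sides.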

  2. GLOBAL ECOSYSTEMS DATABASE: DATABASE DOCUMENTATION AND CD-ROM

    EPA Science Inventory

    The primary objective of this cooperative research and development is to produce an integrated, quality controlled, global database (including time sequences) for spatially distributed modeling. he project concentrates on modern observational data, including remotely sensed data ...

  3. Databases of the marine metagenomics.

    PubMed

    Mineta, Katsuhiko; Gojobori, Takashi

    2016-02-01

The metagenomic data obtained from marine environments are highly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of an entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data and increases data complexity. Moreover, when the metagenomic approach is used to monitor temporal changes in marine environments at multiple seawater locations, metagenomic data will accumulate at an enormous speed. Because this situation has started to become a reality at many marine research institutions and stations all over the world, it is clear that data management and analysis will be confronted by so-called Big Data issues, such as how a database can be constructed efficiently and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize all the major databases of marine metagenomes that are currently publicly available, noting that no database is devoted exclusively to marine metagenomes and that only six metagenome databases include marine metagenome data, an unexpectedly small number. We also describe what we call reference databases, which will be useful both for constructing a marine metagenome database and for complementing it with important information. We then point out a number of challenges to be conquered in constructing a marine metagenome database. PMID:26518717

  4. An Examination of Job Skills Posted on Internet Databases: Implications for Information Systems Degree Programs.

    ERIC Educational Resources Information Center

    Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June

    2003-01-01

    Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…

  6. Cloudsat tropical cyclone database

    NASA Astrophysics Data System (ADS)

    Tourville, Natalie D.

CloudSat (CS), the first 94 GHz spaceborne cloud profiling radar (CPR), was launched in 2006 to study the vertical distribution of clouds. Not only are CS observations revealing inner vertical cloud details of water and ice globally, but CS overpasses of tropical cyclones (TCs) are providing a new and exciting opportunity to study the vertical structure of these storm systems. CS TC observations provide first-ever vertical views of TCs and demonstrate a unique way to observe TC structure remotely from space. Since December 2009, CS has intersected every globally named TC (within 1000 km of storm center), for a total of 5,278 unique overpasses of tropical systems (disturbance, tropical depression, tropical storm, and hurricane/typhoon/cyclone (HTC)). In conjunction with the Naval Research Laboratory (NRL), each CS TC overpass is processed into a data file containing observational data from the afternoon constellation of satellites (A-TRAIN), the Navy Operational Global Atmospheric Prediction System (NOGAPS) model, the European Centre for Medium-Range Weather Forecasts (ECMWF) model, and best track storm data. This study will describe the components and statistics of the CS TC database, present case studies of CS TC overpasses with complementary A-TRAIN observations, and compare average reflectivity stratifications of TCs across different atmospheric regimes (wind shear, SST, latitude, maximum wind speed, and basin). Average reflectivity stratifications reveal that characteristics in each basin vary from year to year and are dependent upon eye overpasses of HTC-strength storms and ENSO phase. West Pacific (WPAC) basin storms are generally larger in size (horizontally and vertically) and have greater values of reflectivity at a predefined height than storms in all other basins. Storm structure at higher latitudes expands horizontally. Higher vertical wind shear (≥ 9.5 m/s) reduces cloud top height (CTH) and the intensity of precipitation cores, especially in HTC-strength storms.
Average zero and ten dBZ height thresholds confirm that WPAC storms loft precipitation-sized particles higher into the atmosphere than storms in other basins. Two CS eye overpasses (32 hours apart) of a weakening Typhoon Nida in 2009 reveal the collapse of precipitation cores, the warm core anomaly, and upper-tropospheric ice water content (IWC) under steady moderate shear conditions.

  7. The Berlin Emissivity Database

    NASA Astrophysics Data System (ADS)

    Helbert, Jorn

Remote sensing infrared spectroscopy is the principal technique for investigating the composition of planetary surfaces. Past, present, and future missions to solar system bodies include in their payloads instruments measuring the emerging radiation in the infrared range. TES on Mars Global Surveyor and THEMIS on Mars Odyssey have in many ways changed our views of Mars. The PFS instrument on the ESA Mars Express mission has collected spectra since the beginning of 2004. In spring 2006 the VIRTIS experiment started its operation on the ESA Venus Express mission, allowing the surface of Venus to be mapped for the first time using the 1 µm emission from the surface. The MERTIS spectrometer is included in the payload of the ESA BepiColombo mission to Mercury, scheduled for 2013. For the interpretation of the measured data, an emissivity spectral library of planetary analogue materials is needed. The Berlin Emissivity Database (BED) presented here is focused on relatively fine-grained size separates, providing a realistic basis for interpretation of thermal emission spectra of planetary regoliths. The BED is therefore complementary to existing thermal emission libraries, such as the ASU library. The BED currently contains entries for plagioclase and potassium feldspars, low-Ca and high-Ca pyroxenes, olivine, elemental sulphur, common martian analogues (JSC Mars-1, Salten Skov, palagonites, montmorillonite), and a lunar highland soil sample, measured in the wavelength range from 3 to 50 µm as a function of particle size. For each sample, the spectra of four well-defined particle size separates (<25 µm, 25-63 µm, 63-125 µm, 125-250 µm) are measured with a 4 cm-1 spectral resolution. These size separates have been selected as typical representations of most planetary surfaces.
Following an ongoing upgrade of the Planetary Emissivity Laboratory (PEL) at DLR in Berlin, measurements can be obtained at temperatures up to 500 °C, realistic for dayside conditions on Mercury. This upgrade will also extend the spectral coverage down to 1 µm. The capability to measure the emissivity of fine-grained samples over a large spectral range and at very high temperatures makes the PEL a unique facility. It allows, for the first time, emissivity measurements to be obtained directly in this spectral range, which is crucial for surface observations of Venus from orbit through the atmospheric windows.

  8. International forensic automotive paint database

    NASA Astrophysics Data System (ADS)

    Bishea, Gregory A.; Buckle, Joe L.; Ryland, Scott G.

    1999-02-01

The Technical Working Group for Materials Analysis (TWGMAT) is supporting an international forensic automotive paint database. The Federal Bureau of Investigation and the Royal Canadian Mounted Police (RCMP) are collaborating on this effort through TWGMAT. This paper outlines the support and further development of the RCMP's automotive paint database, 'Paint Data Query'. This cooperative agreement augments and supports a current, validated, searchable automotive paint database that is used to identify the make(s), model(s), and year(s) of questioned paint samples in hit-and-run fatalities and other associated investigations involving automotive paint.

  9. Biological Databases for Human Research

    PubMed Central

    Zou, Dong; Ma, Lina; Yu, Jun; Zhang, Zhang

    2015-01-01

    The completion of the Human Genome Project lays a foundation for systematically studying the human genome from evolutionary history to precision medicine against diseases. With the explosive growth of biological data, there is an increasing number of biological databases that have been developed in aid of human-related research. Here we present a collection of human-related biological databases and provide a mini-review by classifying them into different categories according to their data types. As human-related databases continue to grow not only in count but also in volume, challenges are ahead in big data storage, processing, exchange and curation. PMID:25712261

  10. Database of recent tsunami deposits

    USGS Publications Warehouse

    Peters, Robert; Jaffe, Bruce E.

    2010-01-01

    This report describes a database of sedimentary characteristics of tsunami deposits derived from published accounts of tsunami deposit investigations conducted shortly after the occurrence of a tsunami. The database contains 228 entries, each entry containing data from up to 71 categories. It includes data from 51 publications covering 15 tsunamis distributed between 16 countries. The database encompasses a wide range of depositional settings including tropical islands, beaches, coastal plains, river banks, agricultural fields, and urban environments. It includes data from both local tsunamis and teletsunamis. The data are valuable for interpreting prehistorical, historical, and modern tsunami deposits, and for the development of criteria to identify tsunami deposits in the geologic record.

  11. NUCLEAR DATABASES FOR REACTOR APPLICATIONS.

    SciTech Connect

    PRITYCHENKO, B.; ARCILLA, R.; BURROWS, T.; HERMAN, M.W.; MUGHABGHAB, S.; OBLOZINSKY, P.; ROCHMAN, D.; SONZOGNI, A.A.; TULI, J.; WINCHELL, D.F.

    2006-06-05

The National Nuclear Data Center (NNDC): an overview of nuclear databases, related products, nuclear data Web services, and publications. The NNDC collects, evaluates, and disseminates nuclear physics data for basic research and applied nuclear technologies. The NNDC maintains and contributes to the nuclear reaction (ENDF, CSISRS) and nuclear structure databases, along with several other databases (CapGam, MIRD, IRDF-2002), and provides coordination for the Cross Section Evaluation Working Group (CSEWG) and the US Nuclear Data Program (USNDP). The Center produces several publications, such as the Atlas of Neutron Resonances and the Nuclear Wallet Cards booklets, and develops codes such as the nuclear reaction model code Empire.

  12. Mouse Resource Browser—a database of mouse databases

    PubMed Central

    Zouberakis, Michael; Chandras, Christina; Swertz, Morris; Smedley, Damian; Gruenberger, Michael; Bard, Jonathan; Schughart, Klaus; Rosenthal, Nadia; Hancock, John M.; Schofield, Paul N.; Kollias, George; Aidinis, Vassilis

    2010-01-01

    The laboratory mouse has become the organism of choice for discovering gene function and unravelling pathogenetic mechanisms of human diseases through the application of various functional genomic approaches. The resulting deluge of data has led to the deployment of numerous online resources and the concomitant need for formalized experimental descriptions, data standardization, database interoperability and integration, a need that has yet to be met. We present here the Mouse Resource Browser (MRB), a database of mouse databases that indexes 217 publicly available mouse resources under 22 categories and uses a standardised database description framework (the CASIMIR DDF) to provide information on their controlled vocabularies (ontologies and minimum information standards), and technical information on programmatic access and data availability. Focusing on interoperability and integration, MRB offers automatic generation of downloadable and re-distributable SOAP application-programming interfaces for resources that provide direct database access. MRB aims to provide useful information to both bench scientists, who can easily navigate and find all mouse related resources in one place, and bioinformaticians, who will be provided with interoperable resources containing data which can be mined and integrated. Database URL: http://bioit.fleming.gr/mrb PMID:20627861

  13. National Geophysical Data Center Historical Natural Hazard Event Databases

    NASA Astrophysics Data System (ADS)

    Dunbar, P. K.; Stroker, K. J.

    2008-12-01

    After a major event such as the 2004 Indian Ocean tsunami or the 2008 Chengdu earthquake, there is interest in knowing if similar events have occurred in the area in the past and how often they have occurred. The National Geophysical Data Center (NGDC) historical natural hazard event databases can provide answers to these types of questions. For example, a search of the tsunami database reveals that over 100 tsunamis have occurred in the Indian Ocean since 416 A.D. Further analysis shows that there has never been such a deadly tsunami anywhere in the world. In fact, the 2004 event accounts for almost half of all the deaths caused by tsunamis in the database. A search of the earthquake database shows that since 193 B.C., China has experienced over 500 significant earthquakes that have caused over 2 million deaths and innumerable dollars in damages. The NGDC global historical tsunami, significant earthquake, and significant volcanic eruption databases include events that range in date from 4350 B.C. to the present. The database includes all tsunami events, regardless of magnitude or intensity; and all earthquakes and volcanic eruptions that either caused deaths, moderate damage, or generated a tsunami. Earthquakes are also included that were assigned either a magnitude >= 7.5 or Modified Mercalli Intensity >= X. The basic data in the historical event databases include the date, time, location of the event, magnitude of the phenomenon (tsunami or earthquake magnitude and/or intensity, or volcanic explosivity index), and socio-economic information such as the total number of deaths, injuries, houses damaged, and dollar damage. The tsunami database includes an additional table with information on the runups (locations where tsunami waves were observed by eyewitnesses, tide gauges, or deep ocean sensors). The volcanic eruptions database includes information on the volcano elevation and type. 
There are currently over 2000 tsunami source events, 12500 tsunami runup locations, 5700 earthquakes, and 460 volcanic eruptions in the databases. The natural hazard event databases are stored in a relational database management system (RDBMS) which facilitates the integration and access to these related databases. For example, users can search for destructive earthquakes that preceded a volcanic eruption that then generated a damaging tsunami. The databases are accessible over the Web as tables, reports, and interactive maps. The maps provide integrated web-based GIS access to individual GIS layers including the natural hazard events and various spatial reference layers such as topography, population density, and political boundaries.
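The cross-hazard query mentioned above (a destructive earthquake, followed by an eruption that then generated a damaging tsunami) can be sketched as a relational join. The table layout, column names, and sample rows below are illustrative assumptions, not NGDC's actual schema:

```python
import sqlite3

# In-memory sketch; the three tables and their link columns are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE earthquakes (id INTEGER PRIMARY KEY, year INTEGER, deaths INTEGER, region TEXT);
CREATE TABLE eruptions   (id INTEGER PRIMARY KEY, year INTEGER, volcano TEXT, quake_id INTEGER);
CREATE TABLE tsunamis    (id INTEGER PRIMARY KEY, year INTEGER, damage TEXT, eruption_id INTEGER);
""")
con.execute("INSERT INTO earthquakes VALUES (1, 1883, 120, 'Sunda Strait')")
con.execute("INSERT INTO eruptions   VALUES (1, 1883, 'Krakatau', 1)")
con.execute("INSERT INTO tsunamis    VALUES (1, 1883, 'severe', 1)")

# Destructive earthquakes that preceded an eruption that generated a damaging tsunami.
rows = con.execute("""
    SELECT e.year, e.region, v.volcano
    FROM earthquakes e
    JOIN eruptions v ON v.quake_id = e.id
    JOIN tsunamis  t ON t.eruption_id = v.id
    WHERE e.deaths > 0 AND t.damage = 'severe'
""").fetchall()
print(rows)  # [(1883, 'Sunda Strait', 'Krakatau')]
```

In the production RDBMS the same chaining would be expressed as joins across whatever cross-reference keys link the event tables.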

  14. Lectindb: a plant lectin database.

    PubMed

    Chandra, Nagasuma R; Kumar, Nirmal; Jeyakani, Justin; Singh, Desh Deepak; Gowda, Sharan B; Prathima, M N

    2006-10-01

    Lectins, a class of carbohydrate-binding proteins, are now widely recognized to play a range of crucial roles in many cell-cell recognition events, triggering several important cellular processes. They encompass different members that are diverse in their sequences, structures, binding site architectures, quaternary structures, carbohydrate affinities, and specificities, as well as in their larger biological roles and potential applications. It is not surprising, therefore, that the vast amount of experimental data on lectins available in the literature is so diverse that it becomes difficult and time-consuming, if not impossible, to comprehend the advances in various areas and obtain the maximum benefit. To make effective use of all the data toward understanding lectin function and possible applications, an organization of these seemingly independent data into a common framework is essential. An integrated knowledge base (Lectindb, http://nscdb.bic.physics.iisc.ernet.in), together with appropriate analytical tools, has therefore been developed, initially for plant lectins, by collating and integrating diverse data. The database has been implemented using MySQL on a Linux platform and web-enabled using PERL-CGI and Java tools. Data for each lectin pertain to taxonomic, biochemical, domain-architecture, molecular-sequence, and structural details, as well as carbohydrate and hence blood group specificities. Extensive links have also been provided to relevant bioinformatics resources and analytical tools. The availability of diverse data integrated into a common framework is expected to be of high value not only for basic studies in lectin biology but also for pursuing several applications in biotechnology, immunology, and clinical practice using these molecules. PMID:16782824

  15. Database of biologically active peptide sequences.

    PubMed

    Dziuba, J; Minkiewicz, P; Nałecz, D; Iwaniak, A

    1999-06-01

    Proteins are sources of many peptides with diverse biological activity. Such peptides are considered valuable components of foods with desired and designed biological activity. Two strategies are currently recommended for research in the area of biological activity of food protein fragments. The first strategy covers investigations of the products of enzymic hydrolysis of proteins. The second is the synthesis of peptides identical with protein fragments and investigations using these peptides. It is possible to predict the biological activity of protein fragments using sequence alignments between proteins and biologically active peptides from the database. Our database currently contains 527 sequences of bioactive peptides with antihypertensive, opioid, immunomodulating, and other activities. The sequence alignments can give information about the localization of biologically active fragments in the protein chain, but not about the possibilities of enzymic release of such fragments. The information is thus equivalent to that obtained using synthetic peptides identical with protein fragments. Possibilities offered by the database are discussed using wheat alpha/beta-gliadin, bovine beta-lactoglobulin, and bovine beta-casein (including the influence of genetic polymorphism and genetic engineering on amino acid sequences) as examples. PMID:10399353

  16. Exploiting relational database technology in a GIS

    NASA Astrophysics Data System (ADS)

    Batty, Peter

    1992-05-01

    All systems for managing data face common problems such as backup, recovery, auditing, security, data integrity, and concurrent update. Other challenges include the ability to share data easily between applications and to distribute data across several computers, while continuing to manage the problems already mentioned. Geographic information systems are no exception and need to tackle all these issues. Standard relational database-management systems (RDBMSs) provide many features to help solve the issues mentioned so far. This paper describes how the IBM geoManager product approaches these issues by storing all its geographic data in a standard RDBMS in order to take advantage of such features. Areas in which standard RDBMS functions need to be extended are highlighted, and the way in which geoManager does this is explained. The performance implications of storing all data in the relational database are discussed. An important distinction, which needs to be made when considering the applicability of relational database technology to GIS, is drawn between the storage and management of geographic data and their manipulation and analysis.
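As a minimal illustration of storing geographic data in a plain RDBMS (the paper's theme), the sketch below keeps a bounding box per feature so a coarse spatial filter can be done in standard SQL with no GIS extensions. The schema and feature names are hypothetical, not geoManager's:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical schema: each feature carries its bounding box as plain columns.
con.execute("""CREATE TABLE features (
    id INTEGER PRIMARY KEY, name TEXT,
    xmin REAL, ymin REAL, xmax REAL, ymax REAL)""")
con.executemany("INSERT INTO features VALUES (?,?,?,?,?,?)", [
    (1, "road",  0.0, 0.0, 5.0, 1.0),
    (2, "river", 8.0, 8.0, 9.0, 9.5),
])

# Features whose bounding box overlaps the query window (2,0)-(6,2).
qx0, qy0, qx1, qy1 = 2.0, 0.0, 6.0, 2.0
rows = con.execute(
    "SELECT name FROM features "
    "WHERE xmax >= ? AND xmin <= ? AND ymax >= ? AND ymin <= ?",
    (qx0, qx1, qy0, qy1)).fetchall()
print(rows)  # [('road',)]
```

A real system would follow this coarse filter with an exact geometric intersection test on the candidate features.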

  17. The CTBTO Link to the ISC Database

    NASA Astrophysics Data System (ADS)

    Lentas, Konstantinos; Storchak, Dmitry; Bondár, István; Harris, James

    2014-05-01

    The CTBTO (Comprehensive Test-Ban Treaty Organisation) link to the International Seismological Centre's (ISC) database is a collection of tools and graphical interfaces for analysing and plotting the datasets maintained by the ISC. The ISC database includes the seismicity of the Earth reported by national seismological agencies around the world, mining induced events as well as nuclear and chemical explosions. The service gives special access to the CTBTO and the national data centres, via simple database queries. Four main search tools are available: the area based search (spatio-temporal search based on the ISC Bulletin), the REB based search (spatio-temporal search based on specific events in the REB), the ground truth (GT) based search (spatio-temporal search based on IASPEI Reference Event list) and the International Monitoring System (IMS) station based search (historical reporting patterns of seismic stations close to a selected IMS seismic station). The link provides details on seismicity, frequency-magnitude distributions, network hypocentre comparisons, individual station data and waveform request tools, as well as a hypocentre relocation facility using the ISC locator for the events in the IASPEI Reference Event list. Moreover, a new waveform tool is currently under development, taking into account earthquake magnitude, epicentral distance and station quality criteria, in order to provide a more comprehensive and efficient visualisation of the available non-IMS waveforms of the REB events.

  18. Genome Statute and Legislation Database

    MedlinePlus


  19. SUPERSITES INTEGRATED RELATIONAL DATABASE (SIRD)

    EPA Science Inventory

    As part of EPA's Particulate Matter (PM) Supersites Program (Program), the University of Maryland designed and developed the Supersites Integrated Relational Database (SIRD). Measurement data in SIRD include comprehensive air quality data from the 7 Supersite program locations f...

  20. LDEF meteoroid and debris database

    NASA Technical Reports Server (NTRS)

    Dardano, C. B.; See, Thomas H.; Zolensky, Michael E.

    1994-01-01

    The Long Duration Exposure Facility (LDEF) Meteoroid and Debris Special Investigation Group (M&D SIG) database is maintained at the Johnson Space Center (JSC), Houston, Texas, and consists of five data tables containing information about individual features, digitized images of selected features, and LDEF hardware (i.e., approximately 950 samples) archived at JSC. About 4000 penetrations (greater than 300 microns in diameter) and craters (greater than 500 microns in diameter) were identified and photodocumented during the disassembly of LDEF at the Kennedy Space Center (KSC), while an additional 4500 or so have subsequently been characterized at JSC. The database also contains some data submitted by various PIs, yet the amount of such data is extremely limited in extent, and investigators are encouraged to submit any and all M&D-type data to JSC for inclusion in the M&D database. Digitized stereo-image pairs are available for approximately 4500 features through the database.

  1. Household Products Database: Personal Care

    MedlinePlus


  2. Fun Databases: My Top Ten.

    ERIC Educational Resources Information Center

    O'Leary, Mick

    1992-01-01

    Provides reviews of 10 online databases: Consumer Reports; Public Opinion Online; Encyclopedia of Associations; Official Airline Guide Adventure Atlas and Events Calendar; CENDATA; Hollywood Hotline; Fearless Taster; Soap Opera Summaries; and Human Sexuality. (LRW)

  3. Navy precision optical interferometer database

    NASA Astrophysics Data System (ADS)

    Ryan, K. K.; Jorgensen, A. M.; Hall, T.; Armstrong, J. T.; Hutter, D.; Mozurkewich, D.

    2012-07-01

    The Navy Precision Optical Interferometer (NPOI) has now been recording astronomical observations for the better part of two decades. During that time, hundreds of thousands of observations have been obtained, with a total data volume of multiple terabytes. Additionally, in the next few years the data rate from the NPOI is expected to increase significantly. To make it easier for NPOI users to search the NPOI observations and to obtain data, we have constructed an easily accessible and searchable database of observations. The database is based on a MySQL server and uses Structured Query Language (SQL). In this paper we describe the database table layout and show examples of possible database queries.
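A search of the kind described might look like the following sketch: all observations of one target within a date (MJD) window. The table and column names (`observations`, `star`, `mjd`, `baseline`) are assumptions, and sqlite3 stands in here for the production MySQL server:

```python
import sqlite3

# Toy observations table; schema and values are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE observations (
    id INTEGER PRIMARY KEY, star TEXT, mjd REAL, baseline TEXT)""")
con.executemany("INSERT INTO observations VALUES (?,?,?,?)", [
    (1, "FKV0193", 51544.2, "AC-AE"),
    (2, "FKV0193", 51545.3, "AC-AW"),
    (3, "FKV0807", 51544.9, "AC-AE"),
])

# All observations of one target within an MJD window, in time order.
rows = con.execute(
    "SELECT id, mjd FROM observations "
    "WHERE star = ? AND mjd BETWEEN ? AND ? ORDER BY mjd",
    ("FKV0193", 51544.0, 51546.0)).fetchall()
print(rows)  # [(1, 51544.2), (2, 51545.3)]
```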

  4. Marine and Hydrokinetic Technology Database

    DOE Data Explorer

    DOE’s Marine and Hydrokinetic Technology Database provides up-to-date information on marine and hydrokinetic renewable energy, both in the U.S. and around the world. The database includes wave, tidal, current, and ocean thermal energy, and contains information on the various energy conversion technologies, companies active in the field, and development of projects in the water. Depending on the needs of the user, the database can present a snapshot of projects in a given region, assess the progress of a certain technology type, or provide a comprehensive view of the entire marine and hydrokinetic energy industry. Results are displayed as a list of technologies, companies, or projects. Data can be filtered by a number of criteria, including country/region, technology type, generation capacity, and technology or project stage. The database was updated in 2009 to include ocean thermal energy technologies, companies, and projects.

  5. InterAction Database (IADB)

    Cancer.gov

    The InterAction Database includes demographic and prescription information for more than 500,000 patients in the northern and middle Netherlands and has been integrated with other systems to enhance data collection and analysis.

  6. Small Business Innovations (Integrated Database)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Because of the diversity of NASA's information systems, it was necessary to develop DAVID as a central database management system. Under a Small Business Innovation Research (SBIR) grant, Ken Wanderman and Associates, Inc. designed software tools enabling scientists to interface with DAVID and commercial database management systems, as well as artificial intelligence programs. The software has been installed at a number of data centers and is commercially available.

  7. World electric power plants database

    SciTech Connect

    2006-06-15

    This global database provides records for 104,000 generating units in over 220 countries. These units include installed and projected facilities, central stations and distributed plants operated by utilities, independent power companies and commercial and self-generators. Each record includes information on: geographic location and operating company; technology, fuel and boiler; generator manufacturers; steam conditions; unit capacity and age; turbine/engine; architect/engineer and constructor; and pollution control equipment. The database is issued quarterly.

  8. The Binary Star Database -- BDB

    NASA Astrophysics Data System (ADS)

    Malkov, O.; Oblak, E.; Debray, B.

    2009-09-01

    The Binary Star Database (http://bdb.obs-besancon.fr) (BDB) provides astronomers with data for binary and multiple stars from all observational categories. We present the current structure of the database and the form and content of the data. We also discuss the implementation issues that arise in integrating heterogeneous data sources, the object identification problems, and our future work to extend the capabilities of BDB.

  9. Electron Inelastic-Mean-Free-Path Database

    National Institute of Standards and Technology Data Gateway

    SRD 71 NIST Electron Inelastic-Mean-Free-Path Database (PC database, no charge)   This database provides values of electron inelastic mean free paths (IMFPs) for use in quantitative surface analyses by AES and XPS.

  10. The new international GLE database

    NASA Astrophysics Data System (ADS)

    Duldig, M. L.; Watts, D. J.

    2001-08-01

    The Australian Antarctic Division has agreed to host the international GLE database. Access to the database is via a world-wide-web interface and initially covers all GLEs since the start of the 22nd solar cycle. Access restriction for recent events is controlled by password protection and these data are available only to those groups contributing data to the database. The restrictions to data will be automatically removed for events older than 2 years, in accordance with the data exchange provisions of the Antarctic Treaty. Use of the data requires acknowledgment of the database as the source of the data and acknowledgment of the specific groups that provided the data used. Furthermore, some groups that provide data to the database have specific acknowledgment requirements or wording. A new submission format has been developed that will allow easier exchange of data, although the old format will be acceptable for some time. Data download options include direct web based download and email. Data may also be viewed as listings or plots with web browsers. Search options have also been incorporated. Development of the database will be ongoing with extension to viewing and delivery options, addition of earlier data and the development of mirror sites. It is expected that two mirror sites, one in North America and one in Europe, will be developed to enable fast access for the whole cosmic ray community.
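The two-year embargo rule described above reduces to a simple date comparison; the sketch below shows the logic with hypothetical event dates (the real system enforces this via password protection on the web interface):

```python
from datetime import date, timedelta

# Events older than two years become publicly accessible; newer events
# require contributor access. The 2-year figure comes from the text above.
EMBARGO = timedelta(days=2 * 365)

def is_public(event_date: date, today: date) -> bool:
    """True if the GLE's data are past the embargo and openly accessible."""
    return today - event_date >= EMBARGO

today = date(2001, 8, 1)                        # hypothetical "now"
print(is_public(date(1998, 5, 2), today))       # old GLE: True (open access)
print(is_public(date(2000, 7, 14), today))      # recent GLE: False (restricted)
```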

  11. Rice Glycosyltransferase (GT) Phylogenomic Database

    DOE Data Explorer

    Ronald, Pamela

    The Ronald Laboratory staff at the University of California-Davis has a primary research focus on the genes of the rice plant. They study the role that genetics plays in the way rice plants respond to their environment. They created the Rice GT Database in order to integrate functional genomic information for putative rice Glycosyltransferases (GTs). This database contains information on nearly 800 putative rice GTs (gene models) identified by sequence similarity searches based on the Carbohydrate Active enZymes (CAZy) database. The Rice GT Database provides a platform to display user-selected functional genomic data on a phylogenetic tree. This includes sequence information, mutant line information, expression data, etc. An interactive chromosomal map shows the position of all rice GTs, and links to rice annotation databases are included. The format is intended to "facilitate the comparison of closely related GTs within different families, as well as perform global comparisons between sets of related families." [From http://ricephylogenomics.ucdavis.edu/cellwalls/gt/genInfo.shtml] See also the primary paper discussing this work: Peijian Cao, Laura E. Bartley, Ki-Hong Jung and Pamela C. Ronald. Construction of a Rice Glycosyltransferase Phylogenomic Database and Identification of Rice-Diverged Glycosyltransferases. Molecular Plant, 2008, 1(5): 858-877.

  12. DMTB: the magnetotactic bacteria database

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Lin, W.

    2012-12-01

    Magnetotactic bacteria (MTB) are of interest in biogeomagnetism, rock magnetism, microbiology, biomineralization, and advanced magnetic materials because of their ability to synthesize highly ordered intracellular nano-sized magnetic minerals, magnetite or greigite. Great strides in MTB studies have been made in the past few decades, and more than 600 articles concerning MTB have been published. These rapidly growing data are stimulating cross-disciplinary studies in such fields as biogeomagnetism. We have compiled the first online database for MTB, the Database of Magnetotactic Bacteria (DMTB, http://database.biomnsl.com). It contains useful information on 16S rRNA gene sequences, oligonucleotides, and magnetic properties of MTB, and corresponding ecological metadata of sampling sites. The 16S rRNA gene sequences are collected from the GenBank database, while all other data are collected from the scientific literature. Rock magnetic properties for both uncultivated and cultivated MTB species are also included. In the DMTB database, data are accessible through four main interfaces: Site Sort, Phylo Sort, Oligonucleotides, and Magnetic Properties. References in each entry serve as links to specific pages within public databases. The online comprehensive DMTB will provide a very useful data resource for researchers from various disciplines, e.g., microbiology, rock magnetism and paleomagnetism, biogeomagnetism, magnetic material sciences, and others.

  13. Database Reports Over the Internet

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    2002-01-01

    Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer supported by Adobe Acrobat Reader. The data are stored in a DBMS (Database Management System). The client asks for the information from the database using an HTML (HyperText Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page; others are reported to the server. Access to the databases was restricted since the data were being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (personal computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
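The request flow described (SQL query, template fill, error reporting back to the client) can be sketched in miniature. The table, template, and error text here are illustrative assumptions, and Python with sqlite3 stands in for the Java servlet and Access layers:

```python
import sqlite3

# Toy test-report table; schema and rows are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tests (id INTEGER PRIMARY KEY, name TEXT, result TEXT)")
con.executemany("INSERT INTO tests VALUES (?,?,?)",
                [(1, "pressure check", "pass"), (2, "leak check", "fail")])

# Report template; the real system fills a PDF form instead of a string.
TEMPLATE = "Test {id}: {name} -> {result}"

def build_report(test_id: int) -> str:
    """Query the DBMS for one test record and fill the report template."""
    row = con.execute(
        "SELECT id, name, result FROM tests WHERE id = ?", (test_id,)
    ).fetchone()
    if row is None:
        return "error: no such test"   # error path, reported back to the client
    return TEMPLATE.format(id=row[0], name=row[1], result=row[2])

print(build_report(2))   # Test 2: leak check -> fail
print(build_report(99))  # error: no such test
```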

  14. PEP725 Pan European Phenological Database

    NASA Astrophysics Data System (ADS)

    Koch, Elisabeth; Adler, Silke; Ungersböck, Markus; Zach-Hermann, Susanne

    2010-05-01

    Europe is in the fortunate situation that it has a long tradition in phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting, and leaf fall together with climatological observations "so as to show how areas differ". The Societas Meteorologicae Palatinae at Mannheim, well known for its first Europe-wide meteorological network, also established a phenological network, which was active from 1781 to 1792. More recently, in most European countries phenological observations have been carried out routinely for more than 50 years by different governmental and non-governmental organisations following different observation guidelines, with the data stored in different places and formats. This has seriously hampered pan-European studies, as one has to address many National Observation Programs (NOPs) to get access to the data before one can start to bring them into a uniform style. From 2004 to 2009 the COST action 725 ran with the main objective of establishing a European reference data set of phenological observations that can be used for climatological purposes, especially climate monitoring and detection of changes. So far the common database/reference data set of COST725 comprises 7,687,248 records from 7,285 observation sites in 15 countries and the International Phenological Gardens (IPG), spanning the timeframe from 1951 to 2000. ZAMG is hosting the database. In January 2010 PEP725 started; it will not only maintain and update the database, but also bring in phenological data from the time before 1951, develop better quality-checking procedures, and ensure open access to the database. 
An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling monitoring of vegetation development.

  15. PEP725 Pan European Phenological Database

    NASA Astrophysics Data System (ADS)

    Koch, E.; Adler, S.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2010-09-01

    Europe is in the fortunate situation that it has a long tradition in phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting, and leaf fall together with climatological observations "so as to show how areas differ". More recently, in most European countries phenological observations have been carried out routinely for more than 50 years by different governmental and non-governmental organisations following different observation guidelines, with the data stored in different places and formats. This has seriously hampered pan-European studies, as one has to address many network operators to get access to the data before one can start to bring them into a uniform style. From 2004 to 2009 the COST action 725 established a Europe-wide data set of phenological observations. But the deliverables of this COST action were not only the common phenological database and common observation guidelines: COST725 also helped to trigger a revival of some old networks and to establish new ones, as for instance in Sweden. At the end of the COST action in 2009, the database comprised about 8 million records in total from 15 European countries plus the data from the International Phenological Gardens (IPG). In January 2010 PEP725 began its work as a follow-up project, with funding from EUMETNET, the network of European meteorological services, and from ZAMG, the Austrian national meteorological service. PEP725 will not only maintain and update the COST725 database, but also bring in phenological data from the time before 1951, develop better quality-checking procedures, and ensure open access to the database. 
An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling monitoring of vegetation development.

  16. Historical hydrology and database on flood events (Apulia, southern Italy)

    NASA Astrophysics Data System (ADS)

    Lonigro, Teresa; Basso, Alessia; Gentile, Francesco; Polemio, Maurizio

    2014-05-01

    Historical data about floods represent an important tool for the comprehension of hydrological processes, the estimation of hazard scenarios as a basis for civil protection purposes, and rational land-use management, especially in karstic areas, where time series of river flows are not available and river drainage is rare. The research shows the importance of improving an existing flood database with a historical approach, aimed at collecting past or historical flood events in order to better assess the occurrence trend of floods, in this case for the Apulia region (southern Italy). The main source of records of flood events for Apulia was the AVI database (the acronym stands for Italian damaged areas), an existing Italian database that collects data concerning damaging floods from 1918 to 1996. The database was expanded by consulting newspapers, publications, and technical reports from 1996 to 2006. In order to further expand the temporal range, data were collected by searching the archives of regional libraries. About 700 useful news items from 17 different local newspapers were found from 1876 to 1951. From a critical analysis of the 700 news items collected from 1876 to 1952, only 437 were useful for the implementation of the Apulia database. The screening of these news items showed the occurrence of about 122 flood events in the entire region. The district of Bari, the regional main town, represents the area in which the greatest number of events occurred; the historical analysis confirms this area as flood-prone. There is an overlapping period (from 1918 to 1952) between the old AVI database and the new historical dataset obtained from newspapers. With regard to this period, the historical research has highlighted new flood events not reported in the existing AVI database, and it has also allowed more details to be added to the events already recorded. 
This study shows that the database is a dynamic instrument that allows continuous incorporation of data, even in real time. More details on previous results of this research activity were recently published (Polemio, 2010; Basso et al., 2012; Lonigro et al., 2013). References: Basso A., Lonigro T. and Polemio M. (2012) "The improvement of historical database on damaging hydrogeological events in the case of Apulia (Southern Italy)". Rendiconti online della Società Geologica Italiana, 21: 379-380; Lonigro T., Basso A. and Polemio M. (2013) "Historical database on damaging hydrogeological events in Apulia region (Southern Italy)". Rendiconti online della Società Geologica Italiana, 24: 196-198; Polemio M. (2010) "Historical floods and a recent extreme rainfall event in the Murgia karstic environment (Southern Italy)". Zeitschrift für Geomorphologie, 54(2): 195-219.

  17. The IPD and IMGT/HLA database: allele variant databases

    PubMed Central

    Robinson, James; Halliwell, Jason A.; Hayhurst, James D.; Flicek, Paul; Parham, Peter; Marsh, Steven G. E.

    2015-01-01

    The Immuno Polymorphism Database (IPD) was developed to provide a centralized system for the study of polymorphism in genes of the immune system. Through the IPD project we have established a central platform for the curation and publication of locus-specific databases related either directly or indirectly to the function of the Major Histocompatibility Complex in a number of different species. We have collaborated with specialist groups or nomenclature committees that curate the individual sections before they are submitted to IPD for online publication. IPD consists of five core databases, with the IMGT/HLA Database as the primary database. Through the work of the various nomenclature committees, the HLA Informatics Group, and in collaboration with the European Bioinformatics Institute, we are able to provide public access to these data through the website http://www.ebi.ac.uk/ipd/. The IPD project continues to develop, with new tools being added to address scientific developments, such as Next Generation Sequencing, and to address user feedback and requests. Regular updates to the website ensure that new and confirmatory sequences are dispersed to the immunogenetics community and the wider research and clinical communities. PMID:25414341

  18. A database of macromolecular motions.

    PubMed Central

    Gerstein, M; Krebs, W

    1998-01-01

    We describe a database of macromolecular motions meant to be of general use to the structural community. The database, which is accessible on the World Wide Web with an entry point at http://bioinfo.mbb.yale.edu/MolMovDB , attempts to systematize all instances of protein and nucleic acid movement for which there is at least some structural information. At present it contains >120 motions, most of which are of proteins. Protein motions are further classified hierarchically into a limited number of categories, first on the basis of size (distinguishing between fragment, domain and subunit motions) and then on the basis of packing. Our packing classification divides motions into various categories (shear, hinge, other) depending on whether or not they involve sliding over a continuously maintained and tightly packed interface. In addition, the database provides some indication about the evidence behind each motion (i.e. the type of experimental information or whether the motion is inferred based on structural similarity) and attempts to describe many aspects of a motion in terms of a standardized nomenclature (e.g. the maximum rotation, the residue selection of a fixed core, etc.). Currently, we use a standard relational design to implement the database. However, the complexity and heterogeneity of the information kept in the database makes it an ideal application for an object-relational approach, and we are moving it in this direction. Specifically, in terms of storing complex information, the database contains plausible representations for motion pathways, derived from restrained 3D interpolation between known endpoint conformations. These pathways can be viewed in a variety of movie formats, and the database is associated with a server that can automatically generate these movies from submitted coordinates. PMID:9722650
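The interpolated-pathway idea above can be illustrated with the unrestrained core of the computation: linear interpolation of atomic coordinates between two endpoint conformations. The database's actual pathways add stereochemical restraints, and the coordinates below are made up for illustration:

```python
# Generate intermediate frames between two conformations by linearly
# interpolating each (x, y, z) coordinate. This is the unrestrained core
# only; real pathway generation restrains geometry at each step.

def interpolate(start, end, n_frames):
    """Return n_frames + 1 coordinate sets from start to end inclusive."""
    frames = []
    for k in range(n_frames + 1):
        t = k / n_frames
        frames.append([
            tuple(a + t * (b - a) for a, b in zip(p, q))
            for p, q in zip(start, end)
        ])
    return frames

# Two hypothetical two-atom conformations (an "open" and a "closed" form).
open_form   = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
closed_form = [(0.0, 0.0, 0.0), (6.0, 4.0, 0.0)]
path = interpolate(open_form, closed_form, 2)
print(path[1])  # midpoint frame: [(0.0, 0.0, 0.0), (8.0, 2.0, 0.0)]
```

Rendering each frame in sequence gives exactly the kind of movie the server described above generates from submitted endpoint coordinates.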

  19. Waste Tank Vapor Project: Tank vapor database development

    SciTech Connect

    Seesing, P.R.; Birn, M.B.; Manke, K.L.

    1994-09-01

    The objective of the Tank Vapor Database (TVD) Development task in FY 1994 was to create a database to store, retrieve, and analyze data collected from the vapor phase of Hanford waste tanks. The data needed to be accessible over the Hanford Local Area Network to users at both Westinghouse Hanford Company (WHC) and Pacific Northwest Laboratory (PNL). The data were restricted to results published in cleared reports from the laboratories analyzing vapor samples. Emphasis was placed on ease of access and flexibility of data formatting and reporting mechanisms. Because of time and budget constraints, a Rapid Application Development strategy was adopted by the database development team. An extensive data modeling exercise was conducted to determine the scope of information contained in the database. A Sun SPARCstation 1000 was procured as the database file server. A multi-user relational database management system, Sybase®, was chosen to provide the basic data storage and retrieval capabilities. Two packages were chosen for the user interface to the database: DataPrism® and Business Objects™. A prototype database was constructed to provide the Waste Tank Vapor Project's Toxicology task with summarized and detailed information presented at Vapor Conference 4 by WHC, PNL, Oak Ridge National Laboratory, and the Oregon Graduate Institute. The prototype was used to develop a list of reported compounds and the range of values for compounds reported by the analytical laboratories using different sample containers and analysis methodologies. The prototype allowed a panel of toxicology experts to identify carcinogens and compounds whose concentrations were within reach of regulatory limits. The database and user documentation were made available for general access in September 1994.

  20. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative

    PubMed Central

    Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-01-01

Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle these data effectively. Multiple research groups should be able to pool and share these data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof-of-principle investigation on lung cancer patients who took erlotinib. This study does not examine epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors themselves; rather, it discusses the development and utilization of the databases involved. Methods: We implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made freely available to other institutions that have implemented their own databases patterned on them. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed: of the 70 patients who were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival than their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC.
While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved with greater cross-communication between institutions of the consortium. Conclusion: The investigation described herein demonstrates the successful data collection from multiple institutions in the context of a collaborative effort. The data presented here can be utilized as the basis for further collaborative efforts and/or development of larger and more streamlined databases within the consortium. PMID:27092293

  1. EPILEPSIAE - a European epilepsy database.

    PubMed

    Ihle, Matthias; Feldwisch-Drentrup, Hinnerk; Teixeira, César A; Witon, Adrien; Schelter, Björn; Timmer, Jens; Schulze-Bonhage, Andreas

    2012-06-01

With a worldwide prevalence of about 1%, epilepsy is one of the most common serious brain diseases, with profound physical, psychological, and social consequences. Characteristic symptoms are seizures caused by abnormally synchronized neuronal activity that can lead to temporary impairments of motor functions, perception, speech, memory, or consciousness. The possibility of predicting the occurrence of epileptic seizures by monitoring electroencephalographic (EEG) activity is considered one of the most promising options for establishing new therapeutic strategies for the considerable fraction of patients whose seizures are currently insufficiently controlled. Here, a database is presented which is part of the EU-funded project "EPILEPSIAE", aimed at the development of seizure prediction algorithms that can monitor the EEG for seizure precursors. High-quality, long-term continuous EEG data, enriched with clinical metadata, which so far have not been available, are managed in this database as a joint effort of epilepsy centers in Portugal (Coimbra), France (Paris) and Germany (Freiburg). The architecture and underlying schema of the database are reported here. It was designed for efficient organization, access and search of the data of 300 epilepsy patients, including high-quality long-term EEG recordings obtained with scalp and intracranial electrodes, as well as derived features and supplementary clinical and imaging data. The organization of this European database will allow accessibility by a wide spectrum of research groups and may serve as a model for similar databases planned for the future. PMID:20863589

  2. Reference ballistic imaging database performance.

    PubMed

    De Kinder, Jan; Tulleners, Frederic; Thiebaut, Hugues

    2004-03-10

    Ballistic imaging databases allow law enforcement to link recovered cartridge cases to other crime scenes and to firearms. The success of these databases has led many to propose that all firearms in circulation be entered into a reference ballistic image database (RBID). To assess the performance of an RBID, we fired 4200 cartridge cases from 600 9mm Para Sig Sauer model P226 series pistols. Each pistol fired two Remington cartridges, one of which was imaged in the RBID, and five additional cartridges, consisting of Federal, Speer, Winchester, Wolf, and CCI brands. Randomly selected samples from the second series of Remington cartridge cases and from the five additional brands were then correlated against the RBID. Of the 32 cartridges of the same make correlated against the RBID, 72% ranked in the top 10 positions. Likewise, of the 160 cartridges of the five different brands correlated against the database, 21% ranked in the top 10 positions. Generally, the ranking position increased as the size of the RBID increased. We obtained similar results when we expanded the RBID to include firearms with the same class characteristics for breech face marks, firing pin impressions, and extractor marks. The results of our six queries against the RBID indicate that a reference ballistics image database of new guns is currently fraught with too many difficulties to be an effective and efficient law enforcement tool. PMID:15036442
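The headline figures above are rank statistics from correlation queries. A minimal sketch of how such a top-10 hit rate is computed, with made-up ranks rather than the study's data:

```python
# Illustrative sketch: computing the share of correlation queries whose
# true match ranked within the top k, as in the RBID evaluation.
# The rank values below are invented, not the study's data.

def top_k_rate(ranks, k=10):
    """Fraction of queries whose correct entry ranked within the top k."""
    hits = sum(1 for r in ranks if r <= k)
    return hits / len(ranks)

same_brand_ranks = [1, 2, 3, 7, 9, 15, 40, 2, 5, 88]   # hypothetical
rate = top_k_rate(same_brand_ranks)
```

The study's observation that ranking positions worsen as the reference database grows corresponds to this rate dropping as the candidate pool expands.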

  3. REDIdb: the RNA editing database

    PubMed Central

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed to collect RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat file containing a description of the main characteristics of the entry, a feature table with the editing events and related details, and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities that either graphically display genomic or cDNA sequences or show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be submitted directly to REDIdb after a user-specific registration to obtain authorized secure access. This first version of the REDIdb database stores 9964 editing events and can be freely queried online. PMID:17175530

  4. Stratospheric emissions effects database development

    SciTech Connect

    Baughcum, S.L.; Henderson, S.C.; Hertel, P.S.; Maggiora, D.R.; Oncina, C.A.

    1994-07-01

    This report describes the development of a stratospheric emissions effects database (SEED) of aircraft fuel burn and emissions from projected Year 2015 subsonic aircraft fleets and from projected fleets of high-speed civil transports (HSCT's). This report also describes the development of a similar database of emissions from Year 1990 scheduled commercial passenger airline and air cargo traffic. The objective of this work was to initiate, develop, and maintain an engineering database for use by atmospheric scientists conducting the Atmospheric Effects of Stratospheric Aircraft (AESA) modeling studies. Fuel burn and emissions of nitrogen oxides (NO(x) as NO2), carbon monoxide, and hydrocarbons (as CH4) have been calculated on a 1-degree latitude x 1-degree longitude x 1-kilometer altitude grid and delivered to NASA as electronic files. This report describes the assumptions and methodology for the calculations and summarizes the results of these calculations.
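The gridding step described here, accumulating per-flight-segment fuel burn and emissions into 1-degree latitude x 1-degree longitude x 1-kilometer altitude cells, can be sketched as follows. The segment data and the NOx emission index are invented for illustration, not SEED values.

```python
# Minimal sketch of binning flight-segment fuel burn onto a
# 1 deg lat x 1 deg lon x 1 km altitude grid, the kind of gridding the
# SEED calculations describe. The EI value and segments are invented.
from collections import defaultdict
import math

EI_NOX = 0.015  # hypothetical emission index, kg NOx (as NO2) per kg fuel

def grid_emissions(segments):
    """segments: iterable of (lat_deg, lon_deg, alt_km, fuel_kg).
    Returns {(lat_bin, lon_bin, alt_bin): (fuel_kg, nox_kg)}."""
    grid = defaultdict(lambda: [0.0, 0.0])
    for lat, lon, alt, fuel in segments:
        cell = (math.floor(lat), math.floor(lon), math.floor(alt))
        grid[cell][0] += fuel                # accumulate fuel burn
        grid[cell][1] += fuel * EI_NOX       # emissions scale with fuel
    return {c: tuple(v) for c, v in grid.items()}

flight = [(47.2, -122.5, 10.3, 500.0),   # two cruise segments that fall
          (47.9, -122.1, 10.8, 450.0)]   # into the same grid cell
cells = grid_emissions(flight)
```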

  5. Stratospheric emissions effects database development

    NASA Technical Reports Server (NTRS)

    Baughcum, Steven L.; Henderson, Stephen C.; Hertel, Peter S.; Maggiora, Debra R.; Oncina, Carlos A.

    1994-01-01

    This report describes the development of a stratospheric emissions effects database (SEED) of aircraft fuel burn and emissions from projected Year 2015 subsonic aircraft fleets and from projected fleets of high-speed civil transports (HSCT's). This report also describes the development of a similar database of emissions from Year 1990 scheduled commercial passenger airline and air cargo traffic. The objective of this work was to initiate, develop, and maintain an engineering database for use by atmospheric scientists conducting the Atmospheric Effects of Stratospheric Aircraft (AESA) modeling studies. Fuel burn and emissions of nitrogen oxides (NO(x) as NO2), carbon monoxide, and hydrocarbons (as CH4) have been calculated on a 1-degree latitude x 1-degree longitude x 1-kilometer altitude grid and delivered to NASA as electronic files. This report describes the assumptions and methodology for the calculations and summarizes the results of these calculations.

  6. National Residential Efficiency Measures Database

    DOE Data Explorer

    The National Residential Efficiency Measures Database is a publicly available, centralized resource of residential building retrofit measures and costs for the U.S. building industry. With support from the U.S. Department of Energy, NREL developed this tool to help users determine the most cost-effective retrofit measures for improving energy efficiency of existing homes. Software developers who require residential retrofit performance and cost data for applications that evaluate residential efficiency measures are the primary audience for this database. In addition, home performance contractors and manufacturers of residential materials and equipment may find this information useful. The database offers the following types of retrofit measures: 1) Appliances, 2) Domestic Hot Water, 3) Enclosure, 4) Heating, Ventilating, and Air Conditioning (HVAC), 5) Lighting, 6) Miscellaneous.

  7. DOE Global Energy Storage Database

    DOE Data Explorer

    The DOE International Energy Storage Database has more than 400 documented energy storage projects from 34 countries around the world. The database provides free, up-to-date information on grid-connected energy storage projects and relevant state and federal policies. More than 50 energy storage technologies are represented worldwide, including multiple battery technologies, compressed air energy storage, flywheels, gravel energy storage, hydrogen energy storage, pumped hydroelectric, superconducting magnetic energy storage, and thermal energy storage. The policy section of the database shows 18 federal and state policies addressing grid-connected energy storage, from rules and regulations to tariffs and other financial incentives. It is funded through DOE’s Sandia National Laboratories, and has been operating since January 2012.

  8. Seismic databases of The Caucasus

    NASA Astrophysics Data System (ADS)

    Gunia, I.; Sokhadze, G.; Mikava, D.; Tvaradze, N.; Godoladze, T.

    2012-12-01

The Caucasus is one of the active segments of the Alpine-Himalayan collision belt. The region needs continuous seismic monitoring for a better understanding of the tectonic processes at work there. The Seismic Monitoring Center of Georgia (Ilia State University) operates the country's digital seismic network and also collects and exchanges data with neighboring countries. The main focus of our study was to create a seismic database that is well organized, easily accessible, and convenient for scientists to use. The seismological database includes information about more than 100,000 earthquakes from the whole Caucasus, drawn from both analog and digital seismic networks. The first analog seismic station in the Caucasus was installed in Tbilisi, Georgia, in 1899. The number of analog stations grew over the following decades, and by the 1980s about 100 analog stations were operating across the region. After 1992, owing to the political and economic situation, the number of stations declined, and by 2002 only two analog instruments remained in operation. A new digital seismic network has been developed in Georgia since 2003, and today more than 25 digital stations operate in the country. The database includes detailed information about all equipment installed at the seismic stations. The database is available online, which provides a convenient interface for seismic data exchange among the neighboring Caucasus countries. It also simplifies both seismic data processing and transfer into the database, and reduces operator errors during routine work. The database was built with the following technologies: PHP, MySQL, JavaScript, Ajax, GMT, Gmap, and Hypoinverse.
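A sketch of the kind of catalogue table and bounding-box query such an online interface (PHP/MySQL in the abstract) would issue. SQLite stands in for MySQL here, and the schema and event values are illustrative, not records from the actual database.

```python
# Hypothetical earthquake-catalogue table with a bounding-box query,
# the sort of request a web front end to such a database would issue.
# SQLite stands in for MySQL; the schema and events are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE event (
    origin_time TEXT, lat REAL, lon REAL, depth_km REAL, magnitude REAL,
    network TEXT)""")
con.executemany("INSERT INTO event VALUES (?, ?, ?, ?, ?, ?)", [
    ("1991-04-29T09:12:48", 42.45, 43.67, 17.0, 6.9, "analog"),
    ("2009-09-08T02:41:35", 42.57, 43.45, 10.0, 6.0, "digital"),
    ("2011-05-19T13:34:00", 35.10, 26.20,  5.0, 4.1, "digital"),
])
# Events inside a rough Caucasus bounding box, strongest first
rows = con.execute("""
    SELECT origin_time, magnitude FROM event
    WHERE lat BETWEEN 38 AND 44 AND lon BETWEEN 40 AND 50
    ORDER BY magnitude DESC""").fetchall()
```

Serving analog-era and digital-era events from one table is what lets a single query span the full catalogue described in the abstract.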

  9. NEOBASE: databasing the neocortical microcircuit.

    PubMed

    Muhammad, Asif Jan; Markram, Henry

    2005-01-01

Mammals adapt to a rapidly changing world because of the sophisticated perceptual and cognitive function enabled by the neocortex. The neocortex, which has expanded to constitute nearly 80% of the human brain, seems to have arisen from repeated duplication of a stereotypical template of neurons and synaptic circuits, with subtle specializations in different brain regions and species. Determining the design and function of this microcircuitry is therefore of paramount importance to understanding normal and abnormal higher brain function. Recent advances in recording from synaptically coupled neurons have allowed rapid dissection of the neocortical microcircuitry, yielding a massive amount of quantitative anatomical, electrical and gene expression data on the neurons and the synaptic circuits that connect them. Given the availability of these data, it has now become imperative to database the neurons of the microcircuit and their synaptic connections. The NEOBASE project aims to archive the neocortical microcircuit data in a manner that facilitates the development of advanced data mining applications, statistical and bioinformatics analysis tools, custom microcircuit builders, and visualization and simulation applications. The database architecture is based on ROOT, a software environment that allows the construction of an object-oriented database with numerous relational capabilities. The proposed architecture allows construction of a database that closely mimics the architecture of the real microcircuit, which facilitates the interface with virtually any application, allows for data format evolution, and aims for full interoperability with other databases. NEOBASE will provide an important resource and research tool for studying the microcircuit basis of normal and abnormal neocortical function. The database will be available to local as well as remote users using Grid-based tools and technologies. PMID:15923726

  10. Federal Register Document Image Database, Volume 1

    National Institute of Standards and Technology Data Gateway

    NIST Federal Register Document Image Database, Volume 1 (PC database for purchase)   NIST has produced a new document image database for evaluating document analysis and recognition technologies and information retrieval systems. NIST Special Database 25 contains page images from the 1994 Federal Register and much more.

  11. Brede tools and federating online neuroinformatics databases.

    PubMed

Nielsen, Finn Årup

    2014-01-01

    As open science neuroinformatics databases the Brede Database and Brede Wiki seek to make distribution and federation of their content as easy and transparent as possible. The databases rely on simple formats and allow other online tools to reuse their content. This paper describes the possible interconnections on different levels between the Brede tools and other databases. PMID:23666785

  12. Building Databases for Education. ERIC Digest.

    ERIC Educational Resources Information Center

    Klausmeier, Jane A.

    This digest provides a brief explanation of what a database is; explains how a database can be used; identifies important factors that should be considered when choosing database management system software; and provides citations to sources for finding reviews and evaluations of database management software. The digest is concerned primarily with…

  13. Database Driven Web Systems for Education.

    ERIC Educational Resources Information Center

    Garrison, Steve; Fenton, Ray

    1999-01-01

Provides technical information on publishing to the Web. Demonstrates some new applications in database publishing. Discusses the difference between static and database-driven Web pages. Reviews failures and successes of a Web database system. Addresses the question of how to build a database-driven Web site, discussing connectivity software, Web…

  14. Online Petroleum Industry Bibliographic Databases: A Review.

    ERIC Educational Resources Information Center

    Anderson, Margaret B.

    This paper discusses the present status of the bibliographic database industry, reviews the development of online databases of interest to the petroleum industry, and considers future developments in online searching and their effect on libraries and information centers. Three groups of databases are described: (1) databases developed by the…

  16. Freshwater Biological Traits Database (External Review Draft)

    EPA Science Inventory

    This draft report discusses the development of a database of freshwater biological traits. The database combines several existing traits databases into an online format. The database is also augmented with additional traits that are relevant to detecting climate change-related ef...

  17. WMC Database Evaluation. Case Study Report

    SciTech Connect

    Palounek, Andrea P. T

    2015-10-29

    The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.

  18. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Masuyama, Keiichi

CD-ROM has rapidly evolved as a new information medium with large capacity. In the U.S., it is predicted to become a two-hundred-billion-yen market within three years, making CD-ROM a strategic target of the database industry. In Japan, the movement toward its commercialization has been active since this year. Will the CD-ROM business ever conquer the information market as an on-disk database or electronic publication? Referring to several application cases in the U.S., the author assesses the marketability and future trends of this new optical disk medium.

  19. Quality control of EUVE databases

    NASA Technical Reports Server (NTRS)

    John, L. M.; Drake, J.

    1992-01-01

    The publicly accessible databases for the Extreme Ultraviolet Explorer include: the EUVE Archive mailserver; the CEA ftp site; the EUVE Guest Observer Mailserver; and the Astronomical Data System node. The EUVE Performance Assurance team is responsible for verifying that these public EUVE databases are working properly, and that the public availability of EUVE data contained therein does not infringe any data rights which may have been assigned. In this poster, we describe the Quality Assurance (QA) procedures we have developed from the approach of QA as a service organization, thus reflecting the overall EUVE philosophy of Quality Assurance integrated into normal operating procedures, rather than imposed as an external, post facto, control mechanism.

  20. Coal quality databases: Practical applications

    SciTech Connect

    Finkelman, R.B.; Gross, P.M.K.

    1999-07-01

Domestic and worldwide coal use will be influenced by concerns about the effects of coal combustion on the local, regional and global environment. Reliable coal quality data can help decision-makers to better assess risks and determine impacts of coal constituents on technological behavior, economic byproduct recovery, and environmental and human health issues. The US Geological Survey (USGS) maintains an existing coal quality database (COALQUAL) that contains analyses of approximately 14,000 coal samples from every major coal-producing basin in the US. For each sample, the database contains results of proximate and ultimate analyses; sulfur form data; and major, minor, and trace element concentrations for approximately 70 elements.

  1. Approximate search in image database

    NASA Astrophysics Data System (ADS)

    Ferro, Alfredo; Gallo, Giovanni; Giugno, Rosalba

    1999-12-01

This paper presents a new approach to content-based retrieval in image databases. The basic new idea in the proposed technique is to organize the quantized and truncated wavelet coefficients of an image into a suitable tree structure. The tree structure respects the natural hierarchy imposed on the coefficients by the successive resolution levels. All the trees relative to the images in a database are organized into a trie. This structure helps in the error-tolerant retrieval of queries. The results obtained show that this approach is promising, provided that a suitable distance function between trees is adopted.
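The core idea, truncating small quantized wavelet coefficients and arranging the survivors in the parent/child hierarchy induced by resolution levels, can be sketched as follows. The 1-D Haar transform and the threshold value are simplifying assumptions for the example; the paper works with image (2-D) coefficients.

```python
# Illustrative sketch: keep only large wavelet coefficients and record
# the natural parent/child hierarchy across resolution levels.
# The 1-D Haar transform and the threshold are simplifying assumptions.

def haar_levels(signal):
    """Return per-level detail coefficients (finest first) and the mean."""
    levels = []
    s = list(signal)
    while len(s) > 1:
        approx = [(a + b) / 2 for a, b in zip(s[0::2], s[1::2])]
        detail = [(a - b) / 2 for a, b in zip(s[0::2], s[1::2])]
        levels.append(detail)
        s = approx
    return levels, s[0]

def coefficient_tree(signal, threshold=0.5):
    """Map each kept coefficient to its parent at the next coarser level."""
    levels, mean = haar_levels(signal)
    tree = {"mean": mean, "nodes": {}}   # nodes: (level, index) -> parent
    for lvl, detail in enumerate(levels):
        for i, c in enumerate(detail):
            if abs(c) >= threshold:      # truncation: drop small coefficients
                parent = (lvl + 1, i // 2) if lvl + 1 < len(levels) else None
                tree["nodes"][(lvl, i)] = parent
    return tree

t = coefficient_tree([9, 7, 3, 5])
```

A database would build one such tree per image and insert them all into a trie keyed on root-to-leaf paths, which is what enables the error-tolerant lookup the paper describes.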

  2. A comparison of biomedical databases.

    PubMed Central

    Mychko-Megrin, A Y

    1991-01-01

    Various published bibliographic and abstract services covering the period 1970-1988 were compared to analyze scope and coverage. A total of 7,281 articles and book titles (1,655 Soviet and 5,626 foreign) were selected on forty-one topics in different medical fields. The titles originated from three different samples but included all Soviet medical literature on the subjects. A distribution of biomedical serials from five databases is given by country, and twelve indices to assess the quality of biomedical databases are suggested. PMID:1884085

  3. Data exploration systems for databases

    NASA Technical Reports Server (NTRS)

    Greene, Richard J.; Hield, Christopher

    1992-01-01

    Data exploration systems apply machine learning techniques, multivariate statistical methods, information theory, and database theory to databases to identify significant relationships among the data and summarize information. The result of applying data exploration systems should be a better understanding of the structure of the data and a perspective of the data enabling an analyst to form hypotheses for interpreting the data. This paper argues that data exploration systems need a minimum amount of domain knowledge to guide both the statistical strategy and the interpretation of the resulting patterns discovered by these systems.

  4. BDB: The Binary Star Database

    NASA Astrophysics Data System (ADS)

    Dluzhnevskaya, O.; Kaygorodov, P.; Kovaleva, D.; Malkov, O.

    2014-05-01

Description of the Binary star DataBase (BDB, http://bdb.inasan.ru), the world's principal database of binary and multiple systems of all observational types, is presented in the paper. BDB contains data on physical and positional parameters of 100,000 components of 40,000 systems of multiplicity 2 to 20, belonging to various observational types: visual, spectroscopic, eclipsing, etc. Information on these types of binaries is obtained from heterogeneous sources of data. The organization of the information is based on careful cross-identification of the objects. BDB can be queried by star identifier, coordinates, and other parameters.

  5. FLOPROS: an evolving global database of flood protection standards

    NASA Astrophysics Data System (ADS)

    Scussolini, P.; Aerts, J. C. J. H.; Jongman, B.; Bouwer, L. M.; Winsemius, H. C.; de Moel, H.; Ward, P. J.

    2015-12-01

    With the projected changes in climate, population and socioeconomic activity located in flood-prone areas, the global assessment of the flood risk is essential to inform climate change policy and disaster risk management. Whilst global flood risk models exist for this purpose, the accuracy of their results is greatly limited by the lack of information on the current standard of protection to floods, with studies either neglecting this aspect or resorting to crude assumptions. Here we present a first global database of FLOod PROtection Standards, FLOPROS, which comprises information in the form of the flood return period associated with protection measures, at different spatial scales. FLOPROS comprises three layers of information, and combines them into one consistent database. The Design layer contains empirical information about the actual standard of existing protection already in place, while the Policy layer and the Model layer are proxies for such protection standards, and serve to increase the spatial coverage of the database. The Policy layer contains information on protection standards from policy regulations; and the Model layer uses a validated modeling approach to calculate protection standards. Based on this first version of FLOPROS, we suggest a number of strategies to further extend and increase the resolution of the database. Moreover, as the database is intended to be continually updated, while flood protection standards are changing with new interventions, FLOPROS requires input from the flood risk community. We therefore invite researchers and practitioners to contribute information to this evolving database by corresponding to the authors.

  6. FLOPROS: an evolving global database of flood protection standards

    NASA Astrophysics Data System (ADS)

    Scussolini, Paolo; Aerts, Jeroen C. J. H.; Jongman, Brenden; Bouwer, Laurens M.; Winsemius, Hessel C.; de Moel, Hans; Ward, Philip J.

    2016-05-01

    With projected changes in climate, population and socioeconomic activity located in flood-prone areas, the global assessment of flood risk is essential to inform climate change policy and disaster risk management. Whilst global flood risk models exist for this purpose, the accuracy of their results is greatly limited by the lack of information on the current standard of protection to floods, with studies either neglecting this aspect or resorting to crude assumptions. Here we present a first global database of FLOod PROtection Standards, FLOPROS, which comprises information in the form of the flood return period associated with protection measures, at different spatial scales. FLOPROS comprises three layers of information, and combines them into one consistent database. The design layer contains empirical information about the actual standard of existing protection already in place; the policy layer contains information on protection standards from policy regulations; and the model layer uses a validated modelling approach to calculate protection standards. The policy layer and the model layer can be considered adequate proxies for actual protection standards included in the design layer, and serve to increase the spatial coverage of the database. Based on this first version of FLOPROS, we suggest a number of strategies to further extend and increase the resolution of the database. Moreover, as the database is intended to be continually updated, while flood protection standards are changing with new interventions, FLOPROS requires input from the flood risk community. We therefore invite researchers and practitioners to contribute information to this evolving database by corresponding to the authors.
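The three-layer combination described above can be sketched as a simple precedence merge, preferring an empirical design standard, then the policy layer, then the modelled estimate. The precedence rule and the return-period values below are illustrative assumptions for the sketch, not FLOPROS data.

```python
# Sketch of the FLOPROS-style layer merge: per region, prefer the
# design layer, fall back to policy, then to the modelled estimate.
# Precedence order and all values are illustrative assumptions.

def merge_layers(design, policy, model):
    """Combine per-region protection standards (flood return period,
    in years) into one consistent layer."""
    merged = {}
    for region in set(design) | set(policy) | set(model):
        for layer in (design, policy, model):   # precedence order
            if region in layer:
                merged[region] = layer[region]
                break
    return merged

design = {"A": 1000}              # known built protection
policy = {"A": 500, "B": 100}     # legally required standard
model  = {"B": 75, "C": 10}       # modelled estimate
combined = merge_layers(design, policy, model)
```

The fallback chain is what lets proxy layers extend spatial coverage without ever overriding empirical information where it exists.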

  7. View discovery in OLAP databases through statistical combinatorial optimization

    SciTech Connect

    Hengartner, Nick W; Burke, John; Critchlow, Terence; Joslyn, Cliff; Hogan, Emilie

    2009-01-01

OnLine Analytical Processing (OLAP) is a relational database technology providing users with rapid access to summary, aggregated views of a single large database, and is widely recognized for knowledge representation and discovery in high-dimensional relational databases. OLAP technologies provide intuitive and graphical access to the massively complex set of possible summary views available in large relational (SQL) structured data repositories. The capability of OLAP database software systems to handle data complexity comes at a high price for analysts, presenting them with a combinatorially vast space of views of a relational database. We respond to the need for technologies that allow users to guide themselves to areas of local structure by casting the space of 'views' of an OLAP database as a combinatorial lattice of all projections and subsets, and 'view discovery' as a search process over that lattice. We equip the view lattice with statistical and information-theoretic measures sufficient to support a combinatorial optimization process. We outline 'hop-chaining' as a particular view discovery algorithm over this object, wherein users are guided across a permutation of the dimensions by searching for successive two-dimensional views, pushing seen dimensions into an increasingly large background filter in a 'spiraling' search process. We illustrate this work in the context of data cubes recording summary statistics for radiation portal monitors at US ports.
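The view-scoring idea, equipping candidate two-dimensional views with an information measure and searching for the most informative one, can be sketched as follows. The tiny record set and the use of plain joint entropy as the score are assumptions for the example, not the paper's exact measures.

```python
# Illustrative sketch of scoring candidate two-dimensional views of a
# data cube with an information measure, in the spirit of the view
# discovery search above. Records and the entropy score are assumptions.
from collections import Counter
from itertools import combinations
import math

records = [                       # hypothetical cube rows
    {"port": "LA", "hour": 1, "alarm": "none"},
    {"port": "LA", "hour": 2, "alarm": "none"},
    {"port": "NY", "hour": 1, "alarm": "gamma"},
    {"port": "NY", "hour": 2, "alarm": "gamma"},
]

def joint_entropy(records, dims):
    """Joint entropy (bits) of the projection onto the given dimensions."""
    counts = Counter(tuple(r[d] for d in dims) for r in records)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

dims = ["port", "hour", "alarm"]
scores = {pair: joint_entropy(records, pair)
          for pair in combinations(dims, 2)}
best = max(scores, key=scores.get)
```

A hop-chaining search would repeatedly pick such a best two-dimensional view, then fix one of its dimensions and score the next candidate pair against the growing background filter.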

  8. Contaminated sediments database for Long Island Sound and the New York Bight

    USGS Publications Warehouse

    Mecray, Ellen L.; Reid, Jamey M.; Hastings, Mary E.; Buchholtz ten Brink, Marilyn R.

    2003-01-01

The Contaminated Sediments Database for Long Island Sound and the New York Bight provides a compilation of published and unpublished sediment texture and contaminant data. This report provides maps of several of the contaminants in the database, as well as references and a section on using the data to assess the environmental status of these coastal areas. The database contains information collected between 1956 and 1997, providing a historical foundation for future contaminant studies in the region.

  9. NLTE4 Plasma Population Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 159 NLTE4 Plasma Population Kinetics Database (Web database for purchase)   This database contains benchmark results for simulation of plasma population kinetics and emission spectra. The data were contributed by the participants of the 4th Non-LTE Code Comparison Workshop who have unrestricted access to the database. The only limitation for other users is in hidden labeling of the output results. Guest users can proceed to the database entry page without entering userid and password.

  10. Coal database for Cook Inlet and North Slope, Alaska

    USGS Publications Warehouse

    Stricker, Gary D.; Spear, Brianne D.; Sprowl, Jennifer M.; Dietrich, John D.; McCauley, Michael I.; Kinney, Scott A.

    2011-01-01

    This database is a compilation of published and nonconfidential unpublished coal data from Alaska. Although coal occurs in isolated areas throughout Alaska, this study includes data only from the Cook Inlet and North Slope areas. The data include entries from and interpretations of oil and gas well logs, coal-core geophysical logs (such as density, gamma, and resistivity), seismic shot hole lithology descriptions, measured coal sections, and isolated coal outcrops.

  11. Mars global digital dune database: MC-30

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2012-01-01

    The Mars Global Digital Dune Database (MGD3) provides data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey Open-File Reports. The first report (Hayward and others, 2007) included dune fields from lat 65° N. to 65° S. (http://pubs.usgs.gov/of/2007/1158/). The second report (Hayward and others, 2010) included dune fields from lat 60° N. to 90° N. (http://pubs.usgs.gov/of/2010/1170/). This report encompasses ~75,000 km2 of mapped dune fields from lat 60° to 90° S. The dune fields included in this global database were initially located using Mars Odyssey Thermal Emission Imaging System (THEMIS) Infrared (IR) images. In the previous two reports, some dune fields may have been unintentionally excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or (2) resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. In this report, mapping is more complete. The Arizona State University THEMIS daytime IR mosaic provided complete IR coverage, and it is unlikely that we missed any large dune fields in the South Pole (SP) region. In addition, the increased availability of higher resolution images resulted in the inclusion of more small (~1 km2) sand dune fields and sand patches. To maintain consistency with the previous releases, we have identified the sand features that would not have been included in earlier releases. While the moderate to large dune fields in MGD3 are likely to constitute the largest compilation of sediment on the planet, we acknowledge that our database excludes numerous small dune fields and some moderate to large dune fields as well. Please note that the absence of mapped dune fields does not mean that dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. 
Where availability and quality of THEMIS visible (VIS), Mars Orbiter Camera (MOC) narrow angle, Mars Express High Resolution Stereo Camera, or Mars Reconnaissance Orbiter Context Camera and High Resolution Imaging Science Experiment images allowed, we classified dunes and included some dune slipface measurements, which were derived from gross dune morphology and represent the approximate prevailing wind direction at the last time of significant dune modification. It was beyond the scope of this report to look at the detail needed to discern subtle dune modification. It was also beyond the scope of this report to measure all slipfaces. We attempted to include enough slipface measurements to represent the general circulation (as implied by gross dune morphology) and to give a sense of the complex nature of aeolian activity on Mars. The absence of slipface measurements in a given direction should not be taken as evidence that winds in that direction did not occur. When a dune field was located within a crater, the azimuth from crater centroid to dune field centroid was calculated, as another possible indicator of wind direction. Output from a general circulation model is also included. In addition to polygons locating dune fields, the database includes ~700 of the THEMIS VIS and MOC images that were used to build the database.

  12. The New NRL Crystallographic Database

    NASA Astrophysics Data System (ADS)

    Mehl, Michael; Curtarolo, Stefano; Hicks, David; Toher, Cormac; Levy, Ohad; Hart, Gus

    For many years the Naval Research Laboratory maintained an online graphical database of crystal structures for a wide variety of materials. This database has now been redesigned, updated and integrated with the AFLOW framework for high throughput computational materials discovery (http://materials.duke.edu/aflow.html). For each structure we provide an image showing the atomic positions; the primitive vectors of the lattice and the basis vectors of every atom in the unit cell; the space group and Wyckoff positions; Pearson symbols; common names; and Strukturbericht designations, where available. References for each structure are provided, as well as a Crystallographic Information File (CIF). The database currently includes almost 300 entries and will be continuously updated and expanded. It enables easy search of the various structures based on their underlying symmetries, either by Bravais lattice, Pearson symbol, Strukturbericht designation or commonly used prototypes. The talk will describe the features of the database, and highlight its utility for high throughput computational materials design. Work at NRL is funded by a Contract with the Duke University Department of Mechanical Engineering.

  13. The Hidden Dimensions of Databases.

    ERIC Educational Resources Information Center

    Jacso, Peter

    1994-01-01

    Discusses methods of evaluating commercial online databases and provides examples that illustrate their hidden dimensions. Topics addressed include size, including the number of records or the number of titles; the number of years covered; and the frequency of updates. Comparisons of Readers' Guide Abstracts and Magazine Article Summaries are…

  14. Safeguarding Databases Basic Concepts Revisited.

    ERIC Educational Resources Information Center

    Cardinali, Richard

    1995-01-01

    Discusses issues of database security and integrity, including computer crime and vandalism, human error, computer viruses, employee and user access, and personnel policies. Suggests some precautions to minimize system vulnerability such as careful personnel screening, audit systems, passwords, and building and software security systems. (JKP)

  15. CROP GENOME DATABASES -- CRITICAL ISSUES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Crop genome databases, see www.agron.missouri.edu/bioservers.html of the past decade have had designed and implemented (1) models and schema for the genome and related domains; (2) methodologies for input of data by expert biologists and high-throughput projects; and (3) various text, graphical, and...

  16. Technostress: Surviving a Database Crash.

    ERIC Educational Resources Information Center

    Dobb, Linda S.

    1990-01-01

    Discussion of technostress in libraries focuses on a database crash at California Polytechnic State University, San Luis Obispo. Steps taken to restore the data are explained, strategies for handling technological accidents are suggested, the impact on library staff is discussed, and a 10-item annotated bibliography on technostress is provided.…

  17. Database automation of accelerator operation

    SciTech Connect

    Casstevens, B.J.; Ludemann, C.A.

    1982-01-01

    The Oak Ridge Isochronous Cyclotron (ORIC) is a variable energy, multiparticle accelerator that produces beams of energetic heavy ions which are used as probes to study the structure of the atomic nucleus. To accelerate and transmit a particular ion at a specified energy to an experimenter's apparatus, the electrical currents in up to 82 magnetic field producing coils must be established to accuracies of from 0.1 to 0.001 percent. Mechanical elements must also be positioned by means of motors or pneumatic drives. A mathematical model of this complex system provides a good approximation of operating parameters required to produce an ion beam. However, manual tuning of the system must be performed to optimize the beam quality. The database system was implemented as an on-line query and retrieval system running at a priority lower than the cyclotron real-time software. It was designed for matching beams recorded in the database with beams specified for experiments. The database is relational and permits searching on ranges of any subset of the eleven beam categorizing attributes. A beam file selected from the database is transmitted to the cyclotron general control software which handles the automatic slewing of power supply currents and motor positions to the file values, thereby replicating the desired parameters.
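    A range-searchable beam catalogue of this kind maps naturally onto a small relational schema. The sketch below uses SQLite with a hypothetical three-attribute schema standing in for the eleven beam-categorizing attributes described; the table, column names, and values are all illustrative.

```python
import sqlite3

# Hypothetical schema: three attributes (ion, charge state, energy) stand in
# for the eleven beam-categorizing attributes described in the abstract.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE beams (
                   id INTEGER PRIMARY KEY,
                   ion TEXT, charge INTEGER, energy_mev REAL)""")
con.executemany("INSERT INTO beams (ion, charge, energy_mev) VALUES (?, ?, ?)",
                [("O-16", 8, 100.0), ("Ne-20", 10, 120.0), ("O-16", 7, 95.0)])

# Range search on any subset of attributes, as the abstract describes.
matches = con.execute("""SELECT ion, charge, energy_mev FROM beams
                         WHERE ion = ? AND energy_mev BETWEEN ? AND ?
                         ORDER BY energy_mev""",
                      ("O-16", 90.0, 105.0)).fetchall()
print(matches)  # → [('O-16', 7, 95.0), ('O-16', 8, 100.0)]
```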

  18. GLOBAL EMISSIONS DATABASE (GLOED) DEMONSTRATION

    EPA Science Inventory

    The paper describes the EPA-developed Global Emissions Database (GloED) and how it works. It was prepared to accompany a demonstration of GloED, a powerful software package. GloED is a user-friendly, menu-driven tool for storing and retrieving emissions factors and activity data on...

  19. POLYGONAL HYDROLOGY COVERAGE AND DATABASE

    EPA Science Inventory

    This coverage and dataset contain the polygonal hydrology for EPA Region 8. This coverage contains ponds, lakes, and linear hydrology that has been re-digitized for small scale mapping projects. The database is limited to just the pseudo items created by ArcInfo and one item use...

  20. Pathway Interaction Database (PID) —

    Cancer.gov

    The National Cancer Institute (NCI) in collaboration with Nature Publishing Group has established the Pathway Interaction Database (PID) in order to provide a highly structured, curated collection of information about known biomolecular interactions and key cellular processes assembled into signaling pathways.

  1. The NASA Fireball Network Database

    NASA Technical Reports Server (NTRS)

    Moser, Danielle E.

    2011-01-01

    The NASA Meteoroid Environment Office (MEO) has been operating an automated video fireball network since late-2008. Since that time, over 1,700 multi-station fireballs have been observed. A database containing orbital data and trajectory information on all these events has recently been compiled and is currently being mined for information. Preliminary results are presented here.

  2. Guide on Logical Database Design.

    ERIC Educational Resources Information Center

    Fong, Elizabeth N.; And Others

    This report discusses an iterative methodology for logical database design (LDD). The methodology includes four phases: local information-flow modeling, global information-flow modeling, conceptual schema design, and external schema modeling. These phases are intended to make maximum use of available information and user expertise, including the…

  3. The CMS Condition Database System

    NASA Astrophysics Data System (ADS)

    Di Guida, S.; Govi, G.; Ojeda, M.; Pfeiffer, A.; Sipos, R.

    2015-12-01

    The Condition Database plays a key role in the CMS computing infrastructure. The complexity of the detector and the variety of the sub-systems involved are setting tight requirements for handling the Conditions. In the last two years the collaboration has put a substantial effort into the re-design of the Condition Database system, with the aim of improving the scalability and the operability for the data taking starting in 2015. The re-design has focused on simplifying the architecture, using the lessons learned during the operation of the Run I data-taking period (2009-2013). In the new system the relational features of the database schema are mainly exploited to handle the metadata (Tag and Interval of Validity), allowing for a limited and controlled set of queries. The bulk condition data (Payloads) are stored as unstructured binary data, allowing the storage in a single table with a common layout for all of the condition data types. In this paper, we describe the full architecture of the system, including the services implemented for uploading payloads and the tools for browsing the database. Furthermore, the implementation choices for the core software will be discussed.
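    The metadata/payload split described above can be sketched with two tables: relational rows for Tags and Intervals of Validity, and a single BLOB table for the payloads. The snippet is a minimal illustration under our own assumptions (pickled payloads, SHA-1 keys, an invented tag name), not the actual CMS schema.

```python
import hashlib
import pickle
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE payload (hash TEXT PRIMARY KEY, data BLOB);
    CREATE TABLE iov (tag TEXT, since INTEGER, payload_hash TEXT,
                      PRIMARY KEY (tag, since));
""")

def store(tag, since, obj):
    blob = pickle.dumps(obj)                 # unstructured binary payload
    key = hashlib.sha1(blob).hexdigest()
    con.execute("INSERT OR IGNORE INTO payload VALUES (?, ?)", (key, blob))
    con.execute("INSERT INTO iov VALUES (?, ?, ?)", (tag, since, key))

def lookup(tag, run):
    """Return the payload of the latest IOV starting at or before `run`."""
    row = con.execute("""SELECT p.data FROM iov i
                         JOIN payload p ON p.hash = i.payload_hash
                         WHERE i.tag = ? AND i.since <= ?
                         ORDER BY i.since DESC LIMIT 1""",
                      (tag, run)).fetchone()
    return pickle.loads(row[0])

store("Pedestals_v1", 1, {"mean": 200.1})     # tag name is invented
store("Pedestals_v1", 1000, {"mean": 201.5})
print(lookup("Pedestals_v1", 1500))  # → {'mean': 201.5}
```

    Because all payload types share one binary table, adding a new condition type needs no schema change; only the metadata tables are queried relationally.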

  4. HOED: Hypermedia Online Educational Database.

    ERIC Educational Resources Information Center

    Duval, E.; Olivie, H.

    This paper presents HOED, a distributed hypermedia client-server system for educational resources. The aim of HOED is to provide a library facility for hyperdocuments that is accessible via the world wide web. Its main application domain is education. The HOED database not only holds the educational resources themselves, but also data describing…

  5. Maize Genetics and Genomics Database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The 2007 report for MaizeGDB lists the new hires who will focus on curation/outreach and the genome sequence, respectively. Currently all sequence in the database comes from a PlantGDB pipeline and is presented with deep links to external resources such as PlantGDB, Dana Farber, GenBank, the Arizona...

  6. PID: the Pathway Interaction Database

    PubMed Central

    Schaefer, Carl F.; Anthony, Kira; Krupa, Shiva; Buchoff, Jeffrey; Day, Matthew; Hannay, Timo; Buetow, Kenneth H.

    2009-01-01

    The Pathway Interaction Database (PID, http://pid.nci.nih.gov) is a freely available collection of curated and peer-reviewed pathways composed of human molecular signaling and regulatory events and key cellular processes. Created in a collaboration between the US National Cancer Institute and Nature Publishing Group, the database serves as a research tool for the cancer research community and others interested in cellular pathways, such as neuroscientists, developmental biologists and immunologists. PID offers a range of search features to facilitate pathway exploration. Users can browse the predefined set of pathways or create interaction network maps centered on a single molecule or cellular process of interest. In addition, the batch query tool allows users to upload long list(s) of molecules, such as those derived from microarray experiments, and either overlay these molecules onto predefined pathways or visualize the complete molecular connectivity map. Users can also download molecule lists, citation lists and complete database content in extensible markup language (XML) and Biological Pathways Exchange (BioPAX) Level 2 format. The database is updated with new pathway content every month and supplemented by specially commissioned articles on the practical uses of other relevant online tools. PMID:18832364

  7. ppdb: a plant promoter database

    PubMed Central

    Yamamoto, Yoshiharu Y.; Obokata, Junichi

    2008-01-01

    ppdb (http://www.ppdb.gene.nagoya-u.ac.jp) is a plant promoter database that provides promoter annotation of Arabidopsis and rice. The database contains information on promoter structures, transcription start sites (TSSs) that have been identified from full-length cDNA clones and also a vast amount of TSS tag data. In ppdb, the promoter structures are determined by sets of promoter elements identified by a position-sensitive extraction method called local distribution of short sequences (LDSS). By using this database, the core promoter structure, the presence of regulatory elements and the distribution of TSS clusters can be identified. Although no differentiation of promoter architecture among plant species has been reported, there is some divergence of utilized sequences for promoter elements. Therefore, ppdb is based on species-specific sets of promoter elements, rather than on general motifs for multiple species. Each regulatory sequence is hyperlinked to literature information, a PLACE entry served by a plant cis-element database, and a list of promoters containing the regulatory sequence. PMID:17947329

  8. ppdb: a plant promoter database.

    PubMed

    Yamamoto, Yoshiharu Y; Obokata, Junichi

    2008-01-01

    ppdb (http://www.ppdb.gene.nagoya-u.ac.jp) is a plant promoter database that provides promoter annotation of Arabidopsis and rice. The database contains information on promoter structures, transcription start sites (TSSs) that have been identified from full-length cDNA clones and also a vast amount of TSS tag data. In ppdb, the promoter structures are determined by sets of promoter elements identified by a position-sensitive extraction method called local distribution of short sequences (LDSS). By using this database, the core promoter structure, the presence of regulatory elements and the distribution of TSS clusters can be identified. Although no differentiation of promoter architecture among plant species has been reported, there is some divergence of utilized sequences for promoter elements. Therefore, ppdb is based on species-specific sets of promoter elements, rather than on general motifs for multiple species. Each regulatory sequence is hyperlinked to literature information, a PLACE entry served by a plant cis-element database, and a list of promoters containing the regulatory sequence. PMID:17947329

  9. Interactive bibliographical database on color

    NASA Astrophysics Data System (ADS)

    Caivano, Jose L.

    2002-06-01

    The paper describes the methodology and results of a project under development, aimed at the elaboration of an interactive bibliographical database on color in all fields of application: philosophy, psychology, semiotics, education, anthropology, physical and natural sciences, biology, medicine, technology, industry, architecture and design, arts, linguistics, geography, history. The project is initially based upon an already developed bibliography, published in different journals, updated in various opportunities, and now available at the Internet, with more than 2,000 entries. The interactive database will amplify that bibliography, incorporating hyperlinks and contents (indexes, abstracts, keywords, introductions, or eventually the complete document), and devising mechanisms for information retrieval. The sources to be included are: books, doctoral dissertations, multimedia publications, reference works. The main arrangement will be chronological, but the design of the database will allow rearrangements or selections by different fields: subject, Decimal Classification System, author, language, country, publisher, etc. A further project is to develop another database, including color-specialized journals or newsletters, and articles on color published in international journals, arranged in this case by journal name and date of publication, but allowing also rearrangements or selections by author, subject and keywords.

  10. Database Transformations for Biological Applications

    SciTech Connect

    Overton, C.; Davidson, S. B.; Buneman, P.; Tannen, V.

    2001-04-11

    The goal of this project was to develop tools to facilitate data transformations between heterogeneous data sources found throughout biomedical applications. Such transformations are necessary when sharing data between different groups working on related problems as well as when querying data spread over different databases, files and software analysis packages.

  11. REFEREE: BIBLIOGRAPHIC DATABASE MANAGER, DOCUMENTATION

    EPA Science Inventory

    The publication is the user's manual for 3.xx releases of REFEREE, a general-purpose bibliographic database management program for IBM-compatible microcomputers. The REFEREE software also is available from NTIS. The manual has two main sections--Quick Tour and References Guide--a...

  12. Constructing Databases--Professional Issues.

    ERIC Educational Resources Information Center

    Moulton, Lynda W.

    1987-01-01

    Outlines the process involved in selecting or developing software for building a machine-readable database and the special librarian's role in that process. The following steps are identified: (1) needs assessment; (2) project cost and justification; (3) software selection; (4) implementation; and (5) startup and maintenance. Twelve references are…

  13. Open access intrapartum CTG database

    PubMed Central

    2014-01-01

    Background Cardiotocography (CTG) is the monitoring of fetal heart rate and uterine contractions. Since the 1960s it has been routinely used by obstetricians to assess fetal well-being. Many attempts to introduce methods of automatic signal processing and evaluation have appeared during the last 20 years; however, no progress comparable to that in the domain of adult heart rate variability, where open-access databases (e.g. MIT-BIH) are available, is yet visible. Based on a thorough review of the relevant publications, presented in this paper, the shortcomings of the current state are obvious. A lack of common ground for clinicians and technicians in the field hinders clinically usable progress. Our open-access database of digital intrapartum cardiotocographic recordings aims to change that. Description The intrapartum CTG database consists of a total of 552 intrapartum recordings, which were acquired between April 2010 and August 2012 at the obstetrics ward of the University Hospital in Brno, Czech Republic. All recordings were stored in electronic form in the OB TraceVue® system. The recordings were selected from 9164 intrapartum recordings with clinical as well as technical considerations in mind. All recordings are at most 90 minutes long and start a maximum of 90 minutes before delivery. The time relation of CTG to delivery is known, as is the length of the second stage of labor, which does not exceed 30 minutes. The majority of recordings (all but 46 cesarean sections) are, by design, from vaginal deliveries. All recordings have available biochemical markers as well as some more general clinical features. A full description of the database and the reasoning behind the selection of the parameters is presented in the paper. Conclusion A new open-access CTG database is introduced which should give the research community common ground for comparison of results on a reasonably large database.
We anticipate that after reading the paper, the reader will understand the context of the field from clinical and technical perspectives, enabling them to use the database and also understand its limitations. PMID:24418387

  14. Toward An Unstructured Mesh Database

    NASA Astrophysics Data System (ADS)

    Rezaei Mahdiraji, Alireza; Baumann, Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run a simulation on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes.
We propose the ImG-Complexes data model, a generic topological mesh data model that extends the incidence graph model to multi-incidence relationships. We instrument the ImG model with sets of optional and application-specific constraints which can be used to check the validity of meshes for specific classes of objects such as manifolds, pseudo-manifolds, and simplicial manifolds. We conducted experiments to measure the performance of the graph database solution in processing mesh queries and compared it with the GrAL mesh library and the PostgreSQL database on synthetic and real mesh datasets. The experiments show that each system performs well on specific types of mesh queries, e.g., graph databases perform well on global path-intensive queries. In the future, we will investigate database operations for the ImG model and design a mesh query language.
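    The incidence-relationship core of such a data model can be illustrated with a small in-memory structure. The class below is our own minimal sketch of an incidence graph (not the ImG-Complexes implementation): cells are nodes, boundary/coboundary edges connect cells of adjacent dimensions, and same-dimension adjacency is answered by composing the two.

```python
from collections import defaultdict

class IncidenceMesh:
    """Minimal incidence-graph sketch: cells are nodes; edges record which
    (d-1)-cells bound which d-cells."""

    def __init__(self):
        self.boundary = defaultdict(set)    # cell -> its bounding faces
        self.coboundary = defaultdict(set)  # face -> cells it bounds

    def add_incidence(self, face, cell):
        self.boundary[cell].add(face)
        self.coboundary[face].add(cell)

    def adjacent_cells(self, cell):
        """Same-dimension cells sharing at least one bounding face."""
        return {c for face in self.boundary[cell]
                for c in self.coboundary[face] if c != cell}

# Two triangles t0 and t1 glued along the shared edge e1.
mesh = IncidenceMesh()
for edge, tri in [("e0", "t0"), ("e1", "t0"), ("e2", "t0"),
                  ("e1", "t1"), ("e3", "t1"), ("e4", "t1")]:
    mesh.add_incidence(edge, tri)
print(mesh.adjacent_cells("t0"))  # → {'t1'}
```

    The mesh queries listed in the abstract (iterate over cells, fetch incident cells) reduce to traversals of exactly these two mappings.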

  15. The CARLSBAD Database: A Confederated Database of Chemical Bioactivities

    PubMed Central

    Mathias, Stephen L.; Hines-Kay, Jarrett; Yang, Jeremy J.; Zahoransky-Kohalmi, Gergely; Bologa, Cristian G.; Ursu, Oleg; Oprea, Tudor I.

    2013-01-01

    Many bioactivity databases offer information regarding the biological activity of small molecules on protein targets. Information in these databases is often hard to resolve with certainty because of subsetting different data in a variety of formats; use of different bioactivity metrics; use of different identifiers for chemicals and proteins; and having to access different query interfaces, respectively. Given the multitude of data sources, interfaces and standards, it is challenging to gather relevant facts and make appropriate connections and decisions regarding chemical–protein associations. The CARLSBAD database has been developed as an integrated resource, focused on high-quality subsets from several bioactivity databases, which are aggregated and presented in a uniform manner, suitable for the study of the relationships between small molecules and targets. In contrast to data collection resources, CARLSBAD provides a single normalized activity value of a given type for each unique chemical–protein target pair. Two types of scaffold perception methods have been implemented and are available for datamining: HierS (hierarchical scaffolds) and MCES (maximum common edge subgraph). The 2012 release of CARLSBAD contains 439,985 unique chemical structures, mapped onto 1,420,889 unique bioactivities, and annotated with 277,140 HierS scaffolds and 54,135 MCES chemical patterns, respectively. Of the 890,323 unique structure–target pairs curated in CARLSBAD, 13.95% are aggregated from multiple structure–target values: 94,975 are aggregated from two bioactivities, 14,544 from three, 7,930 from four and 2,214 have five bioactivities, respectively. CARLSBAD captures bioactivities and tags for 1,435 unique chemical structures of active pharmaceutical ingredients (i.e. ‘drugs’). CARLSBAD processing resulted in a net 17.3% data reduction for chemicals, 34.3% reduction for bioactivities, 23% reduction for HierS and 25% reduction for MCES, respectively. 
The CARLSBAD database supports a knowledge mining system that provides non-specialists with novel integrative ways of exploring chemical biology space to facilitate knowledge mining in drug discovery and repurposing. Database URL: http://carlsbad.health.unm.edu/carlsbad/. PMID:23794735
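    The "single normalized activity value per chemical-target pair" behaviour described above can be mimicked with a simple group-and-aggregate. The snippet below is an illustration only; the row layout is invented and the median stands in for whatever normalization rule CARLSBAD actually applies.

```python
from collections import defaultdict
from statistics import median

# Invented rows: (chemical_id, target_id, activity_type, value).
rows = [("c1", "t1", "IC50", 5.2), ("c1", "t1", "IC50", 5.8),
        ("c2", "t1", "IC50", 7.1)]

groups = defaultdict(list)
for chem, target, activity_type, value in rows:
    groups[(chem, target, activity_type)].append(value)

# One value per unique chemical-target pair and activity type; the median
# here stands in for CARLSBAD's actual aggregation rule.
aggregated = {key: median(values) for key, values in groups.items()}
print(aggregated[("c1", "t1", "IC50")])  # → 5.5
```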

  16. Databases in geohazard science: An introduction

    NASA Astrophysics Data System (ADS)

    Klose, Martin; Damm, Bodo; Highland, Lynn M.

    2015-11-01

    The key to understanding hazards is to track, record, and analyse them. Geohazard databases play a critical role in each of these steps. As systematically compiled data archives of past and current hazard events, they generally fall into two categories (Tschoegl et al., 2006; UN-BCPR, 2013): (i) natural disaster databases that cover all types of hazards, most often at a continental or global scale (ADCR, 2015; CRED, 2015; Munich Re, 2015), and (ii) type-specific databases for a certain type of hazard, for example, earthquakes (Schulte and Mooney, 2005; Daniell et al., 2011), tsunami (NGDC/WDC, 2015), or volcanic eruptions (Witham, 2005; Geyer and Martí, 2008). With landslides being among the world's most frequent hazard types (Brabb, 1991; Nadim et al., 2006; Alcántara-Ayala, 2014), symbolizing the complexity of Earth system processes (Korup, 2012), the development of landslide inventories has occupied centre stage for many years, especially in applied geomorphology (Alexander, 1991; Oya, 2001). As regards the main types of landslide inventories, a distinction is made between event-based and historical inventories (Hervás and Bobrowsky, 2009; Hervás, 2013). Inventories providing data on landslides caused by a single triggering event, for instance, an earthquake, a rainstorm, or a rapid snowmelt, are essential for exploring root causes in terms of direct system responses or cascades of hazards (Malamud et al., 2004; Mondini et al., 2014). Alternatively, historical inventories, which are more common than their counterparts, constitute a pool of data on landslides that occurred in a specific area at local, regional, national, or even global scale over time (Dikau et al., 1996; Guzzetti et al., 2012; Wood et al., 2015).

  17. Role of exposure databases in risk assessment

    SciTech Connect

    Graham, J.; Walker, K.D.; Berry, M.; Bryan, E.F.; Callahan, M.A.; Fan, A.; Finley, B.; Lynch, J.; McKone, T.; Ozkaynak, H. )

    1992-11-01

    Risk assessments have assumed an increasingly important role in the management of risks in this country. The determination of which pollutants or public health issues are to be regulated, the degree and extent of regulation, and the priority assigned to particular problems are all areas of risk assessment that influence the country's $100 billion annual investment in environmental protection. Recent trends in public policy have brought the practice of risk assessment under greater scrutiny. As policy makers increasingly insist that specific numerical risk levels (so-called bright lines) be incorporated into regulatory decisions, the stakes for good risk assessment practice, already high, are raised even further. Enhancing the scientific basis of risk assessments was a major goal of the Workshop on Exposure Databases. In this article, we present the Risk Assessment Work Group's evaluation of the use of exposure-related databases in risk assessment and the group's recommendations for improvement. The work group's discussion focused on the availability, suitability, and quality of data that underlie exposure assessments, a critical component of risk assessment. The work group established a framework for evaluation, based on exposure scenarios typically used in regulatory decisions. The scenarios included examples from Superfund, the Clean Air Act, the Toxic Substances Control Act, and other regulatory programs. These scenarios were used to illustrate current use of exposure data, to highlight gaps in existing data sources, and to discuss how improved exposure information can improve risk assessments. The work group concluded that many of the databases available are designed for purposes that do not meet exposure and risk assessment needs.

  18. The EXOSAT database and archive

    NASA Technical Reports Server (NTRS)

    Reynolds, A. P.; Parmar, A. N.

    1992-01-01

    The EXOSAT database provides on-line access to the results and data products (spectra, images, and lightcurves) from the EXOSAT mission as well as access to data and logs from a number of other missions (such as EINSTEIN, COS-B, ROSAT, and IRAS). In addition, a number of familiar optical, infrared, and X-ray catalogs, including the Hubble Space Telescope (HST) guide star catalog, are available. The complete database is located at the EXOSAT observatory at ESTEC in the Netherlands and is accessible remotely via a captive account. The database management system was specifically developed to efficiently access the database and to allow the user to perform statistical studies on large samples of astronomical objects as well as to retrieve scientific and bibliographic information on single sources. The system was designed to be mission independent and includes timing, image processing, and spectral analysis packages as well as software to allow the easy transfer of analysis results and products to the user's own institute. The archive at ESTEC comprises a subset of the EXOSAT observations, stored on magnetic tape. Observations of particular interest were copied in compressed format to an optical jukebox, allowing users to retrieve and analyze selected raw data entirely from their terminals. Such analysis may be necessary if the user's needs are not accommodated by the products contained in the database (in terms of time resolution, spectral range, and the finesse of the background subtraction, for instance). Long-term archiving of the full final observation data is taking place at ESRIN in Italy as part of the ESIS program, again using optical media, and ESRIN have now assumed responsibility for distributing the data to the community. Tests showed that raw observational data (typically several tens of megabytes for a single target) can be transferred via the existing networks in reasonable time.

  19. Feasibility of combining two aquatic benthic macroinvertebrate community databases for water-quality assessment

    USGS Publications Warehouse

    Lenz, Bernard N.

    1997-01-01

    An important part of the U.S. Geological Survey's (USGS) National Water-Quality Assessment (NAWQA) Program is the analysis of existing data in each of the NAWQA study areas. The Wisconsin Department of Natural Resources (WDNR) has an extensive database of aquatic benthic macroinvertebrate communities in streams (benthic invertebrates), maintained by the University of Wisconsin-Stevens Point. The database contains data dating back to 1984, including data from streams within the Western Lake Michigan Drainages (WMIC) study area (fig. 1). This report examines the feasibility of USGS scientists supplementing the data they collect with data from the WDNR database when assessing water quality in the study area.

  20. Database for Assessment Unit-Scale Analogs (Exclusive of the United States)

    USGS Publications Warehouse

    Charpentier, Ronald R.; Klett, T.R.; Attanasi, E.D.

    2008-01-01

    This publication presents a database of geologic analogs useful for the assessment of undiscovered oil and gas resources. Particularly in frontier areas, where few oil and gas fields have been discovered, assessment methods such as discovery process models may not be usable. In such cases, comparison of the assessment area to geologically similar but more maturely explored areas may be more appropriate. This analog database consists of 246 assessment units, based on the U.S. Geological Survey 2000 World Petroleum Assessment. Besides geologic data to facilitate comparisons, the database includes data pertaining to numbers and sizes of oil and gas fields and the properties of their produced fluids.

  1. Mars Global Digital Dune Database; MC-1

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    The Mars Global Digital Dune Database presents data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey (USGS) Open-File Reports. The first release (Hayward and others, 2007) included dune fields from 65 degrees N to 65 degrees S (http://pubs.usgs.gov/of/2007/1158/). The current release encompasses ~ 845,000 km2 of mapped dune fields from 65 degrees N to 90 degrees N latitude. Dune fields between 65 degrees S and 90 degrees S will be released in a future USGS Open-File Report. Although we have attempted to include all dune fields, some have likely been excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields, or (2) the resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~ 1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher-resolution data. Thus, our database excludes all small dune fields and some moderate to large dune fields as well; the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where the availability and quality of THEMIS visible (VIS), Mars Orbiter Camera narrow angle (MOC NA), or Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images allowed, we classified dunes and included some dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. It was beyond the scope of this report to look at the detail needed to discern subtle dune modification.
It was also beyond the scope of this report to measure all slipfaces. We attempted to include enough slipface measurements to represent the general circulation (as implied by gross dune morphology) and to give a sense of the complex nature of aeolian activity on Mars. The absence of slipface measurements in a given direction should not be taken as evidence that winds in that direction did not occur. When a dune field was located within a crater, the azimuth from the crater centroid to the dune field centroid was calculated as another possible indicator of wind direction. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes the THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images that were used to build the database. The database is presented in a variety of formats. It is presented as an ArcReader project, which can be opened using the free ArcReader software. The latest version of ArcReader can be downloaded at http://www.esri.com/software/arcgis/arcreader/download.html. The database is also presented as an ArcMap project. The ArcMap project allows fuller use of the data but requires ESRI ArcMap software. A fuller description of the projects can be found in the NP_Dunes_ReadMe file (NP_Dunes_ReadMe folder) and the NP_Dunes_ReadMe_GIS file (NP_Documentation folder). For users who prefer to create their own projects, the data are available in ESRI shapefile and geodatabase formats, as well as the open Geography Markup Language (GML) format. A printable map of the dunes and craters in the database is available as a Portable Document Format (PDF) document; the map is also included as a JPEG file (NP_Documentation folder). Documentation files are available as PDF and ASCII (.txt) files. Tables are available in both Excel and ASCII (.txt) formats.
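The crater-centroid-to-dune-centroid azimuth described above is, at its simplest, a planar bearing calculation. A minimal sketch follows; the function name and the assumption of projected (planar) coordinates are ours, not the report's:

```python
import math

def azimuth_deg(cx, cy, dx, dy):
    """Planar azimuth in degrees, measured clockwise from north, from a
    crater centroid (cx, cy) to a dune-field centroid (dx, dy).
    Assumes coordinates in a projected (planar) system."""
    angle = math.degrees(math.atan2(dx - cx, dy - cy))
    return angle % 360.0

# A dune field due east of its host crater's centroid:
print(round(azimuth_deg(0.0, 0.0, 10.0, 0.0), 6))  # 90.0
```

Note the argument order in `atan2`: passing the easting difference first and the northing difference second converts the mathematical angle convention into a compass bearing.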

  2. Database interfaces on NASA's heterogeneous distributed database system

    NASA Technical Reports Server (NTRS)

    Huang, S. H. S.

    1986-01-01

    The purpose of the ORACLE interface is to enable the DAVID program to submit queries and transactions to databases running under the ORACLE DBMS. The interface package is made up of several modules. The progress of these modules is described below. The two approaches used in implementing the interface are also discussed. Detailed discussion of the design of the templates is shown and concluding remarks are presented.

  3. Data-Based Decisions Guidelines for Teachers of Students with Severe Intellectual and Developmental Disabilities

    ERIC Educational Resources Information Center

    Jimenez, Bree A.; Mims, Pamela J.; Browder, Diane M.

    2012-01-01

    Effective practices in student data collection and implementation of data-based instructional decisions are needed for all educators, but are especially important when students have severe intellectual and developmental disabilities. Although research in the area of data-based instructional decisions for students with severe disabilities shows…

  4. Construction of an integrated database to support genomic sequence analysis

    SciTech Connect

    Gilbert, W.; Overbeek, R.

    1994-11-01

    The central goal of this project is to develop an integrated database to support comparative analysis of genomes including DNA sequence data, protein sequence data, gene expression data and metabolism data. In developing the logic-based system GenoBase, a broader integration of available data was achieved due to assistance from collaborators. Current goals are to easily include new forms of data as they become available and to easily navigate through the ensemble of objects described within the database. This report comments on progress made in these areas.

  5. Links in a distributed database: Theory and implementation

    SciTech Connect

    Karonis, N.T.; Kraimer, M.R.

    1991-12-01

    This document addresses the problem of extending database links across Input/Output Controller (IOC) boundaries. It lays a foundation by reviewing the current system and proposing an implementation specification designed to guide all work in this area. The document also describes an implementation that is less ambitious than our formally stated proposal, one that does not extend the reach of all database links across IOC boundaries. Specifically, it introduces an implementation of input and output links and comments on that overall implementation. We include a set of manual pages describing each of the new functions the implementation provides.

  6. The Condensate Database for Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Gallaher, D. W.; Lv, Q.; Grant, G.; Campbell, G. G.; Liu, Q.

    2014-12-01

    Although massive amounts of cryospheric data have been and are being generated at an unprecedented rate, a vast majority of the otherwise valuable data have been "sitting in the dark", with very limited quality assurance or runtime access for higher-level data analytics such as anomaly detection. This has significantly hindered data-driven scientific discovery and advances in the polar research and Earth sciences community. In an effort to solve this problem, we have investigated and developed innovative techniques for the construction of a "condensate database", which is much smaller than the original data yet still captures the key characteristics (e.g., spatio-temporal norms and changes). In addition, we are taking advantage of parallel databases that make use of low-cost GPU processors. As a result, efficient anomaly detection and quality assurance can be achieved with in-memory data analysis or limited I/O requests. The challenges lie in the fact that cryospheric data are massive and diverse, with normal/abnormal patterns spanning a wide range of spatial and temporal scales. This project consists of investigations in three main areas: (1) adaptive neighborhood-based thresholding in both space and time; (2) compressive-domain pattern detection and change analysis; and (3) hybrid and adaptive condensation of multi-modal, multi-scale cryospheric data.
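The first investigation area, neighborhood-based thresholding in space and time, can be illustrated with a toy sketch. The fixed window size and k-sigma rule below are illustrative assumptions, not the project's actual (adaptive) algorithm:

```python
import numpy as np

def neighborhood_anomalies(cube, k=3.0, radius=1):
    """Flag cells of a (time, y, x) data cube that deviate from their
    local spatio-temporal neighborhood by more than k standard deviations.
    A deliberately naive stand-in for adaptive thresholding."""
    t, h, w = cube.shape
    flags = np.zeros_like(cube, dtype=bool)
    for i in range(t):
        for j in range(h):
            for l in range(w):
                # Neighborhood window, clipped at the cube edges.
                win = cube[max(i - radius, 0):i + radius + 1,
                           max(j - radius, 0):j + radius + 1,
                           max(l - radius, 0):l + radius + 1]
                mu, sigma = win.mean(), win.std()
                if sigma > 0 and abs(cube[i, j, l] - mu) > k * sigma:
                    flags[i, j, l] = True
    return flags

cube = np.zeros((4, 5, 5))
cube[2, 2, 2] = 100.0                # a single spatio-temporal spike
print(neighborhood_anomalies(cube).sum())  # 1
```

A production version would vectorize the window statistics rather than loop per cell; the triple loop here just keeps the thresholding logic explicit.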

  7. Antarctic Tephra Database (AntT)

    NASA Astrophysics Data System (ADS)

    Kurbatov, A.; Dunbar, N. W.; Iverson, N. A.; Gerbi, C. C.; Yates, M. G.; Kalteyer, D.; McIntosh, W. C.

    2014-12-01

    Modern paleoclimate research is heavily dependent on establishing accurate timing related to rapid shifts in Earth's climate system. The ability to correlate these events at local, and ideally intercontinental, scales allows assessment, for example, of phasing or changes in atmospheric circulation. Tephra-producing volcanic eruptions are geologically instantaneous events that are largely independent of climate. We have developed a tephrochronological framework for paleoclimate research in Antarctica in a user-friendly, freely accessible online Antarctic tephra (AntT) database (http://cci.um.maine.edu/AntT/). Information about volcanic events, including physical and geochemical characteristics of volcanic products collected from multiple data sources, is integrated into the AntT database. The AntT project establishes a new centralized data repository for Antarctic tephrochronology, which is needed for precise correlation of records between Antarctic ice cores (e.g. WAIS Divide, RICE, Talos Dome, ITASE) and global paleoclimate archives. AntT will help climatologists, paleoclimatologists, atmospheric chemists, geochemists, and climate modelers synchronize paleoclimate archives using volcanic products, establishing the timing of climate events in different geographic areas, climate-forcing mechanisms, and natural threshold levels in the climate system. All these disciplines will benefit from accurate reconstructions of the temporal and spatial distribution of past rapid climate change events in continental, atmospheric, marine and polar realms. Research is funded by NSF grants ANT-1142007 and 1142069.

  8. Ontology building by dictionary database mining

    NASA Astrophysics Data System (ADS)

    Deliyska, B.; Rozeva, A.; Malamov, D.

    2012-11-01

    The paper examines the problem of building ontologies in an automatic or semi-automatic way by mining a dictionary database. An overview of data mining tools and methods is presented. On this basis, an extended and improved approach is proposed which involves operations for pre-processing the dictionary database and for clustering and associating database entries to extract hierarchical and nonhierarchical relations. The approach is applied to a sample dictionary database in the environment of the RapidMiner mining tool. As a result, the dictionary database is extended into a thesaurus database, which can then be easily converted to a reusable formal ontology.
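One way to extract a hierarchical (is-a) relation from dictionary entries, as described above, is to look for a known headword inside another entry's gloss. This toy sketch is our illustration of the idea, not the paper's method; the entries and the matching rule are invented:

```python
import re

# Illustrative glosses: each definition may mention another headword.
entries = {
    "oak":     "a large tree of the genus Quercus",
    "tree":    "a woody perennial plant",
    "sparrow": "a small bird of the family Passeridae",
}

def extract_hypernyms(glosses, known_terms):
    """Link each headword to the first known headword found in its gloss,
    yielding candidate (term, hypernym) pairs for an is-a hierarchy."""
    relations = []
    for term, gloss in glosses.items():
        for word in re.findall(r"[a-z]+", gloss.lower()):
            if word in known_terms and word != term:
                relations.append((term, word))
                break
    return relations

print(extract_hypernyms(entries, set(entries)))  # [('oak', 'tree')]
```

A real pipeline would add the pre-processing and clustering steps the paper mentions (lemmatization, stop-word removal, grouping entries with similar glosses) before relation extraction.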

  9. Workshop discusses database for Marcellus water issues

    NASA Astrophysics Data System (ADS)

    Brantley, Susan L.; Wilderman, Candie; Abad, Jorge

    2012-08-01

    ShaleNetwork 2012 Workshop; State College, Pennsylvania, 23-24 April 2012 The largest source of natural gas in the United States, the Marcellus shale, underlies a 95,000-square-mile area from Virginia to New York and from Ohio to Pennsylvania. Since 2005, about 5000 wells have been drilled in Pennsylvania alone, and about 2500 of these are now producing gas. While many welcome the shale gas jobs, others worry about environmental impacts. A workshop was convened at Pennsylvania State University to coordinate the collection of data for water quality and quantity in regions of hydrofracturing. The purpose of the event was to encourage participants to use and contribute data to a growing database of water quality and quantity for regions of shale gas development (www.shalenetwork.org).

  10. CRITTER: A database for managing research animals

    PubMed Central

    Lees, V. Wayne; Lukey, Claire; Orr, Richard

    1993-01-01

    We describe CRITTER, a computer database program for managing research animals. We designed it especially for institutions which operate health surveillance plans, such as specific pathogen-free schemes. Because CRITTER can be used to record any type of test result in any species of animal, it can be customized to suit each institution and its management protocol. In addition to maintaining a current inventory of each individual animal and its location, the program retains historical information on those that have been removed from the colony. Output summaries are generated by selecting from a menu of standard reports or by designing a custom query. Although CRITTER has been designed for individual research establishments, it could be modified for use in area health surveillance programs. CRITTER operates on IBM compatible computers using a menu-driven, runtime version of Paradox. PMID:17424142

  11. Geologic Map Database of Texas

    USGS Publications Warehouse

    Stoeser, Douglas B.; Shock, Nancy; Green, Gregory N.; Dumonceaux, Gayle M.; Heran, William D.

    2005-01-01

    The purpose of this report is to release a digital geologic map database for the State of Texas. This database was compiled for the U.S. Geological Survey (USGS) Minerals Program, National Surveys and Analysis Project, whose goal is a nationwide assemblage of geologic, geochemical, geophysical, and other data. This release makes the geologic data from the Geologic Map of Texas available in digital format. Original clear film positives provided by the Texas Bureau of Economic Geology were photographically enlarged onto Mylar film. These films were scanned, georeferenced, digitized, and attributed by Geologic Data Systems (GDS), Inc., Denver, Colorado. Project oversight and quality control was the responsibility of the U.S. Geological Survey. ESRI ArcInfo coverages, AMLs, and shapefiles are provided.

  12. The MAJORANA Parts Tracking Database

    NASA Astrophysics Data System (ADS)

    Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O`Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.

    2015-04-01

    The MAJORANA DEMONSTRATOR is an ultra-low-background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts, such as machining or cleaning, are linked to part records. Tracking parts provides a great logistics benefit and an important quality-assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
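The abstract notes that a part's location history yields its cosmic-ray exposure estimate. A schema-free record naturally carries that history as a list of stays, from which above-ground time can be summed. The record layout and field names below are our illustration, not the actual MAJORANA schema:

```python
from datetime import date

# Hypothetical schema-free part record of the kind a CouchDB-backed
# tracker might store; "_id", "history", etc. are invented field names.
part = {
    "_id": "Cu-plate-0042",
    "material": "electroformed copper",
    "history": [
        {"location": "surface lab", "start": "2013-01-01", "end": "2013-01-15"},
        {"location": "underground", "start": "2013-01-15", "end": "2013-06-01"},
        {"location": "surface lab", "start": "2013-06-01", "end": "2013-06-04"},
    ],
}

def surface_days(record):
    """Sum the days a part spent above ground -- the quantity driving its
    cosmogenic-activation exposure estimate."""
    total = 0
    for stay in record["history"]:
        if stay["location"] != "underground":
            start = date.fromisoformat(stay["start"])
            end = date.fromisoformat(stay["end"])
            total += (end - start).days
    return total

print(surface_days(part))  # 17
```

Because the store is schema-free, parts with extra fields (cleaning logs, machining steps) coexist with minimal records like this one without migrations.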

  13. The National Land Cover Database

    USGS Publications Warehouse

    Homer, Collin H.; Fry, Joyce A.; Barnes, Christopher A.

    2012-01-01

    The National Land Cover Database (NLCD) serves as the definitive Landsat-based, 30-meter resolution, land cover database for the Nation. NLCD provides spatial reference and descriptive data for characteristics of the land surface such as thematic class (for example, urban, agriculture, and forest), percent impervious surface, and percent tree canopy cover. NLCD supports a wide variety of Federal, State, local, and nongovernmental applications that seek to assess ecosystem status and health, understand the spatial patterns of biodiversity, predict effects of climate change, and develop land management policy. NLCD products are created by the Multi-Resolution Land Characteristics (MRLC) Consortium, a partnership of Federal agencies led by the U.S. Geological Survey. All NLCD data products are available for download at no charge to the public from the MRLC Web site: http://www.mrlc.gov.

  14. Spectroscopic Databases for Astronomical Applications

    NASA Astrophysics Data System (ADS)

    Brown, L. R.

    2011-05-01

    Astronomers detect new species (atoms, molecules, ions, radicals present in gas, liquid and solid phase) and determine their abundances, temperatures, pressures, velocities, etc. through spectroscopic remote sensing. Nearly every physical phenomenon that influences the radiative transfer of an astronomical body can be detected and quantified using specific spectral features, provided sufficient spectroscopic knowledge is available. Collections of spectroscopic information are formed and then revised as new objectives and techniques evolve. The resulting spectroscopic databases should be complete, accurate, and organized in convenient forms. Much is accessible for far- and mid-IR applications, but the available compilations are often deficient at shorter wavelengths. In this presentation, the current status of these molecular spectroscopic databases will be reviewed.

  15. Aero/fluids database system

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Violett, Duane L., Jr.

    1991-01-01

    The AFAS Database System was developed to provide the basic structure of a comprehensive database system for the Marshall Space Flight Center (MSFC) Structures and Dynamics Laboratory Aerophysics Division. The system is intended to handle all of the Aerophysics Division Test Facilities as well as data from other sources. The system was written for the DEC VAX family of computers in FORTRAN-77 and utilizes the VMS indexed file system and screen management routines. Various aspects of the system are covered, including a description of the user interface, lists of all code structure elements, descriptions of the file structures, a description of the security system operation, a detailed description of the data retrieval tasks, a description of the session log, and a description of the archival system.

  16. Stockpile Dismantlement Database Training Materials

    SciTech Connect

    Not Available

    1993-11-01

    This document, the Stockpile Dismantlement Database (SDDB) training materials, is designed to familiarize the user with the SDDB windowing system and the data entry steps for Component Characterization for Disposition. The foundation of information required for every part is depicted using numbered graphic and text steps. The individual entering data is led step by step through generic and specific examples. These training materials are intended to be supplements to individual on-the-job training.

  17. The RECONS 25 Parsec Database

    NASA Astrophysics Data System (ADS)

    Henry, Todd J.; Jao, Wei-Chun; Pewett, Tiffany; Riedel, Adric R.; Silverstein, Michele L.; Slatten, Kenneth J.; Winters, Jennifer G.; Recons Team

    2015-01-01

    The REsearch Consortium On Nearby Stars (RECONS, www.recons.org) Team has been mapping the solar neighborhood since 1994. Nearby stars provide the fundamental framework upon which all of stellar astronomy is based, both for individual stars and stellar populations. The nearest stars are also the primary targets for extrasolar planet searches, and will undoubtedly play key roles in understanding the prevalence and structure of solar systems, and ultimately, in our search for life elsewhere. We have built the RECONS 25 Parsec Database to encourage and enable exploration of the Sun's nearest neighbors. The Database, slated for public release in 2015, contains 3088 stars, brown dwarfs, and exoplanets in 2184 systems as of October 1, 2014. All of these systems have accurate trigonometric parallaxes in the refereed literature placing them closer than 25.0 parsecs, i.e., parallaxes greater than 40 mas with errors less than 10 mas. Carefully vetted astrometric, photometric, and spectroscopic data are incorporated into the Database from reliable sources, including significant original data collected by members of the RECONS Team. Current exploration of the solar neighborhood by RECONS, enabled by the Database, focuses on the ubiquitous red dwarfs, including: assessing the stellar companion population of ~1200 red dwarfs (Winters), investigating the astrophysical causes that spread red dwarfs of similar temperatures by a factor of 16 in luminosity (Pewett), and canvassing ~3000 red dwarfs for excess emission due to unseen companions and dust (Silverstein). In addition, a decade-long astrometric survey of ~500 red dwarfs in the southern sky has begun, in an effort to understand the stellar, brown dwarf, and planetary companion populations for the stars that make up at least 75% of all stars in the Universe. This effort has been supported by the NSF through grants AST-0908402, AST-1109445, and AST-1412026, and via observations made possible by the SMARTS Consortium.
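The inclusion cut stated above (distance under 25.0 pc, i.e., parallax over 40 mas, with parallax error under 10 mas) follows from d[pc] = 1000 / parallax[mas]. A minimal sketch of the cut; the function name is ours:

```python
def within_25pc(parallax_mas, error_mas):
    """Apply the database's stated inclusion cut: parallax > 40 mas
    (equivalently, distance < 25.0 pc since d[pc] = 1000 / pi[mas])
    with a parallax error < 10 mas."""
    return parallax_mas > 40.0 and error_mas < 10.0

# A very nearby star (~768 mas parallax) passes; a ~33 pc star does not:
print(within_25pc(768.5, 0.2))  # True
print(within_25pc(30.0, 1.0))   # False
```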

  18. LSD: Large Survey Database framework

    NASA Astrophysics Data System (ADS)

    Juric, Mario

    2012-09-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures.
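The positional indexing that makes such queries fast amounts to bucketing sources into sky cells so a query touches only a few cells rather than sweeping the whole catalog. This toy sketch is our illustration of the idea, not LSD's implementation (which uses proper sky pixelizations and handles cell boundaries):

```python
from collections import defaultdict

def build_index(catalog, cell_deg=1.0):
    """Bucket (ra, dec) positions, in degrees, into fixed-size sky cells.
    A toy version of positional indexing for survey catalogs."""
    index = defaultdict(list)
    for row_id, (ra, dec) in enumerate(catalog):
        index[(int(ra // cell_deg), int(dec // cell_deg))].append(row_id)
    return index

def query_cell(index, ra, dec, cell_deg=1.0):
    """Return ids of catalog rows that fall in the query position's cell."""
    return index.get((int(ra // cell_deg), int(dec // cell_deg)), [])

catalog = [(10.68, 41.27), (10.69, 41.28), (150.00, 2.20)]
idx = build_index(catalog)
print(query_cell(idx, 10.5, 41.5))  # [0, 1]
```

A real cone search would also inspect neighboring cells and apply an exact angular-distance cut to the candidates; the bucketing step is what keeps that candidate list small.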

  19. GOLD: The Genomes Online Database

    DOE Data Explorer

    Kyrpides, Nikos; Liolios, Dinos; Chen, Amy; Tavernarakis, Nektarios; Hugenholtz, Philip; Markowitz, Victor; Bernal, Alex

    Since its inception in 1997, GOLD has continuously monitored genome sequencing projects worldwide and has provided the community with a unique centralized resource that integrates diverse information related to Archaea, Bacteria, Eukaryotic and more recently Metagenomic sequencing projects. As of September 2007, GOLD recorded 639 completed genome projects. These projects have their complete sequence deposited into the public archival sequence databases such as GenBank, EMBL, and DDBJ. From the total of 639 complete and published genome projects as of 9/2007, 527 were bacterial, 47 were archaeal and 65 were eukaryotic. In addition to the complete projects, there were 2158 ongoing sequencing projects. 1328 of those were bacterial, 59 archaeal and 771 eukaryotic projects. Two types of metadata are provided by GOLD: (i) project metadata and (ii) organism/environment metadata. GOLD CARD pages for every project are available from the link of every GOLD_STAMP ID. The information in every one of these pages is organized into three tables: (a) Organism information, (b) Genome project information and (c) External links. [The Genomes On Line Database (GOLD) in 2007: Status of genomic and metagenomic projects and their associated metadata, Konstantinos Liolios, Konstantinos Mavromatis, Nektarios Tavernarakis and Nikos C. Kyrpides, Nucleic Acids Research Advance Access published online on November 2, 2007, Nucleic Acids Research, doi:10.1093/nar/gkm884]

    The basic tables in the GOLD database that can be browsed or searched include the following information:

    • Gold Stamp ID
    • Organism name
    • Domain
    • Links to information sources
    • Size and link to a map, when available
    • Chromosome number, plasmid number, and GC content
    • A link for downloading the actual genome data
    • Institution that did the sequencing
    • Funding source
    • Database where information resides
    • Publication status and information

    (Specialized Interface)

  20. Central Asia Active Fault Database

    NASA Astrophysics Data System (ADS)

    Mohadjer, Solmaz; Ehlers, Todd A.; Kakar, Najibullah

    2014-05-01

    The ongoing collision of the Indian subcontinent with Asia controls active tectonics and seismicity in Central Asia. This motion is accommodated by faults that have historically caused devastating earthquakes and continue to pose serious threats to the population at risk. Despite international and regional efforts to assess seismic hazards in Central Asia, little attention has been given to development of a comprehensive database for active faults in the region. To address this issue and to better understand the distribution and level of seismic hazard in Central Asia, we are developing a publicly available database for active faults of Central Asia (including but not limited to Afghanistan, Tajikistan, Kyrgyzstan, northern Pakistan and western China) using ArcGIS. The database is designed to allow users to store, map and query important fault parameters such as fault location, displacement history, rate of movement, and other data relevant to seismic hazard studies including fault trench locations, geochronology constraints, and seismic studies. Data sources integrated into the database include previously published maps and scientific investigations as well as strain rate measurements and historic and recent seismicity. In addition, high resolution Quickbird, Spot, and Aster imagery are used for selected features to locate and measure offset of landforms associated with Quaternary faulting. These features are individually digitized and linked to attribute tables that provide a description for each feature. Preliminary observations include inconsistent and sometimes inaccurate information for faults documented in different studies. For example, the Darvaz-Karakul fault, which roughly defines the western margin of the Pamir, has been mapped with differences in location of up to 12 kilometers.
The sense of motion for this fault ranges from unknown to thrust and strike-slip in three different studies despite documented left-lateral displacements of Holocene and late Pleistocene landforms observed near the fault trace.

  1. Long Valley caldera GIS Database

    NASA Astrophysics Data System (ADS)

    Williams, M. J.; Battaglia, M.; Hill, D.; Langbein, J.; Segall, P.

    2002-12-01

    In May 1980, a strong earthquake swarm that included four magnitude 6 earthquakes struck the southern margin of Long Valley Caldera, accompanied by a 25-cm, dome-shaped uplift of the caldera floor. These events marked the onset of the latest period of caldera unrest, which continues to this day. This ongoing unrest includes recurring earthquake swarms and continued dome-shaped uplift of the central section of the caldera (the resurgent dome), accompanied by changes in thermal springs and gas emissions. Analysis of combined gravity and geodetic data confirms the intrusion of silicic magma beneath Long Valley caldera. In 1982, the U.S. Geological Survey, under the Volcano Hazards Program, began an intensive effort to monitor and study geologic unrest in Long Valley Caldera. This database provides an overview of the studies conducted by the Long Valley Observatory in Eastern California from 1975 to 2000. The database includes geological, monitoring, and topographic datasets related to the Long Valley Caldera, plus a number of USGS publications on Long Valley (e.g., fact sheets, references). Datasets are available as text files or ArcView shapefiles. Database CD-ROM Table of Contents:

    • Geological data (digital geologic map)
    • Monitoring data: Deformation (EDM, GPS, Leveling); Earthquakes; Gravity; Hydrologic; CO2
    • Topographic data: DEM, DRG, Landsat 7, Rivers, Roads, Water Bodies
    • ArcView Project File

  2. Development of a GIS Snowstorm Database

    NASA Astrophysics Data System (ADS)

    Squires, M. F.

    2010-12-01

    This paper describes the development of a GIS Snowstorm Database (GSDB) at NOAA’s National Climatic Data Center. The snowstorm database is a collection of GIS layers and tabular information for 471 snowstorms between 1900 and 2010. Each snowstorm has undergone automated and manual quality control. The beginning and ending date of each snowstorm is specified. The original purpose of this data was to serve as input for NCDC’s new Regional Snowfall Impact Scale (ReSIS). However, this data is being preserved and used to investigate the impacts of snowstorms on society. GSDB is used to summarize the impact of snowstorms on transportation (interstates) and various classes of facilities (roads, schools, hospitals, etc.). GSDB can also be linked to other sources of impacts such as insurance loss information and Storm Data. Thus the snowstorm database is suited for many different types of users including the general public, decision makers, and researchers. This paper summarizes quality control issues associated with using snowfall data, methods used to identify the starting and ending dates of a storm, and examples of the tables that combine snowfall and societal data.

  3. IDBD: Infectious Disease Biomarker Database

    PubMed Central

    Yang, In Seok; Ryu, Chunsun; Cho, Ki Joon; Kim, Jin Kwang; Ong, Swee Hoe; Mitchell, Wayne P.; Kim, Bong Su; Kim, Kyung Hyun

    2008-01-01

    Biomarkers enable early diagnosis, guide molecularly targeted therapy and monitor the activity and therapeutic responses across a variety of diseases. Despite intensified interest and research, however, the overall rate of development of novel biomarkers has been falling. Moreover, no solution is yet available that efficiently retrieves and processes biomarker information pertaining to infectious diseases. Infectious Disease Biomarker Database (IDBD) is one of the first efforts to build an easily accessible and comprehensive literature-derived database covering known infectious disease biomarkers. IDBD is a community annotation database, utilizing collaborative Web 2.0 features, providing a convenient user interface to input and revise data online. It allows users to link infectious diseases or pathogens to protein, gene or carbohydrate biomarkers through the use of search tools. It supports various types of data searches and application tools to analyze sequence and structure features of potential and validated biomarkers. Currently, IDBD integrates 611 biomarkers for 66 infectious diseases and 70 pathogens. It is publicly accessible at http://biomarker.cdc.go.kr and http://biomarker.korea.ac.kr. PMID:17982173

  4. ERGDB: Estrogen Responsive Genes Database.

    PubMed

    Tang, Suisheng; Han, Hao; Bajic, Vladimir B

    2004-01-01

    ERGDB is an integrated knowledge database dedicated to genes responsive to estrogen. Genes included in ERGDB are those whose expression levels are experimentally proven to be either up-regulated or down-regulated by estrogen. Genes are identified based on publications from the PubMed database, and each record has been manually examined, evaluated and selected for inclusion by biologists. ERGDB aims to be a unified gateway to store, search, retrieve and update information about estrogen responsive genes. Each record contains links to relevant databases, such as GenBank, LocusLink, Refseq, PubMed and ATCC. The unique feature of ERGDB is that it contains information on the dependence of gene reactions on experimental conditions. In addition to basic information about the genes, information for each record includes gene functional description, experimental methods used, tissue or cell type, gene reaction, estrogen exposure time and the summary of putative estrogen response elements if the gene's promoter sequence was available. Through a web interface at http://sdmc.i2r.a-star.edu.sg/ergdb/cgi-bin/explore.pl users can either browse or query ERGDB. Access is free for academic and non-profit users. PMID:14681475

  5. The Chordate Proteome History Database

    PubMed Central

    Levasseur, Anthony; Paganini, Julien; Dainat, Jacques; Thompson, Julie D.; Poch, Olivier; Pontarotti, Pierre; Gouret, Philippe

    2012-01-01

    The chordate proteome history database (http://ioda.univ-provence.fr) comprises some 20,000 evolutionary analyses of proteins from chordate species. Our main objective was to characterize and study the evolutionary histories of the chordate proteome, and in particular to detect genomic events and automatic functional searches. Firstly, phylogenetic analyses based on high quality multiple sequence alignments and a robust phylogenetic pipeline were performed for the whole protein and for each individual domain. Novel approaches were developed to identify orthologs/paralogs, and predict gene duplication/gain/loss events and the occurrence of new protein architectures (domain gains, losses and shuffling). These important genetic events were localized on the phylogenetic trees and on the genomic sequence. Secondly, the phylogenetic trees were enhanced by the creation of phylogroups, whereby groups of orthologous sequences created using OrthoMCL were corrected based on the phylogenetic trees; gene family size and gene gain/loss in a given lineage could be deduced from the phylogroups. For each ortholog group obtained from the phylogenetic or the phylogroup analysis, functional information and expression data can be retrieved. Database searches can be performed easily using biological objects: protein identifier, keyword or domain, but can also be based on events, e.g., domain exchange events can be retrieved. To our knowledge, this is the first database that links group clustering, phylogeny and automatic functional searches along with the detection of important events occurring during genome evolution, such as the appearance of a new domain architecture. PMID:22904610

  6. MINT: a Molecular INTeraction database.

    PubMed

    Zanzoni, Andreas; Montecchi-Palazzi, Luisa; Quondam, Michele; Ausiello, Gabriele; Helmer-Citterich, Manuela; Cesareni, Gianni

    2002-02-20

    Protein interaction databases represent unique tools to store, in a computer readable form, the protein interaction information disseminated in the scientific literature. Well organized and easily accessible databases permit the easy retrieval and analysis of large interaction data sets. Here we present MINT, a database (http://cbm.bio.uniroma2.it/mint/index.html) designed to store data on functional interactions between proteins. Beyond cataloguing binary complexes, MINT was conceived to store other types of functional interactions, including enzymatic modifications of one of the partners. Release 1.0 of MINT focuses on experimentally verified protein-protein interactions. Both direct and indirect relationships are considered. Furthermore, MINT aims at being exhaustive in the description of the interaction and, whenever available, information about kinetic and binding constants and about the domains participating in the interaction is included in the entry. MINT consists of entries extracted from the scientific literature by expert curators assisted by 'MINT Assistant', a software that targets abstracts containing interaction information and presents them to the curator in a user-friendly format. The interaction data can be easily extracted and viewed graphically through 'MINT Viewer'. Presently MINT contains 4568 interactions, 782 of which are indirect or genetic interactions. PMID:11911893

  7. The IMGT/HLA database.

    PubMed

    Robinson, James; Marsh, Steven G E

    2007-01-01

    The human leukocyte antigen (HLA) complex is located within the 6p21.3 region on the short arm of human chromosome 6 and contains more than 220 genes of diverse function. Many of the genes encode proteins of the immune system and include many highly polymorphic HLA genes. The naming of new HLA genes and allele sequences and their quality control is the responsibility of the WHO Nomenclature Committee for Factors of the HLA System. The IMGT/HLA Database acts as the repository for these sequences and is recognized as the primary source of up-to-date and accurate HLA sequences. The IMGT/HLA website provides a number of tools for accessing the database: these include allele reports, sequence alignments, and sequence similarity searches. The website is updated every 3 months with all the new and confirmatory sequences submitted to the WHO Nomenclature Committee. Submission of HLA sequences to the committee is possible through the tools provided by the IMGT/HLA Database. PMID:18449991

  8. ABCD: a functional database for the avian brain.

    PubMed

    Schrott, Aniko; Kabai, Peter

    2008-01-30

    Here we present the first database developed for storing, retrieving and cross-referencing neuroscience information about the connectivity of the avian brain. The Avian Brain Circuitry Database (ABCD) contains entries about the new and old terminology of the areas and their hierarchy, data on connections between brain regions, as well as a functional keyword system linked to brain regions and connections. Data were collected from the primary literature and textbooks, and an online submission system was developed to facilitate further data collection directly from researchers. The database aims to help spread the results of avian connectivity studies, the recently revised nomenclature and also to provide data for brain network research. ABCD is freely available at http://www.behav.org/abcd. PMID:17889371

  9. View Discovery in OLAP Databases through Statistical Combinatorial Optimization

    SciTech Connect

    Joslyn, Cliff A.; Burke, Edward J.; Critchlow, Terence J.

    2009-05-01

    The capability of OLAP database software systems to handle data complexity comes at a high price for analysts, presenting them with a combinatorially vast space of views of a relational database. We respond to the need to deploy technologies sufficient to allow users to guide themselves to areas of local structure by casting the space of "views" of an OLAP database as a combinatorial object of all projections and subsets, and "view discovery" as a search process over that lattice. We equip the view lattice with statistical information theoretical measures sufficient to support a combinatorial optimization process. We outline "hop-chaining" as a particular view discovery algorithm over this object, wherein users are guided across a permutation of the dimensions by searching for successive two-dimensional views, pushing seen dimensions into an increasingly large background filter in a "spiraling" search process. We illustrate this work in the context of data cubes recording summary statistics for radiation portal monitors at US ports.
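    A minimal sketch of the entropy-guided scoring that this kind of view search relies on (function names and the toy data-cube rows are hypothetical; the actual hop-chaining algorithm additionally maintains the background filter and dimension permutation described above):

```python
import itertools
import math
from collections import Counter

def view_entropy(records, dim_a, dim_b):
    """Shannon entropy of the joint distribution over one 2-D projection."""
    counts = Counter((r[dim_a], r[dim_b]) for r in records)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_two_dim_view(records, dimensions):
    """Score every 2-D projection; return the most structured (lowest-entropy) pair."""
    return min(itertools.combinations(dimensions, 2),
               key=lambda pair: view_entropy(records, *pair))

# Toy "data cube" rows: dimension name -> category value.
records = [
    {"port": "A", "hour": 1, "alarm": "none"},
    {"port": "A", "hour": 2, "alarm": "none"},
    {"port": "B", "hour": 1, "alarm": "gamma"},
    {"port": "B", "hour": 2, "alarm": "gamma"},
]
print(best_two_dim_view(records, ["port", "hour", "alarm"]))  # ('port', 'alarm')
```

    Here the port/alarm projection wins because its joint distribution is the most concentrated, which is the kind of "local structure" the search is steering toward.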

  10. Karst database development in Minnesota: Design and data assembly

    USGS Publications Warehouse

    Gao, Y.; Alexander, E.C., Jr.; Tipping, R.G.

    2005-01-01

    The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to represent a comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces. © Springer-Verlag 2005.

  11. Municipal GIS incorporates database from pipe lines

    SciTech Connect

    Not Available

    1994-05-01

    League City, a coastal area community of about 35,000 population in Galveston County, Texas, has developed an impressive municipal GIS program. The system represents a textbook example of what a municipal GIS can represent and produce. In 1987, the city engineer was authorized to begin developing the area information system. City survey personnel used state-of-the-art Global Positioning System (GPS) technology to establish a first order monumentation program with a grid of 78 monuments set over 54 sq mi. Street, subdivision, survey, utilities, taxing criteria, hydrology, topography, environmental and other concerns were layered into the municipal GIS database program. Today, area developers submit all layout, design, and land use plan data to the city in digital format without hard copy. Multi-color maps with high resolution graphics can be quickly generated for cross-referenced queries sensitive to political, environmental, engineering, taxing, and/or utility capacity jurisdictions. The designs of both the GIS and the database system are described.

  12. NORPERM, the Norwegian Permafrost Database - a TSP NORWAY IPY legacy

    NASA Astrophysics Data System (ADS)

    Juliussen, H.; Christiansen, H. H.; Strand, G. S.; Iversen, S.; Midttømme, K.; Rønning, J. S.

    2010-10-01

    NORPERM, the Norwegian Permafrost Database, was developed at the Geological Survey of Norway during the International Polar Year (IPY) 2007-2009 as the main data legacy of the IPY research project Permafrost Observatory Project: A Contribution to the Thermal State of Permafrost in Norway and Svalbard (TSP NORWAY). Its structural and technical design is described in this paper along with the ground temperature data infrastructure in Norway and Svalbard, focussing on the TSP NORWAY permafrost observatory installations in the North Scandinavian Permafrost Observatory and Nordenskiöld Land Permafrost Observatory, the primary data providers of NORPERM. Further developments of the database, possibly towards a regional database for the Nordic area, are also discussed. The purpose of NORPERM is to store ground temperature data safely and in a standard format for use in future research. The IPY data policy of open, free, full and timely release of IPY data is followed, and the borehole metadata description follows the Global Terrestrial Network for Permafrost (GTN-P) standard. NORPERM is purely a temperature database; the data are stored in a relational database management system and made publicly available online through a map-based graphical user interface. The datasets include temperature time series from various depths in boreholes and from the air, snow cover, ground surface or upper ground layer recorded by miniature temperature data-loggers, and temperature-depth profiles in boreholes obtained by occasional manual logging. All the temperature data from the TSP NORWAY research project are included in the database, totalling 32 temperature time series from boreholes, 98 time series of micrometeorological temperature conditions, and 6 temperature-depth profiles obtained by manual logging in boreholes. The database content will gradually increase as data from previous and future projects are added. Links to near real-time permafrost temperatures, obtained by GSM data transfer, are also provided through the user interface.

  13. Active fault database of Japan: Its construction and search system

    NASA Astrophysics Data System (ADS)

    Yoshioka, T.; Miyamoto, F.

    2011-12-01

    The Active fault database of Japan was constructed by the Active Fault and Earthquake Research Center, GSJ/AIST, and opened to the public on the Internet in 2005 to support probabilistic evaluation of future faulting events and earthquake occurrence on major active faults in Japan. The database consists of three sub-databases: 1) a sub-database on individual sites, which includes long-term slip data and paleoseismicity data with error ranges and reliability; 2) a sub-database on details of paleoseismicity, which includes the excavated geological units and faulting event horizons with age control; and 3) a sub-database on characteristics of behavioral segments, which includes fault length, long-term slip rate, recurrence intervals, most recent event, slip per event and best estimates of cascade earthquakes. The database includes major seismogenic faults, approximately the best-estimate segments of cascade earthquakes; each has a length of 20 km or longer and a slip rate of 0.1 m/ky or larger, and is composed of about two behavioral segments on average. This database contains information on active faults in Japan, sorted by the concept of "behavioral segments" (McCalpin, 1996). The faults are subdivided into 550 behavioral segments based on surface trace geometry and rupture history revealed by paleoseismic studies. Behavioral segments can be searched on Google Maps: one behavioral segment can be selected directly, or segments can be searched within a rectangular area on the map. The result of a search is shown on a fixed map or on Google Maps with geologic and paleoseismic parameters including slip rate, slip per event, recurrence interval, and calculated future rupture probability. Behavioral segments can also be searched by name or by a combination of fault parameters. All these data are compiled from journal articles, theses, and other documents. We are currently developing a revised edition based on an improved database system. More than ten thousand locality records from investigation sites, including the longitude and latitude, research method, displacement, and age of paleofaulting of each survey site, are also available in the database. These data can be displayed from the result view of the segment search.

  14. High-Performance Secure Database Access Technologies for HEP Grids

    SciTech Connect

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.

  15. Integrated Primary Care Information Database (IPCI)

    Cancer.gov

    The Integrated Primary Care Information Database is a longitudinal observational database that was created specifically for pharmacoepidemiological and pharmacoeconomic studies, including data from computer-based patient records supplied voluntarily by general practitioners.

  16. Diet History Questionnaire: Database Revision History

    Cancer.gov

    The following details all additions and revisions made to the DHQ nutrient and food database. This revision history is provided as a reference for investigators who may have performed analyses with a previous release of the database.

  17. Reef Ecosystem Services and Decision Support Database

    EPA Science Inventory

    This scientific and management information database utilizes systems thinking to describe the linkages between decisions, human activities, and provisioning of reef ecosystem goods and services. This database provides: (1) Hierarchy of related topics - Click on topics to navigat...

  18. Investigating Evolutionary Questions Using Online Molecular Databases.

    ERIC Educational Resources Information Center

    Puterbaugh, Mary N.; Burleigh, J. Gordon

    2001-01-01

    Recommends using online molecular databases as teaching tools to illustrate evolutionary questions and concepts while introducing students to public molecular databases. Provides activities in which students make molecular comparisons between species. (YDS)

  19. CANCER PREVENTION AND CONTROL (CP) DATABASE

    EPA Science Inventory

    This database focuses on breast, cervical, skin, and colorectal cancer emphasizing the application of early detection and control program activities and risk reduction efforts. The database provides bibliographic citations and abstracts of various types of materials including jou...

  20. Quantum search of a real unstructured database

    NASA Astrophysics Data System (ADS)

    Broda, Bogusław

    2016-02-01

    A simple circuit implementation of the oracle for Grover's quantum search of a real unstructured classical database is proposed. The oracle contains a kind of quantumly accessible classical memory, which stores the database.
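    For intuition, Grover's search itself (independent of the oracle-memory circuit proposed in this paper) can be simulated classically on a state vector; the sketch below is a standard textbook rendition, not the paper's construction:

```python
import math

def grover_search(n_items, marked):
    """State-vector simulation of Grover's search: amplify the amplitude of
    the single marked index, then return the most probable outcome."""
    amp = [1 / math.sqrt(n_items)] * n_items        # uniform superposition
    iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        amp[marked] = -amp[marked]                  # oracle: phase-flip the match
        mean = sum(amp) / n_items                   # diffusion: inversion about the mean
        amp = [2 * mean - a for a in amp]
    return max(range(n_items), key=lambda i: amp[i] ** 2)

print(grover_search(16, marked=11))  # 11, found in ~pi/4 * sqrt(16) = 3 iterations
```

    The oracle here is a plain phase flip on the marked index; the paper's contribution is realizing such an oracle over a real classical database via quantumly accessible classical memory.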

  1. PACSY, a relational database management system for protein structure and chemical shift analysis

    PubMed Central

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo

    2012-01-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu. PMID:22903636
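    The key-linked table design can be illustrated in miniature with SQLite (the table and column names below are hypothetical stand-ins, not PACSY's actual schema): separate tables for coordinates and chemical shifts share a key identification number and are combined in one join.

```python
import sqlite3

# Hypothetical miniature of the PACSY idea: separate tables linked by a
# shared key ID, queried together with a single join.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE coordinate (key_id INTEGER, atom TEXT, x REAL, y REAL, z REAL);
CREATE TABLE chem_shift (key_id INTEGER, atom TEXT, shift_ppm REAL);
INSERT INTO coordinate VALUES (1, 'CA', 12.1, 3.4, -7.9);
INSERT INTO chem_shift  VALUES (1, 'CA', 58.2);
""")
row = con.execute("""
    SELECT c.atom, c.x, c.y, c.z, s.shift_ppm
    FROM coordinate c
    JOIN chem_shift s ON s.key_id = c.key_id AND s.atom = c.atom
""").fetchone()
print(row)  # ('CA', 12.1, 3.4, -7.9, 58.2)
```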

  2. The GIOD Project — Globally Interconnected Object Databases

    NASA Astrophysics Data System (ADS)

    Bunn, Julian J.; Holtman, Koen; Newman, Harvey B.; Wilkinson, Richard P.

    2001-10-01

    The GIOD (Globally Interconnected Object Databases) Project, a joint effort between Caltech and CERN, funded by Hewlett Packard Corporation, has investigated the use of WAN-distributed Object Databases and Mass Storage systems for LHC data. A prototype small-scale LHC data analysis center has been constructed using computing resources at Caltech's Centre for Advanced Computing Research (CACR). These resources include a 256 CPU HP Exemplar of ~4600 SPECfp95, a 600 TByte High Performance Storage System (HPSS), and local/wide area links based on OC3 ATM. Using the Exemplar, a large number of fully simulated CMS events were produced, and used to populate an object database with a complete schema for raw, reconstructed and analysis objects. The reconstruction software used for this task was based on early codes developed in preparation for the current CMS reconstruction program, ORCA. Simple analysis software was then developed in Java, and integrated with SLAC's Java Analysis Studio tool. An event viewer was constructed with the Java 3D API. Using this suite of software, tests were made in collaboration with researchers at FNAL and SDSC that focused on distributed access to the database by numerous clients, and measurements of peak bandwidths were made and interpreted. In this paper, some significant findings from the GIOD Project are presented, such as the achievement of the CMS experiment's 100 MB/s database I/O milestone.

  3. Speech Databases of Typical Children and Children with SLI

    PubMed Central

    Grill, Pavel; Tučková, Jana

    2016-01-01

    The extent of research on children’s speech in general and on disordered speech specifically is very limited. In this article, we describe the process of creating databases of children’s speech and the possibilities for using such databases, which have been created by the LANNA research group in the Faculty of Electrical Engineering at Czech Technical University in Prague. These databases have been principally compiled for medical research but also for use in other areas, such as linguistics. Two databases were recorded: one for healthy children’s speech (recorded in kindergarten and in the first level of elementary school) and the other for pathological speech of children with a Specific Language Impairment (recorded at a surgery of speech and language therapists and at the hospital). Both databases were sub-divided according to specific demands of medical research. Their utilization can be exoteric, specifically for linguistic research and pedagogical use as well as for studies of speech-signal processing. PMID:26963508

  4. [Quality management and participation into clinical database].

    PubMed

    Okubo, Suguru; Miyata, Hiroaki; Tomotaki, Ai; Motomura, Noboru; Murakami, Arata; Ono, Minoru; Iwanaka, Tadashi

    2013-07-01

    Quality management is necessary for establishing useful clinical database in cooperation with healthcare professionals and facilities. The ways of management are 1) progress management of data entry, 2) liaison with database participants (healthcare professionals), and 3) modification of data collection form. In addition, healthcare facilities are supposed to consider ethical issues and information security for joining clinical databases. Database participants should check ethical review boards and consultation service for patients. PMID:23917137

  5. The European Bioinformatics Institute (EBI) databases.

    PubMed Central

    Rodriguez-Tomé, P; Stoehr, P J; Cameron, G N; Flores, T P

    1996-01-01

    The European Bioinformatics Institute (EBI) maintains and distributes the EMBL Nucleotide Sequence database, Europe's primary nucleotide sequence data resource. The EBI also maintains and distributes the SWISS-PROT Protein Sequence database, in collaboration with Amos Bairoch of the University of Geneva. Over fifty additional specialist molecular biology databases, as well as software and documentation of interest to molecular biologists are available. The EBI network services include database searching and sequence similarity searching facilities. PMID:8594602

  6. An Internet enabled impact limiter material database

    SciTech Connect

    Wix, S.; Kanipe, F.; McMurtry, W.

    1998-09-01

    This paper presents a detailed explanation of the construction of an Internet enabled database, also known as a database driven web site. The data contained in the Internet enabled database are impact limiter material and seal properties. The techniques used in constructing the Internet enabled database presented in this paper are applicable when information that is changing in content needs to be disseminated to a wide audience.
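    A database driven web site in this sense renders its pages from the database at request time, so updated property data appears without editing any HTML by hand. A minimal sketch of that idea (table, columns, and values are hypothetical, not the paper's actual data):

```python
import sqlite3

# Hypothetical material-property table; the page body is regenerated from
# the database on every call, so content changes need no HTML edits.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE material (name TEXT, crush_strength_psi REAL)")
con.execute("INSERT INTO material VALUES ('aluminum honeycomb', 420.0)")

def render_page(con):
    rows = con.execute(
        "SELECT name, crush_strength_psi FROM material ORDER BY name").fetchall()
    body = "".join(f"<tr><td>{n}</td><td>{p}</td></tr>" for n, p in rows)
    return f"<table>{body}</table>"

print(render_page(con))
```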

  7. Implementing security on a prototype hospital database.

    PubMed

    Khair, M; Pangalos, G; Andria, F; Bozios, L

    1997-01-01

    This paper describes the methodology used and the experience gained from the application of a new secure database design approach and database security policy in a real life hospital environment. The applicability of the proposed database security policy in a major Greek general hospital is demonstrated. Moreover, the security and quality assurance of the developed prototype secure database is examined, taking into consideration the results from the study of the user acceptance. PMID:10179532

  8. Electron Effective-Attenuation-Length Database

    National Institute of Standards and Technology Data Gateway

    SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge)   This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-electron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).

  9. Database Systems. Course Three. Information Systems Curriculum.

    ERIC Educational Resources Information Center

    O'Neil, Sharon Lund; Everett, Donna R.

    This course is the third of seven in the Information Systems curriculum. The purpose of the course is to familiarize students with database management concepts and standard database management software. Databases and their roles, advantages, and limitations are explained. An overview of the course sets forth the condition and performance standard…

  10. CLEAN WATER NEEDS SURVEY (CWNS) DATABASE

    EPA Science Inventory

    Resource Purpose:The CWNS Database is a completely new database system that is currently loaded on a UNIX server at RTP, NC. It is absolutely imperative for the states and territories to have access to the CWNS Database to input their updated CWNS 2000 data into the databa...

  11. Automated database design technology and tools

    NASA Technical Reports Server (NTRS)

    Shen, Stewart N. T.

    1988-01-01

    The Automated Database Design Technology and Tools research project results are summarized in this final report. Comments on the state of the art in various aspects of database design are provided, and recommendations made for further research for SNAP and NAVMASSO future database applications.

  12. Annual Review of Database Developments 1991.

    ERIC Educational Resources Information Center

    Basch, Reva

    1991-01-01

    Review of developments in databases highlights a new emphasis on accessibility. Topics discussed include the internationalization of databases; databases that deal with finance, drugs, and toxic waste; access to public records, both personal and corporate; media online; reducing large files of data to smaller, more manageable files; and…

  13. Full-Text Databases in Medicine.

    ERIC Educational Resources Information Center

    Sievert, MaryEllen C.; And Others

    1995-01-01

    Describes types of full-text databases in medicine; discusses features for searching full-text journal databases available through online vendors; reviews research on full-text databases in medicine; and describes the MEDLINE/Full-Text Research Project at the University of Missouri (Columbia) which investigated precision, recall, and relevancy.…

  14. Emission Database for Global Atmospheric Research (EDGAR).

    ERIC Educational Resources Information Center

    Olivier, J. G. J.; And Others

    1994-01-01

    Presents the objective and methodology chosen for the construction of a global emissions source database called EDGAR and the structural design of the database system. The database estimates on a regional and grid basis, 1990 annual emissions of greenhouse gases, and of ozone depleting compounds from all known sources. (LZ)

  15. Selecting Software for a Development Information Database.

    ERIC Educational Resources Information Center

    Geethananda, Hemamalee

    1991-01-01

    Describes software selection criteria considered for use with the bibliographic database of the Development Information Network for South Asia (DEVINSA), which is located in Sri Lanka. Highlights include ease of database creation, database size, input, editing, data validation, inverted files, searching, storing searches, vocabulary control, user…

  16. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  17. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  18. Information Literacy Skills: Comparing and Evaluating Databases

    ERIC Educational Resources Information Center

    Grismore, Brian A.

    2012-01-01

    The purpose of this database comparison is to express the importance of teaching information literacy skills and to apply those skills to commonly used Internet-based research tools. This paper includes a comparison and evaluation of three databases (ProQuest, ERIC, and Google Scholar). It includes strengths and weaknesses of each database based…

  19. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  20. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  1. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  2. Conceptual Design of a Prototype LSST Database

    SciTech Connect

    Nikolaev, S; Huber, M E; Cook, K H; Abdulla, G; Brase, J

    2004-10-07

    This document describes a preliminary design for the Prototype LSST Database (LSST DB). The authors identify key components and data structures, provide an expandable conceptual schema for the database, discuss potential user applications and post-processing algorithms that interact with the database, and give a set of example queries.

  3. Online Search Patterns: NLM CATLINE Database.

    ERIC Educational Resources Information Center

    Tolle, John E.; Hah, Sehchang

    1985-01-01

    Presents analysis of online search patterns within user searching sessions of National Library of Medicine ELHILL system and examines user search patterns on the CATLINE database. Data previously analyzed on MEDLINE database for same period is used to compare the performance parameters of different databases within the same information system.…

  4. [DATABASE FOR DEPOSITARY DEPARTMENT OF MICROORGANISMS].

    PubMed

    Brovarnyk, V; Golovach, T M

    2015-01-01

    The database on the microorganism culture depositary is designed using MS Access 2010. Three major modules, namely general description, administration, and storage, compose the database kernel. A description of the information in these modules is given. A web page for the depositary was developed on the database. PMID:26638488

  5. Citation Help in Databases: Helpful or Harmful?

    ERIC Educational Resources Information Center

    Kessler, Jane; Van Ullen, Mary K.

    2006-01-01

    A review of the help files in several major databases revealed that database vendors have begun including information on citing sources, which has the potential to be very useful to students. Surprisingly, 94% of the citation examples in the databases reviewed had errors. The average number of errors per example was 4.3. The citation help appears…

  6. Development of soybean gene expression database (SGED)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Large volumes of microarray expression data are a challenge for analysis. To address this problem, a web-based database, the Soybean Gene Expression Database (SGED), was built using PERL/CGI, C and an ORACLE database management system. SGED contains three components. The Data Mining component serves as a repos...

  7. Geologic map and map database of the Palo Alto 30' x 60' quadrangle, California

    USGS Publications Warehouse

    Brabb, E.E.; Jones, D.L.; Graymer, R.W.

    2000-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (pamf.ps, pamf.pdf, pamf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  8. Enhancing the DNA Patent Database

    SciTech Connect

    Walters, LeRoy B.

    2008-02-18

    Final Report on Award No. DE-FG0201ER63171 Principal Investigator: LeRoy B. Walters February 18, 2008 This project successfully completed its goal of surveying and reporting on the DNA patenting and licensing policies at 30 major U.S. academic institutions. The report of survey results was published in the January 2006 issue of Nature Biotechnology under the title “The Licensing of DNA Patents by US Academic Institutions: An Empirical Survey.” Lori Pressman was the lead author on this feature article. A PDF reprint of the article will be submitted to our Program Officer under separate cover. The project team has continued to update the DNA Patent Database on a weekly basis since the conclusion of the project. The database can be accessed at dnapatents.georgetown.edu. This database provides a valuable research tool for academic researchers, policymakers, and citizens. A report entitled Reaping the Benefits of Genomic and Proteomic Research: Intellectual Property Rights, Innovation, and Public Health was published in 2006 by the Committee on Intellectual Property Rights in Genomic and Protein Research and Innovation, Board on Science, Technology, and Economic Policy at the National Academies. The report was edited by Stephen A. Merrill and Anne-Marie Mazza. This report employed and then adapted the methodology developed by our research project and quoted our findings at several points. (The full report can be viewed online at the following URL: http://www.nap.edu/openbook.php?record_id=11487&page=R1). My colleagues and I are grateful for the research support of the ELSI program at the U.S. Department of Energy.

  9. The PRO-ACT database

    PubMed Central

    Berry, James; Shui, Amy; Zach, Neta; Sherman, Alexander; Sinani, Ervin; Walker, Jason; Katsovskiy, Igor; Schoenfeld, David; Cudkowicz, Merit; Leitner, Melanie

    2014-01-01

    Objective: To pool data from completed amyotrophic lateral sclerosis (ALS) clinical trials and create an open-access resource that enables greater understanding of the phenotype and biology of ALS. Methods: Clinical trials data were pooled from 16 completed phase II/III ALS clinical trials and one observational study. Over 8 million de-identified longitudinally collected data points from over 8,600 individuals with ALS were standardized across trials and merged to create the Pooled Resource Open-Access ALS Clinical Trials (PRO-ACT) database. This database includes demographics, family histories, and longitudinal clinical and laboratory data. Mixed effects models were used to describe the rate of disease progression measured by the Revised ALS Functional Rating Scale (ALSFRS-R) and vital capacity (VC). Cox regression models were used to describe survival data. Implementing Bonferroni correction, the critical p value for 15 different tests was p = 0.003. Results: The ALSFRS-R rate of decline was 1.02 (±2.3) points per month and the VC rate of decline was 2.24% of predicted (±6.9) per month. Higher levels of uric acid at trial entry were predictive of a slower drop in ALSFRS-R (p = 0.01) and VC (p < 0.0001), and longer survival (p = 0.02). Higher levels of creatinine at baseline were predictive of a slower drop in ALSFRS-R (p = 0.01) and VC (p < 0.0001), and longer survival (p = 0.01). Finally, higher body mass index (BMI) at baseline was associated with longer survival (p < 0.0001). Conclusion: The PRO-ACT database is the largest publicly available repository of merged ALS clinical trials data. We report that baseline levels of creatinine and uric acid, as well as baseline BMI, are strong predictors of disease progression and survival. PMID:25298304
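    The progression measure reported above, the ALSFRS-R rate of decline, is a per-patient slope of score versus time. A minimal sketch of such a slope estimate via ordinary least squares, using an invented patient trajectory (not PRO-ACT data):

    ```python
    # Ordinary least-squares slope for a single hypothetical patient's
    # ALSFRS-R scores over time -- illustrative only, synthetic values.
    def ols_slope(xs, ys):
        """Slope of the least-squares line through (xs, ys)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    months = [0.0, 3.0, 6.0, 9.0, 12.0]
    alsfrs_r = [40.0, 37.0, 34.0, 31.0, 28.0]  # invented scores

    print(ols_slope(months, alsfrs_r))  # -1.0 points per month
    ```

    The study's mixed-effects models generalize this idea by estimating such slopes jointly across thousands of patients.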

  10. Databases for video information sharing

    NASA Astrophysics Data System (ADS)

    Hjelsvold, Rune; Midtstraum, Roger

    1995-03-01

    This paper describes the VideoSTAR experimental database system that is being designed to support video applications in sharing and reusing video data and meta-data. VideoSTAR provides four different repositories: for media files, virtual documents, video structures, and video annotations/user indexes. It also provides a generic video data model relating data in the different repositories to each other, and it offers a powerful application interface. VideoSTAR concepts have been evaluated by developing a number of experimental video tools, such as a video player, a video annotator, a video authoring tool, a video structure and contents browser, and a video query tool.

  11. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Hasegawa, Tamae; Osanai, Masaaki

    This paper focuses on practical examples for using the CD-ROM version of Books In Print Plus, a database of book information produced by R. R. Bowker. The paper details the contents, installation and operation procedures, hardware requirements, search functions, search items, print commands, and special features of Books in Print Plus. The paper also includes an evaluation of this product based on four examples from actual office use. The paper concludes with a brief introduction to Ulrich’s Plus, a similar CD-ROM product for periodical information.

  12. Interconnecting heterogeneous database management systems

    NASA Technical Reports Server (NTRS)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

  13. ANDES: NOAO's Observatory Database System

    NASA Astrophysics Data System (ADS)

    Gasson, D.; Bell, D.; Hartman, M.

    ANDES (the Advanced NOAO Database Expert System) is NOAO's new observing proposal system. Recent improvements include the phase-out of legacy components, such as our previous Access-based effort called ALPS++. New work focuses on pre- and post-TAC procedures such as importation, scheduling, collection of observing reports on the mountain, and automatic compilation of various statistics. The ultimate goal is to provide an environment which allows a comprehensive understanding of the collection, evaluation, scheduling, execution and post-execution of proposals and programs.

  14. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    SciTech Connect

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L.; Loftis, J.P.; Shipe, P.C.; Truett, L.F.

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  15. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    ERIC Educational Resources Information Center

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  16. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. PMID:23749755
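    The unit's central distinction, between flat files and relational databases, can be sketched in a few lines. The strain-catalog example below is hypothetical (table and field names invented), contrasting a linear scan over a flat file with an indexed relational query:

    ```python
    # Hypothetical strain-catalog lookup: flat file vs. relational database.
    import csv, io, sqlite3

    # Flat file: every query is a linear scan over the whole file.
    flat = io.StringIO("strain,gene\nS1,abcA\nS2,abcB\nS3,abcA\n")
    rows = list(csv.DictReader(flat))
    hits_flat = [r["strain"] for r in rows if r["gene"] == "abcA"]

    # Relational database: an index lets the engine avoid a full scan.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE strains (strain TEXT, gene TEXT)")
    db.executemany("INSERT INTO strains VALUES (?, ?)",
                   [(r["strain"], r["gene"]) for r in rows])
    db.execute("CREATE INDEX idx_gene ON strains (gene)")
    hits_db = [s for (s,) in db.execute(
        "SELECT strain FROM strains WHERE gene = ? ORDER BY strain",
        ("abcA",))]

    assert hits_flat == hits_db == ["S1", "S3"]
    ```

    At small scale the two behave identically; the difference the unit describes appears as the data outgrow what files and directories can manage.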

  17. Database choices in endocrine systematic reviews

    PubMed Central

    Vassar, Matt; Carr, Branden; Kash-Holley, Melissa; DeWitt, Elizabeth; Koller, Chelsea; Day, Joshua; Day, Kimberly; Herrmann, David; Holzmann, Matt

    2015-01-01

    Objective The choice of bibliographic database during the systematic review search process has been an ongoing conversation among information specialists. With newer information sources, such as Google Scholar and clinical trials registries, we were interested in which databases were utilized by information specialists and systematic review researchers. Method We retrieved 144 systematic reviews and meta-analyses from 4 clinical endocrinology journals and extracted all information sources used during the search processes. Results Findings indicate that traditional bibliographic databases are most often used, followed by regional databases, clinical trials registries, and gray literature databases. Conclusions This study informs information specialists about additional resources that may be considered during the search process. PMID:26512217

  18. Integrated database for rapid mass movements in Norway

    NASA Astrophysics Data System (ADS)

    Jaedicke, C.; Lied, K.; Kronholm, K.

    2009-03-01

    Rapid gravitational slope mass movements include all kinds of short-term relocation of geological material, snow or ice. Traditionally, information about such events is collected separately in different databases covering selected geographical regions and types of movement. In Norway the terrain is susceptible to all types of rapid gravitational slope mass movements, ranging from single rocks hitting roads and houses to large snow avalanches and rock slides where entire mountainsides collapse into fjords, creating flood waves and endangering large areas. In addition, quick clay slides occur in desalinated marine sediments in South Eastern and Mid Norway. For the authorities and inhabitants of endangered areas, the type of threat is of minor importance, and mitigation measures have to consider several types of rapid mass movements simultaneously. An integrated national database for all types of rapid mass movements, built around individual events, has been established. Only three data entries are mandatory: time, location and type of movement. The remaining optional parameters enable recording of detailed information about the terrain, materials involved and damages caused. Pictures, movies and other documentation can be uploaded into the database. A web-based graphical user interface has been developed, allowing new events to be entered as well as editing and querying of all events. An integration of the database into a GIS system is currently under development. Datasets from various national sources, such as the road authorities and the Geological Survey of Norway, were imported into the database. Today, the database contains 33,000 rapid mass movement events from the last five hundred years, covering the entire country. A first analysis of the data shows that the most frequently recorded types of rapid mass movement are rock slides and snow avalanches, followed by debris slides in third place.
Most events are recorded in the steep fjord terrain of the Norwegian west coast, but major events are recorded all over the country. Snow avalanches account for most fatalities, while large rock slides causing flood waves and huge quick clay slides are the most damaging individual events in terms of damage to infrastructure and property and in causing multiple fatalities. The quality of the data is strongly influenced by the personal engagement of local observers and by varying observation routines. This database is a unique source for statistical analyses, including risk analysis and the relation between rapid mass movements and climate. The database of rapid mass movement events will also facilitate validation of national hazard and risk maps.
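    The event-centric design described above, with only time, location and movement type mandatory and everything else optional, can be sketched as a minimal schema. Table and column names here are hypothetical, not taken from the Norwegian database:

    ```python
    # Minimal sketch of an event-centric mass-movement schema:
    # three mandatory fields (time, location, type); all detail optional.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE mass_movement_event (
            event_time TEXT NOT NULL,   -- mandatory
            latitude   REAL NOT NULL,   -- mandatory (location)
            longitude  REAL NOT NULL,
            move_type  TEXT NOT NULL,   -- e.g. 'rock slide', 'snow avalanche'
            terrain    TEXT,            -- optional detail
            damage     TEXT             -- optional detail
        )
    """)
    db.execute("INSERT INTO mass_movement_event "
               "(event_time, latitude, longitude, move_type) "
               "VALUES ('1905-01-15', 62.1, 7.1, 'rock slide')")

    # Query by movement type, as the web interface described above allows.
    (n,) = db.execute("SELECT COUNT(*) FROM mass_movement_event "
                      "WHERE move_type = 'rock slide'").fetchone()
    print(n)  # 1
    ```

    Keeping the mandatory core this small is what lets heterogeneous source datasets, from road authorities to the Geological Survey, be merged into one national table.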

  19. Data mining in forensic image databases

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien

    2002-07-01

    Forensic image databases appear in a wide variety. The oldest computer database is of fingerprints. Other examples of databases are shoeprints, handwriting, cartridge cases, toolmarks, drug tablets and faces. In these databases, searches are conducted on shape, color and other forensic features. A wide variety of methods exists for searching the images in these databases. The result is a list of candidates that should be compared manually. The challenge in forensic science is to combine the information acquired. The combination of the shape of a partial shoe print with information on a cartridge case can result in stronger evidence. It is expected that by searching the combination of these databases together with other databases (e.g., network traffic information), more crimes will be solved. Searching in image databases is still difficult, as can be seen in databases of faces. Due to lighting conditions and the altering of the face by aging, it is nearly impossible for an image-searching method to rank the right face from a database of one million faces in top position without using other information. The methods for data mining in image databases (e.g., the MPEG-7 framework) are discussed, and expectations for future developments are presented in this study.

  20. Development of a national, dynamic reservoir-sedimentation database

    USGS Publications Warehouse

    Gray, J.R.; Bernard, J.M.; Stewart, D.W.; McFaul, E.J.; Laurent, K.W.; Schwarz, G.E.; Stinson, J.T.; Jonas, M.M.; Randle, T.J.; Webb, J.W.

    2010-01-01

    The importance of dependable, long-term water supplies, coupled with the need to quantify rates of capacity loss of the Nation's reservoirs due to sediment deposition, were the most compelling reasons for developing the REServoir-SEDimentation survey information (RESSED) database and website. Created under the auspices of the Advisory Committee on Water Information's Subcommittee on Sedimentation by the U.S. Geological Survey and the Natural Resources Conservation Service, the RESSED database is the most comprehensive compilation of data from reservoir bathymetric and dry-basin surveys in the United States. As of March 2010, the database, which contains data compiled on the 1950s-vintage Soil Conservation Service's Form SCS-34 data sheets, contained results from 6,616 surveys on 1,823 reservoirs in the United States and two surveys on one reservoir in Puerto Rico. The data span the period 1755–1997, with 95 percent of the surveys performed from 1930–1990. The reservoir surface areas range from sub-hectare-scale farm ponds to 658-km2 Lake Powell. The data in the RESSED database can be useful for a number of purposes, including calculating changes in reservoir-storage characteristics, quantifying sediment budgets, and estimating erosion rates in a reservoir's watershed. The March 2010 version of the RESSED database has a number of deficiencies, including a cryptic and out-of-date database architecture; some geospatial inaccuracies (although most have been corrected); other data errors; an inability to store all data in a readily retrievable manner; and an inability to store all data types that currently exist. Perhaps most importantly, the March 2010 version of the RESSED database provides no publicly available means to submit new data or corrections to existing data. To address these and other deficiencies, the Subcommittee on Sedimentation, through the U.S. Geological Survey and the U.S.
Army Corps of Engineers, began a collaborative project in November 2009 to modernize the RESSED database architecture, provide public online input capability, and produce online reports. The ultimate goal of the Subcommittee on Sedimentation is to build a comprehensive, quality-assured database describing capacity changes over time for the largest suite of the Nation's reservoirs.
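    One of the uses named above, calculating changes in reservoir-storage characteristics, reduces to comparing capacities from successive surveys. A minimal sketch with invented numbers (not RESSED data):

    ```python
    # Average annual storage-capacity loss between two reservoir surveys,
    # the kind of quantity repeat bathymetric surveys support.
    # All values below are hypothetical.
    def annual_capacity_loss(cap0_m3, year0, cap1_m3, year1):
        """Average storage lost per year between two surveys, in m^3/yr."""
        return (cap0_m3 - cap1_m3) / (year1 - year0)

    # Hypothetical reservoir: 10.0e6 m^3 capacity in 1950, 9.2e6 m^3 in 1990.
    rate = annual_capacity_loss(10.0e6, 1950, 9.2e6, 1990)
    print(rate)  # 20000.0 m^3 of capacity lost per year
    ```

    Dividing such a loss rate by the contributing drainage area gives the watershed sediment-yield estimates the abstract also mentions.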