Science.gov

Sample records for area database rogad

  1. Acquisition of CD-ROM Databases for Local Area Networks.

    ERIC Educational Resources Information Center

    Davis, Trisha L.

    1993-01-01

    Discusses the acquisition of CD-ROM products for local area networks based on experiences at the Ohio State University libraries. Topics addressed include the historical development of CD-ROM acquisitions; database selection, including pricing and subscription options; the ordering process; and network licensing issues. (six references) (LRW)

  2. Teaching Database Modeling and Design: Areas of Confusion and Helpful Hints

    ERIC Educational Resources Information Center

    Philip, George C.

    2007-01-01

    This paper identifies several areas of database modeling and design that have been problematic for students and even are likely to confuse faculty. Major contributing factors are the lack of clarity and inaccuracies that persist in the presentation of some basic database concepts in textbooks. The paper analyzes the problems and discusses ways to…

  3. Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington

    USGS Publications Warehouse

    Uhrich, M.A.; McGrath, T.S.

    1997-01-01

    Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers from sources other than the Puget Sound Environmental Atlas include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.

  4. Geologic map database of the El Mirage Lake area, San Bernardino and Los Angeles Counties, California

    USGS Publications Warehouse

    Miller, David M.; Bedford, David R.

    2000-01-01

    This geologic map database for the El Mirage Lake area describes geologic materials for the dry lake, parts of the adjacent Shadow Mountains and Adobe Mountain, and much of the piedmont extending south from the lake upward toward the San Gabriel Mountains. This area lies within the western Mojave Desert of San Bernardino and Los Angeles Counties, southeastern California. The area is traversed by a few paved highways that service the community of El Mirage, and by numerous dirt roads that lead to outlying properties. An off-highway vehicle area established by the Bureau of Land Management encompasses the dry lake and much of the land north and east of the lake. The physiography of the area consists of the dry lake, flanking mud and sand flats and alluvial piedmonts, and a few sharp craggy mountains. This digital geologic map database, intended for use at 1:24,000 scale, describes and portrays the rock units and surficial deposits of the El Mirage Lake area. The map database was prepared to aid in a water-resource assessment of the area by providing surface geologic information with which deeper groundwater-bearing units may be understood. The area mapped covers the Shadow Mountains SE and parts of the Shadow Mountains, Adobe Mountain, and El Mirage 7.5-minute quadrangles. The map includes detailed geology of surface and bedrock deposits, which represent a significant update from previous bedrock geologic maps by Dibblee (1960) and Troxel and Gunderson (1970), and the surficial geologic map of Ponti and Burke (1980); it incorporates a fringe of the detailed bedrock mapping in the Shadow Mountains by Martin (1992). The map data were assembled as a digital database using ARC/INFO to enable wider applications than traditional paper-product geologic maps and to provide for efficient meshing with other digital databases prepared by the U.S. Geological Survey's Southern California Areal Mapping Project.

  5. Geothermal resource areas database for monitoring the progress of development in the United States

    NASA Astrophysics Data System (ADS)

    Lawrence, J. D.; Lepman, S. R.; Leung, K. N.; Phillips, S. L.

    1981-01-01

    The Geothermal Resource Areas Database (GRAD) and associated data system provide broad coverage of information on the development of geothermal resources in the United States. The system is designed to serve the information requirements of the National Progress Monitoring System. GRAD covers development from the initial exploratory phase through plant construction and operation. Emphasis is on actual facts or events rather than projections and scenarios. The selection and organization of data are based on a model of geothermal development. Subjects in GRAD include: names and addresses, leases, area descriptions, geothermal wells, power plants, direct use facilities, and environmental and regulatory aspects of development. Data collected in the various subject areas are critically evaluated, and then entered into an on-line interactive computer system. The system is publicly available for retrieval and use. The background of the project, conceptual development, software development, and data collection are described as well as the structure of the database.

  6. Geothermal resource areas database for monitoring the progress of development in the United States

    SciTech Connect

    Lawrence, J.D.; Lepman, S.R.; Leung, K.; Phillips, S.L.

    1981-01-01

    The Geothermal Resource Areas Database (GRAD) and associated data system provide broad coverage of information on the development of geothermal resources in the United States. The system is designed to serve the information requirements of the National Progress Monitoring System. GRAD covers development from the initial exploratory phase through plant construction and operation. Emphasis is on actual facts or events rather than projections and scenarios. The selection and organization of data are based on a model of geothermal development. Subjects in GRAD include: names and addresses, leases, area descriptions, geothermal wells, power plants, direct use facilities, and environmental and regulatory aspects of development. Data collected in the various subject areas are critically evaluated, and then entered into an on-line interactive computer system. The system is publicly available for retrieval and use. The background of the project, conceptual development, software development, and data collection are described here. Appendices describe the structure of the database in detail.

  7. Analysis on the flood vulnerability in the Seoul and Busan metropolitan area, Korea using spatial database

    NASA Astrophysics Data System (ADS)

    Lee, Mung-Jin

    2015-04-01

    In the future, temperature rises and precipitation increases are expected from climate change due to global warming. Concentrated heavy rain, typhoons, flooding, and other weather phenomena bring hydrologic variations. In this study, the flood susceptibility of the Seoul and Busan metropolitan areas was analyzed and validated using a GIS based on a frequency ratio model and a logistic regression model with training and validation datasets of the flooded area. The flooded area in 2010 was used to train the models, and the flooded area in 2011 was used to validate them. Topographic, geological, and soil data from the study areas were collected, processed, and digitized for use in a GIS. Maps relevant to the specific capacity were assembled in a spatial database. Then, flood susceptibility maps were created. Finally, the flood susceptibility maps were validated using the flooded area in 2011, which was not used for training. To represent the flood-susceptible areas, this study used the probability-frequency ratio. The frequency ratio is the probability of occurrence of a certain attribute. Logistic regression allows for investigation of multivariate regression relations between one dependent and several independent variables. Logistic regression has a limit in that the calculation process cannot be traced, because it repeats calculations to find the optimized regression equation for determining the possibility that the dependent variable will occur. In the case of Seoul, the frequency ratio and logistic regression models showed 79.61% and 79.05% accuracy, respectively; in the case of Busan, the logistic regression model showed 82.30% accuracy. This information and the maps generated from it could be applied to flood prevention and management. In addition, the susceptibility maps provide meaningful information for decision-makers regarding priority areas for implementing flood mitigation policies.
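
    The frequency ratio method mentioned above has a simple core computation: for each class of a conditioning factor, the share of flooded cells falling in that class is divided by the share of all cells in that class. The sketch below illustrates this on a synthetic grid; the array names, class breaks, and data are invented for illustration and are not taken from the study.

    import numpy as np

    def frequency_ratio(factor_classes, flooded):
        """Frequency ratio per class: (% of flooded cells in class) / (% of all cells in class)."""
        ratios = {}
        total_cells = factor_classes.size
        total_flooded = flooded.sum()
        for cls in np.unique(factor_classes):
            in_class = factor_classes == cls
            pct_flooded = (flooded & in_class).sum() / total_flooded
            pct_area = in_class.sum() / total_cells
            ratios[int(cls)] = float(pct_flooded / pct_area) if pct_area > 0 else 0.0
        return ratios

    # Illustrative data: a 100x100 grid with four slope classes and a synthetic flood mask
    # standing in for the 2010 training inundation map.
    rng = np.random.default_rng(0)
    slope_class = rng.integers(1, 5, size=(100, 100))
    flood_mask = (slope_class == 1) & (rng.random((100, 100)) < 0.3)  # floods favour class 1

    fr = frequency_ratio(slope_class, flood_mask)
    # A susceptibility map sums the per-factor ratio maps; only one factor is used here.
    susceptibility = np.vectorize(fr.get)(slope_class)
    print(fr)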

  8. Database of well and areal data, South San Francisco Bay and Peninsula area, California

    USGS Publications Warehouse

    Leighton, D.A.; Fio, J.L.; Metzger, L.F.

    1995-01-01

    A database was developed to organize and manage data compiled for a regional assessment of geohydrologic and water-quality conditions in the south San Francisco Bay and Peninsula area in California. Available data provided by local, State, and Federal agencies and private consultants were utilized in the assessment. The database consists of geographic information system data layers and related tables and American Standard Code for Information Interchange files. Documentation of the database is necessary to avoid misinterpretation of the data and to make users aware of potential errors and limitations. Most of the data compiled were collected from wells and boreholes (collectively referred to as wells in this report). These point-specific data, including construction, water-level, water-quality, pumping-test, and lithologic data, are contained in tables and files that are related to a geographic information system data layer that contains the locations of the wells. There are 1,014 wells in the data layer, and the related tables contain 35,845 water-level measurements (from 293 of the wells) and 9,292 water-quality samples (from 394 of the wells). Calculation of hydraulic heads and gradients from the water levels can be affected adversely by errors in the determination of the altitude of land surface at the well. Cation and anion balance computations performed on 396 of the water-quality samples indicate high cation and anion balance errors for 51 (13 percent) of the samples. Well drillers' reports were interpreted for 762 of the wells, and digital representations of the lithology of the formations are contained in files following the American Standard Code for Information Interchange. The usefulness of drillers' descriptions of the formation lithology is affected by the detail and thoroughness of the drillers' descriptions, as well as the knowledge, experience, and vocabulary of the individual who described the drill cuttings. Additional data layers were created that
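
    The cation and anion balance computation mentioned above is not spelled out in the abstract; the sketch below shows the standard charge-balance error formula such checks typically use, with illustrative ion concentrations and equivalent weights. The 5 percent acceptance threshold in the example is an assumption, not a value from the report.

    # Charge-balance error for a water-quality sample: concentrations in mg/L are
    # converted to milliequivalents per litre, then the percent difference between
    # total cation and total anion charge is computed. The sample values are illustrative.

    EQUIV_WEIGHT = {        # mg per milliequivalent (formula weight / charge)
        "Ca": 20.04, "Mg": 12.15, "Na": 22.99, "K": 39.10,       # cations
        "HCO3": 61.02, "SO4": 48.03, "Cl": 35.45, "NO3": 62.00,  # anions
    }
    CATIONS = {"Ca", "Mg", "Na", "K"}

    def charge_balance_error(sample_mg_per_l):
        """Percent charge-balance error: 100 * (cations - anions) / (cations + anions)."""
        cations = sum(v / EQUIV_WEIGHT[k] for k, v in sample_mg_per_l.items() if k in CATIONS)
        anions = sum(v / EQUIV_WEIGHT[k] for k, v in sample_mg_per_l.items() if k not in CATIONS)
        return 100.0 * (cations - anions) / (cations + anions)

    sample = {"Ca": 80.0, "Mg": 24.0, "Na": 46.0, "K": 4.0,
              "HCO3": 244.0, "SO4": 96.0, "Cl": 71.0, "NO3": 6.2}
    err = charge_balance_error(sample)
    print(f"balance error: {err:+.1f}%  ({'high' if abs(err) > 5 else 'acceptable'})")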

  9. Monitoring of equine health in Denmark: the importance, purpose, research areas and content of a future database.

    PubMed

    Hartig, Wendy; Houe, Hans; Andersen, Pia Haubro

    2013-04-01

    The plentiful data on Danish horses are currently neither organized nor easily accessible, impeding register-based epidemiological studies on Danish horses. A common database could be beneficial. In principle, databases can contain a wealth of information, but no single database can serve every purpose. Hence the establishment of a Danish equine health database should be preceded by careful consideration of its purpose and content, and stakeholder attitudes should be investigated. The objectives of the present study were to identify stakeholder attitudes to the importance, purpose, research areas and content of a health database for horses in Denmark. A cross-sectional study was conducted with 13 horse-related stakeholder groups in Denmark. The groups surveyed included equine veterinarians, researchers, veterinary students, representatives from animal welfare organizations, horse owners, trainers, farriers, authority representatives, ordinary citizens, and representatives from laboratories, insurance companies, medical equipment companies and pharmaceutical companies. Supplementary attitudes were inferred from qualitative responses. The overall response rate for all stakeholder groups was 45%. Stakeholder group-specific response rates were 27-80%. Sixty-eight percent of questionnaire respondents thought a national equine health database was important. Most respondents wanted the database to contribute to improved horse health and welfare, to be used for research into durability and disease heritability, and to serve as a basis for health declarations for individual horses. The generally preferred purpose of the database was thus that it should focus on horse health and welfare rather than on performance or food safety, and that it should be able to function both at a population and an individual horse level. In conclusion, there is a positive attitude to the establishment of a health database for Danish horses. These results could enrich further reflection on the

  10. Spring Database for the Basin and Range Carbonate-Rock Aquifer System, White Pine County, Nevada, and Adjacent Areas in Nevada and Utah

    USGS Publications Warehouse

    Pavelko, Michael T.

    2007-01-01

    A database containing nearly 3,400 springs was developed for the Basin and Range carbonate-rock aquifer system study area in White Pine County, Nevada, and adjacent areas in Nevada and Utah. The spring database provides a foundation for field verification of springs in the study area. Attributes in the database include location, geographic and general geologic settings, and available discharge and temperature data for each spring.

  11. Design of the typical altered mineral spectral feature database system on the area of oil and gas migration

    NASA Astrophysics Data System (ADS)

    Liu, Xing; Chen, Xiaomei; Li, Qianqian; Ni, Guoqiang

    2011-11-01

    To capture the abnormal spectra produced by oil micro-leakage in China's Gobi and sparsely vegetated regions, six types of spectrum data, used as reference spectra, were established for a database supporting oil and gas exploration. The database contains USGS and JPL spectrum data, spectra of alteration minerals in the gas field, carbonate and clay mineral spectra, and hyperspectral image spectra. The spectral characteristic information was extracted and integrated into the database. A series of interfaces allows users to add their own spectral features from oil and gas areas, which enhances the scalability of the feature database. The typical altered-mineral spectra produced by oil micro-leakage in China's Gobi and sparsely vegetated regions are comprehensively covered in the database, which will enrich China's spectral library and support oil and gas exploration by aerospace and aviation hyperspectral remote sensing.
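
    As a rough illustration of what a user-extensible spectral feature library can look like, the sketch below builds a toy SQLite schema in Python. All table names, columns, and the synthetic spectrum are invented for illustration and are not taken from the paper.

    import sqlite3
    import numpy as np

    # Toy schema for a user-extensible spectral feature library (names are invented).
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE spectrum (
        id INTEGER PRIMARY KEY,
        source TEXT NOT NULL,           -- e.g. 'USGS', 'JPL', 'field', 'user'
        mineral TEXT NOT NULL,
        wavelength_um BLOB NOT NULL,    -- serialized wavelength array
        reflectance BLOB NOT NULL       -- serialized reflectance array
    );
    CREATE TABLE feature (
        id INTEGER PRIMARY KEY,
        spectrum_id INTEGER REFERENCES spectrum(id),
        center_um REAL,                 -- absorption-feature centre
        depth REAL                      -- continuum-removed band depth
    );
    """)

    # A synthetic clay-like spectrum with a 2.2 um absorption feature, added as a 'user' entry.
    wl = np.linspace(0.4, 2.5, 211)
    refl = 1 - 0.3 * np.exp(-((wl - 2.2) ** 2) / 0.002)
    con.execute("INSERT INTO spectrum (source, mineral, wavelength_um, reflectance) VALUES (?, ?, ?, ?)",
                ("user", "illite (synthetic)", wl.tobytes(), refl.tobytes()))
    con.execute("INSERT INTO feature (spectrum_id, center_um, depth) VALUES (1, 2.2, 0.3)")
    print(con.execute("SELECT mineral, center_um, depth FROM spectrum "
                      "JOIN feature ON feature.spectrum_id = spectrum.id").fetchall())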

  12. Database of groundwater levels and hydrograph descriptions for the Nevada Test Site area, Nye County, Nevada

    USGS Publications Warehouse

    Elliott, Peggy E.; Fenelon, Joseph M.

    2010-01-01

    Water levels in the database were quality assured and analyzed. Multiple conditions were assigned to each water‑level measurement to describe the hydrologic conditions at the time of measurement. General quality, temporal variability, regional significance, and hydrologic conditions are attributed to each water-level measurement.

  13. Measuring impact of protected area management interventions: current and future use of the Global Database of Protected Area Management Effectiveness.

    PubMed

    Coad, Lauren; Leverington, Fiona; Knights, Kathryn; Geldmann, Jonas; Eassom, April; Kapos, Valerie; Kingston, Naomi; de Lima, Marcelo; Zamora, Camilo; Cuardros, Ivon; Nolte, Christoph; Burgess, Neil D; Hockings, Marc

    2015-11-01

    Protected areas (PAs) are at the forefront of conservation efforts, and yet despite considerable progress towards the global target of having 17% of the world's land area within protected areas by 2020, biodiversity continues to decline. The discrepancy between increasing PA coverage and negative biodiversity trends has resulted in renewed efforts to enhance PA effectiveness. The global conservation community has conducted thousands of assessments of protected area management effectiveness (PAME), and interest in the use of these data to help measure the conservation impact of PA management interventions is high. Here, we summarize the status of PAME assessment, review the published evidence for a link between PAME assessment results and the conservation impacts of PAs, and discuss the limitations and future use of PAME data in measuring the impact of PA management interventions on conservation outcomes. We conclude that PAME data, while designed as a tool for local adaptive management, may also help to provide insights into the impact of PA management interventions from the local-to-global scale. However, the subjective and ordinal characteristics of the data present significant limitations for their application in rigorous scientific impact evaluations, a problem that should be recognized and mitigated where possible. PMID:26460133

  14. Measuring impact of protected area management interventions: current and future use of the Global Database of Protected Area Management Effectiveness.

    PubMed

    Coad, Lauren; Leverington, Fiona; Knights, Kathryn; Geldmann, Jonas; Eassom, April; Kapos, Valerie; Kingston, Naomi; de Lima, Marcelo; Zamora, Camilo; Cuardros, Ivon; Nolte, Christoph; Burgess, Neil D; Hockings, Marc

    2015-11-01

    Protected areas (PAs) are at the forefront of conservation efforts, and yet despite considerable progress towards the global target of having 17% of the world's land area within protected areas by 2020, biodiversity continues to decline. The discrepancy between increasing PA coverage and negative biodiversity trends has resulted in renewed efforts to enhance PA effectiveness. The global conservation community has conducted thousands of assessments of protected area management effectiveness (PAME), and interest in the use of these data to help measure the conservation impact of PA management interventions is high. Here, we summarize the status of PAME assessment, review the published evidence for a link between PAME assessment results and the conservation impacts of PAs, and discuss the limitations and future use of PAME data in measuring the impact of PA management interventions on conservation outcomes. We conclude that PAME data, while designed as a tool for local adaptive management, may also help to provide insights into the impact of PA management interventions from the local-to-global scale. However, the subjective and ordinal characteristics of the data present significant limitations for their application in rigorous scientific impact evaluations, a problem that should be recognized and mitigated where possible.

  15. Measuring impact of protected area management interventions: current and future use of the Global Database of Protected Area Management Effectiveness

    PubMed Central

    Coad, Lauren; Leverington, Fiona; Knights, Kathryn; Geldmann, Jonas; Eassom, April; Kapos, Valerie; Kingston, Naomi; de Lima, Marcelo; Zamora, Camilo; Cuardros, Ivon; Nolte, Christoph; Burgess, Neil D.; Hockings, Marc

    2015-01-01

    Protected areas (PAs) are at the forefront of conservation efforts, and yet despite considerable progress towards the global target of having 17% of the world's land area within protected areas by 2020, biodiversity continues to decline. The discrepancy between increasing PA coverage and negative biodiversity trends has resulted in renewed efforts to enhance PA effectiveness. The global conservation community has conducted thousands of assessments of protected area management effectiveness (PAME), and interest in the use of these data to help measure the conservation impact of PA management interventions is high. Here, we summarize the status of PAME assessment, review the published evidence for a link between PAME assessment results and the conservation impacts of PAs, and discuss the limitations and future use of PAME data in measuring the impact of PA management interventions on conservation outcomes. We conclude that PAME data, while designed as a tool for local adaptive management, may also help to provide insights into the impact of PA management interventions from the local-to-global scale. However, the subjective and ordinal characteristics of the data present significant limitations for their application in rigorous scientific impact evaluations, a problem that should be recognized and mitigated where possible. PMID:26460133

  16. The construction and periodicity analysis of natural disaster database of Alxa area based on Chinese local records

    NASA Astrophysics Data System (ADS)

    Yan, Zheng; Mingzhong, Tian; Hengli, Wang

    2010-05-01

    Chinese hand-written local records originated in the first century. Generally, these local records include the geography, evolution, customs, education, products, people, historical sites, and writings of an area. Through such endeavors, the record of China's natural history has had nearly no "dark ages" over the evolution of its 5000-year-old civilization. A compilation of all meaningful historical data on natural disasters that took place in Alxa, Inner Mongolia, the second largest desert area in China, is used here to construct a 500-year high-resolution database. The database is divided into subsets according to the types of natural disasters, such as sand-dust storms, drought events, and cold waves. By applying trend, correlation, wavelet, and spectral analysis to these data, we can statistically estimate the periodicity of different natural disasters, detect and quantify similarities and patterns among the periodicities of these records, and finally take these results in aggregate to find a strong and coherent cyclicity through the last 500 years that serves as the driving mechanism of these geological hazards. Based on the periodicity obtained from the above analysis, the paper discusses the probability of forecasting natural disasters and suitable measures to reduce disaster losses through historical records. Keywords: Chinese local records; Alxa; natural disasters; database; periodicity analysis
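
    To make the periodicity analysis concrete, the sketch below applies a basic periodogram to a synthetic annual disaster-count series with an embedded cycle. The series, the 22-year period, and the detrending choice are illustrative assumptions, not results from the Alxa database.

    import numpy as np

    # Synthetic 500-year annual disaster-count series with an embedded ~22-year cycle,
    # standing in for counts extracted from the local records.
    years = np.arange(1500, 2000)
    rng = np.random.default_rng(1)
    counts = rng.poisson(2 + 1.5 * (1 + np.sin(2 * np.pi * years / 22)))

    # Remove the mean and estimate the periodogram; peaks suggest candidate periodicities.
    detrended = counts - counts.mean()
    power = np.abs(np.fft.rfft(detrended)) ** 2
    freqs = np.fft.rfftfreq(len(detrended), d=1.0)          # cycles per year

    nonzero = freqs > 0
    dominant_period = 1.0 / freqs[nonzero][np.argmax(power[nonzero])]
    print(f"dominant period = {dominant_period:.1f} years")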

  17. Planting the SEED: Towards a Spatial Economic Ecological Database for a shared understanding of the Dutch Wadden area

    NASA Astrophysics Data System (ADS)

    Daams, Michiel N.; Sijtsma, Frans J.

    2013-09-01

    In this paper we address the characteristics of a publicly accessible Spatial Economic Ecological Database (SEED) and its ability to support a shared understanding among planners and experts of the economy and ecology of the Dutch Wadden area. Theoretical building blocks for a Wadden SEED are discussed. Our SEED contains a comprehensive set of stakeholder validated spatially explicit data on key economic and ecological indicators. These data extend over various spatial scales. Spatial issues relevant to the specification of a Wadden-SEED and its data needs are explored in this paper and illustrated using empirical data for the Dutch Wadden area. The purpose of the SEED is to integrate basic economic and ecologic information in order to support the resolution of specific (policy) questions and to facilitate connections between project level and strategic level in the spatial planning process. Although modest in its ambitions, we will argue that a Wadden SEED can serve as a valuable element in the much debated science-policy interface. A Wadden SEED is valuable since it is a consensus-based common knowledge base on the economy and ecology of an area rife with ecological-economic conflict, including conflict in which scientific information is often challenged and disputed.

  18. Geologic Map and Map Database of the Oakland Metropolitan Area, Alameda, Contra Costa, and San Francisco Counties, California

    USGS Publications Warehouse

    Graymer, R.W.

    2000-01-01

    Introduction: This report contains a new geologic map at 1:50,000 scale, derived from a set of geologic map databases containing information at a resolution associated with 1:24,000 scale, and a new description of geologic map units and structural relationships in the mapped area. The map database represents the integration of previously published reports and new geologic mapping and field checking by the author (see Sources of Data index map on the map sheet or the Arc-Info coverage pi-so and the textfile pi-so.txt). The descriptive text (below) contains new ideas about the Hayward fault and other faults in the East Bay fault system, as well as new ideas about the geologic units and their relations. These new data are released in digital form in conjunction with the Federal Emergency Management Agency Project Impact in Oakland. The goal of Project Impact is to use geologic information in land-use and emergency services planning to reduce the losses occurring during earthquakes, landslides, and other hazardous geologic events. The USGS, California Division of Mines and Geology, FEMA, California Office of Emergency Services, and City of Oakland participated in the cooperative project. The geologic data in this report were provided in pre-release form to other Project Impact scientists, and served as one of the basic data layers for the analysis of hazard related to earthquake shaking, liquefaction, earthquake-induced landsliding, and rainfall-induced landsliding. The publication of these data provides an opportunity for regional planners; local, state, and federal agencies; teachers; consultants; and others outside Project Impact who are interested in geologic data to have the new data long before a traditional paper map could be published. Because the database contains information about both the bedrock and surficial deposits, it has practical applications in the study of groundwater and engineering of hillside materials, as well as the study of geologic hazards and

  19. Cortical thinning in cognitively normal elderly cohort of 60 to 89 year old from AIBL database and vulnerable brain areas

    NASA Astrophysics Data System (ADS)

    Lin, Zhongmin S.; Avinash, Gopal; Yan, Litao; McMillan, Kathryn

    2014-03-01

    Age-related cortical thinning has been studied by many researchers using quantitative MR images for the past three decades, and vastly differing results have been reported. Although results have shown statistical age-related cortical thickening in elderly cohorts in some brain regions under certain conditions, cortical thinning in elderly cohorts requires further systematic investigation. This paper leverages our previously reported brain surface intensity model (BSIM)1 based technique for measuring cortical thickness to study cortical changes due to normal aging. We measured the cortical thickness of cognitively normal persons from 60 to 89 years old using Australian Imaging Biomarkers and Lifestyle Study (AIBL) data. MRI brain scans of 56 healthy people, including 29 women and 27 men, were selected. We measured the average cortical thickness of each individual in eight brain regions: parietal, frontal, temporal, occipital, visual, sensory motor, medial frontal and medial parietal. Unlike previously published studies, our results showed consistent age-related thinning of the cerebral cortex in all brain regions. The parietal, medial frontal and medial parietal regions showed the fastest thinning rates of 0.14, 0.12 and 0.10 mm/decade respectively, while the visual region showed the slowest thinning rate of 0.05 mm/decade. In the sensorimotor and parietal areas, women showed higher thinning rates (0.09 and 0.16 mm/decade) than men, while in all other regions men showed higher thinning than women. We also created high-resolution cortical thinning rate maps of the cohort and compared them to typical patterns of PET metabolic reduction in moderate AD and frontotemporal dementia (FTD). The results seem to indicate vulnerable areas of cortical deterioration that may lead to brain dementia. These results validate our cortical thickness measurement technique by demonstrating the consistency of the cortical thinning and prediction of the cortical deterioration trend with the AIBL database.
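
    A per-region thinning rate in mm/decade can be estimated as the ordinary least-squares slope of cortical thickness against age, scaled by ten. The sketch below shows this on synthetic data; the ages, thicknesses, and noise level are invented, and the original study's exact fitting procedure is not assumed.

    import numpy as np

    rng = np.random.default_rng(2)
    ages = rng.uniform(60, 89, size=56)                    # cohort ages (synthetic)

    # Synthetic mean cortical thickness per subject for one region, thinning ~0.14 mm/decade.
    thickness = 2.5 - 0.014 * (ages - 60) + rng.normal(0, 0.05, size=56)

    # Ordinary least-squares slope of thickness against age, reported per decade.
    slope_per_year, intercept = np.polyfit(ages, thickness, 1)
    print(f"estimated thinning rate: {-slope_per_year * 10:.2f} mm/decade")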

  20. Digital database architecture and delineation methodology for deriving drainage basins, and a comparison of digitally and non-digitally derived numeric drainage areas

    USGS Publications Warehouse

    Dupree, Jean A.; Crowfoot, Richard M.

    2012-01-01

    The drainage basin is a fundamental hydrologic entity used for studies of surface-water resources and during planning of water-related projects. Numeric drainage areas published by the U.S. Geological Survey water science centers in Annual Water Data Reports and on the National Water Information Systems (NWIS) Web site are still primarily derived from hard-copy sources and by manual delineation of polygonal basin areas on paper topographic map sheets. To expedite numeric drainage area determinations, the Colorado Water Science Center developed a digital database structure and a delineation methodology based on the hydrologic unit boundaries in the National Watershed Boundary Dataset. This report describes the digital database architecture and delineation methodology and also presents the results of a comparison of the numeric drainage areas derived using this digital methodology with those derived using traditional, non-digital methods. (Please see report for full Abstract)

  1. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    SciTech Connect

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W.; Senkpeil, Ryan R.; Tlatov, Andrey G.; Nagovitsyn, Yury A.; Pevtsov, Alexei A.; Chapman, Gary A.; Cookson, Angela M.; Yeates, Anthony R.; Watson, Fraser T.; Balmaceda, Laura A.; DeLuca, Edward E.; Martens, Petrus C. H.

    2015-02-10

    In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up by linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10^21 Mx (10^22 Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
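
    The composite distribution described above can be written as a weighted sum of a Weibull and a log-normal density over flux. The sketch below evaluates such a mixture with made-up parameters chosen only so that the Weibull term dominates at small flux and the log-normal term at large flux; these are not the fitted values from the paper.

    import numpy as np
    from scipy import stats

    flux = np.logspace(19, 24, 200)                        # magnetic flux in Mx

    # Illustrative mixture: Weibull dominating below ~1e21 Mx, log-normal above ~1e22 Mx.
    weibull = stats.weibull_min.pdf(flux, c=0.5, scale=1e21)
    lognorm = stats.lognorm.pdf(flux, s=1.0, scale=1e22)   # scale = exp(mean of ln flux)
    w = 0.7                                                # mixing weight (made up)
    composite = w * weibull + (1 - w) * lognorm

    # Show which component dominates at a small and a large flux value.
    for f in (1e20, 1e23):
        i = np.searchsorted(flux, f)
        print(f"flux {f:.0e} Mx: Weibull term {w * weibull[i]:.2e}, "
              f"log-normal term {(1 - w) * lognorm[i]:.2e}")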

  2. Comparison of ASTER Global Emissivity Database (ASTER-GED) With In-Situ Measurement In Italian Vulcanic Areas

    NASA Astrophysics Data System (ADS)

    Silvestri, M.; Musacchio, M.; Buongiorno, M. F.; Amici, S.; Piscini, A.

    2015-12-01

    LP DAAC released the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Emissivity Database (GED) datasets on April 2, 2014. The database was developed by the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL), California Institute of Technology. The database includes land surface emissivities derived from ASTER data acquired over the contiguous United States, Africa, the Arabian Peninsula, Australia, Europe, and China. In this work we compare ground measurements of emissivity acquired by means of a Micro-FTIR (Fourier transform infrared spectrometer) instrument with the ASTER emissivity map extracted from ASTER-GED and with the emissivity obtained from single ASTER scenes. Through this analysis we investigate the differences between the ASTER-GED dataset (a season-independent average from 2000 to 2008) and fall in-situ emissivity measurements, as well as the role of the different spatial resolutions characterizing ASTER and MODIS (90 m and 1 km, respectively) by comparing them with in-situ measurements. Possible differences can also be due to the different algorithms used for emissivity estimation: the Temperature and Emissivity Separation algorithm for the ASTER TIR bands (Gillespie et al., 1998) and the classification-based emissivity method (Snyder et al., 1998) for MODIS. In-situ emissivity measurements were collected during dedicated field campaigns on Mt. Etna volcano and the Solfatara of Pozzuoli. Gillespie, A. R., Matsunaga, T., Rokugawa, S., & Hook, S. J. (1998). Temperature and emissivity separation from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images. IEEE Transactions on Geoscience and Remote Sensing, 36, 1113-1125. Snyder, W. C., Wan, Z., Zhang, Y., & Feng, Y.-Z. (1998). Classification-based emissivity for land surface temperature measurement from space. International Journal of Remote Sensing, 19, 2753-2574.

  3. Analytical results, database management and quality assurance for analysis of soil and groundwater samples collected by cone penetrometer from the F and H Area seepage basins

    SciTech Connect

    Boltz, D.R.; Johnson, W.H.; Serkiz, S.M.

    1994-10-01

    The Quantification of Soil Source Terms and Determination of the Geochemistry Controlling Distribution Coefficients (K_d values) of Contaminants at the F- and H-Area Seepage Basins (FHSB) study was designed to generate site-specific contaminant transport factors for contaminated groundwater downgradient of the Basins. The experimental approach employed in this study was to collect soil and its associated porewater from contaminated areas downgradient of the FHSB. Samples were collected over a wide range of geochemical conditions (e.g., pH, conductivity, and contaminant concentration) and were used to describe the partitioning of contaminants between the aqueous phase and soil surfaces at the site. The partitioning behavior may be used to develop site-specific transport factors. This report summarizes the analytical procedures and results for both soil and porewater samples collected as part of this study and the database management of these data.

  4. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  5. Are we safe? A tool to improve the knowledge of the risk areas: high-resolution floods database (MEDIFLOOD) for Spanish Mediterranean coast (1960-2014)

    NASA Astrophysics Data System (ADS)

    Gil-Guirado, Salvador; Perez-Morales, Alfredo; Lopez-Martinez, Francisco; Barriendos-Vallve, Mariano

    2016-04-01

    The Mediterranean coast of the Iberian Peninsula concentrates an important part of the population and economic activities in Spain. Intensive agriculture, industry in the major urban centers, trade, and tourism make this region the main center of economic dynamism and give it one of the highest rates of population and economic growth in southern Europe. This process accelerated after the Franco regime started to open to the outside world in the early sixties of the last century. The main factor responsible for this process is the climate, with warmer temperatures and a large number of sunny days, which has become the economic slogan of the area. However, this growth has happened without proper planning to reduce the impact of another climatic feature of the area: floods. Floods are the natural hazard that generates the greatest impacts in the area. One of the factors behind the lack of strategic planning is the absence of a correct chronology of flood episodes. In this situation, land use plans are based on inadequate chronologies that do not report the real risk to the population of this area. To reduce this deficit and contribute to a more efficient zoning of the Mediterranean coast according to flood risk, we have prepared a high-resolution floods database (MEDIFLOOD) for all the municipalities of the Spanish Mediterranean coast from 1960 until 2013. The methodology consists of exploring the archives of all newspapers with a presence in the area. The searches have been made by typing the name of each of the 180 municipalities of the Spanish coast followed by 5 key terms. Each identified flood has been classified by date and according to its level of intensity and type of damage. Additionally, we have consulted the specific bibliography to rule out any data gaps. The results are surprising and worrying. We have identified more than 3,600 cases where a municipality has been affected by floods. These cases are grouped into more than 700

  6. Corpus callosum area and brain volume in autism spectrum disorder: quantitative analysis of structural MRI from the ABIDE database.

    PubMed

    Kucharsky Hiess, R; Alter, R; Sojoudi, S; Ardekani, B A; Kuzniecky, R; Pardoe, H R

    2015-10-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial region. No difference in the corpus callosum area was found between ASD participants and healthy controls (ASD 598.53 ± 109 mm²; control 596.82 ± 102 mm²; p = 0.76). The ASD participants had increased intracranial volume (ASD 1,508,596 ± 170,505 mm³; control 1,482,732 ± 150,873.5 mm³; p = 0.042). No evidence was found for overall ASD differences in the corpus callosum subregions.
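
    The group comparison reported above can be reproduced in outline with a two-sample test on area and intracranial volume. The sketch below uses synthetic samples drawn to mimic the reported means and standard deviations and assumes Welch's t-test; the paper does not state that this exact test was used.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Synthetic corpus callosum areas (mm^2) drawn to mimic the reported group means/SDs.
    asd_area = rng.normal(598.53, 109, size=400)
    ctl_area = rng.normal(596.82, 102, size=400)
    t_area, p_area = stats.ttest_ind(asd_area, ctl_area, equal_var=False)
    print(f"corpus callosum area: t = {t_area:.2f}, p = {p_area:.2f}")

    # Same comparison for intracranial volume (mm^3).
    asd_icv = rng.normal(1_508_596, 170_505, size=400)
    ctl_icv = rng.normal(1_482_732, 150_873, size=400)
    t_icv, p_icv = stats.ttest_ind(asd_icv, ctl_icv, equal_var=False)
    print(f"intracranial volume: t = {t_icv:.2f}, p = {p_icv:.3f}")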

  7. Map and map database of susceptibility to slope failure by sliding and earthflow in the Oakland area, California

    USGS Publications Warehouse

    Pike, R.J.; Graymer, R.W.; Roberts, Sebastian; Kalman, N.B.; Sobieszczyk, Steven

    2001-01-01

    Map data that predict the varying likelihood of landsliding can help public agencies make informed decisions on land use and zoning. This map, prepared in a geographic information system from a statistical model, estimates the relative likelihood of local slopes to fail by two processes common to an area of diverse geology, terrain, and land use centered on metropolitan Oakland. The model combines the following spatial data: (1) 120 bedrock and surficial geologic-map units, (2) ground slope calculated from a 30-m digital elevation model, (3) an inventory of 6,714 old landslide deposits (not distinguished by age or type of movement and excluding debris flows), and (4) the locations of 1,192 post-1970 landslides that damaged the built environment. The resulting index of likelihood, or susceptibility, plotted as a 1:50,000-scale map, is computed as a continuous variable over a large area (872 km2) at a comparatively fine (30 m) resolution. This new model complements landslide inventories by estimating susceptibility between existing landslide deposits, and improves upon prior susceptibility maps by quantifying the degree of susceptibility within those deposits. Susceptibility is defined for each geologic-map unit as the spatial frequency (areal percentage) of terrain occupied by old landslide deposits, adjusted locally by steepness of the topography. Susceptibility of terrain between the old landslide deposits is read directly from a slope histogram for each geologic-map unit, as the percentage (0.00 to 0.90) of 30-m cells in each one-degree slope interval that coincides with the deposits. Susceptibility within landslide deposits (0.00 to 1.33) is this same percentage raised by a multiplier (1.33) derived from the comparative frequency of recent failures within and outside the old deposits. Positive results from two evaluations of the model encourage its extension to the 10-county San Francisco Bay region and elsewhere. A similar map could be prepared for any area
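
    The susceptibility index described above reduces to a per-unit, per-slope-bin areal fraction of old landslide deposits, with a 1.33 multiplier applied inside the deposits. The sketch below implements that computation on synthetic rasters; the grids and unit codes are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 200 * 200
    slope_deg = rng.uniform(0, 45, n)                      # slope from a DEM (synthetic)
    geo_unit = rng.integers(0, 3, n)                       # geologic-map unit codes (synthetic)
    old_deposit = rng.random(n) < 0.15 * slope_deg / 45    # old landslide deposit mask

    DEPOSIT_MULTIPLIER = 1.33                              # raises susceptibility inside deposits

    susceptibility = np.zeros(n)
    slope_bin = slope_deg.astype(int)                      # one-degree slope intervals
    for unit in np.unique(geo_unit):
        for s in np.unique(slope_bin[geo_unit == unit]):
            cell = (geo_unit == unit) & (slope_bin == s)
            frac = old_deposit[cell].mean()                # areal fraction of deposits in this unit/bin
            susceptibility[cell] = frac
    susceptibility[old_deposit] *= DEPOSIT_MULTIPLIER

    print(f"susceptibility range: {susceptibility.min():.2f}-{susceptibility.max():.2f}")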

  8. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  9. Development and Validation of a Data-Based Food Frequency Questionnaire for Adults in Eastern Rural Area of Rwanda.

    PubMed

    Yanagisawa, Ayumi; Sudo, Noriko; Amitani, Yukiko; Caballero, Yuko; Sekiyama, Makiko; Mukamugema, Christine; Matsuoka, Takuya; Imanishi, Hiroaki; Sasaki, Takayo; Matsuda, Hirotaka

    2016-01-01

    This study aimed to develop and evaluate the validity of a food frequency questionnaire (FFQ) for rural Rwandans. Since our FFQ was developed to assess malnutrition, it measured energy, protein, vitamin A, and iron intakes only. We collected 260 weighed food records (WFRs) from a total of 162 Rwandans. Based on the WFR data, we developed a tentative FFQ and examined the food list by percent contribution to energy and nutrient intakes. To assess the validity, nutrient intakes estimated from the FFQ were compared with those calculated from three-day WFRs by correlation coefficient and cross-classification for 17 adults. Cumulative contributions of the 18-item FFQ to the total intakes of energy and nutrients reached nearly 100%. Crude and energy-adjusted correlation coefficients ranged from -0.09 (vitamin A) to 0.58 (protein) and from -0.19 (vitamin A) to 0.68 (iron), respectively. About 50%-60% of the participants were classified into the same tertile. Our FFQ provided acceptable validity for energy and iron intakes and could rank Rwandan adults in eastern rural area correctly according to their energy and iron intakes.
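
    The validity statistics reported above (crude and energy-adjusted correlations and same-tertile cross-classification) can be computed as in the sketch below. The intake values are synthetic, and the residual method is assumed for energy adjustment; the abstract does not specify the adjustment procedure.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n = 17                                            # validation participants

    # Synthetic intakes: weighed food records (reference) vs FFQ estimates.
    wfr_energy = rng.normal(2000, 400, n)
    ffq_energy = wfr_energy * rng.normal(1.0, 0.15, n)
    wfr_iron = 0.006 * wfr_energy + rng.normal(0, 2, n)
    ffq_iron = 0.006 * ffq_energy + rng.normal(0, 2, n)

    # Crude correlation between the two methods.
    r_crude, _ = stats.pearsonr(wfr_iron, ffq_iron)

    def energy_adjust(nutrient, energy):
        """Residual-method energy adjustment: residuals of nutrient regressed on energy."""
        slope, intercept = np.polyfit(energy, nutrient, 1)
        return nutrient - (slope * energy + intercept)

    r_adj, _ = stats.pearsonr(energy_adjust(wfr_iron, wfr_energy),
                              energy_adjust(ffq_iron, ffq_energy))

    # Cross-classification: share of participants placed in the same tertile by both methods.
    def tertile(x):
        return np.searchsorted(np.quantile(x, [1 / 3, 2 / 3]), x)

    same_tertile = (tertile(wfr_iron) == tertile(ffq_iron)).mean()
    print(f"crude r = {r_crude:.2f}, energy-adjusted r = {r_adj:.2f}, "
          f"same tertile = {same_tertile:.0%}")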

  10. Development and Validation of a Data-Based Food Frequency Questionnaire for Adults in Eastern Rural Area of Rwanda.

    PubMed

    Yanagisawa, Ayumi; Sudo, Noriko; Amitani, Yukiko; Caballero, Yuko; Sekiyama, Makiko; Mukamugema, Christine; Matsuoka, Takuya; Imanishi, Hiroaki; Sasaki, Takayo; Matsuda, Hirotaka

    2016-01-01

    This study aimed to develop and evaluate the validity of a food frequency questionnaire (FFQ) for rural Rwandans. Since our FFQ was developed to assess malnutrition, it measured energy, protein, vitamin A, and iron intakes only. We collected 260 weighed food records (WFRs) from a total of 162 Rwandans. Based on the WFR data, we developed a tentative FFQ and examined the food list by percent contribution to energy and nutrient intakes. To assess the validity, nutrient intakes estimated from the FFQ were compared with those calculated from three-day WFRs by correlation coefficient and cross-classification for 17 adults. Cumulative contributions of the 18-item FFQ to the total intakes of energy and nutrients reached nearly 100%. Crude and energy-adjusted correlation coefficients ranged from -0.09 (vitamin A) to 0.58 (protein) and from -0.19 (vitamin A) to 0.68 (iron), respectively. About 50%-60% of the participants were classified into the same tertile. Our FFQ provided acceptable validity for energy and iron intakes and could rank Rwandan adults in eastern rural area correctly according to their energy and iron intakes. PMID:27429558

  11. Development and Validation of a Data-Based Food Frequency Questionnaire for Adults in Eastern Rural Area of Rwanda

    PubMed Central

    Yanagisawa, Ayumi; Sudo, Noriko; Amitani, Yukiko; Caballero, Yuko; Sekiyama, Makiko; Mukamugema, Christine; Matsuoka, Takuya; Imanishi, Hiroaki; Sasaki, Takayo; Matsuda, Hirotaka

    2016-01-01

    This study aimed to develop and evaluate the validity of a food frequency questionnaire (FFQ) for rural Rwandans. Since our FFQ was developed to assess malnutrition, it measured energy, protein, vitamin A, and iron intakes only. We collected 260 weighed food records (WFRs) from a total of 162 Rwandans. Based on the WFR data, we developed a tentative FFQ and examined the food list by percent contribution to energy and nutrient intakes. To assess the validity, nutrient intakes estimated from the FFQ were compared with those calculated from three-day WFRs by correlation coefficient and cross-classification for 17 adults. Cumulative contributions of the 18-item FFQ to the total intakes of energy and nutrients reached nearly 100%. Crude and energy-adjusted correlation coefficients ranged from −0.09 (vitamin A) to 0.58 (protein) and from −0.19 (vitamin A) to 0.68 (iron), respectively. About 50%–60% of the participants were classified into the same tertile. Our FFQ provided acceptable validity for energy and iron intakes and could rank Rwandan adults in eastern rural area correctly according to their energy and iron intakes. PMID:27429558

  12. A spatial database of bedding attitudes to accompany Geologic Map of Boulder-Fort Collins-Greeley Area, Colorado

    USGS Publications Warehouse

    Colton, Roger B.; Brandt, Theodore R.; Moore, David W.; Murray, Kyle E.

    2003-01-01

    This digital map shows bedding attitude data displayed over the geographic extent of rock stratigraphic units (formations) as compiled by Colton in 1976 (U.S. Geological Survey Map I-855-G) under the Front Range Urban Corridor Geology Program. Colton used his own mapping and published geologic maps having varied map unit schemes to compile one map with a uniform classification of geologic units. The resulting published color paper map was intended for planning for use of land in the Front Range Urban Corridor. In 1997-1999, under the USGS Front Range Infrastructure Resources Project, Colton's map was digitized to provide data at 1:100,000 scale to address urban growth issues (see cross-reference). In general, the west part of the map shows a variety of Precambrian igneous and metamorphic rocks, major faults and brecciated zones along an eastern strip (5-20 km wide) of the Front Range. The eastern and central part of the map (Colorado Piedmont) depicts a mantle of Quaternary unconsolidated deposits and interspersed Cretaceous or Tertiary-Cretaceous sedimentary rock outcrops. The Quaternary mantle is comprised of eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone and shale formations (and sparse limestone) form hogbacks, intervening valleys, and in range-front folds, anticlines, and fault blocks. Localized dikes and sills of Tertiary rhyodacite and basalt intrude rocks near the range front, mostly in the Boulder area.

  13. A spatial database of bedding attitudes to accompany Geologic map of the greater Denver area, Front Range Urban Corridor, Colorado

    USGS Publications Warehouse

    Trimble, Donald E.; Machette, Michael N.; Brandt, Theodore R.; Moore, David W.; Murray, Kyle E.

    2003-01-01

    This digital map shows bedding attitude symbols displayed over the geographic extent of surficial deposits and rock stratigraphic units (formations) as compiled by Trimble and Machette 1973-1977 and published in 1979 (U.S. Geological Survey Map I-856-H) under the Front Range Urban Corridor Geology Program. Trimble and Machette compiled their geologic map from published geologic maps and unpublished geologic mapping having varied map unit schemes. A convenient feature of the compiled map is its uniform classification of geologic units that mostly matches those of companion maps to the north (USGS I-855-G) and to the south (USGS I-857-F). Published as a color paper map, the Trimble and Machette map was intended for land-use planning in the Front Range Urban Corridor. This map was recently (1997-1999) digitized under the USGS Front Range Infrastructure Resources Project (see cross-reference). In general, the mountainous areas in the west part of the map exhibit various igneous and metamorphic bedrock units of Precambrian age, major faults, and fault brecciation zones at the east margin (5-20 km wide) of the Front Range. The eastern and central parts of the map (Colorado Piedmont) depict a mantle of unconsolidated deposits of Quaternary age and interspersed outcroppings of Cretaceous or Tertiary-Cretaceous sedimentary bedrock. The Quaternary mantle is comprised of eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone, shale, and limestone bedrock formations form hogbacks and intervening valleys.

  14. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  15. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska.

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Kofoed, K. B.; Copenhaver, W.; Laney, C. M.; Gaylord, A. G.; Collins, J. A.; Tweedie, C. E.

    2014-12-01

    The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic, and the Barrow Area Information Database (BAID, www.barrowmapped.org) tracks and facilitates a gamut of research, management, and educational activities in the area. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken within the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience for BAID is diverse and includes research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 12,000 Barrow area research sites that extend back to the 1940's and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, save or print maps and query results, and filter or view information by space, time, and/or other tags. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL), where non-proprietary BAID data can be freely downloaded. Recent advances include the addition of more than 2000 new research sites, provision of differential global positioning system (dGPS) and Unmanned Aerial Vehicle (UAV) support to visiting scientists, surveying of over 80 miles of coastline to document rates of erosion, training of local GIS personnel to better make use of science in local decision making, deployment of and near-real-time connectivity to a wireless micrometeorological sensor network, links to Barrow area datasets housed at national data archives, and substantial upgrades to the BAID website and web mapping applications.
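
    As a toy illustration of the space, time, and tag filtering BAID offers, the sketch below queries a handful of fictitious site records in plain Python; the record fields and site names are invented and do not reflect BAID's actual data model.

    from dataclasses import dataclass

    @dataclass
    class ResearchSite:
        name: str
        lat: float
        lon: float
        year: int
        tags: frozenset

    # Fictitious site records standing in for BAID entries.
    sites = [
        ResearchSite("veg plot A", 71.32, -156.62, 1972, frozenset({"vegetation"})),
        ResearchSite("erosion stake line", 71.29, -156.78, 2012, frozenset({"coastal", "erosion"})),
        ResearchSite("met tower", 71.28, -156.65, 2014, frozenset({"micrometeorology"})),
    ]

    def query(sites, bbox=None, years=None, tag=None):
        """Filter sites by bounding box (min_lat, min_lon, max_lat, max_lon), year range, and tag."""
        out = []
        for s in sites:
            if bbox and not (bbox[0] <= s.lat <= bbox[2] and bbox[1] <= s.lon <= bbox[3]):
                continue
            if years and not (years[0] <= s.year <= years[1]):
                continue
            if tag and tag not in s.tags:
                continue
            out.append(s)
        return out

    print([s.name for s in query(sites, years=(2010, 2015), tag="erosion")])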

  16. BAID: The Barrow Area Information Database - An Interactive Web Mapping Portal and Cyberinfrastructure Showcasing Scientific Activities in the Vicinity of Barrow, Arctic Alaska.

    NASA Astrophysics Data System (ADS)

    Escarzaga, S. M.; Cody, R. P.; Kassin, A.; Barba, M.; Gaylord, A. G.; Manley, W. F.; Mazza Ramsay, F. D.; Vargas, S. A., Jr.; Tarin, G.; Laney, C. M.; Villarreal, S.; Aiken, Q.; Collins, J. A.; Green, E.; Nelson, L.; Tweedie, C. E.

    2015-12-01

    The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic, and the Barrow Area Information Database (BAID, www.barrowmapped.org) tracks and facilitates a gamut of research, management, and educational activities in the area. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken within the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience for BAID is diverse and includes research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 12,000 Barrow area research sites that extend back to the 1940's and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, save or print maps and query results, and filter or view information by space, time, and/or other tags. Additionally, data are described with metadata that meet Federal Geographic Data Committee standards. Recent advances include the addition of more than 2000 new research sites, the addition of a query builder user interface allowing rich and complex queries, and provision of differential global positioning system (dGPS) and high-resolution aerial imagery support to visiting scientists. Recent field surveys include over 80 miles of coastline to document rates of erosion and the collection of high-resolution sonar data for bathymetric mapping of Elson Lagoon and the near-shore region of the Chukchi Sea. A network of five climate stations has been deployed across the peninsula to serve as a wireless net for the research community and to deliver near-real-time climatic data to the user community. Local GIS personnel have also been trained to better make use of scientific data for local decision making. Links to Barrow area datasets are housed at national data archives and substantial upgrades have

  17. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Gaylord, A.; Brown, J.; Tweedie, C. E.

    2012-12-01

    The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic. The Barrow Area Information Database (BAID, www.baidims.org) is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken within the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience for BAID is diverse and includes research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 9,600 Barrow area research sites that extend back to the 1940's and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, and save or print maps and query results. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL), where non-proprietary BAID data can be freely downloaded. BAID has been used to: optimize research site choice; reduce duplication of science effort; discover complementary and potentially detrimental research activities in an area of scientific interest; re-establish historical research sites for resampling efforts assessing change in ecosystem structure and function over time; exchange knowledge across disciplines and generations; facilitate communication between western science and traditional ecological knowledge; provide local residents access to science data that facilitates adaptation to arctic change; and educate the next generation of environmental and computer scientists. This poster describes key activities that will be undertaken over the next three years to provide BAID users with novel software tools to interact with a current and diverse selection of information and data about the Barrow area. Key activities include: 1. Collecting data on research

  18. Database Manager

    ERIC Educational Resources Information Center

    Martin, Andrew

    2010-01-01

    It is normal practice today for organizations to store large quantities of records of related information as computer-based files or databases. Purposeful information is retrieved by performing queries on the data sets. The purpose of DATABASE MANAGER is to communicate to students the method by which the computer performs these queries. This…

  19. Maize databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  20. Analysis of expressed sequence tags from Actinidia: applications of a cross species EST database for gene discovery in the areas of flavor, health, color and ripening

    PubMed Central

    Crowhurst, Ross N; Gleave, Andrew P; MacRae, Elspeth A; Ampomah-Dwamena, Charles; Atkinson, Ross G; Beuning, Lesley L; Bulley, Sean M; Chagne, David; Marsh, Ken B; Matich, Adam J; Montefiori, Mirco; Newcomb, Richard D; Schaffer, Robert J; Usadel, Björn; Allan, Andrew C; Boldingh, Helen L; Bowen, Judith H; Davy, Marcus W; Eckloff, Rheinhart; Ferguson, A Ross; Fraser, Lena G; Gera, Emma; Hellens, Roger P; Janssen, Bart J; Klages, Karin; Lo, Kim R; MacDiarmid, Robin M; Nain, Bhawana; McNeilage, Mark A; Rassam, Maysoon; Richardson, Annette C; Rikkerink, Erik HA; Ross, Gavin S; Schröder, Roswitha; Snowden, Kimberley C; Souleyre, Edwige JF; Templeton, Matt D; Walton, Eric F; Wang, Daisy; Wang, Mindy Y; Wang, Yanming Y; Wood, Marion; Wu, Rongmei; Yauk, Yar-Khing; Laing, William A

    2008-01-01

    Background Kiwifruit (Actinidia spp.) are a relatively new but economically important crop grown in many different parts of the world. Commercial success is driven by the development of new cultivars with novel consumer traits including flavor, appearance, healthful components and convenience. To increase our understanding of the genetic diversity and gene-based control of these key traits in Actinidia, we have produced a collection of 132,577 expressed sequence tags (ESTs). Results The ESTs were derived mainly from four Actinidia species (A. chinensis, A. deliciosa, A. arguta and A. eriantha) and fell into 41,858 non-redundant clusters (18,070 tentative consensus sequences and 23,788 EST singletons). Analysis of flavor and fragrance-related gene families (acyltransferases and carboxylesterases) and pathways (terpenoid biosynthesis) is presented in comparison with a chemical analysis of the compounds present in Actinidia including esters, acids, alcohols and terpenes. ESTs are identified for most genes in color pathways controlling chlorophyll degradation and carotenoid biosynthesis. In the health area, data are presented on the ESTs involved in ascorbic acid and quinic acid biosynthesis showing not only that genes for many of the steps in these pathways are represented in the database, but that genes encoding some critical steps are absent. In the convenience area, genes related to different stages of fruit softening are identified. Conclusion This large EST resource will allow researchers to undertake the tremendous challenge of understanding the molecular basis of genetic diversity in the Actinidia genus as well as provide an EST resource for comparative fruit genomics. The various bioinformatics analyses we have undertaken demonstrate the extent of coverage of ESTs for genes encoding different biochemical pathways in Actinidia. PMID:18655731

  1. Genome databases

    SciTech Connect

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  2. A first database for landslide studies in densely urbanized areas of the intertropical zone: Abidjan, Côte d'Ivoire

    NASA Astrophysics Data System (ADS)

    Gnagne, Frédéric; Demoulin, Alain; Biemi, Jean; Dewitte, Olivier; Kouadio, Hélène; Lasm, Théophile

    2016-04-01

    Landslides, a natural phenomenon often enhanced by human misuse of the land, may be a considerable threat to urban communities and severely affect urban landscapes, taking a death toll, impacting livelihoods, and causing economic and social damage. Our first results show that, in Abidjan city, Ivory Coast, landslides caused more than fifty casualties in the towns of Attecoube and Abobo during the last twenty years. Although informal landslide reports exist, map information and geomorphological characterization are at best restricted, or often simply lacking. Here, we aim at constituting a comprehensive landslide database (localization, nature and morphometry of the slides, slope material, human interference, elements at risk) in the town of Attecoube as a case study in order to support a first analysis of landslide susceptibility in the area. The field inventory conducted so far contains 56 landslides. These are mainly translational debris and soil slides, plus a few deeper rotational soil slides. Affecting 10-25°-steep, less than 10-m-high slopes in Quaternary sand and mud, they are most often associated with wild constructions either loading the top or cutting the toe of the slopes. They were located by GPS and tentatively dated through inquiries during the survey. While 12 landslides were accurately dated that way from the main rain seasons of 2013 to 2015, newspaper analysis and municipal archive consultation allowed us to assign a part of the rest to the last decade. Field inquiries were also used to collect information about fatalities and the local conditions of landsliding. This first landslide inventory in Attecoube provides clues about the main potential controls on landsliding, natural and anthropogenic, and will help define adequately anthropogenic variables to be used in the susceptibility modelling.

  3. Solubility Database

    National Institute of Standards and Technology Data Gateway

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes of the International Union of Pure and Applied Chemistry (IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  4. GIS for the Gulf: A reference database for hurricane-affected areas: Chapter 4C in Science and the storms-the USGS response to the hurricanes of 2005

    USGS Publications Warehouse

    Greenlee, Dave

    2007-01-01

    A week after Hurricane Katrina made landfall in Louisiana, a collaboration among multiple organizations began building a database called the Geographic Information System for the Gulf, shortened to "GIS for the Gulf," to support the geospatial data needs of people in the hurricane-affected area. Data were gathered from diverse sources and entered into a consistent and standardized data model in a manner that is Web accessible.

  5. Drinking Water Treatability Database (Database)

    EPA Science Inventory

    The Drinking Water Treatability Database (TDB) will provide data taken from the literature on the control of contaminants in drinking water, and will be housed on an interactive, publicly available USEPA web site. It can be used for identifying effective treatment processes, rec...

  6. Mathematical Notation in Bibliographic Databases.

    ERIC Educational Resources Information Center

    Pasterczyk, Catherine E.

    1990-01-01

    Discusses ways in which using mathematical symbols to search online bibliographic databases in scientific and technical areas can improve search results. The representations used for Greek letters, relations, binary operators, arrows, and miscellaneous special symbols in the MathSci, Inspec, Compendex, and Chemical Abstracts databases are…

  7. Biological Databases for Behavioral Neurobiology

    PubMed Central

    Baker, Erich J.

    2014-01-01

    Databases are, at their core, abstractions of data and their intentionally derived relationships. They serve as a central organizing metaphor and repository, supporting or augmenting nearly all bioinformatics. Behavioral domains provide a unique stage for contemporary databases, as research in this area spans diverse data types, locations, and data relationships. This chapter provides foundational information on the diversity and prevalence of databases and on how data structures support the various needs of behavioral neuroscience analysis and interpretation. The focus is on the classes of databases, data curation, and advanced applications in bioinformatics using examples largely drawn from research efforts in behavioral neuroscience. PMID:23195119

  8. Stackfile Database

    NASA Technical Reports Server (NTRS)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves the efficiency and analysis capabilities of existing database software with greater flexibility and documentation. It offers flexibility in the type of data that can be stored. There is efficient retrieval either across the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as Topex and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., the Gravity Recovery And Climate Experiment -- GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  9. Database tomography for commercial application

    NASA Technical Reports Server (NTRS)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, or papers, etc. (any text information that can be computer stored). Database tomography features a full text, user interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size, and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships and associations. Also, the database tomography process would be a powerful component in the area of competitive intelligence, national security intelligence and patent analysis. User interests and involvement cannot be overemphasized.
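    As a rough illustration of the word-frequency and word-proximity starting point described above, the following sketch is a minimal stand-in (not the authors' algorithm): it counts term frequencies and within-window co-occurrences over a toy corpus.

```python
from collections import Counter

def tomography_pass(docs, window=5,
                    stopwords=frozenset({"the", "a", "of", "and", "in", "from", "their"})):
    """First pass of a tomography-style analysis: term frequencies plus
    within-window co-occurrence counts as a crude proximity measure."""
    freq, prox = Counter(), Counter()
    for doc in docs:
        words = [w.strip(".,") for w in doc.lower().split()]
        words = [w for w in words if w and w not in stopwords]
        freq.update(words)
        for i in range(len(words)):
            for j in range(i + 1, min(i + window, len(words))):
                # Count each unordered pair seen within the window.
                prox[tuple(sorted((words[i], words[j])))] += 1
    return freq, prox

docs = ["Database tomography extracts themes and their relationships from text.",
        "Word frequency and word proximity analysis build the themes."]
freq, prox = tomography_pass(docs)
print(freq.most_common(3))   # most frequent terms
print(prox.most_common(3))   # most frequently co-occurring term pairs
```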

  10. Database Marketplace 2002: The Database Universe.

    ERIC Educational Resources Information Center

    Tenopir, Carol; Baker, Gayle; Robinson, William

    2002-01-01

    Reviews the database industry over the past year, including new companies and services, company closures, popular database formats, popular access methods, and changes in existing products and services. Lists 33 firms and their database services; 33 firms and their database products; and 61 company profiles. (LRW)

  11. The CEBAF Element Database

    SciTech Connect

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from original C++ source code into native libraries for scripting languages such as perl, php, and TCL, making access to the CED easy and ubiquitous.
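    The "introspective" schema described above, in which new elements, types, and properties can be defined with no changes to table structure, is broadly in the spirit of an entity-attribute-value layout. The sketch below illustrates that general pattern in SQLite only; it is not the actual CED schema, and the valid_from column is a crude stand-in for the Oracle Workspace Manager history mechanism mentioned in the record. The element and property names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE element  (id INTEGER PRIMARY KEY, name TEXT, type TEXT);
-- Properties live in rows, not columns, so adding a property needs no schema change.
CREATE TABLE property (element_id INTEGER REFERENCES element(id),
                       prop_name  TEXT,
                       prop_value TEXT,
                       valid_from TEXT);   -- crude stand-in for workspace/history support
""")
conn.execute("INSERT INTO element VALUES (1, 'EXAMPLE_QUAD_01', 'Quadrupole')")
conn.execute("INSERT INTO property VALUES (1, 'length_m', '0.3', '2010-10-01')")
conn.execute("INSERT INTO property VALUES (1, 'field_gradient', '12.5', '2011-02-15')")

# Query the configuration as of a given date without changing any table definitions.
rows = conn.execute("""
    SELECT e.name, p.prop_name, p.prop_value
    FROM element e JOIN property p ON p.element_id = e.id
    WHERE p.valid_from <= ?""", ("2011-01-01",)).fetchall()
print(rows)   # only properties defined on or before 2011-01-01
```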

  12. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information from the bench chemist to biologist, medical practitioner to pharmaceutical scientist in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference centric or compound centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  13. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information from the bench chemist to biologist, medical practitioner to pharmaceutical scientist in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference centric or compound centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery. PMID:19727614

  14. Open Geoscience Database

    NASA Astrophysics Data System (ADS)

    Bashev, A.

    2012-04-01

    Currently there is an enormous amount of various geoscience databases. Unfortunately the only users of the majority of the databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects and so on. However the main obstacles for wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of architecture leads to high costs that block the public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps don't have these drawbacks, but they could hardly be named "geoscience". Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of the station (lake, soil etc); the contributor that sent the result. Each contributor has their own profile, which allows one to estimate the reliability of the data. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales and each registered user can create their own scale. The results can also be extracted into a *.csv file. For both types of representation one could select the data by date, object type, parameter type, area and contributor. The data are uploaded in *.csv format: Name of the station; Latitude(dd.dddddd); Longitude(ddd.dddddd); Station type; Parameter type; Parameter value; Date(yyyy-mm-dd). The contributor is recognised while entering. This is the minimal set of features that is required to connect a value of a parameter with a position and see the results. All the complicated data
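    The upload format listed above (station name; latitude; longitude; station type; parameter type; parameter value; date, with the contributor recognised at login) is simple enough to parse in a few lines. The sketch below assumes semicolon-separated columns in exactly that order; the column order comes from the record, while the sample row, field names, and parse_results helper are illustrative.

```python
import csv
from datetime import date
from io import StringIO

# One illustrative row in the order described in the record.
sample = "Lake Example;55.123456;036.654321;lake;pH;7.4;2011-08-15\n"

def parse_results(fileobj, contributor):
    """Parse rows in the order given in the record:
    name; latitude; longitude; station type; parameter type; value; date."""
    results = []
    for name, lat, lon, stype, ptype, value, day in csv.reader(fileobj, delimiter=";"):
        results.append({
            "station": name,
            "lat": float(lat),
            "lon": float(lon),
            "station_type": stype,
            "parameter": ptype,
            "value": float(value),
            "date": date.fromisoformat(day),
            "contributor": contributor,   # recognised at login, not stored in the file
        })
    return results

print(parse_results(StringIO(sample), contributor="demo_user"))
```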

  15. Out-of-School Time Programs in Rural Areas. Highlights from the Out-of-School Time Database. Research Update, No. 6

    ERIC Educational Resources Information Center

    Harris, Erin; Malone, Helen; Sunnanon, Tai

    2011-01-01

    Out-of-school time (OST) programming can be a crucial asset to families in rural areas where resources to support children's learning and development are often insufficient to meet the community's needs. OST programs that offer youth in rural communities a safe and supportive adult-supervised environment--along with various growth-enhancing…

  16. Preliminary integrated geologic map databases for the United States: Digital data for the reconnaissance bedrock geologic map for the northern Alaska peninsula area, southwest Alaska

    USGS Publications Warehouse

    ,

    2006-01-01

    The growth in the use of Geographic Information Systems (GIS) has highlighted the need for digital geologic maps that have been attributed with information about age and lithology. Such maps can be conveniently used to generate derivative maps for manifold special purposes such as mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. This report is part of a series of integrated geologic map databases that cover the entire United States. Three national-scale geologic maps that portray most or all of the United States already exist; for the conterminous U.S., King and Beikman (1974a,b) compiled a map at a scale of 1:2,500,000, Beikman (1980) compiled a map for Alaska at 1:2,500,000 scale, and for the entire U.S., Reed and others (2005a,b) compiled a map at a scale of 1:5,000,000. A digital version of the King and Beikman map was published by Schruben and others (1994). Reed and Bush (2004) produced a digital version of the Reed and others (2005a) map for the conterminous U.S. The present series of maps is intended to provide the next step in increased detail. State geologic maps that range in scale from 1:100,000 to 1:1,000,000 are available for most of the country, and digital versions of these state maps are the basis of this product. The digital geologic maps presented here are in a standardized format as ARC/INFO export files and as ArcView shape files. Data tables that relate the map units to detailed lithologic and age information accompany these GIS files. The map is delivered as a set of 1:250,000-scale quadrangle files. To the best of our ability, these quadrangle files are edge-matched with respect to geology. When the maps are merged, the combined attribute tables can be used directly with the merged maps to make derivative maps.

  17. Overlap in Bibliographic Databases.

    ERIC Educational Resources Information Center

    Hood, William W.; Wilson, Concepcion S.

    2003-01-01

    Examines the topic of Fuzzy Set Theory to determine the overlap of coverage in bibliographic databases. Highlights include examples of comparisons of database coverage; frequency distribution of the degree of overlap; records with maximum overlap; records unique to one database; intra-database duplicates; and overlap in the top ten databases.…

  18. Draft secure medical database standard.

    PubMed

    Pangalos, George

    2002-01-01

    Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assure the quality of care, support effective management of the health services institutions, monitor and contain the cost of care, implement technology into care without violating social values, ensure the equity and availability of care, preserve humanity despite the proliferation of technology, etc. In this context, medical database security aims primarily to support: high availability, accuracy and consistency of the stored data, the medical professional secrecy and confidentiality, and the protection of the privacy of the patient. These properties, though of technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but instead, are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements related to medical database security. It addresses the problems of medical database security policies, secure design methodologies and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is also examined in detail. The current national and international efforts in the area are studied. It also gives an overview of the research work in the area. The document also presents in detail the most complete to our knowledge set of security guidelines for the development and operation of medical database systems.

  19. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  20. Global Cropland Area Database (GCAD) derived from Remote Sensing in Support of Food Security in the Twenty-first Century: Current Achievements and Future Possibilities

    USGS Publications Warehouse

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Giri, Chandra; Milesi, Cristina; Ozdogan, Mutlu; Congalton, Russ; Tilton, James; Sankey, Temuulen Tsagaan; Massey, Richard; Phalke, Aparna; Yadav, Kamini

    2015-01-01

    The precise estimation of the global agricultural cropland extents, areas, geographic locations, crop types, cropping intensities, and their watering methods (irrigated or rainfed; type of irrigation) provides a critical scientific basis for the development of water and food security policies (Thenkabail et al., 2012, 2011, 2010). By year 2100, the global human population is expected to grow to 10.4 billion under median fertility variants or higher under constant or higher fertility variants (Table 1), with over three quarters living in developing countries, in regions that already lack the capacity to produce enough food. With current agricultural practices, the increased demand for food and nutrition would require about 2 billion hectares of additional cropland, roughly twice the land area of the United States, and lead to significant increases in greenhouse gas production (Tillman et al., 2011). For example, during 1960-2010 world population more than doubled from 3 billion to 7 billion. The nutritional demand of the population also grew swiftly during this period from an average of about 2000 calories per day per person in 1960 to nearly 3000 calories per day per person in 2010. The food demand of the increased population, along with increased nutritional demand during this period (1960-2010), was met by the "green revolution", which more than tripled food production, even though croplands decreased from about 0.43 ha/capita to 0.26 ha/capita (FAO, 2009). The increase in food production during the green revolution was the result of factors such as: (a) expansion in irrigated areas, which increased from 130 Mha in the 1960s to 278.4 Mha in year 2000 (Siebert et al., 2006), or 399 Mha when you do not consider cropping intensity (Thenkabail et al., 2009a, 2009b, 2009c) or 467 Mha when you consider cropping intensity (Thenkabail et al., 2009a; Thenkabail et al., 2009c); (b) increase in yield and per capita food production (e.g., cereal production
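    A quick arithmetic check with the per-capita figures quoted above shows how total cropland could grow even as cropland per person shrank; the inputs below are taken directly from the record (3 and 7 billion people, 0.43 and 0.26 ha/capita), and the rest is simple multiplication.

```python
# Figures quoted in the record (FAO, 2009).
pop_1960, pop_2010 = 3.0e9, 7.0e9                     # persons
ha_per_capita_1960, ha_per_capita_2010 = 0.43, 0.26   # cropland per person

total_1960 = pop_1960 * ha_per_capita_1960            # ~1.29 billion ha
total_2010 = pop_2010 * ha_per_capita_2010            # ~1.82 billion ha
print(f"Total cropland 1960: {total_1960 / 1e9:.2f} Bha, 2010: {total_2010 / 1e9:.2f} Bha")
print(f"Per-capita decline: {100 * (1 - ha_per_capita_2010 / ha_per_capita_1960):.0f}%")
```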

  1. Databases: Beyond the Basics.

    ERIC Educational Resources Information Center

    Whittaker, Robert

    This paper offers an elementary description of database characteristics and then provides a survey of databases that may be useful to the teacher and researcher in Slavic and East European languages and literatures. The survey focuses on commercial databases that are available, usable, and needed. Individual databases discussed include:…

  2. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…

  3. Human Mitochondrial Protein Database

    National Institute of Standards and Technology Data Gateway

    SRD 131 Human Mitochondrial Protein Database (Web, free access)   The Human Mitochondrial Protein Database (HMPDb) provides comprehensive data on mitochondrial and human nuclear encoded proteins involved in mitochondrial biogenesis and function. This database consolidates information from SwissProt, LocusLink, Protein Data Bank (PDB), GenBank, Genome Database (GDB), Online Mendelian Inheritance in Man (OMIM), Human Mitochondrial Genome Database (mtDB), MITOMAP, Neuromuscular Disease Center and Human 2-D PAGE Databases. This database is intended as a tool not only to aid in studying the mitochondrion but in studying the associated diseases.

  4. UGTA Photograph Database

    SciTech Connect

    NSTec Environmental Restoration

    2009-04-20

    One of the advantages of the Nevada Test Site (NTS) is that most of the geologic and hydrologic features such as hydrogeologic units (HGUs), hydrostratigraphic units (HSUs), and faults, which are important aspects of flow and transport modeling, are exposed at the surface somewhere in the vicinity of the NTS and thus are available for direct observation. However, due to access restrictions and the remote locations of many of the features, most Underground Test Area (UGTA) participants cannot observe these features directly in the field. Fortunately, National Security Technologies, LLC, geologists and their predecessors have photographed many of these features through the years. During fiscal year 2009, work was done to develop an online photograph database for use by the UGTA community. Photographs were organized, compiled, and imported into Adobe® Photoshop® Elements 7. The photographs were then assigned keyword tags such as alteration type, HGU, HSU, location, rock feature, rock type, and stratigraphic unit. Some fully tagged photographs were then selected and uploaded to the UGTA website. This online photograph database provides easy access for all UGTA participants and can help “ground truth” their analytical and modeling tasks. It also provides new participants a resource to more quickly learn the geology and hydrogeology of the NTS.

  5. Developing Database Files for Student Use.

    ERIC Educational Resources Information Center

    Warner, Michael

    1988-01-01

    Presents guidelines for creating student database files that supplement classroom teaching. Highlights include determining educational objectives, planning the database with computer specialists and subject area specialists, data entry, and creating student worksheets. Specific examples concerning elements of the periodic table and…

  6. Electronic Reference Library: Silverplatter's Database Networking Solution.

    ERIC Educational Resources Information Center

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  7. Online Database Coverage of Forensic Medicine.

    ERIC Educational Resources Information Center

    Snow, Bonnie; Ifshin, Steven L.

    1984-01-01

    Online searches of sample topics in the area of forensic medicine were conducted in the following life science databases: Biosis Previews, Excerpta Medica, Medline, Scisearch, and Chemical Abstracts Search. Search outputs analyzed according to criteria of recall, uniqueness, overlap, and utility reveal the need for a cross-database approach to…

  8. The Status of Statewide Subscription Databases

    ERIC Educational Resources Information Center

    Krueger, Karla S.

    2012-01-01

    This qualitative content analysis presents subscription databases available to school libraries through statewide purchases. The results may help school librarians evaluate grade and subject-area coverage, make comparisons to recommended databases, and note potential suggestions for their states to include in future contracts or for local…

  9. Petrophysical database of Uganda

    NASA Astrophysics Data System (ADS)

    Ruotoistenmäki, Tapio; Birungi, Nelson R.

    2015-06-01

    The petrophysical database of Uganda contains data on ca. 5800 rock samples collected and analyzed during 2009-2012 in international geological and geophysical projects covering the main part of the land area of Uganda. The parameters included are the susceptibilities and densities of all available field samples. Susceptibilities were measured on the samples in three directions. Using these parameters, we also calculated the ratios of susceptibility maxima/minima, reflecting the directional homogeneity of magnetic minerals, and estimated the iron content of paramagnetic samples and the magnetite content of ferrimagnetic samples. Statistical and visual analysis of the petrophysical data of Uganda demonstrated their wide variation, thus emphasizing their importance in analyzing the bedrock variations in three dimensions. Using the density-susceptibility diagram, the data can be classified into six main groups: 1. A low density and susceptibility group, consisting of sedimentary and altered rocks. 2. Low-susceptibility, felsic rocks (e.g. quartzites and metasandstones). 3. Paramagnetic, felsic rocks (e.g. granites). 4. Ferrimagnetic, magnetite-containing felsic rocks (e.g. granites). 5. Paramagnetic mafic rocks (e.g. amphibolites and dolerites). 6. Ferrimagnetic, mafic rocks containing magnetite and high-density mafic minerals (mainly dolerites). Moreover, the analysis revealed that the parameter distributions of even a single rock type (e.g. granites) can be very variable, forming separate clusters. This demonstrates that the simple calculation of density or susceptibility averages of rock types can be highly erratic. For example, the average can lie between two groups where few, if any, samples exist. Therefore, estimation of the representative density and susceptibility must be visually verified from these diagrams. The areal distribution of parameters and their calculated derivatives generally correlate well with the regional distribution of lithological and
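    The six-group classification sketched above is driven by where a sample plots on the density-susceptibility diagram. The toy classifier below is illustrative only: the density and susceptibility cut-offs are placeholder assumptions, not the thresholds used by the authors.

```python
def classify_sample(density_kg_m3, susceptibility_si):
    """Toy classifier in the spirit of the six density-susceptibility groups described
    in the record. Thresholds are illustrative placeholders, not the authors' values."""
    ferrimagnetic = susceptibility_si > 1e-3   # assumed cut-off for magnetite-bearing rocks
    mafic = density_kg_m3 > 2850               # assumed cut-off for high-density mafic minerals
    if density_kg_m3 < 2600 and susceptibility_si < 1e-4:
        return "1: low density and susceptibility (sedimentary/altered)"
    if not mafic and susceptibility_si < 1e-4:
        return "2: low-susceptibility felsic (quartzite, metasandstone)"
    if not mafic:
        return ("4: ferrimagnetic felsic (magnetite-bearing granite)" if ferrimagnetic
                else "3: paramagnetic felsic (granite)")
    return ("6: ferrimagnetic mafic (dolerite)" if ferrimagnetic
            else "5: paramagnetic mafic (amphibolite, dolerite)")

print(classify_sample(2650, 5e-4))   # -> paramagnetic felsic
print(classify_sample(2950, 5e-3))   # -> ferrimagnetic mafic
```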

  10. Physiological Information Database (PID)

    EPA Science Inventory

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  11. Network II Database

    1994-11-07

    The Oak Ridge National Laboratory (ORNL) Rail and Barge Network II Database is a representation of the rail and barge system of the United States. The network is derived from the Federal Rail Administration (FRA) rail database.

  12. THE ECOTOX DATABASE

    EPA Science Inventory

    The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high quality models needed to estimate population...

  13. Household Products Database: Pesticides

    MedlinePlus

    Information is extracted from the Consumer Product Information Database ©2001-2015 by DeLima Associates.

  14. Aviation Safety Issues Database

    NASA Technical Reports Server (NTRS)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders such as the Commercial Aviation Safety Team (CAST) have already used the database. This broader interest was the genesis to making the database publicly accessible and writing this report.

  15. Scopus database: a review

    PubMed Central

    Burnham, Judy F

    2006-01-01

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can only afford one, the choice must be based on institutional needs. PMID:16522216

  16. Scopus database: a review.

    PubMed

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can only afford one, the choice must be based on institutional needs.

  17. Mission and Assets Database

    NASA Technical Reports Server (NTRS)

    Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang

    2009-01-01

    Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.

  18. The NCBI Taxonomy database.

    PubMed

    Federhen, Scott

    2012-01-01

    The NCBI Taxonomy database (http://www.ncbi.nlm.nih.gov/taxonomy) is the standard nomenclature and classification repository for the International Nucleotide Sequence Database Collaboration (INSDC), comprising the GenBank, ENA (EMBL) and DDBJ databases. It includes organism names and taxonomic lineages for each of the sequences represented in the INSDC's nucleotide and protein sequence databases. The taxonomy database is manually curated by a small group of scientists at the NCBI who use the current taxonomic literature to maintain a phylogenetic taxonomy for the source organisms represented in the sequence databases. The taxonomy database is a central organizing hub for many of the resources at the NCBI, and provides a means for clustering elements within other domains of the NCBI web site, for internal linking between domains of the Entrez system and for linking out to taxon-specific external resources on the web. Our primary purpose is to index the domain of sequences as conveniently as possible for our user community.

  19. Environmental databases and other computerized information tools

    NASA Technical Reports Server (NTRS)

    Clark-Ingram, Marceia

    1995-01-01

    Increasing environmental legislation has brought about the development of many new environmental databases and software application packages to aid in the quest for environmental compliance. These databases and software packages are useful tools and applicable to a wide range of environmental areas from atmospheric modeling to materials replacement technology. The great abundance of such products and services can be very overwhelming when trying to identify the tools which best meet specific needs. This paper will discuss the types of environmental databases and software packages available. This discussion will also encompass the affected environmental areas of concern, product capabilities, and hardware requirements for product utilization.

  20. A Chronostratigraphic Relational Database Ontology

    NASA Astrophysics Data System (ADS)

    Platon, E.; Gary, A.; Sikora, P.

    2005-12-01

    A chronostratigraphic research database was donated by British Petroleum to the Stratigraphy Group at the Energy and Geoscience Institute (EGI), University of Utah. These data consist of over 2,000 measured sections representing over three decades of research into the application of the graphic correlation method. The data are global and include both microfossil (foraminifera, calcareous nannoplankton, spores, pollen, dinoflagellate cysts, etc) and macrofossil data. The objective of the donation was to make the research data available to the public in order to encourage additional chronostratigraphy studies, specifically regarding graphic correlation. As part of the National Science Foundation's Cyberinfrastructure for the Geosciences (GEON) initiative these data have been made available to the public at http://css.egi.utah.edu. To encourage further research using the graphic correlation method, EGI has developed a software package, StrataPlot, that will soon be publicly available from the GEON website as a standalone software download. The EGI chronostratigraphy research database, although relatively large, has many data holes relative to some paleontological disciplines and geographical areas, so the challenge becomes how to expand the data available for chronostratigraphic studies using graphic correlation. There are several public or soon-to-be public databases available to chronostratigraphic research, but they have their own data structures and modes of presentation. The heterogeneous nature of these database schemas hinders their integration and makes it difficult for the user to retrieve and consolidate potentially valuable chronostratigraphic data. The integration of these data sources would facilitate rapid and comprehensive data searches, thus helping advance studies in chronostratigraphy. The GEON project will host a number of databases within the geology domain, some of which contain biostratigraphic data. Ontologies are being developed to provide

  1. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  2. Geochronology Database for Central Colorado

    USGS Publications Warehouse

    Klein, T.L.; Evans, K.V.; deWitt, E.H.

    2010-01-01

    This database is a compilation of published and some unpublished isotopic and fission track age determinations in central Colorado. The compiled area extends from the southern Wyoming border to the northern New Mexico border and from approximately the longitude of Denver on the east to Gunnison on the west. Data for the tephrochronology of Pleistocene volcanic ash, carbon-14, Pb-alpha, common-lead, and U-Pb determinations on uranium ore minerals have been excluded.

  3. The CATDAT damaging earthquakes database

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies from, and expand greatly upon existing global databases, and to better understand the trends in vulnerability, exposure, and possible future impacts of such historic earthquakes. In the view of the authors, the lack of consistency and the errors in other earthquake loss databases frequently cited and used in analyses were major shortcomings that needed to be improved upon. Over 17 000 sources of information have been utilised, primarily in the last few years, to present data from over 12 200 damaging earthquakes historically, with over 7000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, a slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. Comparison of the 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars) with the 2011 Tohoku (>300 billion USD at the time of writing), 2008 Sichuan and 1995 Kobe earthquakes shows the increasing concern for economic loss in urban areas, a trend that should be expected to increase. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to form comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.
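    The per-event fields listed above (seismological information, building damage, ranges of social losses, and direct/indirect/aid/insured economic losses) suggest a simple record structure. The dataclass below is a hypothetical sketch of such a record, not CATDAT's actual schema; apart from the 214 billion USD figure quoted in the record, the example values are illustrative.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple

@dataclass
class DamagingEarthquake:
    """Hypothetical record layout mirroring the field groups described in the record."""
    event_date: str
    magnitude: float
    deaths: Tuple[int, int]                 # (low, high) range to account for varying sources
    injuries: Tuple[int, int]
    homeless: Tuple[int, int]
    affected: Tuple[int, int]
    direct_loss_usd: Optional[float] = None
    indirect_loss_usd: Optional[float] = None
    aid_usd: Optional[float] = None
    insured_loss_usd: Optional[float] = None
    secondary_effects: Tuple[str, ...] = field(default_factory=tuple)  # tsunami, fire, ...

example = DamagingEarthquake(
    event_date="1923-09-01",            # Great Kanto, the event cited in the record
    magnitude=7.9,                      # approximate literature value, not from the record
    deaths=(100_000, 150_000),          # illustrative range only, not CATDAT values
    injuries=(0, 0), homeless=(0, 0), affected=(0, 0),   # placeholders
    direct_loss_usd=214e9,              # 2011 HNDECI-adjusted figure quoted above
    secondary_effects=("fire",))
print(example.direct_loss_usd)
```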

  4. ITS-90 Thermocouple Database

    National Institute of Standards and Technology Data Gateway

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).

  5. 2010 Worldwide Gasification Database

    DOE Data Explorer

    The 2010 Worldwide Gasification Database describes the current world gasification industry and identifies near-term planned capacity additions. The database lists gasification projects and includes information (e.g., plant location, number and type of gasifiers, syngas capacity, feedstock, and products). The database reveals that the worldwide gasification capacity has continued to grow for the past several decades and is now at 70,817 megawatts thermal (MWth) of syngas output at 144 operating plants with a total of 412 gasifiers.

  6. Online Bibliographic Searching in the Humanities Databases: An Introduction.

    ERIC Educational Resources Information Center

    Suresh, Raghini S.

    Numerous easily accessible databases cover almost every subject area in the humanities. The principal database resources in the humanities are described. There are two major database vendors for humanities information: BRS (Bibliographic Retrieval Services) and DIALOG Information Services, Inc. As an introduction to online searching, this article…

  7. Databases for Microbiologists

    DOE PAGES

    Zhulin, Igor B.

    2015-05-26

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.

  8. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  9. Databases for Microbiologists

    PubMed Central

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  10. Databases for LDEF results

    NASA Technical Reports Server (NTRS)

    Bohnhoff-Hlavacek, Gail

    1992-01-01

    One of the objectives of the team supporting the LDEF Systems and Materials Special Investigative Groups is to develop databases of experimental findings. These databases identify the hardware flown, summarize results and conclusions, and provide a system for acknowledging investigators, tracing sources of data, and future design suggestions. To date, databases covering the optical experiments and thermal control materials (chromic acid anodized aluminum, silverized Teflon blankets, and paints) have been developed at Boeing. We used the Filemaker Pro software, the database manager for the Macintosh computer produced by the Claris Corporation. It is a flat, text-retrievable database that provides access to the data via an intuitive user interface, without tedious programming. Though this software is available only for the Macintosh computer at this time, copies of the databases can be saved to a format that is readable on a personal computer as well. Further, the data can be exported to more powerful relational databases. This paper describes the development, capabilities, and use of the LDEF databases and describes how to get copies of the databases for your own research.

  11. Tank Characterization Database (TCD) Data Dictionary: Version 4.0

    SciTech Connect

    1996-04-01

    This document is the data dictionary for the tank characterization database (TCD) system and contains information on the data model and SYBASE{reg_sign} database structure. The first two parts of this document are subject areas based on the two different areas of the TCD database: sample analysis and waste inventory. Within each subject area is an alphabetical list of all the database tables contained in the subject area. Within each table definition is a brief description of the table and a list of field names and attributes. The third part, Field Descriptions, lists all field names in the database alphabetically.
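    The structure described above (subject area, then tables, then field names with attributes, plus an alphabetical field list in part three) is easy to mirror in a small script. The sketch below is a generic illustration of such a data dictionary; the table and field names are made up, not taken from the actual TCD schema.

```python
# Hypothetical data-dictionary entries mirroring the document's structure:
# subject area -> table -> (field, attributes). Names are illustrative only.
DATA_DICTIONARY = {
    "sample analysis": {
        "analysis_result": [("sample_id", "char(12), key"),
                            ("analyte", "varchar(40)"),
                            ("result_value", "float")],
    },
    "waste inventory": {
        "tank_inventory": [("tank_id", "char(8), key"),
                           ("waste_volume_kgal", "float")],
    },
}

def all_fields_alphabetical(dictionary):
    """Part three of the document: every field name in the database, alphabetically."""
    fields = {fname for tables in dictionary.values()
              for columns in tables.values()
              for fname, _attrs in columns}
    return sorted(fields)

print(all_fields_alphabetical(DATA_DICTIONARY))
```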

  12. Common hyperspectral image database design

    NASA Astrophysics Data System (ADS)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces the Common Hyperspectral Image Database, designed with a demand-oriented database design method (CHIDB), which brings together ground-based spectra, standardized hyperspectral cubes, and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; some data mining ideas and functions were incorporated into CHIDB to make it better suited to applications in the agricultural, geological and environmental areas. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: Data Importer/Exporter, Image/Spectrum Viewer, Data Processor, Parameter Extractor, and On-line Analyzer. The original data were all stored in SQL Server 2008 for efficient search, query and update, and some advanced spectral image data processing technologies are used, such as parallel processing in C#. Finally, an application case in the area of agricultural disease detection is presented.

  13. Database in Artificial Intelligence.

    ERIC Educational Resources Information Center

    Wilkinson, Julia

    1986-01-01

    Describes a specialist bibliographic database of literature in the field of artificial intelligence created by the Turing Institute (Glasgow, Scotland) using the BRS/Search information retrieval software. The subscription method for end-users--i.e., an annual fee entitles the user to unlimited access to the database, document provision, and printed awareness…

  14. BioImaging Database

    SciTech Connect

    David Nix, Lisa Simirenko

    2006-10-25

    The BioImaging Database (BID) is a relational database developed to store the data and meta-data for the 3D gene expression in early Drosophila embryo development on a cellular level. The schema was written to be used with the MySQL DBMS but with minor modifications can be used on any SQL compliant relational DBMS.

  15. Biological Macromolecule Crystallization Database

    National Institute of Standards and Technology Data Gateway

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  16. Online Database Searching Workbook.

    ERIC Educational Resources Information Center

    Littlejohn, Alice C.; Parker, Joan M.

    Designed primarily for use by first-time searchers, this workbook provides an overview of online searching. Following a brief introduction which defines online searching, databases, and database producers, five steps in carrying out a successful search are described: (1) identifying the main concepts of the search statement; (2) selecting a…

  17. Ionic Liquids Database- (ILThermo)

    National Institute of Standards and Technology Data Gateway

    SRD 147 Ionic Liquids Database- (ILThermo) (Web, free access)   IUPAC Ionic Liquids Database, ILThermo, is a free web research tool that allows users worldwide to access an up-to-date data collection from the publications on experimental investigations of thermodynamic, and transport properties of ionic liquids as well as binary and ternary mixtures containing ionic liquids.

  18. HIV Structural Database

    National Institute of Standards and Technology Data Gateway

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  19. Morchella MLST database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Welcome to the Morchella MLST database. This dedicated database was set up at the CBS-KNAW Biodiversity Center by Vincent Robert in February 2012, using BioloMICS software (Robert et al., 2011), to facilitate DNA sequence-based identifications of Morchella species via the Internet. The current datab...

  20. Atomic Spectra Database (ASD)

    National Institute of Standards and Technology Data Gateway

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  1. First Look: TRADEMARKSCAN Database.

    ERIC Educational Resources Information Center

    Fernald, Anne Conway; Davidson, Alan B.

    1984-01-01

    Describes database produced by Thomson and Thomson and available on Dialog which contains over 700,000 records representing all active federal trademark registrations and applications for registrations filed in United States Patent and Trademark Office. A typical record, special features, database applications, learning to use TRADEMARKSCAN, and…

  2. Dictionary as Database.

    ERIC Educational Resources Information Center

    Painter, Derrick

    1996-01-01

    Discussion of dictionaries as databases focuses on the digitizing of The Oxford English dictionary (OED) and the use of Standard Generalized Mark-Up Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)

  3. Structural Ceramics Database

    National Institute of Standards and Technology Data Gateway

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  4. Build Your Own Database.

    ERIC Educational Resources Information Center

    Jacso, Peter; Lancaster, F. W.

    This book is intended to help librarians and others to produce databases of better value and quality, especially if they have had little previous experience in database construction. Drawing upon almost 40 years of experience in the field of information retrieval, this book emphasizes basic principles and approaches rather than in-depth and…

  5. Knowledge Discovery in Databases.

    ERIC Educational Resources Information Center

    Norton, M. Jay

    1999-01-01

    Knowledge discovery in databases (KDD) revolves around the investigation and creation of knowledge, processes, algorithms, and mechanisms for retrieving knowledge from data collections. The article is an introductory overview of KDD. The rationale and environment of its development and applications are discussed. Issues related to database design…

  6. Database Searching by Managers.

    ERIC Educational Resources Information Center

    Arnold, Stephen E.

    Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…

  7. A Quality System Database

    NASA Technical Reports Server (NTRS)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  8. Assignment to database industry

    NASA Astrophysics Data System (ADS)

    Abe, Kohichiroh

    Various kinds of databases are considered an essential part of future large-scale systems. Information provision based solely on databases is also expected to grow as the market matures. This paper discusses how these circumstances have developed and how they are likely to evolve.

  9. Cascadia Tsunami Deposit Database

    USGS Publications Warehouse

    Peters, Robert; Jaffe, Bruce; Gelfenbaum, Guy; Peterson, Curt

    2003-01-01

    The Cascadia Tsunami Deposit Database contains data on the location and sedimentological properties of tsunami deposits found along the Cascadia margin. Data have been compiled from 52 studies, documenting 59 sites from northern California to Vancouver Island, British Columbia that contain known or potential tsunami deposits. Bibliographical references are provided for all sites included in the database. Cascadia tsunami deposits are usually seen as anomalous sand layers in coastal marsh or lake sediments. The studies cited in the database use numerous criteria based on sedimentary characteristics to distinguish tsunami deposits from sand layers deposited by other processes, such as river flooding and storm surges. Several studies cited in the database contain evidence for more than one tsunami at a site. Data categories include age, thickness, layering, grainsize, and other sedimentological characteristics of Cascadia tsunami deposits. The database documents the variability observed in tsunami deposits found along the Cascadia margin.

  10. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation and later upgrades for remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience from running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several of the mission-critical use cases for database replication between online and offline databases for the LHC experiments.

  11. The world bacterial biogeography and biodiversity through databases: a case study of NCBI Nucleotide Database and GBIF Database.

    PubMed

    Selama, Okba; James, Phillip; Nateche, Farida; Wellington, Elizabeth M H; Hacène, Hocine

    2013-01-01

    Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area has made to each database. The basis for the data analysis was the metadata provided by both databases, mainly the taxonomy and the geographic origin of isolation of each microorganism (record). These were obtained directly from GBIF through the online interface, while E-utilities and Python were used in combination with programmatic web service access to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights the imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global-scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.
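
    As an illustration of the programmatic access route described above, the sketch below uses Biopython's Entrez module (a wrapper around the NCBI E-utilities) to count nucleotide records matching a taxon and a geographic term. The email address and the search term are placeholders; the study's actual scripts and query strategy are not reproduced in the record.

      from Bio import Entrez  # Biopython wrapper around the NCBI E-utilities

      Entrez.email = "you@example.org"  # NCBI requires a contact address

      # Count bacterial nucleotide records mentioning a geographic term.
      # The search term is illustrative, not the one used in the study.
      handle = Entrez.esearch(db="nucleotide",
                              term="Bacteria[Organism] AND Algeria[All Fields]",
                              retmax=0)
      result = Entrez.read(handle)
      handle.close()
      print("Matching records:", result["Count"])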

  12. Comparison of Savannah River Site's meteorological databases

    SciTech Connect

    Weber, A.H.

    1993-07-01

    A five-year meteorological database from the 61-meter H-Area tower for the period 1987-1991 was compared to an earlier database for the period 1982-1986. The amount of invalid data in the newer 87-91 database was one third that of the earlier database, and the data recovery percentage for the last four years of the 87-91 database was well above 90%. Considerable effort was necessary to fill in missing data periods in the newer H-Area tower database. Therefore, additional databases prepared for the remaining SRS meteorological towers have had missing and erroneous data flagged, but not replaced. The F-Area tower's database was used for cross-comparison purposes because of its proximity to H Area. The primary purpose of this report is to compare the H-Tower databases for 82-86 and 87-91. Statistical methods enable probability statements to be made concerning the hypothesis of no difference between the distributions of the two time periods, assuming each database is a random sample from its respective distribution; this assumption is required for the statistical tests to be valid. A number of statistical comparisons can be made between the two data sets, even though the 82-86 database exists only as distributions of frequency and mean speed.
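
    The report relies on standard distribution-comparison tests. One plausible example of the kind of test it describes is a chi-square test of homogeneity on binned frequencies with SciPy; the wind-direction counts below are invented for illustration and are not the SRS H-Area data, nor necessarily the exact test used in the report.

      import numpy as np
      from scipy.stats import chi2_contingency

      # Hypothetical counts of hourly wind-direction observations in eight
      # sectors for two periods (NOT the SRS data).
      counts_82_86 = np.array([510, 420, 380, 450, 600, 700, 640, 500])
      counts_87_91 = np.array([480, 440, 400, 430, 620, 680, 660, 490])

      chi2, p, dof, expected = chi2_contingency(np.vstack([counts_82_86, counts_87_91]))
      print(f"chi2 = {chi2:.1f}, dof = {dof}, p-value = {p:.3f}")
      # A large p-value is consistent with the hypothesis of no difference
      # between the two periods' wind-direction distributions.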

  13. Hazard Analysis Database Report

    SciTech Connect

    GRAMS, W.H.

    2000-12-28

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: data from the results of the hazard evaluations, and (2) Hazard Topography Database: data from the system familiarization and hazard identification.

  14. Hazard Analysis Database Report

    SciTech Connect

    GAULT, G.W.

    1999-10-13

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for the Tank Waste Remediation System (TWRS) Final Safety Analysis Report (FSAR). The FSAR is part of the approved TWRS Authorization Basis (AB). This document describes, identifies, and defines the contents and structure of the TWRS FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The TWRS Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The database supports the preparation of Chapters 3, 4, and 5 of the TWRS FSAR and the USQ process and consists of two major, interrelated data sets: (1) Hazard Evaluation Database: data from the results of the hazard evaluations; and (2) Hazard Topography Database: data from the system familiarization and hazard identification.

  15. Database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.

    1991-01-01

    A propagation researcher or a systems engineer who intends to use the results of a propagation experiment is generally faced with various database tasks, such as selecting the computer software and hardware and writing the programs to pass the data through the models of interest. This task is repeated every time a new experiment is conducted or the same experiment is carried out at a different location, generating different data. Thus the users of these data have to spend a considerable portion of their time learning how to implement the computer hardware and software toward the desired end. This situation could be eased considerably if an easily accessible propagation database were created that contained all the accepted (standardized) propagation phenomena models approved by the propagation research community; the handling of data would also become easier for the user. Such a database can stimulate the growth of propagation research only if it is available to all researchers, so that the results of an experiment conducted by one researcher can be examined independently by another without different hardware and software being used. The database may be made flexible so that researchers need not be confined to its contents. Another way in which the database may help researchers is that they will not have to document the software and hardware tools used in their research, since the propagation research community will already know the database. The following sections show a possible database construction, as well as properties of the database for propagation research.

  16. The AMMA-SAT Database

    NASA Astrophysics Data System (ADS)

    Ramage, K.; Desbois, M.; Eymard, L.

    2004-12-01

    The African Monsoon Multidisciplinary Analysis (AMMA) project is a French initiative that aims at identifying and analysing in detail the multidisciplinary, multi-scale processes that lead to a better understanding of the physical mechanisms linked to the African Monsoon. The main components of the African Monsoon are atmospheric dynamics, the continental water cycle, atmospheric chemistry, and oceanic and continental surface conditions. Satellites contribute to various objectives of the project, both for process analysis and for large-scale, long-term studies: some series of satellites (METEOSAT, NOAA, ...) have been flown for more than 20 years, ensuring good-quality monitoring of some of the West African atmosphere and surface characteristics. Moreover, several recent missions and several planned projects will strongly improve and complement this survey. The AMMA project offers an opportunity to develop the exploitation of satellite data and to foster collaboration between specialist and non-specialist users. For this purpose, databases are being developed to collect all past and future satellite data related to the African Monsoon. It will then be possible to compare different types of data at different resolutions and to validate satellite data against in situ measurements or numerical simulations. The main goal of the AMMA-SAT database is to offer the AMMA scientific community easy access to satellite data. The database contains geophysical products estimated from operational or research algorithms and covering the different components of the AMMA project. Nevertheless, the choice has been made to group data by pertinent scales rather than by theme. To this end, five regions of interest were defined to extract the data: an area covering the tropical Atlantic and Africa for large-scale studies, an area covering West Africa for mesoscale studies, and three local areas surrounding sites of in situ observations. Within each of these regions satellite data are projected on

  17. International Comparisons Database

    National Institute of Standards and Technology Data Gateway

    International Comparisons Database (Web, free access)   The International Comparisons Database (ICDB) serves the U.S. and the Inter-American System of Metrology (SIM) with information based on Appendices B (International Comparisons), C (Calibration and Measurement Capabilities) and D (List of Participating Countries) of the Comité International des Poids et Mesures (CIPM) Mutual Recognition Arrangement (MRA). The official source of the data is the BIPM key comparison database. The ICDB provides access to results of comparisons of measurements and standards organized by the consultative committees of the CIPM and the Regional Metrology Organizations.

  18. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  19. JICST Factual Database

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuaki; Shimura, Kazuki; Monma, Yoshio; Sakamoto, Masao; Morishita, Hiroshi; Kanazawa, Kenji

    The Japan Information Center of Science and Technology (JICST) started the on-line service of the JICST/NRIM Materials Strength Database for Engineering Steels and Alloys (JICST ME) in March 1990. This database has been developed under joint research between JICST and the National Research Institute for Metals (NRIM). It provides material strength data (creep, fatigue, etc.) for engineering steels and alloys. Users can search and display data on-line, analyze the retrieved data statistically, and plot the results on a graphic display. The database system and the data in JICST ME are described.

  20. Hybrid Terrain Database

    NASA Technical Reports Server (NTRS)

    Arthur, Trey

    2006-01-01

    A prototype hybrid terrain database is being developed in conjunction with other databases and with hardware and software that constitute subsystems of aerospace cockpit display systems (known in the art as synthetic vision systems) that generate images to increase pilots' situation awareness and eliminate poor visibility as a cause of aviation accidents. The basic idea is to provide a clear view of the world around an aircraft by displaying computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information.

  1. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods of heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback for both adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
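
    To make the feedback-driven selectivity estimation concrete, here is a toy least-squares sketch of the idea: observed selectivities from past predicate executions are fitted with a low-degree polynomial that then serves as the refined estimator. The predicate values and selectivities are invented, and this is not the ADMS implementation.

      import numpy as np

      # Query feedback: for each executed predicate "attr <= v", the optimizer
      # records the value v and the observed selectivity (fraction of rows).
      feedback_values      = np.array([10, 25, 40, 60, 80, 95], dtype=float)
      observed_selectivity = np.array([0.05, 0.18, 0.35, 0.60, 0.82, 0.96])

      # Fit a low-degree polynomial to the cumulative-distribution-like curve.
      coeffs = np.polyfit(feedback_values, observed_selectivity, deg=2)
      estimate = np.poly1d(coeffs)

      # The refined estimator is then consulted when costing future query plans.
      print("Estimated selectivity of 'attr <= 50':",
            float(np.clip(estimate(50), 0.0, 1.0)))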

  2. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  3. Nuclear Science References Database

    SciTech Connect

    Pritychenko, B.; Běták, E.; Singh, B.; Totans, J.

    2014-06-15

    The Nuclear Science References (NSR) database together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr)

  4. Hawaii bibliographic database

    USGS Publications Warehouse

    Wright, T.L.; Takahashi, T.J.

    1998-01-01

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available to upload from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  5. Navigating public microarray databases.

    PubMed

    Penkett, Christopher J; Bähler, Jürg

    2004-01-01

    With the ever-escalating amount of data being produced by genome-wide microarray studies, it is of increasing importance that these data are captured in public databases so that researchers can use this information to complement and enhance their own studies. Many groups have set up databases of expression data, ranging from large repositories, which are designed to comprehensively capture all published data, through to more specialized databases. The public repositories, such as ArrayExpress at the European Bioinformatics Institute contain complete datasets in raw format in addition to processed data, whilst the specialist databases tend to provide downstream analysis of normalized data from more focused studies and data sources. Here we provide a guide to the use of these public microarray resources.

  6. Chemical Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 17 NIST Chemical Kinetics Database (Web, free access)   The NIST Chemical Kinetics Database includes essentially all reported kinetics results for thermal gas-phase chemical reactions. The database is designed to be searched for kinetics data based on the specific reactants involved, for reactions resulting in specified products, for all the reactions of a particular species, or for various combinations of these. In addition, the bibliography can be searched by author name or combination of names. The database contains in excess of 38,000 separate reaction records for over 11,700 distinct reactant pairs. These data have been abstracted from over 12,000 papers with literature coverage through early 2000.

  7. TREATABILITY DATABASE DESCRIPTION

    EPA Science Inventory

    The Drinking Water Treatability Database (TDB) presents referenced information on the control of contaminants in drinking water. It allows drinking water utilities, first responders to spills or emergencies, treatment process designers, research organizations, academics, regulato...

  8. THE CTEPP DATABASE

    EPA Science Inventory

    The CTEPP (Children's Total Exposure to Persistent Pesticides and Other Persistent Organic Pollutants) database contains a wealth of data on children's aggregate exposures to pollutants in their everyday surroundings. Chemical analysis data for the environmental media and ques...

  9. Requirements Management Database

    2009-08-13

    This application is a simplified and customized version of the RBA and CTS databases used to capture federal, site, and facility requirements and link them to the actions that must be performed to maintain compliance with contractual and other requirements.

  10. Steam Properties Database

    National Institute of Standards and Technology Data Gateway

    SRD 10 NIST/ASME Steam Properties Database (PC database for purchase)   Based upon the International Association for the Properties of Water and Steam (IAPWS) 1995 formulation for the thermodynamic properties of water and the most recent IAPWS formulations for transport and other properties, this updated version provides water properties over a wide range of conditions according to the accepted international standards.

  11. Database computing in HEP

    SciTech Connect

    Day, C.T.; Loken, S.; MacFarlane, J.F.; May, E.; Lifka, D.; Lusk, E.; Price, L.E.; Baden, A.; Grossman, R.; Qin, X.; Cormell, L.; Leibold, P.; Liu, D.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  12. Database computing in HEP

    NASA Technical Reports Server (NTRS)

    Day, C. T.; Loken, S.; Macfarlane, J. F.; May, E.; Lifka, D.; Lusk, E.; Price, L. E.; Baden, A.; Grossman, R.; Qin, X.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  13. Querying genomic databases

    SciTech Connect

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.

  14. Drinking Water Database

    NASA Technical Reports Server (NTRS)

    Murray, ShaTerea R.

    2004-01-01

    This summer I had the opportunity to work in the Environmental Management Office (EMO) under the Chemical Sampling and Analysis Team, or CS&AT. This team's mission is to support Glenn Research Center (GRC) and EMO by providing chemical sampling and analysis services and expert consulting. Services include sampling and chemical analysis of water, soil, fuels, oils, paint, insulation materials, etc. One of this team's major projects is the Drinking Water Project, which covers Glenn's water coolers and ten percent of its sinks every two years. For the past two summers an intern had been putting together a database for the team to record the tests they had performed. She had successfully created a database but hadn't worked out all the quirks. So this summer William Wilder (an intern from Cleveland State University) and I worked together to perfect her database. We began by finding out exactly what every member of the team thought about the database and what, if anything, they would change. After collecting this feedback, we both took courses in Microsoft Access in order to fix the problems. Next we looked at exactly how the database worked from the outside inward. Then we began trying to change the database, but we quickly found out that this would be virtually impossible.

  15. The Halophile protein database.

    PubMed

    Sharma, Naveen; Farooqi, Mohammad Samir; Chaturvedi, Krishna Kumar; Lal, Shashi Bhushan; Grover, Monendra; Rai, Anil; Pandey, Pankaj

    2014-01-01

    Halophilic archaea/bacteria adapt to different salt concentrations, namely extreme, moderate and low. These types of adaptations may occur as a result of modification of protein structure and other changes in different cell organelles. Thus proteins may play an important role in the adaptation of halophilic archaea/bacteria to saline conditions. The Halophile protein database (HProtDB) is a systematic attempt to document the biochemical and biophysical properties of proteins from halophilic archaea/bacteria which may be involved in the adaptation of these organisms to saline conditions. In this database, various physicochemical properties such as molecular weight, theoretical pI, amino acid composition, atomic composition, estimated half-life, instability index, aliphatic index and grand average of hydropathicity (GRAVY) have been listed. These physicochemical properties play an important role in identifying protein structure, bonding patterns and the function of specific proteins. The database is a comprehensive, manually curated, non-redundant catalogue of proteins and currently contains properties of 59,897 proteins extracted from 21 different strains of halophilic archaea/bacteria. Database URL: http://webapp.cabgrid.res.in/protein/
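
    The physicochemical properties listed above can be computed from a protein sequence with Biopython's ProtParam module; the short sketch below shows the kind of per-protein calculation such a catalogue implies. The sequence is a toy example, not an HProtDB entry, and the aliphatic index is computed here from the standard Ikai formula rather than taken from HProtDB.

      from Bio.SeqUtils.ProtParam import ProteinAnalysis

      # Toy sequence for illustration only.
      seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
      pa = ProteinAnalysis(seq)

      aa = pa.get_amino_acids_percent()  # residue fractions of the sequence
      aliphatic_index = 100 * (aa["A"] + 2.9 * aa["V"] + 3.9 * (aa["I"] + aa["L"]))

      print("Molecular weight :", round(pa.molecular_weight(), 1))
      print("Theoretical pI   :", round(pa.isoelectric_point(), 2))
      print("Instability index:", round(pa.instability_index(), 2))
      print("Aliphatic index  :", round(aliphatic_index, 2))
      print("GRAVY            :", round(pa.gravy(), 3))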

  16. Crude Oil Analysis Database

    DOE Data Explorer

    Shay, Johanna Y.

    The composition and physical properties of crude oil vary widely from one reservoir to another within an oil field, as well as from one field or region to another. Although all oils consist of hydrocarbons and their derivatives, the proportions of various types of compounds differ greatly. This makes some oils more suitable than others for specific refining processes and uses. To take advantage of this diversity, one needs access to information in a large database of crude oil analyses. The Crude Oil Analysis Database (COADB) currently satisfies this need by offering 9,056 crude oil analyses. Of these, 8,500 are United States domestic oils. The database contains results of analysis of the general properties and chemical composition, as well as the field, formation, and geographic location of the crude oil sample. [Taken from the Introduction to COAMDATA_DESC.pdf, part of the zipped software and database file at http://www.netl.doe.gov/technologies/oil-gas/Software/database.html] Save the zipped file to your PC. When opened, it will contain PDF documents and a large Excel spreadsheet. It will also contain the database in Microsoft Access 2002.
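
    Once the zipped distribution is downloaded and extracted, the Excel spreadsheet can be loaded for analysis with pandas. The filename and column names below are placeholders, since the record does not give the actual names inside the archive; check the extracted files before running.

      import pandas as pd

      # Placeholder filename and columns; replace with the actual names found
      # in the extracted COADB archive.
      oils = pd.read_excel("crude_oil_analyses.xlsx")

      # Example: summarize API gravity and sulfur content by producing state.
      summary = oils.groupby("State")[["API_gravity", "Sulfur_pct"]].describe()
      print(summary.head())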

  17. Open systems and databases

    SciTech Connect

    Martire, G.S.; Nuttall, D.J.H.

    1993-05-01

    This paper is part of a series of papers invited by the IEEE Power Control Center Working Group concerning the changing designs of modern control centers. Papers invited by the Working Group discuss the following issues: "Benefits of Openness, Criteria for Evaluating Open EMS Systems, Hardware Design, Configuration Management, Security, Project Management, Databases, SCADA, Inter- and Intra-System Communications, and Man-Machine Interfaces." The goal of this paper is to provide an introduction to the issues pertaining to "Open Systems and Databases." The intent is to assist understanding of some of the underlying factors that affect the choices that must be made when selecting a database system for use in a control room environment. This paper describes and compares the major database information models in common use for database systems and provides an overview of SQL. A case for the control center community to follow the workings of the non-formal standards bodies is presented, along with possible uses and benefits of commercially available databases within the control center. The reasons behind the emergence of industry-supported standards organizations such as the Open Software Foundation (OSF) and SQL Access are presented.
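
    Since the paper's overview of SQL is only summarized here, a minimal, self-contained illustration of the relational model and a declarative SQL query may help. SQLite via Python's standard library stands in for a production DBMS, and the table and column names are invented for a control-center-style measurement archive.

      import sqlite3

      # Invented example in the spirit of a control-center measurement archive.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE measurement (point_id TEXT, ts TEXT, value REAL)")
      conn.executemany("INSERT INTO measurement VALUES (?, ?, ?)",
                       [("BUS12.KV", "2024-01-01T00:00", 138.2),
                        ("BUS12.KV", "2024-01-01T00:05", 137.9),
                        ("LINE7.MW", "2024-01-01T00:00", 412.5)])

      # Declarative SQL: state what is wanted, not how to fetch it.
      for row in conn.execute(
              "SELECT point_id, AVG(value) FROM measurement GROUP BY point_id"):
          print(row)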

  18. The comprehensive peptaibiotics database.

    PubMed

    Stoppacher, Norbert; Neumann, Nora K N; Burgstaller, Lukas; Zeilinger, Susanne; Degenkolb, Thomas; Brückner, Hans; Schuhmacher, Rainer

    2013-05-01

    Peptaibiotics are nonribosomally biosynthesized peptides which, by definition, contain the marker amino acid α-aminoisobutyric acid (Aib) and possess antibiotic properties. Since the first reports in 1958, a constantly increasing number of peptaibiotics has been described and investigated, with a particular emphasis on hypocrealean fungi. Starting from the existing online 'Peptaibol Database', first published in 1997, an exhaustive literature survey of all known peptaibiotics was carried out and resulted in a list of 1043 peptaibiotics. The gathered information was compiled and used to create the new 'The Comprehensive Peptaibiotics Database', which is presented here. The database was devised as a software tool based on Microsoft (MS) Access. It is freely available from the internet at http://peptaibiotics-database.boku.ac.at and can easily be installed and operated on any computer offering a Windows XP/7 environment. It provides useful information on characteristic properties of the peptaibiotics included, such as peptide category, group name of the microheterogeneous mixture to which the peptide belongs, amino acid sequence, sequence length, producing fungus, peptide subfamily, molecular formula, and monoisotopic mass. All these characteristics can be used and combined for automated search within the database, which makes The Comprehensive Peptaibiotics Database a versatile tool for the retrieval of valuable information about peptaibiotics. Sequence data have been considered as of December 14, 2012. PMID:23681723

  19. Specialist Bibliographic Databases

    PubMed Central

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  20. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  1. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  2. Great Basin paleontological database

    USGS Publications Warehouse

    Zhang, N.; Blodgett, R.B.; Hofstra, A.H.

    2008-01-01

    The U.S. Geological Survey has constructed a paleontological database for the Great Basin physiographic province that can be served over the World Wide Web for data entry, queries, displays, and retrievals. It is similar to the web-database solution that we constructed for Alaskan paleontological data (www.alaskafossil.org). The first phase of this effort, a paleontological bibliography for Nevada and portions of adjacent states in the Great Basin, has recently been completed. In addition, we are also compiling paleontological reports (known as E&R reports) of the U.S. Geological Survey, which are another extensive source of legacy data for this region. Initial population of the database benefited from a recently published conodont data set and is otherwise focused on Devonian and Mississippian localities, because strata of this age host important sedimentary exhalative (sedex) Au, Zn, and barite resources and enormous Carlin-type Au deposits. In addition, these strata are the most important petroleum source rocks in the region, and record the transition from extension to contraction associated with the Antler orogeny, the Alamo meteorite impact, and biotic crises associated with global oceanic anoxic events. The finished product will provide an invaluable tool for future geologic mapping, paleontological research, and mineral resource investigations in the Great Basin, making paleontological data acquired over nearly the past 150 yr readily available over the World Wide Web. A description of the structure of the database and the web interface developed for this effort are provided herein. This database is being used as a model for a National Paleontological Database (which we are currently developing for the U.S. Geological Survey) as well as for other paleontological databases now being developed in other parts of the globe. © 2008 Geological Society of America.

  3. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  4. ADANS database specification

    SciTech Connect

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  5. Using the Reactome Database

    PubMed Central

    Haw, Robin

    2012-01-01

    There is considerable interest in the bioinformatics community in creating pathway databases. The Reactome project (a collaboration between the Ontario Institute for Cancer Research, Cold Spring Harbor Laboratory, New York University Medical Center and the European Bioinformatics Institute) is one such pathway database and collects structured information on all the biological pathways and processes in humans. It is an expert-authored and peer-reviewed, curated collection of well-documented molecular reactions that span the gamut from simple intermediate metabolism to signaling pathways and complex cellular events. This information is supplemented with likely orthologous molecular reactions in mouse, rat, zebrafish, worm and other model organisms. This unit describes how to use the Reactome database to learn the steps of a biological pathway; navigate and browse through the Reactome database; identify the pathways in which a molecule of interest is involved; use the Pathway and Expression analysis tools to search the database for, and visualize, possible connections between a user-supplied experimental data set and Reactome pathways; and use the Species Comparison tool to compare human and model organism pathways. PMID:22700314

  6. NASA Records Database

    NASA Technical Reports Server (NTRS)

    Callac, Christopher; Lunsford, Michelle

    2005-01-01

    The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is smart: it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user's boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.

  7. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James L.; Christiansen, Eric L.; Lear, Dana M.

    2011-01-01

    With three missions outstanding, the Shuttle Hypervelocity Impact Database has nearly 3000 entries. The data is divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. Details and insights on the contents of the database including examples of descriptive statistics will be provided. Post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will also be discussed. Potential enhancements to the database structure and availability of the data for other researchers will be addressed in the Future Work section. A related database of returned surfaces from the International Space Station will also be introduced.
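
    The abstract mentions descriptive statistics over the impact tables. A hedged sketch of that kind of summary with pandas is shown below; the column names ("surface", "mission", "damage_diameter_mm") and the few rows are assumptions for illustration, not the actual database fields or data.

      import pandas as pd

      # Hypothetical extract of the impact database; real field names differ.
      impacts = pd.DataFrame({
          "surface": ["window", "window", "radiator", "tps", "window"],
          "mission": ["STS-115", "STS-120", "STS-120", "STS-122", "STS-122"],
          "damage_diameter_mm": [0.6, 1.2, 3.4, 0.9, 0.4],
      })

      # Share of records per surface, and damage-size statistics per surface.
      print(impacts["surface"].value_counts(normalize=True))
      print(impacts.groupby("surface")["damage_diameter_mm"].describe())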

  8. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James I.; Christiansen, Eric I.; Lear, Dana M.

    2011-01-01

    With three flights remaining on the manifest, the shuttle hypervelocity impact database has over 2800 entries. The data is currently divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. The paper will provide details and insights on the contents of the database including examples of descriptive statistics using the impact data. A discussion of post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will be presented. Future work to be discussed will be possible enhancements to the database structure and availability of the data for other researchers. A related database of ISS returned surfaces that is under development will also be introduced.

  9. Computer Databases: A Survey; Part 1: General and News Databases.

    ERIC Educational Resources Information Center

    O'Leary, Mick

    1986-01-01

    Descriptions and evaluations of 13 databases devoted to computer information are presented by type under four headings: bibliographic databases; daily news services; online computer magazines; and specialized computer industry databases. Information on database producers, starting date of file, update frequency, vendors, and prices is summarized…

  10. VIEWCACHE: An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nick; Sellis, Timoleon

    1991-01-01

    The objective is to illustrate the concept of incremental access to distributed databases. An experimental database management system, ADMS, which has been developed at the University of Maryland, in College Park, uses VIEWCACHE, a database access method based on incremental search. VIEWCACHE is a pointer-based access method that provides a uniform interface for accessing distributed databases and catalogues. The compactness of the pointer structures formed during database browsing and the incremental access method allow the user to search and do inter-database cross-referencing with no actual data movement between database sites. Once the search is complete, the set of collected pointers pointing to the desired data are dereferenced.
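
    A toy illustration of the deferred-dereference idea described above follows: compact pointers are collected while browsing several sites, and the actual records are fetched only once the search is complete. This is an assumption-laden sketch of the general concept, not ADMS or VIEWCACHE code.

      # Each "site" is modelled as a dict of row_id -> record.
      site_a = {1: {"name": "alpha", "type": "catalog"},
                2: {"name": "beta",  "type": "image"}}
      site_b = {7: {"name": "gamma", "type": "catalog"}}

      def browse(site_name, site, predicate):
          """Browsing phase: return compact (site, row_id) pointers, moving no data."""
          return [(site_name, rid) for rid, rec in site.items() if predicate(rec)]

      # Incremental search across sites accumulates pointers only.
      pointers = browse("A", site_a, lambda r: r["type"] == "catalog")
      pointers += browse("B", site_b, lambda r: r["type"] == "catalog")

      # Dereference phase: only now are the selected records actually fetched.
      sites = {"A": site_a, "B": site_b}
      results = [sites[s][rid] for s, rid in pointers]
      print(results)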

  11. Patent Databases. . .A Survey of What Is Available from DIALOG, Questel, SDC, Pergamon and INPADOC.

    ERIC Educational Resources Information Center

    Kulp, Carol S.

    1984-01-01

    Presents survey of two groups of databases covering patent literature: patent literature only and general literature that includes patents relevant to subject area of database. Description of databases and comparison tables for patent and general databases (cost, country coverage, years covered, update frequency, file size, and searchable data…

  12. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1992-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134a, R-141b, R-142b, R-143a, R-152a, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses polyalkylene glycol (PAG), ester, and other lubricants. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits.

  13. Enhancing medical database semantics.

    PubMed Central

    Leão, B. de F.; Pavan, A.

    1995-01-01

    Medical databases deal with dynamic, heterogeneous and fuzzy data. Modeling such a complex domain demands powerful semantic data modeling methodologies. This paper describes GSM-Explorer, a CASE tool that allows for the creation of relational databases using semantic data modeling techniques. GSM-Explorer fully incorporates the Generic Semantic Data Model (GSM), enabling knowledge engineers to model the application domain with the abstraction mechanisms of generalization/specialization, association and aggregation. The tool generates a structure that implements persistent database objects through the automatic generation of customized ANSI SQL scripts that sustain the semantics defined at the higher level. This paper emphasizes the system architecture and the mapping of the semantic model into relational tables. The present status of the project and its further developments are discussed in the Conclusions. PMID:8563288
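
    One common way to map a generalization/specialization hierarchy onto relational tables, which is the kind of SQL script the paper says GSM-Explorer generates automatically, is a supertype table plus subtype tables sharing its key. The script below is a generic, hand-written illustration under that assumption, not GSM-Explorer output; SQLite stands in for the target DBMS and the table names are invented.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      -- Generalization: Person is the supertype.
      CREATE TABLE person (
          person_id INTEGER PRIMARY KEY,
          name      TEXT NOT NULL
      );
      -- Specialization: Patient shares the supertype key (IS-A relationship).
      CREATE TABLE patient (
          person_id  INTEGER PRIMARY KEY REFERENCES person(person_id),
          blood_type TEXT
      );
      -- Association/aggregation: an admission ties a patient to a ward.
      CREATE TABLE admission (
          admission_id INTEGER PRIMARY KEY,
          person_id    INTEGER REFERENCES patient(person_id),
          ward         TEXT,
          admitted_on  TEXT
      );
      """)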

  14. Protein Structure Databases.

    PubMed

    Laskowski, Roman A

    2016-01-01

    Web-based protein structure databases come in a wide variety of types and levels of information content. Those having the most general interest are the various atlases that describe each experimentally determined protein structure and provide useful links, analyses, and schematic diagrams relating to its 3D structure and biological function. Also of great interest are the databases that classify 3D structures by their folds, as these can reveal evolutionary relationships which may be hard to detect from sequence comparison alone. Related to these are the numerous servers that compare folds, which are particularly useful for newly solved structures, and especially those of unknown function. Beyond these are a vast number of databases for the more specialized user, dealing with specific families, diseases, structural features, and so on. PMID:27115626

  15. Mouse genome database 2016.

    PubMed

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data.

  16. Mouse genome database 2016

    PubMed Central

    Bult, Carol J.; Eppig, Janan T.; Blake, Judith A.; Kadin, James A.; Richardson, Joel E.

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data. PMID:26578600

  18. National Ambient Radiation Database

    SciTech Connect

    Dziuban, J.; Sears, R.

    2003-02-25

    The U.S. Environmental Protection Agency (EPA) recently developed a searchable database and website for the Environmental Radiation Ambient Monitoring System (ERAMS) data. This site contains nationwide radiation monitoring data for air particulates, precipitation, drinking water, surface water and pasteurized milk. This site provides location-specific as well as national information on environmental radioactivity across several media. It provides high quality data for assessing public exposure and environmental impacts resulting from nuclear emergencies and provides baseline data during routine conditions. The database and website are accessible at www.epa.gov/enviro/. This site contains (1) a query for the general public, which is easy to use and limits the amount of information provided but includes the ability to graph the data with risk benchmarks, (2) a query for a more technical user, which allows access to all of the data in the database, and (3) background information on ERAMS.

  19. The Neotoma Paleoecology Database

    NASA Astrophysics Data System (ADS)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community
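
    The web-service access pattern described above can be sketched generically. The endpoint and parameter names below are placeholders rather than the documented Neotoma API; the point is only that a client fetches current rows over HTTP instead of downloading the whole database.

        # Sketch of programmatic access to a paleoecology-style web service.
        # The base URL and query parameters are placeholders, not the actual
        # Neotoma endpoints; they illustrate the request/response pattern only.
        import json
        import urllib.parse
        import urllib.request

        BASE_URL = "https://example.org/neotoma-like-api/datasets"  # hypothetical

        def fetch_datasets(taxon, max_age):
            """Request datasets matching a taxon name and a maximum age (years BP)."""
            query = urllib.parse.urlencode({"taxon": taxon, "ageyoung": 0, "ageold": max_age})
            with urllib.request.urlopen(f"{BASE_URL}?{query}") as resp:
                return json.load(resp)   # rows reflect the database at request time

        if __name__ == "__main__":
            records = fetch_datasets("Picea", 21000)
            print(len(records), "datasets returned")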

  20. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
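
    The read-scaling idea behind Frontier-style caching can be illustrated with a toy read-through cache keyed on the query text; this is a sketch of the concept, not the Frontier implementation.

        # Toy read-through cache in front of a read-only SQL source. This
        # illustrates the read-scaling idea behind Frontier-style caching;
        # it is not the Frontier implementation.
        import sqlite3
        import time

        class CachingReader:
            def __init__(self, db_path, ttl_seconds=300):
                self._conn = sqlite3.connect(db_path)
                self._ttl = ttl_seconds
                self._cache = {}   # query text -> (timestamp, rows)

            def seed(self, sql):
                """Write path, used here only to populate the demo table."""
                self._conn.execute(sql)
                self._conn.commit()

            def query(self, sql):
                now = time.time()
                hit = self._cache.get(sql)
                if hit and now - hit[0] < self._ttl:
                    return hit[1]                       # served from cache
                rows = self._conn.execute(sql).fetchall()
                self._cache[sql] = (now, rows)          # cache the result set
                return rows

        if __name__ == "__main__":
            reader = CachingReader(":memory:")
            reader.seed("CREATE TABLE conditions (run INTEGER, value REAL)")
            reader.seed("INSERT INTO conditions VALUES (1, 3.14)")
            print(reader.query("SELECT * FROM conditions"))   # miss: hits the database
            print(reader.query("SELECT * FROM conditions"))   # hit: served from the cache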

  1. The Ribosomal Database Project.

    PubMed Central

    Maidak, B L; Larsen, N; McCaughey, M J; Overbeek, R; Olsen, G J; Fogel, K; Blandy, J; Woese, C R

    1994-01-01

    The Ribosomal Database Project (RDP) is a curated database that offers ribosome-related data, analysis services, and associated computer programs. The offerings include phylogenetically ordered alignments of ribosomal RNA (rRNA) sequences, derived phylogenetic trees, rRNA secondary structure diagrams, and various software for handling, analyzing and displaying alignments and trees. The data are available via anonymous ftp (rdp.life.uiuc.edu), electronic mail (server/rdp.life.uiuc.edu) and gopher (rdpgopher.life.uiuc.edu). The electronic mail server also provides ribosomal probe checking, approximate phylogenetic placement of user-submitted sequences, screening for chimeric nature of newly sequenced rRNAs, and automated alignment. PMID:7524021

  2. Database Management System

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc., a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  3. The Genopolis Microarray Database

    PubMed Central

    Splendiani, Andrea; Brandizi, Marco; Even, Gael; Beretta, Ottavio; Pavelka, Norman; Pelizzola, Mattia; Mayhaus, Manuel; Foti, Maria; Mauri, Giancarlo; Ricciardi-Castagnoli, Paola

    2007-01-01

    Background Gene expression databases are key resources for microarray data management and analysis and the importance of a proper annotation of their content is well understood. Public repositories as well as microarray database systems that can be implemented by single laboratories exist. However, there is not yet a tool that can easily support a collaborative environment where different users with different rights of access to data can interact to define a common highly coherent content. The scope of the Genopolis database is to provide a resource that allows different groups performing microarray experiments related to a common subject to create a common coherent knowledge base and to analyse it. The Genopolis database has been implemented as a dedicated system for the scientific community studying dendritic and macrophage cell functions and host-parasite interactions. Results The Genopolis Database system allows the community to build an object based MIAME compliant annotation of their experiments and to store images, raw and processed data from the Affymetrix GeneChip® platform. It supports dynamical definition of controlled vocabularies and provides automated and supervised steps to control the coherence of data and annotations. It allows a precise control of the visibility of the database content to different sub groups in the community and facilitates exports of its content to public repositories. It provides an interactive user interface for data analysis: this allows users to visualize data matrices based on functional lists and sample characterization, and to navigate to other data matrices defined by similarity of expression values as well as functional characterizations of genes involved. A collaborative environment is also provided for the definition and sharing of functional annotation by users. Conclusion The Genopolis Database supports a community in building a common coherent knowledge base and analysing it. This fills a gap between a local

  4. A Computational Chemistry Database for Semiconductor Processing

    NASA Technical Reports Server (NTRS)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere if codes did not come with a reliable database of chemical and physical properties of gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database and instead leave the task of finding whatever is needed to the user. While individual investigations of interesting chemical systems continue at universities, there has not been any large scale effort to create a database. In this presentation, we outline our efforts in this area. Our effort focuses on the following five areas: (1) thermal CVD reaction mechanisms and rate constants, (2) thermochemical properties, (3) transport properties, (4) electron-molecule collision cross sections, and (5) gas-surface interactions.
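
    A minimal sketch of how the rate-constant portion of such a database might be stored and evaluated is shown below, assuming the common modified Arrhenius form k(T) = A T^n exp(-Ea/RT); the reaction and parameter values are placeholders, not data from the database described above.

        # Minimal sketch of storing and evaluating CVD rate-constant entries
        # in the modified Arrhenius form k(T) = A * T**n * exp(-Ea / (R*T)).
        # The reaction and parameter values below are placeholders.
        import math

        R = 8.314  # gas constant, J/(mol*K)

        RATE_DB = {
            "SiH4 => SiH2 + H2": {"A": 3.1e9, "n": 1.7, "Ea": 2.3e5},  # illustrative only
        }

        def rate_constant(entry, temperature):
            """Evaluate the modified Arrhenius expression at the given temperature (K)."""
            return entry["A"] * temperature ** entry["n"] * math.exp(
                -entry["Ea"] / (R * temperature)
            )

        if __name__ == "__main__":
            k = rate_constant(RATE_DB["SiH4 => SiH2 + H2"], 900.0)
            print(f"k(900 K) = {k:.3e}")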

  5. Low-Budget Graphic Databases.

    ERIC Educational Resources Information Center

    Mahoney, Dan

    1994-01-01

    Explains the use of a standard text-based database program (i.e., dBase III) to run external programs that display graphic files during a database session, which reduces the costs normally encountered when preparing a computer to run a graphical database. An example is given of a simple database with two fields. (LRW)

  6. TREC Document Database: Disk 4

    National Institute of Standards and Technology Data Gateway

    NIST TREC Document Database: Disk 4 (PC database for purchase)   NIST TREC Document Databases (Special Database 22) are distributed for the development and testing of information retrieval (IR) systems and related natural language processing research. The document collections consist of the full text of various newspaper and newswire articles plus government proceedings.

  7. TREC Document Database: Disk 5

    National Institute of Standards and Technology Data Gateway

    NIST TREC Document Database: Disk 5 (PC database for purchase)   NIST TREC Document Databases (Special Database 23) are distributed for the development and testing of information retrieval (IR) systems and related natural language processing research. The document collections consist of the full text of various newspaper and newswire articles plus government proceedings.

  8. Triatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 117 Triatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 55 triatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  9. Hydrocarbon Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 115 Hydrocarbon Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  10. Diatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 114 Diatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 121 diatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty, and reference are given for each transition reported.

  11. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  12. The Ribosomal Database Project

    NASA Technical Reports Server (NTRS)

    Olsen, G. J.; Overbeek, R.; Larsen, N.; Marsh, T. L.; McCaughey, M. J.; Maciukenas, M. A.; Kuan, W. M.; Macke, T. J.; Xing, Y.; Woese, C. R.

    1992-01-01

    The Ribosomal Database Project (RDP) compiles ribosomal sequences and related data, and redistributes them in aligned and phylogenetically ordered form to its user community. It also offers various software packages for handling, analyzing and displaying sequences. In addition, the RDP offers (or will offer) certain analytic services. At present the project is in an intermediate stage of development.

  13. Weathering Database Technology

    ERIC Educational Resources Information Center

    Snyder, Robert

    2005-01-01

    Collecting weather data is a traditional part of a meteorology unit at the middle level. However, making connections between the data and weather conditions can be a challenge. One way to make these connections clearer is to enter the data into a database. This allows students to quickly compare different fields of data and recognize which…

  14. Patent Family Databases.

    ERIC Educational Resources Information Center

    Simmons, Edlyn S.

    1985-01-01

    Reports on retrieval of patent information online and includes definition of patent family, basic and equivalent patents, "parents and children" applications, designated states, patent family databases--International Patent Documentation Center, World Patents Index, APIPAT (American Petroleum Institute), CLAIMS (IFI/Plenum). A table noting country…

  15. LQTS gene LOVD database.

    PubMed

    Zhang, Tao; Moss, Arthur; Cong, Peikuan; Pan, Min; Chang, Bingxi; Zheng, Liangrong; Fang, Quan; Zareba, Wojciech; Robinson, Jennifer; Lin, Changsong; Li, Zhongxiang; Wei, Junfang; Zeng, Qiang; Qi, Ming

    2010-11-01

    The Long QT Syndrome (LQTS) is a group of genetically heterogeneous disorders that predisposes young individuals to ventricular arrhythmias and sudden death. LQTS is mainly caused by mutations in genes encoding subunits of cardiac ion channels (KCNQ1, KCNH2, SCN5A, KCNE1, and KCNE2). Many other genes involved in LQTS have been described recently (KCNJ2, AKAP9, ANK2, CACNA1C, SCNA4B, SNTA1, and CAV3). We created an online database (http://www.genomed.org/LOVD/introduction.html) that provides information on variants in LQTS-associated genes. As of February 2010, the database contains 1738 unique variants in 12 genes. A total of 950 variants are considered pathogenic, 265 are possibly pathogenic, 131 are unknown/unclassified, and 292 have no known pathogenicity. In addition to these mutations collected from published literature, we also submitted information on gene variants, including one possible novel pathogenic mutation in the KCNH2 splice site found in ten Chinese families with documented arrhythmias. The remote user is able to search the data and is encouraged to submit new mutations into the database. The LQTS database will become a powerful tool for both researchers and clinicians. PMID:20809527

  17. Databases and data mining

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Over the course of the past decade, the breadth of information that is made available through online resources for plant biology has increased astronomically, as have the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  18. Redis database administration tool

    SciTech Connect

    Martinez, J. J.

    2013-02-13

    MyRedis is a product of the Lorenz subproject under the ASC Scientific Data Management effort. MyRedis is a web based utility designed to allow easy administration of instances of Redis databases. It can be used to view and manipulate data as well as run commands directly against a variety of different Redis hosts.
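
    MyRedis itself is not documented beyond this description, but the kinds of operations it exposes (viewing and manipulating data, running commands against a host) can be sketched with the standard redis-py client; the host, port and key names below are placeholders.

        # Sketch of the operations a Redis administration utility typically
        # exposes, using the redis-py client. Host, port and key names are
        # placeholders; this is not MyRedis itself.
        import redis

        def inspect_host(host="localhost", port=6379):
            r = redis.Redis(host=host, port=port, decode_responses=True)
            # View basic server information.
            print("server version:", r.info("server").get("redis_version"))
            # View and manipulate data.
            r.set("demo:key", "value")
            print("demo:key =", r.get("demo:key"))
            # Run an arbitrary command directly against the host.
            print(r.execute_command("DBSIZE"), "keys in the current database")

        if __name__ == "__main__":
            inspect_host()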

  19. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Ber, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
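
    The profile-based anomaly detection approach surveyed here can be illustrated with a toy sketch that learns which tables a role normally touches and flags queries outside that profile; it is not the algorithm of any particular system described in the chapter.

        # Toy sketch of profile-based anomaly detection for database queries:
        # learn which tables each role normally touches during training, then
        # flag queries that reference tables outside that profile. Role and
        # table names are invented for illustration.
        from collections import defaultdict

        class RoleProfiler:
            def __init__(self):
                self._profiles = defaultdict(set)   # role -> tables seen in training

            def train(self, role, tables):
                self._profiles[role].update(tables)

            def is_anomalous(self, role, tables):
                # Anomalous if the query touches any table outside the learned profile.
                return not set(tables) <= self._profiles[role]

        if __name__ == "__main__":
            profiler = RoleProfiler()
            profiler.train("clerk", {"orders", "customers"})
            print(profiler.is_anomalous("clerk", {"orders"}))    # False: normal behaviour
            print(profiler.is_anomalous("clerk", {"payroll"}))   # True: insider-threat style deviation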

  18. NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL

    EPA Science Inventory

    Current mesoscale weather prediction and microscale dispersion models are limited in their ability to perform accurate assessments in urban areas. A project called the National Urban Database with Access Portal Tool (NUDAPT) is beginning to provide urban data and improve the para...

  1. Proteomics: Protein Identification Using Online Databases

    ERIC Educational Resources Information Center

    Eurich, Chris; Fields, Peter A.; Rice, Elizabeth

    2012-01-01

    Proteomics is an emerging area of systems biology that allows simultaneous study of thousands of proteins expressed in cells, tissues, or whole organisms. We have developed this activity to enable high school or college students to explore proteomic databases using mass spectrometry data files generated from yeast proteins in a college laboratory…

  2. Bibliographic Databases Outside of the United States.

    ERIC Educational Resources Information Center

    McGinn, Thomas P.; And Others

    1988-01-01

    Eight articles describe the development, content, and structure of databases outside of the United States. Features discussed include library involvement, authority control, shared cataloging services, union catalogs, thesauri, abstracts, and distribution methods. Countries and areas represented are Latin America, Australia, the United Kingdom,…

  3. First Look--The Biobusiness Database.

    ERIC Educational Resources Information Center

    Cunningham, Ann Marie

    1986-01-01

    Presents overview prepared by producer of database newly available in 1985 that covers six broad subject areas: genetic engineering and bioprocessing, pharmaceuticals, medical technology and instrumentation, agriculture, energy and environment, and food and beverages. Background, indexing, record format, use of BioBusiness, and 1986 enhancements…

  4. The AMMA database

    NASA Astrophysics Data System (ADS)

    Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim

    2010-05-01

    The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations. The outputs are processed in the same way as the satellite products. Before accessing the data, any user has to sign the AMMA data and publication policy. This chart only covers the use of data in the framework of scientific objectives and categorically excludes the redistribution of data to third parties and usage for commercial applications. Some collaboration between data producers and users, and mention of the AMMA project in any publication, is also required. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres using a unique web portal. This website is composed of different modules: - Registration: forms to register, and to read and sign the data use chart when a user visits for the first time; - Data access interface: a friendly tool for building a data extraction request by selecting various criteria like location, time, parameters... The request can
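
    Since the satellite products are distributed as CF-convention NetCDF files, a minimal reading sketch with the netCDF4 package is shown below; the file name and variable name are placeholders for whatever an extraction request returns.

        # Minimal sketch of reading a CF-convention NetCDF product such as the
        # remapped satellite fields described above. The file and variable
        # names are placeholders.
        from netCDF4 import Dataset

        def summarize(path, var_name):
            with Dataset(path) as nc:
                var = nc.variables[var_name]
                print(var_name, "dims:", var.dimensions, "shape:", var.shape)
                print("units:", getattr(var, "units", "unknown"))
                data = var[:]                  # array on a regular lat/lon grid
                print("mean value:", float(data.mean()))

        if __name__ == "__main__":
            summarize("amma_product.nc", "precipitation")   # placeholder names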

  5. JDD, Inc. Database

    NASA Technical Reports Server (NTRS)

    Miller, David A., Jr.

    2004-01-01

    JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide its clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). The company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance to all JDD, Inc. employees and handles all other issues (Environmental Protection Agency issues, workers compensation, safety and health training) involving job safety. My summer assignment was not considered "groundbreaking research" like the work many other summer interns have done in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using a Microsoft Excel program to classify and categorize data pertaining to numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted only of data (training field index, employees who were present at these training courses and who was absent) from the training certification courses. Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day to day operations and been adding the

  6. Tautomerism in large databases

    PubMed Central

    Sitzmann, Markus; Ihlenfeldt, Wolf-Dietrich

    2010-01-01

    We have used the Chemical Structure DataBase (CSDB) of the NCI CADD Group, an aggregated collection of over 150 small-molecule databases totaling 103.5 million structure records, to conduct tautomerism analyses on one of the largest currently existing sets of real (i.e. not computer-generated) compounds. This analysis was carried out using calculable chemical structure identifiers developed by the NCI CADD Group, based on hash codes available in the chemoinformatics toolkit CACTVS and a newly developed scoring scheme to define a canonical tautomer for any encountered structure. CACTVS’s tautomerism definition, a set of 21 transform rules expressed in SMIRKS line notation, was used, which takes a comprehensive stance as to the possible types of tautomeric interconversion included. Tautomerism was found to be possible for more than 2/3 of the unique structures in the CSDB. A total of 680 million tautomers were calculated from, and including, the original structure records. Tautomerism overlap within the same individual database (i.e. at least one other entry was present that was really only a different tautomeric representation of the same compound) was found at an average rate of 0.3% of the original structure records, with values as high as nearly 2% for some of the databases in CSDB. Projected onto the set of unique structures (by FICuS identifier), this still occurred in about 1.5% of the cases. Tautomeric overlap across all constituent databases in CSDB was found for nearly 10% of the records in the collection. PMID:20512400
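
    The canonical-tautomer step described above used CACTVS hash codes and a 21-rule transform set; a roughly equivalent step can be sketched with the open-source RDKit toolkit (a substitute for illustration, not the method used in the study). The input SMILES is an arbitrary example.

        # Sketch of picking a canonical tautomer for a structure record. The
        # study above used CACTVS hash codes and its own scoring scheme; this
        # sketch substitutes RDKit's tautomer enumerator to illustrate the
        # same normalization step.
        from rdkit import Chem
        from rdkit.Chem.MolStandardize import rdMolStandardize

        def canonical_tautomer(smiles):
            mol = Chem.MolFromSmiles(smiles)
            enumerator = rdMolStandardize.TautomerEnumerator()
            return Chem.MolToSmiles(enumerator.Canonicalize(mol))

        if __name__ == "__main__":
            # 2-hydroxypyridine / 2-pyridone is a classic tautomeric pair.
            print(canonical_tautomer("Oc1ccccn1"))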

  7. MEROPS: the peptidase database.

    PubMed

    Rawlings, N D; Barrett, A J

    1999-01-01

    The MEROPS database (http://www.bi.bbsrc.ac.uk/Merops/Merops.htm) provides a catalogue and structure-based classification of peptidases (i.e. all proteolytic enzymes). This is a large group of proteins (approximately 2% of all gene products) that is of particular importance in medicine and biotechnology. An index of the peptidases by name or synonym gives access to a set of files termed PepCards, each of which provides information on a single peptidase. Each card file contains information on classification and nomenclature, and hypertext links to the relevant entries in online databases for human genetics, protein and nucleic acid sequence data and tertiary structure. Another index provides access to the PepCards by organism name so that the user can retrieve all known peptidases from a particular species. The peptidases are classified into families on the basis of statistically significant similarities between the protein sequences in the part termed the 'peptidase unit' that is most directly responsible for activity. Families that are thought to have common evolutionary origins and are known or expected to have similar tertiary folds are grouped into clans. The MEROPS database provides sets of files called FamCards and ClanCards describing the individual families and clans. Each FamCard document provides links to other databases for sequence motifs and secondary and tertiary structures, and shows the distribution of the family across the major kingdoms of living creatures. Release 3.03 of MEROPS contains 758 peptidases, 153 families and 22 clans. We suggest that the MEROPS database provides a model for a way in which a system of classification for a functional group of proteins can be developed and used as an organizational framework around which to assemble a variety of related information.
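
    The family/clan organization described above can be pictured as a small hierarchical lookup; the entries below are a tiny illustrative subset with invented card text, not an extract of the MEROPS release.

        # Toy sketch of the family/clan hierarchy used to organize peptidases.
        # The entries are a tiny illustrative subset, not MEROPS data.
        FAMILIES = {
            "S1": {"clan": "PA", "example": "chymotrypsin"},
            "C1": {"clan": "CA", "example": "papain"},
            "M10": {"clan": "MA", "example": "matrix metallopeptidase-1"},
        }

        def fam_card(family):
            entry = FAMILIES[family]
            return f"Family {family} (clan {entry['clan']}): e.g. {entry['example']}"

        if __name__ == "__main__":
            for fam in FAMILIES:
                print(fam_card(fam))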

  8. The GLIMS Glacier Database

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), Map
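
    Because the map interface is OGC-compliant, any client can request a rendered glacier layer with a standard WMS GetMap query; the sketch below builds such a request, with a hypothetical server URL and layer name.

        # Sketch of an OGC WMS GetMap request such as a client might send to a
        # glacier web map server. The base URL and layer name are placeholders;
        # the parameter names are the standard WMS 1.1.1 ones.
        from urllib.parse import urlencode

        def getmap_url(base_url, layer, bbox, width=800, height=600):
            params = {
                "SERVICE": "WMS",
                "VERSION": "1.1.1",
                "REQUEST": "GetMap",
                "LAYERS": layer,
                "SRS": "EPSG:4326",
                "BBOX": ",".join(str(v) for v in bbox),   # minlon,minlat,maxlon,maxlat
                "WIDTH": width,
                "HEIGHT": height,
                "FORMAT": "image/png",
            }
            return f"{base_url}?{urlencode(params)}"

        if __name__ == "__main__":
            # Hypothetical server; prints the URL a mapping client would fetch.
            print(getmap_url("https://example.org/glims/wms", "glacier_outlines",
                             (86.5, 27.5, 87.5, 28.5)))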

  9. Open geochemical database

    NASA Astrophysics Data System (ADS)

    Zhilin, Denis; Ilyin, Vladimir; Bashev, Anton

    2010-05-01

    We regard "geochemical data" as data on chemical parameters of the environment, linked with the geographical position of the corresponding point. Boosting development of global positioning system (GPS) and measuring instruments allows fast collecting of huge amounts of geochemical data. Presently they are published in scientific journals in text format, that hampers searching for information about particular places and meta-analysis of the data, collected by different researchers. Part of the information is never published. To make the data available and easy to find, it seems reasonable to elaborate an open database of geochemical information, accessible via Internet. It also seems reasonable to link the data with maps or space images, for example, from GoogleEarth service. For this purpose an open geochemical database is being elaborating (http://maps.sch192.ru). Any user after registration can upload geochemical data (position, type of parameter and value of the parameter) and edit them. Every user (including unregistered) can (a) extract the values of parameters, fulfilling desired conditions and (b) see the points, linked to GoogleEarth space image, colored according to a value of selected parameter. Then he can treat extracted values any way he likes. There are the following data types in the database: authors, points, seasons and parameters. Author is a person, who publishes the data. Every author can declare his own profile. A point is characterized by its geographical position and type of the object (i.e. river, lake etc). Value of parameters are linked to a point, an author and a season, when they were obtained. A user can choose a parameter to place on GoogleEarth space image and a scale to color the points on the image according to the value of a parameter. Currently (December, 2009) the database is under construction, but several functions (uploading data on pH and electrical conductivity and placing colored points onto GoogleEarth space image) are

  10. NASA aerospace database subject scope: An overview

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Outlined here is the subject scope of the NASA Aerospace Database, a publicly available subset of the NASA Scientific and Technical (STI) Database. Topics of interest to NASA are outlined and placed within the framework of the following broad aerospace subject categories: aeronautics, astronautics, chemistry and materials, engineering, geosciences, life sciences, mathematical and computer sciences, physics, social sciences, space sciences, and general. A brief discussion of the subject scope is given for each broad area, followed by a similar explanation of each of the narrower subject fields that follow. The subject category code is listed for each entry.

  11. ITER solid breeder blanket materials database

    SciTech Connect

    Billone, M.C.; Dienst, W.; Flament, T.; Lorenzetto, P.; Noda, K.; Roux, N.

    1993-11-01

    The databases for solid breeder ceramics (Li{sub 2}O, Li{sub 4}SiO{sub 4}, Li{sub 2}ZrO{sub 3} and LiAlO{sub 2}) and beryllium multiplier material are critically reviewed and evaluated. Emphasis is placed on physical, thermal, mechanical, chemical stability/compatibility, tritium, and radiation stability properties which are needed to assess the performance of these materials in a fusion reactor environment. Correlations are selected for design analysis and compared to the database. Areas for future research and development in blanket materials technology are highlighted and prioritized.

  12. Comparison between satellite wildfire databases in Europe

    NASA Astrophysics Data System (ADS)

    Amraoui, Malik; Pereira, Mário; DaCamara, Carlos

    2013-04-01

    For Europe, several databases of wildfires based on satellite imagery are currently available and being used to conduct various studies and produce official reports. The European Forest Fire Information System (EFFIS) burned area perimeters database comprises fires with burnt area greater than 1.0 ha that occurred in European countries during the 2000-2011 period. The MODIS Burned Area Product (MCD45A1) is a monthly global Level 3 gridded 500 m product containing per-pixel burning, quality information, and tile-level metadata. The Burned Area Product was developed by the MODIS Fire Team at the University of Maryland and is available from April 2000 onwards. Finally, for Portugal the National Forest Authority (AFN) discloses the national mapping of burned areas for the years 1990 to 2011, based on Landsat imagery, which accounts for fires larger than 5.0 ha. This study's main objectives are: (i) to provide a comprehensive description of the datasets, their limitations and potential; (ii) to produce preliminary statistics on the data; and (iii) to compare the MODIS and EFFIS satellite wildfire databases across the entire European territory, based on indicators such as the spatial location of the burned areas and the extent of area burned annually, and to complement the analysis for Portugal with the inclusion of the AFN database. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Program through FUME (contract number 243888).
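
    One of the planned comparisons, annual burned-area totals across databases, can be sketched with pandas; the column names and sample figures below are assumptions about how such exports might look, not actual EFFIS or MODIS values.

        # Sketch of comparing annual burned-area totals from two fire databases.
        # File layout, column names and the sample figures are assumptions, not
        # the actual EFFIS/MODIS formats or values.
        import pandas as pd

        def annual_totals(df, year_col="year", area_col="burned_area_ha"):
            return df.groupby(year_col)[area_col].sum()

        if __name__ == "__main__":
            effis = pd.DataFrame({"year": [2007, 2007, 2008], "burned_area_ha": [120.0, 40.0, 15.0]})
            modis = pd.DataFrame({"year": [2007, 2008, 2008], "burned_area_ha": [150.0, 10.0, 12.0]})
            comparison = pd.concat(
                {"EFFIS": annual_totals(effis), "MODIS": annual_totals(modis)}, axis=1
            )
            print(comparison)   # side-by-side annual totals, one row per year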

  13. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  14. The apoptosis database.

    PubMed

    Doctor, K S; Reed, J C; Godzik, A; Bourne, P E

    2003-06-01

    The apoptosis database is a public resource for researchers and students interested in the molecular biology of apoptosis. The resource provides functional annotation, literature references, diagrams/images, and alternative nomenclatures on a set of proteins having 'apoptotic domains'. These are the distinctive domains that are often, if not exclusively, found in proteins involved in apoptosis. The initial choice of proteins to be included is defined by apoptosis experts and bioinformatics tools. Users can browse through the web accessible lists of domains, proteins containing these domains and their associated homologs. The database can also be searched by sequence homology using basic local alignment search tool, text word matches of the annotation, and identifiers for specific records. The resource is available at http://www.apoptosis-db.org and is updated on a regular basis.

  15. Medical Image Databases

    PubMed Central

    Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James

    1997-01-01

    Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338

  16. Real Time Baseball Database

    NASA Astrophysics Data System (ADS)

    Fukue, Yasuhiro

    The author describes the system outline, features and operations of the "Nikkan Sports Realtime Baseball Database", which was developed and operated by Nikkan Sports Shimbun, K.K. The system enables numerical data of professional baseball games to be input as the games proceed and updates the data in real time, just in time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and general users through NTT Dial Q2 and other services.

  17. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  18. Clinical Genomic Database

    PubMed Central

    Solomon, Benjamin D.; Nguyen, Anh-Dao; Bear, Kelly A.; Wolfsberg, Tyra G.

    2013-01-01

    Technological advances have greatly increased the availability of human genomic sequencing. However, the capacity to analyze genomic data in a clinically meaningful way lags behind the ability to generate such data. To help address this obstacle, we reviewed all conditions with genetic causes and constructed the Clinical Genomic Database (CGD) (http://research.nhgri.nih.gov/CGD/), a searchable, freely Web-accessible database of conditions based on the clinical utility of genetic diagnosis and the availability of specific medical interventions. The CGD currently includes a total of 2,616 genes organized clinically by affected organ systems and interventions (including preventive measures, disease surveillance, and medical or surgical interventions) that could be reasonably warranted by the identification of pathogenic mutations. To aid independent analysis and optimize new data incorporation, the CGD also includes all genetic conditions for which genetic knowledge may affect the selection of supportive care, informed medical decision-making, prognostic considerations, reproductive decisions, and allow avoidance of unnecessary testing, but for which specific interventions are not otherwise currently available. For each entry, the CGD includes the gene symbol, conditions, allelic conditions, clinical categorization (for both manifestations and interventions), mode of inheritance, affected age group, description of interventions/rationale, links to other complementary databases, including databases of variants and presumed pathogenic mutations, and links to PubMed references (>20,000). The CGD will be regularly maintained and updated to keep pace with scientific discovery. Further content-based expert opinions are actively solicited. Eventually, the CGD may assist the rapid curation of individual genomes as part of active medical care. PMID:23696674

  19. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  20. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  1. The Ribonuclease P Database.

    PubMed

    Brown, J W

    1999-01-01

    Ribonuclease P is responsible for the 5'-maturation of tRNA precursors. Ribonuclease P is a ribonucleoprotein, and in bacteria (and some Archaea) the RNA subunit alone is catalytically active in vitro, i.e. it is a ribozyme. The Ribonuclease P Database is a compilation of ribonuclease P sequences, sequence alignments, secondary structures, three-dimensional models and accessory information, available via the World Wide Web at the following URL: http://www.mbio.ncsu.edu/RNaseP/home.html

  2. Online Information. Selected Databases at the New York State Library.

    ERIC Educational Resources Information Center

    New York State Library, Albany. Database Services.

    This brochure describes the online information services at the New York State Library, which has online access to over 250 databases covering a broad range of subject areas, including current events, law, science, medicine, public affairs, grants, business, computer technology, education, social welfare, and humanities. Many of these databases are…

  3. Subject Retrieval from Full-Text Databases in the Humanities

    ERIC Educational Resources Information Center

    East, John W.

    2007-01-01

    This paper examines the problems involved in subject retrieval from full-text databases of secondary materials in the humanities. Ten such databases were studied and their search functionality evaluated, focusing on factors such as Boolean operators, document surrogates, limiting by subject area, proximity operators, phrase searching, wildcards,…

  4. CD-ROM Databases: A Survey of Commercial Publishing.

    ERIC Educational Resources Information Center

    Nicholls, Paul; Sutherland, Trish

    1992-01-01

    Discusses the results of a survey of commercial CD-ROM databases and compares them with previous annual surveys. Topics discussed include the rapid growth in number of available titles; types of databases; subject areas; hardware requirements; frequency of updates; prices; and CD-ROM's potential as a publishing medium. (seven references) (LRW)

  5. The Cambridge Structural Database.

    PubMed

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface.

  6. The Cambridge Structural Database

    PubMed Central

    Groom, Colin R.; Bruno, Ian J.; Lightfoot, Matthew P.; Ward, Suzanna C.

    2016-01-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal–organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719

  7. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M. (Great Falls, VA)

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  8. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-04-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates. Citations in this report are divided into the following topics: thermophysical properties; materials compatibility; lubricants and tribology; application data; safety; test and analysis methods; impacts; regulatory actions; substitute refrigerants; identification; absorption and adsorption; research programs; and miscellaneous documents. Information is also presented on ordering instructions for the computerized version.

  9. Curcumin Resource Database

    PubMed Central

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, Curcuma longa being its principal producer. Apart from this, a class of promising curcumin analogs has been generated in laboratories, aptly named curcuminoids, which are showing huge potential in the fields of medicine, food technology, etc. The lack of a universal source of data on curcumin as well as curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository to access all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents and 176 varieties of C. longa obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information that is hyperlinked with external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs. Database URL: http://www.crdb.in PMID:26220923

  11. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers, and those using alternative refrigerants, in making comparisons and determining differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  12. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants to assist manufacturers, and those using alternative refrigerants, in making comparisons and determining differences. The underlying purpose is to accelerate the phase-out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  13. Computer Databases: A Survey. Part 3: Product Databases.

    ERIC Educational Resources Information Center

    O'Leary, Mick

    1987-01-01

    Describes five online databases that focus on computer products, primarily software and microcomputing hardware, and compares the databases in terms of record content, product coverage, vertical market coverage, currency, availability, and price. Sample records and searches are provided, as well as a directory of product databases. (CLB)

  14. SmallSat Database

    NASA Technical Reports Server (NTRS)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has an unrealized potential in both the private industry and in the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then, smart phones have brought this imagery to people around the world as diverse industries watched the trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites due to the fact that multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability to enemy attacks. The popularity of smallSats shows a changing mentality in this fast-paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One is that the SmallSat Database is designed to be built-to-order. The SmallSat database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop as the research material can only be populated by hand to obtain the unique data

  15. A Case for Database Filesystems

    SciTech Connect

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.
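
    To make the described pattern concrete, the following is a minimal sketch of storing file data in a SecureFiles LOB alongside its scalar metadata from Python. It assumes an Oracle 11g-or-later instance and the cx_Oracle driver; the credentials, table name, and file path are placeholders and are not taken from the cited report.

        # Minimal sketch: storing unstructured file content in an Oracle SecureFiles
        # LOB next to scalar metadata. Connection details and names are placeholders.
        import cx_Oracle

        conn = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")   # placeholder credentials
        cur = conn.cursor()

        # LOB column stored as a SecureFile rather than a traditional (BasicFile) LOB.
        cur.execute("""
            CREATE TABLE level0_files (
                id      NUMBER PRIMARY KEY,
                name    VARCHAR2(256),
                content BLOB
            ) LOB (content) STORE AS SECUREFILE
        """)

        # Load an unstructured file into the database alongside its scalar metadata.
        with open("scan_0001.dat", "rb") as fh:        # placeholder file
            cur.execute(
                "INSERT INTO level0_files (id, name, content) VALUES (:1, :2, :3)",
                [1, "scan_0001.dat", fh.read()],
            )
        conn.commit()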

  16. High Temperature Superconducting Materials Database

    National Institute of Standards and Technology Data Gateway

    SRD 149 NIST High Temperature Superconducting Materials Database (Web, free access)   The NIST High Temperature Superconducting Materials Database (WebHTS) provides evaluated thermal, mechanical, and superconducting property data for oxides and other nonconventional superconductors.

  17. Dietary Supplement Label Database (DSLD)

    MedlinePlus

    ... The Dietary Supplement Label Database (DSLD) is a joint project of the National ... participants in the latest survey in the DSLD database (NHANES): The search options: Quick Search, Browse Dietary ...

  18. ThermoData Engine Database

    National Institute of Standards and Technology Data Gateway

    SRD 103 NIST ThermoData Engine Database (PC database for purchase)   ThermoData Engine is the first product fully implementing all major principles of the concept of dynamic data evaluation formulated at NIST/TRC.

  19. Hydrogen Leak Detection Sensor Database

    NASA Technical Reports Server (NTRS)

    Baker, Barton D.

    2010-01-01

    This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.

  20. Corruption of genomic databases with anomalous sequence.

    PubMed Central

    Lamperti, E D; Kittelberger, J M; Smith, T F; Villa-Komaroff, L

    1992-01-01

    We describe evidence that DNA sequences from vectors used for cloning and sequencing have been incorporated accidentally into eukaryotic entries in the GenBank database. These incorporations were not restricted to one type of vector or to a single mechanism. Many minor instances may have been the result of simple editing errors, but some entries contained large blocks of vector sequence that had been incorporated by contamination or other accidents during cloning. Some cases involved unusual rearrangements and areas of vector distant from the normal insertion sites. Matches to vector were found in 0.23% of 20,000 sequences analyzed in GenBank Release 63. Although the possibility of anomalous sequence incorporation has been recognized since the inception of GenBank and should be easy to avoid, recent evidence suggests that this problem is increasing more quickly than the database itself. The presence of anomalous sequence may have serious consequences for the interpretation and use of database entries, and will have an impact on issues of database management. The incorporated vector fragments described here may also be useful for a crude estimate of the fidelity of sequence information in the database. In alignments with well-defined ends, the matching sequences showed 96.8% identity to vector; when poorer matches with arbitrary limits were included, the aggregate identity to vector sequence was 94.8%. PMID:1614861
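
    As a simplified illustration of the kind of screen implied above, the sketch below flags database entries that share long exact substrings with a known cloning vector. Real screening relies on alignment-based tools; the k-mer length, threshold behaviour, and example sequences here are arbitrary assumptions, not the authors' method.

        # Toy contamination screen: flag sequences sharing long exact k-mers with a vector.
        # This is a simplified stand-in for alignment-based vector screening.

        def kmers(seq, k):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def flag_vector_contamination(entries, vector_seq, k=30):
            """Return IDs of entries that contain any exact k-mer from the vector."""
            vector_kmers = kmers(vector_seq.upper(), k)
            flagged = []
            for entry_id, seq in entries.items():
                if kmers(seq.upper(), k) & vector_kmers:
                    flagged.append(entry_id)
            return flagged

        # Hypothetical example data (not real GenBank entries).
        entries = {"ENTRY1": "ATGC" * 40, "ENTRY2": "GGGTTT" * 30}
        vector = "ATGC" * 20
        print(flag_vector_contamination(entries, vector, k=30))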

  1. GMDD: a database of GMO detection methods

    PubMed Central

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-01-01

    Background Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information supporting the harmonization and standardization of GMO analysis methods at the global level is needed. Results The GMO Detection Method Database (GMDD) has collected almost all previously developed and reported GMO detection methods, which are grouped by strategy (screen-, gene-, construct-, and event-specific), and also provides a user-friendly search service for the detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integration, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included in this database. Furthermore, registered users can also submit new detection methods and sequences to this database, and the newly submitted information will be released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier. PMID:18522755

  2. Corruption of genomic databases with anomalous sequence.

    PubMed

    Lamperti, E D; Kittelberger, J M; Smith, T F; Villa-Komaroff, L

    1992-06-11

    We describe evidence that DNA sequences from vectors used for cloning and sequencing have been incorporated accidentally into eukaryotic entries in the GenBank database. These incorporations were not restricted to one type of vector or to a single mechanism. Many minor instances may have been the result of simple editing errors, but some entries contained large blocks of vector sequence that had been incorporated by contamination or other accidents during cloning. Some cases involved unusual rearrangements and areas of vector distant from the normal insertion sites. Matches to vector were found in 0.23% of 20,000 sequences analyzed in GenBank Release 63. Although the possibility of anomalous sequence incorporation has been recognized since the inception of GenBank and should be easy to avoid, recent evidence suggests that this problem is increasing more quickly than the database itself. The presence of anomalous sequence may have serious consequences for the interpretation and use of database entries, and will have an impact on issues of database management. The incorporated vector fragments described here may also be useful for a crude estimate of the fidelity of sequence information in the database. In alignments with well-defined ends, the matching sequences showed 96.8% identity to vector; when poorer matches with arbitrary limits were included, the aggregate identity to vector sequence was 94.8%.

  3. Scientific and Technical Document Database

    National Institute of Standards and Technology Data Gateway

    NIST Scientific and Technical Document Database (PC database for purchase)   The images in NIST Special Database 20 contain a very rich set of graphic elements from scientific and technical documents, such as graphs, tables, equations, two column text, maps, pictures, footnotes, annotations, and arrays of such elements.

  4. Microbial Properties Database Editor Tutorial

    EPA Science Inventory

    A Microbial Properties Database Editor (MPDBE) has been developed to help consolidate microbial-relevant data to populate a microbial database and support a database editor by which an authorized user can modify physico-microbial properties related to microbial indicators and pat...

  5. A Forest Vegetation Database for Western Oregon

    USGS Publications Warehouse

    Busing, Richard T.

    2004-01-01

    Data on forest vegetation in western Oregon were assembled for 2323 ecological survey plots. All data were from fixed-radius plots with the standardized design of the Current Vegetation Survey (CVS) initiated in the early 1990s. For each site, the database includes: 1) live tree density and basal area of common tree species; 2) total live tree density, basal area, estimated biomass, and estimated leaf area; 3) age of the oldest overstory tree examined; 4) geographic coordinates; 5) elevation; 6) interpolated climate variables; and 7) other site variables. The data are ideal for ecoregional analyses of existing vegetation.
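
    For context on the stand metrics listed above, the sketch below derives per-hectare tree density and basal area from diameter measurements on a single fixed-radius plot. The plot radius and diameters are made-up illustrative values, not CVS data; the per-tree basal-area formula (pi times the squared radius, with DBH converted from centimetres to metres) is standard forestry practice.

        import math

        def plot_summary(dbh_cm, plot_radius_m):
            """Per-hectare density and basal area from one fixed-radius plot.

            dbh_cm        -- list of tree diameters at breast height, in centimetres
            plot_radius_m -- plot radius in metres
            """
            plot_area_ha = math.pi * plot_radius_m ** 2 / 10_000.0            # m^2 -> ha
            basal_area_m2 = sum(math.pi * (d / 200.0) ** 2 for d in dbh_cm)   # cm diameter -> m radius
            return {
                "density_per_ha": len(dbh_cm) / plot_area_ha,
                "basal_area_m2_per_ha": basal_area_m2 / plot_area_ha,
            }

        # Hypothetical plot: 15.6 m radius, five trees.
        print(plot_summary([42.0, 55.3, 18.9, 70.1, 33.4], plot_radius_m=15.6))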

  6. EMU Lessons Learned Database

    NASA Technical Reports Server (NTRS)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool that will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs) which have information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but a reader unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  7. DOE technology information management system database study report

    SciTech Connect

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  8. High-integrity databases for helicopter operations

    NASA Astrophysics Data System (ADS)

    Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg

    2009-05-01

    Helicopter Emergency Medical Service missions (HEMS) impose a high workload on pilots due to short preparation time, operations in low level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, and the Universities of Darmstadt and Munich, funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system however are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high-quality databases.

  9. Databases as an information service

    NASA Technical Reports Server (NTRS)

    Vincent, D. A.

    1983-01-01

    The relationship of databases to information services, and the range of information services users and their needs for information, are explored and discussed. It is argued that for database information to be valuable to a broad range of users, it is essential that access methods be provided that are relatively unstructured and natural to information services users who are interested in the information contained in databases, but who are not willing to learn and use traditional structured query languages. Unless this ease of use of databases is considered in the design and application process, the potential benefits from using database systems may not be realized.

  10. Construction of file database management

    SciTech Connect

    MERRILL,KYLE J.

    2000-03-01

    This work created a database for tracking data analysis files from multiple lab techniques and equipment stored on a central file server. Experimental details appropriate for each file type are pulled from the file header and stored in a searchable database. The database also stores the specific location and sub-directory structure for each data file. Queries can be run on the database according to file type, sample type or other experimental parameters. The database was constructed in Microsoft Access and Visual Basic was used for extraction of information from the file header.
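
    The workflow described above (pull details from file headers into a searchable database) can be sketched roughly as follows. The original system used Microsoft Access with Visual Basic; this stand-in uses Python with sqlite3, and the "header" parsing is a placeholder that simply keeps the first line of each file.

        # Sketch of a searchable file-tracking database: walk a data directory,
        # record each file's location plus a crude "header" field, store in SQLite.
        # The real system parsed instrument-specific headers; this placeholder
        # just reads the first line of each file.
        import os
        import sqlite3

        def build_index(root_dir, db_path="file_index.db"):
            conn = sqlite3.connect(db_path)
            conn.execute("""CREATE TABLE IF NOT EXISTS data_files (
                                path TEXT PRIMARY KEY,
                                file_type TEXT,
                                header_line TEXT)""")
            for dirpath, _, filenames in os.walk(root_dir):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    ext = os.path.splitext(name)[1].lstrip(".").lower()
                    try:
                        with open(path, "r", errors="replace") as fh:
                            header = fh.readline().strip()
                    except OSError:
                        header = ""
                    conn.execute("INSERT OR REPLACE INTO data_files VALUES (?, ?, ?)",
                                 (path, ext, header))
            conn.commit()
            return conn

        # Example query by file type (hypothetical paths):
        # conn = build_index("/data/lab_server")
        # conn.execute("SELECT path FROM data_files WHERE file_type = 'csv'").fetchall()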

  11. Database-assisted promoter analysis.

    PubMed

    Hehl, R; Wingender, E

    2001-06-01

    The analysis of regulatory sequences is greatly facilitated by database-assisted bioinformatic approaches. The TRANSFAC database contains information on transcription factors and their origins, functional properties and sequence-specific binding activities. Software tools enable us to screen the database with a given DNA sequence for interacting transcription factors. If a regulatory function is already attributed to this sequence then the database-assisted identification of binding sites for proteins or protein classes and subsequent experimental verification might establish functionally relevant sites within this sequence. The binding transcription factors and interacting factors might already be present in the database.
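
    As a toy counterpart to the database-assisted screening described above, the sketch below scans a promoter sequence for a handful of textbook consensus elements. The motif dictionary and the example sequence are illustrative assumptions only; TRANSFAC itself uses curated binding-site data and matrix-based matching rather than exact consensus strings.

        # Toy promoter scan: report positions of a few well-known consensus elements.
        # Real tools use weight matrices from databases such as TRANSFAC; the motifs
        # below are simplified textbook consensus strings, for illustration only.
        CONSENSUS = {
            "TATA box": "TATAAA",
            "CAAT box": "CCAAT",
            "GC box (Sp1)": "GGGCGG",
        }

        def scan_promoter(seq):
            seq = seq.upper()
            hits = []
            for name, motif in CONSENSUS.items():
                start = seq.find(motif)
                while start != -1:
                    hits.append((name, start))
                    start = seq.find(motif, start + 1)
            return sorted(hits, key=lambda h: h[1])

        promoter = "GCGGGGCGGTTCCAATCGTATAAAAGGCTC"   # hypothetical sequence
        for name, pos in scan_promoter(promoter):
            print(f"{name} at position {pos}")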

  12. Asbestos Exposure Assessment Database

    NASA Technical Reports Server (NTRS)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phased Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.
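
    A minimal sketch of how the fields listed above could be organized, assuming a SQLite backing store; the table and column names are inferred from the abstract and are not the actual JSC database design.

        # Hypothetical schema for the exposure-assessment records described above,
        # using SQLite; column names mirror the fields listed in the abstract.
        import sqlite3

        conn = sqlite3.connect("asbestos_exposure.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS exposure_samples (
                sample_id           INTEGER PRIMARY KEY,
                sample_type         TEXT,   -- e.g. personal or area sample
                sample_duration_min REAL,
                craft               TEXT,   -- craft of the worker sampled
                jpr_number          TEXT,   -- Job Performance Requirement
                pcm_result          REAL,   -- Phased Contrast Microscopy
                pcm_qualifier       TEXT,
                tem_result          REAL,   -- Transmission Electron Microscopy
                tem_qualifier       TEXT,
                ppe                 TEXT,   -- Personal Protective Equipment used
                hygienist           TEXT    -- industrial hygienist who monitored
            )
        """)
        conn.commit()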

  13. Curcumin Resource Database.

    PubMed

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, with Curcuma longa being its principal producer. Apart from this, a class of promising curcumin analogs, aptly named curcuminoids, has been generated in laboratories and is showing huge potential in the fields of medicine, food technology, etc. The lack of a universal source of data on curcumin as well as curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository for all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents and 176 varieties of C. longa obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information that is hyperlinked with external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs.

  14. Curcumin Resource Database.

    PubMed

    Kumar, Anil; Chetia, Hasnahana; Sharma, Swagata; Kabiraj, Debajyoti; Talukdar, Narayan Chandra; Bora, Utpal

    2015-01-01

    Curcumin is one of the most intensively studied diarylheptanoids, with Curcuma longa being its principal producer. Apart from this, a class of promising curcumin analogs, aptly named curcuminoids, has been generated in laboratories and is showing huge potential in the fields of medicine, food technology, etc. The lack of a universal source of data on curcumin as well as curcuminoids has long been felt by the curcumin research community. Hence, in an attempt to address this stumbling block, we have developed the Curcumin Resource Database (CRDB), which aims to serve as a gateway-cum-repository for all relevant data and related information on curcumin and its analogs. Currently, this database encompasses 1186 curcumin analogs, 195 molecular targets, 9075 peer-reviewed publications, 489 patents and 176 varieties of C. longa obtained by extensive data mining and careful curation from numerous sources. Each data entry is identified by a unique CRDB ID (identifier). Furnished with a user-friendly web interface and an in-built search engine, CRDB provides well-curated and cross-referenced information that is hyperlinked with external sources. CRDB is expected to be highly useful to researchers working on structure- as well as ligand-based molecular design of curcumin analogs. PMID:26220923

  15. National Geochronological Database

    USGS Publications Warehouse

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic

  16. Entrepreneurship Program Database.

    ERIC Educational Resources Information Center

    Ashmore, M. Catherine; Guzman, Geannina

    This publication contains a synthesis of information collected by the National Entrepreneurship Education Consortium on the efforts of local vocational education programs in the area of entrepreneurship education. The programs described represent all instructional levels and all areas of the country. A directory of programs listed by state is…

  17. Instruction manual for the Wahoo computerized database

    SciTech Connect

    Lasota, D.; Watts, K.

    1995-05-01

    As part of our research on the Lisburne Group, we have developed a powerful relational computerized database to accommodate the huge amounts of data generated by our multi-disciplinary research project. The Wahoo database has data files on petrographic data, conodont analyses, locality and sample data, well logs and diagenetic (cement) studies. Chapter 5 is essentially an instruction manual that summarizes some of the unique attributes and operating procedures of the Wahoo database. The main purpose of a database is to allow users to manipulate their data and produce reports and graphs for presentation. We present a variety of data tables in appendices at the end of this report, each encapsulating a small part of the data contained in the Wahoo database. All the data are sorted and listed by map index number and stratigraphic position (depth). The Locality data table (Appendix A) lists the stratigraphic sections examined in our study. It gives names of study areas, stratigraphic units studied, locality information, and researchers. Most localities are keyed to a geologic map that shows the distribution of the Lisburne Group and location of our sections in ANWR. Petrographic reports (Appendix B) are detailed summaries of data on the composition and texture of the Lisburne Group carbonates. The relative abundance of different carbonate grains (allochems) and carbonate texture are listed using symbols that portray data in a format similar to stratigraphic columns. This enables researchers to recognize trends in the evolution of the Lisburne carbonate platform and to check their paleoenvironmental interpretations in a stratigraphic context. Some of the figures in Chapter 1 were made using the Wahoo database.

  18. The CHIANTI atomic database

    NASA Astrophysics Data System (ADS)

    Young, P. R.; Dere, K. P.; Landi, E.; Del Zanna, G.; Mason, H. E.

    2016-04-01

    The freely available CHIANTI atomic database was first released in 1996 and has had a huge impact on the analysis and modeling of emissions from astrophysical plasmas. It contains data and software for modeling optically thin atom and positive ion emission from low-density (≲10^13 cm^-3) plasmas from x-ray to infrared wavelengths. A key feature is that the data are assessed and regularly updated, with version 8 released in 2015. Atomic data for modeling the emissivities of 246 ions and neutrals are contained in CHIANTI, together with data for deriving the ionization fractions of all elements up to zinc. The different types of atomic data are summarized here and their formats discussed. Statistics on the impact of CHIANTI to the astrophysical community are given and examples of the diverse range of applications are presented.

  19. Ribosomal Database Project II

    DOE Data Explorer

    The Ribosomal Database Project (RDP) provides ribosome related data and services to the scientific community, including online data analysis and aligned and annotated Bacterial small-subunit 16S rRNA sequences. As of March 2008, RDP Release 10 is available and currently (August 2009) contains 1,074,075 aligned 16S rRNA sequences. Data that can be downloaded include zipped GenBank and FASTA alignment files, a histogram (in Excel) of the number of RDP sequences spanning each base position, data in the Functional Gene Pipeline Repository, and various user submitted data. The RDP-II website also provides numerous analysis tools. [From the RDP-II home page at http://rdp.cme.msu.edu/index.jsp]

  20. View generated database

    NASA Technical Reports Server (NTRS)

    Downward, James G.

    1992-01-01

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

  1. Oracle Database DBFS Hierarchical Storage Overview

    SciTech Connect

    Rivenes, A

    2011-07-25

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory creates large numbers of images during each shot cycle for the analysis of optics, target inspection and target diagnostics. These images must be readily accessible once they are created and available for the 30 year lifetime of the facility. The Livermore Computing Center (LC) runs a High Performance Storage System (HPSS) that is capable of storing NIF's estimated 1 petabyte of diagnostic images at a fraction of what it would cost NIF to operate its own automated tape library. With the Oracle 11g Release 2 database, it is now possible to create an application-transparent, hierarchical storage system using the LC's HPSS. Using the Oracle DBMS_LOB and DBMS_DBFS_HS packages, a SecureFile LOB can now be archived to storage outside of the database and accessed seamlessly through a DBFS 'link'. NIF has chosen to use this technology to implement a hierarchical store for its image-based SecureFile LOBs. Using a modified external store and DBFS links, files are written to and read from a disk 'staging area' using Oracle's backup utility. Database external procedure calls invoke OS-based scripts to manage a staging area and the transfer of the backup files between the staging area and the Lab's HPSS.

  2. Inorganic Crystal Structure Database (ICSD)

    National Institute of Standards and Technology Data Gateway

    SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe(FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  3. Relativistic quantum private database queries

    NASA Astrophysics Data System (ADS)

    Sun, Si-Jia; Yang, Yu-Guang; Zhang, Ming-Ou

    2015-04-01

    Recently, Jakobi et al. (Phys Rev A 83, 022301, 2011) suggested the first practical private database query protocol (J-protocol) based on the Scarani et al. (Phys Rev Lett 92, 057901, 2004) quantum key distribution protocol. Unfortunately, the J-protocol is just a cheat-sensitive private database query protocol. In this paper, we present an idealized relativistic quantum private database query protocol based on Minkowski causality and the properties of quantum information. Also, we prove that the protocol is secure in terms of the user security and the database security.

  4. BGDB: a database of bivalent genes.

    PubMed

    Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua

    2013-01-01

    A bivalent gene is a gene marked with both the H3K4me3 and H3K27me3 epigenetic modifications in the same region, and is proposed to play a pivotal role related to pluripotency in embryonic stem (ES) cells. Identification of these bivalent genes and understanding their functions are important for further research of lineage specification and embryo development. So far, a large amount of genome-wide histone modification data has been generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repositories or analysis tools for bivalent genes are currently available. In this work, we develop BGDB, the database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, which are manually collected from scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of the BGDB database were implemented with PHP + MySQL + JavaScript, and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/
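
    Following the definition given above (a gene carrying both H3K4me3 and H3K27me3 in the same region), a minimal sketch of calling bivalent genes is shown below; the input gene lists are hypothetical stand-ins for peak-to-gene assignments from ChIP-seq data, not BGDB content.

        # Minimal bivalent-gene call: a gene is bivalent if it carries both marks.
        # Inputs stand in for genes assigned H3K4me3 / H3K27me3 peaks from ChIP-seq.

        def bivalent_genes(h3k4me3_genes, h3k27me3_genes):
            return sorted(set(h3k4me3_genes) & set(h3k27me3_genes))

        # Hypothetical example gene lists.
        k4 = {"GENE_A", "GENE_B", "GENE_C", "GENE_D"}
        k27 = {"GENE_C", "GENE_E", "GENE_A"}
        print(bivalent_genes(k4, k27))   # ['GENE_A', 'GENE_C']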

  5. The 3XMM spectral fit database

    NASA Astrophysics Data System (ADS)

    Georgantopoulos, I.; Corral, A.; Watson, M.; Carrera, F.; Webb, N.; Rosen, S.

    2016-06-01

    I will present the XMMFITCAT database which is a spectral fit inventory of the sources in the 3XMM catalogue. Spectra are provided by the XMM/SSC for all 3XMM sources that have more than 50 background-subtracted counts per module. This work is funded in the framework of the ESA Prodex project. The 3XMM catalog currently covers 877 sq. degrees and contains about 400,000 unique sources. Spectra are available for over 120,000 sources. Spectral fits have been performed with various spectral models. The results are available in the web page http://xraygroup.astro.noa.gr/ and also at the University of Leicester LEDAS database webpage ledas-www.star.le.ac.uk/. The database description as well as some science results in the joint area with SDSS are presented in two recent papers: Corral et al. 2015, A&A, 576, 61 and Corral et al. 2014, A&A, 569, 71. At least for extragalactic sources, the spectral fits will acquire added value when photometric redshifts become available. In the framework of a new Prodex project we have been funded to derive photometric redshifts for the 3XMM sources using machine learning techniques. I will present the techniques as well as the optical near-IR databases that will be used.

  6. Conceptual Universal Database Language: Moving Up the Database Design Levels

    NASA Astrophysics Data System (ADS)

    Karanikolas, Nikitas N.; Vassilakopoulos, Michael Gr.

    Today, the simplicity of the relational model types affects Information Systems design. We favor another approach where the Information System designers would be able to portray the real world directly in a database model that provides more powerful and composite data types, as those of the real world. However, more powerful models, like the Frame Database Model (FDB) model, need query and manipulation languages that can handle the features of the new composite data types. We demonstrate that the adoption of such a language, the Conceptual Universal Database Language (CUDL), leads to higher database design levels: a database modeled by Entity-Relationship (ER) diagrams can be first transformed to the CUDL Abstraction Level (CAL), which can then be transformed to the FDB model. Since the latter transformation has been previously studied, to complete the design process we present a set of rules for the transformation from ER diagrams to CAL.

  7. The Protein Ensemble Database.

    PubMed

    Varadi, Mihaly; Tompa, Peter

    2015-01-01

    The scientific community's major conceptual notion of structural biology has recently shifted in emphasis from the classical structure-function paradigm due to the emergence of intrinsically disordered proteins (IDPs). As opposed to their folded cousins, these proteins are defined by the lack of a stable 3D fold and a high degree of inherent structural heterogeneity that is closely tied to their function. Due to their flexible nature, solution techniques such as small-angle X-ray scattering (SAXS), nuclear magnetic resonance (NMR) spectroscopy and fluorescence resonance energy transfer (FRET) are particularly well-suited for characterizing their biophysical properties. Computationally derived structural ensembles based on such experimental measurements provide models of the conformational sampling displayed by these proteins, and they may offer valuable insights into the functional consequences of inherent flexibility. The Protein Ensemble Database (http://pedb.vib.be) is the first openly accessible, manually curated online resource storing the ensemble models, protocols used during the calculation procedure, and underlying primary experimental data derived from SAXS and/or NMR measurements. By making this previously inaccessible data freely available to researchers, this novel resource is expected to promote the development of more advanced modelling methodologies, facilitate the design of standardized calculation protocols, and consequently lead to a better understanding of how function arises from the disordered state.

  8. The Mars Observer database

    NASA Technical Reports Server (NTRS)

    Albee, Arden L.

    1988-01-01

    Mars Observer will study the surface, atmosphere, and climate of Mars in a systematic way over an entire Martian year. The observations of the surface will provide a database that will be invaluable to the planning of a future Mars sample return mission. Mars Observer is planned for a September 1992 launch from the Space Shuttle, using an upper-stage. After the one year transit the spacecraft is injected into orbit about Mars and the orbit adjusted to a near-circular, sun-synchronous low-altitude, polar orbit. During the Martian year in this mapping orbit the instruments gather both geoscience data and climatological data by repetitive global mapping. The scientific objectives of the mission are to: (1) determine the global elemental and mineralogical character of the surface material; (2) define globally the topography and gravitational field; (3) establish the nature of the magnetic field; (4) determine the time and space distribution, abundance, sources, and sinks of volatile material and dust over a seasonal cycle; and (5) explore the structure and aspects of the circulation of the atmosphere. The science investigations and instruments for Mars Observer have been chosen with these objectives in mind. These instruments, the principal investigator or team leader and the objectives are discussed.

  9. Speech Databases of Typical Children and Children with SLI.

    PubMed

    Grill, Pavel; Tučková, Jana

    2016-01-01

    The extent of research on children's speech in general and on disordered speech specifically is very limited. In this article, we describe the process of creating databases of children's speech and the possibilities for using such databases, which have been created by the LANNA research group in the Faculty of Electrical Engineering at Czech Technical University in Prague. These databases have been principally compiled for medical research but also for use in other areas, such as linguistics. Two databases were recorded: one for healthy children's speech (recorded in kindergarten and in the first level of elementary school) and the other for pathological speech of children with a Specific Language Impairment (recorded at a surgery of speech and language therapists and at the hospital). Both databases were sub-divided according to specific demands of medical research. Their utilization can be exoteric, specifically for linguistic research and pedagogical use as well as for studies of speech-signal processing. PMID:26963508

  10. The German Landslide Database: A Tool to Analyze Infrastructure Exposure

    NASA Astrophysics Data System (ADS)

    Damm, Bodo; Klose, Martin

    2015-04-01

    The Federal Republic of Germany has long been among the few European countries that lack a national landslide database. Systematic collection and inventory of landslide data over broad geographic areas and for different types of critical infrastructures was thus widely exceptional up until today. This has changed in recent years with the launch of a database initiative aimed at closing the data gap existing at national level. The present contribution reports on this database project that is focused on the development of a comprehensive pool of landslide data for systematic analysis of landslide hazard impacts in Germany. The major purpose of the database is to store and provide detailed scientific data on all types of landslides affecting critical infrastructures (transportation systems, industrial facilities, etc.) and urban areas. The database has evolved over the last 15 years into one covering large parts of Germany and offers a collection of data sets for more than 4,200 landslides with over 13,000 single data files. Data collection is based on a bottom-up approach that involves in-depth archive work and acquisition of data through close collaboration with infrastructure agencies and municipal offices. This makes it possible to develop a database that stores geospatial landslide information and detailed data sets on landslide causes and impacts as well as hazard mitigation. The database is currently being migrated to a spatial database system in PostgreSQL/PostGIS. This contribution gives an overview of the database content and its application in landslide impact research. It deals with the underlying strategy of data collection and presents the types of data and their quality to perform damage statistics and analyses of infrastructure exposure. The contribution refers to different case studies and regional investigations in the German Central Uplands.
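
    Once the data sit in PostgreSQL/PostGIS as mentioned above, exposure questions can be asked spatially. The sketch below, with assumed table and column names (landslides, roads, geom, road_id) and a placeholder connection string, lists landslides recorded within 100 m of a given road segment; it is not the project's actual schema.

        # Hedged sketch: query landslides near a road segment using PostGIS.
        # Table/column names and the road identifier are assumptions.
        import psycopg2

        conn = psycopg2.connect("dbname=landslide_db user=analyst")   # placeholder DSN
        cur = conn.cursor()
        cur.execute("""
            SELECT l.event_id, l.event_date
            FROM landslides AS l
            JOIN roads AS r ON r.road_id = %s
            WHERE ST_DWithin(l.geom, r.geom, 100)   -- within 100 m (projected CRS assumed)
            ORDER BY l.event_date
        """, ("B27-segment-12",))
        for event_id, event_date in cur.fetchall():
            print(event_id, event_date)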

  11. XCOM: Photon Cross Sections Database

    National Institute of Standards and Technology Data Gateway

    SRD 8 XCOM: Photon Cross Sections Database (Web, free access)   A web database is provided which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element, compound or mixture (Z <= 100) at energies from 1 keV to 100 GeV.
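
    The attenuation coefficients that XCOM provides are typically applied through the exponential attenuation law I/I0 = exp(-(mu/rho) * rho * x). A minimal sketch is given below; the coefficient, density, and thickness values are illustrative only and are not drawn from XCOM.

        import math

        def transmitted_fraction(mass_atten_cm2_per_g, density_g_per_cm3, thickness_cm):
            """Exponential attenuation law: I/I0 = exp(-(mu/rho) * rho * x)."""
            return math.exp(-mass_atten_cm2_per_g * density_g_per_cm3 * thickness_cm)

        # Illustrative values only (look up real coefficients in XCOM for a given
        # element/compound and photon energy).
        print(transmitted_fraction(0.2, 2.7, 1.0))   # about 0.58 for these numbers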

  12. Databases for K-8 Students

    ERIC Educational Resources Information Center

    Young, Terrence E., Jr.

    2004-01-01

    Today's elementary school students have been exposed to computers since birth, so it is not surprising that they are so proficient at using them. As a result, they are ready to search databases that include topics and information appropriate for their age level. Subscription databases are digital copies of magazines, newspapers, journals,…

  13. The Yield of Bibliographic Databases.

    ERIC Educational Resources Information Center

    Kowalski, Kazimierz; Hackett, Timothy P.

    1992-01-01

    Demonstrates a means for estimating the number of retrieved items using well-established selective dissemination of information (SDI) profiles in the SCI, INSPEC, ISMEC, CAS, and PASCAL databases. A correlation between individual database size and number of retrieved documents in technical fields is also examined. (17 references) (LAE)

  14. Wind turbine reliability database update.

    SciTech Connect

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  15. The Slovenian food composition database.

    PubMed

    Korošec, Mojca; Golob, Terezija; Bertoncelj, Jasna; Stibilj, Vekoslava; Seljak, Barbara Koroušić

    2013-10-01

    The preliminary Slovenian food composition database was created in 2003, through the application of the Data management and Alimenta nutritional software. In the subsequent projects, data on the composition of meat and meat products of Slovenian origin were gathered from analyses, and low-quality data of the preliminary database were discarded. The first volume of the Slovenian food composition database was published in 2006, in both electronic and paper versions. When Slovenia joined the EuroFIR NoE, the LanguaL indexing system was adopted. The Optijed nutritional software was developed, and later upgraded to the OPEN platform. This platform serves as an electronic database that currently comprises 620 foods, and as the Slovenian node in the EuroFIR virtual information platform. With the assimilation of the data on the compositions of foods of plant origin obtained within the latest project, the Slovenian database provides a good source for food compositional values of consistent and compatible quality.

  16. The EMBL nucleotide sequence database.

    PubMed Central

    Stoesser, G; Moseley, M A; Sleep, J; McGowran, M; Garcia-Pastor, M; Sterk, P

    1998-01-01

    The EMBL Nucleotide Sequence Database (http://www.ebi.ac.uk/embl.html) constitutes Europe's primary nucleotide sequence resource. DNA and RNA sequences are directly submitted from researchers and genome sequencing groups and collected from the scientific literature and patent applications (Fig. 1). In collaboration with DDBJ and GenBank the database is produced, maintained and distributed at the European Bioinformatics Institute. Database releases are produced quarterly and are distributed on CD-ROM. EBI's network services allow access to the most up-to-date data collection via Internet and World Wide Web interface, providing database searching and sequence similarity facilities plus access to a large number of additional databases. PMID:9399791

  17. GOTTCHA Database, Version 1

    2015-08-03

    One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase -- which is the most computationally intensive phase -- as an all-vs-all comparison is made, the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, potential overlap of k-mer suffixes that would result in a non-empty set intersection are screened for. If such an overlap exists, the region which

  18. GOTTCHA Database, Version 1

    SciTech Connect

    Freitas, Tracey; Chain, Patrick; Lo, Chien-Chi; Li, Po-E

    2015-08-03

    One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with significantly smaller FDR, which is also capable of classifying never-before seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase -- which is the most computationally intensive phase -- as an all-vs-all comparison is made, the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, potential overlap of k-mer suffixes that would result in a non-empty set intersection are screened for. If such an overlap exists, the region which intersects is
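
    A much-simplified sketch of the three phases described above (decomposition into k-mers, intersection to find shared regions, subtraction to leave unique signatures) is given below, operating on plain Python sets rather than the bit-vector machinery of the actual tool; the 24-mer length follows the abstract, while the toy "genomes" and everything else are illustrative assumptions.

        # Simplified analogue of the three GOTTCHA phases described above, using
        # Python sets instead of bit vectors. Genome sequences are illustrative.

        def decompose(seq, k=24):
            """Phase I: decompose a genome into its set of overlapping k-mers."""
            seq = seq.upper()
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        def signatures(genomes, k=24):
            """Phases II-III: remove k-mers shared with any other genome,
            leaving each genome's unique, signature k-mers."""
            kmer_sets = {name: decompose(seq, k) for name, seq in genomes.items()}
            sigs = {}
            for name, kset in kmer_sets.items():
                shared = set()
                for other, oset in kmer_sets.items():
                    if other != name:
                        shared |= kset & oset          # Phase II: intersections
                sigs[name] = kset - shared             # Phase III: subtraction
            return sigs

        # Hypothetical toy "genomes".
        genomes = {
            "taxonA": "ATGCGTACGTTAGCATGCGTACGTTAGCATCGGATCCA",
            "taxonB": "ATGCGTACGTTAGCATGCGTACGTTAGCTTTTGGCCAA",
        }
        for name, sig in signatures(genomes).items():
            print(name, len(sig), "signature 24-mers")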

  19. Rocky Mountain Basins Produced Water Database

    DOE Data Explorer

    Historical records for produced water data were collected from multiple sources, including Amoco, British Petroleum, Anadarko Petroleum Corporation, United States Geological Survey (USGS), Wyoming Oil and Gas Commission (WOGC), Denver Earth Resources Library (DERL), Bill Barrett Corporation, Stone Energy, and other operators. In addition, 86 new samples were collected during the summers of 2003 and 2004 from the following areas: Waltman-Cave Gulch, Pinedale, Tablerock and Wild Rose. Samples were tested for standard seven component "Stiff analyses", and strontium and oxygen isotopes. 16,035 analyses were winnowed to 8028 unique records for 3276 wells after a data screening process was completed. [Copied from the Readme document in the zipped file available at http://www.netl.doe.gov/technologies/oil-gas/Software/database.html] Save the Zipped file to your PC. When opened, it will contain four versions of the database: ACCESS, EXCEL, DBF, and CSV formats. The information consists of detailed water analyses from basins in the Rocky Mountain region.

  20. Database on unstable rock slopes in Norway

    NASA Astrophysics Data System (ADS)

    Oppikofer, Thierry; Nordahl, Bo; Bunkholt, Halvor; Nicolaisen, Magnus; Hermanns, Reginald L.; Böhme, Martina; Yugsi Molina, Freddy X.

    2014-05-01

    Several large rockslides have occurred in historic times in Norway causing many casualties. Most of these casualties are due to displacement waves triggered by a rock avalanche and affecting coast lines of entire lakes and fjords. The Geological Survey of Norway performs systematic mapping of unstable rock slopes in Norway and has detected up to now more than 230 unstable slopes with significant postglacial deformation. This systematic mapping aims to detect future rock avalanches before they occur. The registered unstable rock slopes are stored in a database on unstable rock slopes developed and maintained by the Geological Survey of Norway. The main aims of this database are (1) to serve as a national archive for unstable rock slopes in Norway; (2) to serve for data collection and storage during field mapping; (3) to provide decision-makers with hazard zones and other necessary information on unstable rock slopes for land-use planning and mitigation; and (4) to inform the public through an online map service. The database is organized hierarchically with a main point for each unstable rock slope to which several feature classes and tables are linked. This main point feature class includes several general attributes of the unstable rock slopes, such as site name, general and geological descriptions, executed works, recommendations, technical parameters (volume, lithology, mechanism and others), displacement rates, possible consequences, hazard and risk classification and so on. Feature classes and tables linked to the main feature class include the run-out area, the area effected by secondary effects, the hazard and risk classification, subareas and scenarios of an unstable rock slope, field observation points, displacement measurement stations, URL links for further documentation and references. The database on unstable rock slopes in Norway will be publicly consultable through the online map service on www.skrednett.no in 2014. Only publicly relevant parts of

  1. Status of the solid breeder materials database

    SciTech Connect

    Billone, M.C.; Dienst, W.; Lorenzetto, P.; Noda, K.; Roux, N.

    1995-01-01

    The databases for solid breeder ceramics (Li{sub 2}O, Li{sub 4}SiO{sub 4}, Li{sub 2}ZrO{sub 3}, and LiAlO{sub 2}) and beryllium multiplier material were critically reviewed and evaluated as part of the ITER/CDA design effort (1988-1990). The results have been documented in a detailed technical report. Emphasis was placed on the physical, thermal, mechanical, chemical stability/compatibility, tritium retention/release, and radiation stability properties which are needed to assess the performance of these materials in a fusion reactor environment. Materials properties correlations were selected for use in design analysis, and ranges for input parameters (e.g., temperature, porosity, etc.) were established. Also, areas for future research and development in blanket materials technology were highlighted and prioritized. For Li{sub 2}O, the most significant increase in the database has come in the area of tritium retention as a function of operating temperature and purge flow composition. The database for postirradiation inventory from purged in-reactor samples has increased from four points to 20 points. These new data have allowed an improvement in understanding and modeling, as well as better interpretation of the results of laboratory annealing studies on unirradiated and irradiated material. In the case of Li{sub 2}ZrO{sub 3}, relatively little data were available on the sensitivity of the mechanical properties of this ternary ceramic to microstructure and moisture content. The increase in the database for this material has allowed not only better characterization of its properties, but also optimization of fabrication parameters to improve its performance. Some additional data are also available for the other two ternary ceramics to aid in the characterization of their performance. In particular, the thermal performance of these materials, as well as beryllium, in packed-bed form has been measured and characterized.

  2. Cloudsat tropical cyclone database

    NASA Astrophysics Data System (ADS)

    Tourville, Natalie D.

    CloudSat (CS), the first 94 GHz spaceborne cloud profiling radar (CPR), launched in 2006 to study the vertical distribution of clouds. Not only are CS observations revealing inner vertical cloud details of water and ice globally but CS overpasses of tropical cyclones (TC's) are providing a new and exciting opportunity to study the vertical structure of these storm systems. CS TC observations are providing first time vertical views of TC's and demonstrate a unique way to observe TC structure remotely from space. Since December 2009, CS has intersected every globally named TC (within 1000 km of storm center) for a total of 5,278 unique overpasses of tropical systems (disturbance, tropical depression, tropical storm and hurricane/typhoon/cyclone (HTC)). In conjunction with the Naval Research Laboratory (NRL), each CS TC overpass is processed into a data file containing observational data from the afternoon constellation of satellites (A-TRAIN), Navy's Operational Global Atmospheric Prediction System Model (NOGAPS), European Center for Medium range Weather Forecasting (ECMWF) model and best track storm data. This study will describe the components and statistics of the CS TC database, present case studies of CS TC overpasses with complementary A-TRAIN observations and compare average reflectivity stratifications of TC's across different atmospheric regimes (wind shear, SST, latitude, maximum wind speed and basin). Average reflectivity stratifications reveal that characteristics in each basin vary from year to year and are dependent upon eye overpasses of HTC strength storms and ENSO phase. West Pacific (WPAC) basin storms are generally larger in size (horizontally and vertically) and have greater values of reflectivity at a predefined height than all other basins. Storm structure at higher latitudes expands horizontally. Higher vertical wind shear (≥ 9.5 m/s) reduces cloud top height (CTH) and the intensity of precipitation cores, especially in HTC strength storms

  3. The Berlin Emissivity Database

    NASA Astrophysics Data System (ADS)

    Helbert, Jorn

    Remote sensing infrared spectroscopy is the principal technique for investigating the composition of planetary surfaces. Past, present and future missions to solar system bodies include in their payloads instruments measuring the emerging radiation in the infrared range. TES on Mars Global Surveyor and THEMIS on Mars Odyssey have in many ways changed our views of Mars. The PFS instrument on the ESA Mars Express mission has collected spectra since the beginning of 2004. In spring 2006 the VIRTIS experiment started its operation on the ESA Venus Express mission, allowing the surface of Venus to be mapped for the first time using the 1 µm emission from the surface. The MERTIS spectrometer is included in the payload of the ESA BepiColombo mission to Mercury, scheduled for 2013. For the interpretation of the measured data an emissivity spectral library of planetary analogue materials is needed. The Berlin Emissivity Database (BED) presented here is focused on relatively fine-grained size separates, providing a realistic basis for interpretation of thermal emission spectra of planetary regoliths. The BED is therefore complementary to existing thermal emission libraries, like the ASU library for example. The BED currently contains entries for plagioclase and potassium feldspars, low Ca and high Ca pyroxenes, olivine, elemental sulphur, common martian analogues (JSC Mars-1, Salten Skov, palagonites, montmorillonite) and a lunar highland soil sample measured in the wavelength range from 3 to 50 µm as a function of particle size. For each sample, the spectra of four well-defined particle size separates (<25 µm, 25-63 µm, 63-125 µm, 125-250 µm) are measured with a 4 cm-1 spectral resolution. These size separates have been selected as typical representations for most of the planetary surfaces. Following an ongoing upgrade of the Planetary Emissivity Laboratory (PEL) at DLR in Berlin, measurements can be obtained at temperatures up to 500 °C - realistic for the dayside conditions

  4. A relational database for the monitoring and analysis of watershed hydrologic functions: I. Database design and pertinent queries

    NASA Astrophysics Data System (ADS)

    Carleton, Christian J.; Dahlgren, Randy A.; Tate, Kenneth W.

    2005-05-01

    The need to monitor water quantity and quality has increased dramatically in recent years due to total maximum daily load requirements that address non-point source pollutants in our nation's water bodies. This has resulted in the need for data management techniques and tools to manage the vast amount of new hydrologic data being collected. Data must be stored, checked for errors, manipulated, retrieved for analysis, and shared within the hydrologic community. The Watershed Monitoring and Analysis Database is a relational database application developed as a data management tool to efficiently and accurately address the needs of individuals and groups responsible for maintaining hydrologic data sets. Stream flow, water quality, and meteorological data can be stored and manipulated within the database. Both remedial and advanced tasks can be simplified with the help of the user interface application, such as quality assurance/quality control (QA/QC) calculations, application of correction and conversion factors, retrieval of desired data for advanced analysis, and data comparisons among multiple study sites. Web integration and local area network (LAN) database synchronization can be supported depending upon the database engine used. The objectives of this paper are to: (1) present in detail the database architecture, including table structures and overall database design, and (2) provide useful queries to retrieve data that involve calculations, comparisons, and basic QA/QC protocols. Developed using Microsoft Access, the concepts and strategies covered in this paper may be applied to any commercially available relational database.
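
    To make the kind of queries mentioned above concrete, here is a minimal sketch, assuming a single flat sample table in SQLite; the original application was built in Microsoft Access, so the table layout, analyte names, and QA/QC thresholds shown are illustrative assumptions only.

```python
# Hypothetical sketch of QA/QC and site-comparison queries over monitoring data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE sample (
    site_id    TEXT,
    sampled_at TEXT,      -- ISO timestamp
    analyte    TEXT,      -- e.g. 'turbidity', 'nitrate'
    value      REAL,
    units      TEXT
)""")

# QA/QC check: flag values outside a plausible range for an analyte.
suspect = conn.execute("""
    SELECT site_id, sampled_at, value
    FROM sample
    WHERE analyte = 'turbidity' AND (value < 0 OR value > 1000)
""").fetchall()

# Comparison among study sites: mean value per site for one analyte.
means = conn.execute("""
    SELECT site_id, AVG(value) AS mean_value, COUNT(*) AS n
    FROM sample
    WHERE analyte = 'nitrate'
    GROUP BY site_id
    ORDER BY mean_value DESC
""").fetchall()
```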

  5. Population databases in development analysis.

    PubMed

    Chamie, J

    1994-01-01

    Population databases are very important in formulating analyses of social and economic change and development. Since such analyses are often the basis for policy making and program formulation, it is important to have a sound understanding of their strengths and limitations. This paper focuses upon databases which deal with population size, life expectancy at birth, and infant mortality. Considerable progress has been made in producing population databases over the last several decades, but many problems remain with regard to their comparability, completeness of coverage, and accuracy. Governmental and political circumstances greatly influence the availability and quality of population databases. Globally, the comparability of data remains a serious concern due to deviations from standard definitions. The completeness of coverage of databases among less developed countries varies widely by region, while the data for preparing estimates and assessing demographic trends are deficient and problematic. Technological advances and the repackaging of population databases have greatly advanced their production and availability, but confusion and ignorance have become widespread regarding the original source and nature of the data. Database users therefore too often undertake faulty analyses which lead to false conclusions.

  6. Database of Properties of Meteors

    NASA Technical Reports Server (NTRS)

    Suggs, Rob; Anthea, Coster

    2006-01-01

    A database of properties of meteors, and software that provides access to the database, are being developed as a contribution to continuing efforts to model the characteristics of meteors with increasing accuracy. Such modeling is necessary for evaluation of the risk of penetration of spacecraft by meteors. For each meteor in the database, the record will include an identification, date and time, radiant properties, ballistic coefficient, radar cross section, size, density, and orbital elements. The property of primary interest in the present case is density, and one of the primary goals in this case is to derive densities of meteors from their atmospheric decelerations. The database and software are expected to be valid anywhere in the solar system. The database will incorporate new data plus results of meteoroid analyses that, heretofore, have not been readily available to the aerospace community. Taken together, the database and software constitute a model that is expected to provide improved estimates of densities and to result in improved risk analyses for interplanetary spacecraft. It is planned to distribute the database and software on a compact disk.
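
    The record fields listed above can be pictured as a simple record type; the sketch below is purely illustrative, and the field names and units are assumptions rather than the database's actual schema.

```python
# Illustrative record type for one meteor entry; names and units are assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MeteorRecord:
    meteor_id: str
    observed_at: datetime
    radiant_ra_deg: float          # radiant properties: right ascension
    radiant_dec_deg: float         # radiant properties: declination
    ballistic_coefficient: float
    radar_cross_section_m2: float
    size_m: float
    density_kg_m3: float           # the property of primary interest
    orbital_elements: dict         # e.g. {"a": ..., "e": ..., "i": ...}
```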

  7. The YH database: the first Asian diploid genome database.

    PubMed

    Li, Guoqing; Ma, Lijia; Song, Chao; Yang, Zhentao; Wang, Xiulan; Huang, Hui; Li, Yingrui; Li, Ruiqiang; Zhang, Xiuqing; Yang, Huanming; Wang, Jian; Wang, Jun

    2009-01-01

    The YH database is a server that allows the user to easily browse and download data from the first Asian diploid genome. The aim of this platform is to facilitate the study of this Asian genome and to enable improved organization and presentation of large-scale personal genome data. Powered by GBrowse, we illustrate here the genome sequences, SNPs, and sequencing reads in the MapView. The relationships between phenotype and genotype can be searched by location, dbSNP ID, HGMD ID, gene symbol and disease name. A BLAST web service is also provided for aligning query sequences against the YH genome consensus. The YH database is currently one of only three personal genome databases, organizing the original data and analysis results in a user-friendly interface, an endeavor toward the fundamental goal of establishing personalized medicine. The database is available at http://yh.genomics.org.cn.

  8. Databases of the marine metagenomics.

    PubMed

    Mineta, Katsuhiko; Gojobori, Takashi

    2016-02-01

    Metagenomic data obtained from marine environments are highly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of an entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data and increases data complexity. Moreover, when the metagenomic approach is used to monitor changes in marine environments over time at multiple seawater locations, metagenomic data accumulate at an enormous speed. Because this situation has started to become a reality at many marine research institutions and stations all over the world, data management and analysis will clearly be confronted by the so-called Big Data issues, such as how the database can be constructed in an efficient way and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize all the major databases of marine metagenomes that are currently publicly available, noting that no database is devoted exclusively to marine metagenomes and that only six metagenome databases include marine metagenome data, an unexpectedly small number. We also discuss what we call reference databases, which will be useful for constructing a marine metagenome database as well as for complementing it with important information. Finally, we point out a number of challenges to be overcome in constructing the marine metagenome database.

  9. NUCLEAR DATABASES FOR REACTOR APPLICATIONS.

    SciTech Connect

    PRITYCHENKO, B.; ARCILLA, R.; BURROWS, T.; HERMAN, M.W.; MUGHABGHAB, S.; OBLOZINSKY, P.; ROCHMAN, D.; SONZOGNI, A.A.; TULI, J.; WINCHELL, D.F.

    2006-06-05

    The National Nuclear Data Center (NNDC): An overview of nuclear databases, related products, nuclear data Web services and publications. The NNDC collects, evaluates, and disseminates nuclear physics data for basic research and applied nuclear technologies. The NNDC maintains and contributes to the nuclear reaction (ENDF, CSISRS) and nuclear structure databases along with several other databases (CapGam, MIRD, IRDF-2002) and provides coordination for the Cross Section Evaluation Working Group (CSEWG) and the US Nuclear Data Program (USNDP). The Center produces several publications, such as the Atlas of Neutron Resonances and the Nuclear Wallet Cards booklets, and develops codes such as the nuclear reaction model code Empire.

  10. Biological Databases for Human Research

    PubMed Central

    Zou, Dong; Ma, Lina; Yu, Jun; Zhang, Zhang

    2015-01-01

    The completion of the Human Genome Project lays a foundation for systematically studying the human genome from evolutionary history to precision medicine against diseases. With the explosive growth of biological data, there is an increasing number of biological databases that have been developed in aid of human-related research. Here we present a collection of human-related biological databases and provide a mini-review by classifying them into different categories according to their data types. As human-related databases continue to grow not only in count but also in volume, challenges are ahead in big data storage, processing, exchange and curation. PMID:25712261

  11. Allergen databases and allergen semantics.

    PubMed

    Gendel, Steven M

    2009-08-01

    The efficacy of any specific bioinformatic analysis of the potential allergenicity of new food proteins depends directly on the nature and content of the databases that are used in the analysis. A number of different allergen-related databases have been developed, each designed to meet a different need. These databases differ in content, organization, and accessibility. These differences create barriers for users and prevent data sharing and integration. The development and application of appropriate semantic web technologies (for example, a food allergen ontology) could help to overcome these barriers and promote the development of more advanced analytic capabilities.

  12. Biological databases for human research.

    PubMed

    Zou, Dong; Ma, Lina; Yu, Jun; Zhang, Zhang

    2015-02-01

    The completion of the Human Genome Project lays a foundation for systematically studying the human genome from evolutionary history to precision medicine against diseases. With the explosive growth of biological data, there is an increasing number of biological databases that have been developed in aid of human-related research. Here we present a collection of human-related biological databases and provide a mini-review by classifying them into different categories according to their data types. As human-related databases continue to grow not only in count but also in volume, challenges are ahead in big data storage, processing, exchange and curation. PMID:25712261

  13. International forensic automotive paint database

    NASA Astrophysics Data System (ADS)

    Bishea, Gregory A.; Buckle, Joe L.; Ryland, Scott G.

    1999-02-01

    The Technical Working Group for Materials Analysis (TWGMAT) is supporting an international forensic automotive paint database. The Federal Bureau of Investigation and the Royal Canadian Mounted Police (RCMP) are collaborating on this effort through TWGMAT. This paper outlines the support and further development of the RCMP's Automotive Paint Database, `Paint Data Query'. This cooperative agreement augments and supports a current, validated, searchable, automotive paint database that is used to identify make(s), model(s), and year(s) of questioned paint samples in hit-and-run fatalities and other associated investigations involving automotive paint.

  14. The Automatic Library Tracking Database

    SciTech Connect

    Fahey, Mark R; Jones, Nicholas A; Hadri, Bilel

    2010-01-01

    A library tracking database has been developed and put into production at the National Institute for Computational Sciences and the Oak Ridge Leadership Computing Facility (both located at Oak Ridge National Laboratory). The purpose of the library tracking database is to track which libraries are used at link time on Cray XT5 supercomputers. The database stores the libraries used at link time and also records the executables run in a batch job. With this data, many operationally important questions can be answered, such as which libraries are most frequently used and which users are using deprecated libraries or applications. The infrastructure design and reporting mechanisms are presented along with collected production data.
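
    As an example of the kind of operational question mentioned above, the sketch below runs a "most frequently linked libraries" query; the database path, table, and column names are hypothetical, not the production schema.

```python
# Hypothetical query against a link-time tracking database.
import sqlite3

conn = sqlite3.connect("alt.db")   # hypothetical path to the tracking database
top_libs = conn.execute("""
    SELECT library_name, COUNT(*) AS link_count
    FROM link_events
    GROUP BY library_name
    ORDER BY link_count DESC
    LIMIT 20
""").fetchall()
for name, count in top_libs:
    print(f"{name}: linked {count} times")
```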

  15. Database of recent tsunami deposits

    USGS Publications Warehouse

    Peters, Robert; Jaffe, Bruce E.

    2010-01-01

    This report describes a database of sedimentary characteristics of tsunami deposits derived from published accounts of tsunami deposit investigations conducted shortly after the occurrence of a tsunami. The database contains 228 entries, each entry containing data from up to 71 categories. It includes data from 51 publications covering 15 tsunamis distributed between 16 countries. The database encompasses a wide range of depositional settings including tropical islands, beaches, coastal plains, river banks, agricultural fields, and urban environments. It includes data from both local tsunamis and teletsunamis. The data are valuable for interpreting prehistorical, historical, and modern tsunami deposits, and for the development of criteria to identify tsunami deposits in the geologic record.

  16. New geothermal database for Utah

    USGS Publications Warehouse

    Blackett, Robert E.; ,

    1993-01-01

    The Utah Geological Survey compiled a preliminary database consisting of over 800 records on thermal wells and springs in Utah with temperatures of 20°C or greater. Each record consists of 35 fields, including location of the well or spring, temperature, depth, flow-rate, and chemical analyses of water samples. Developed for applications on personal computers, the database will be useful for geochemical, statistical, and other geothermal-related studies. A preliminary map of thermal wells and springs in Utah, which accompanies the database, could eventually incorporate heat-flow information, bottom-hole temperatures from oil and gas wells, traces of Quaternary faults, and locations of young volcanic centers.

  17. The Molecular Biology Database Collection: 2008 update

    PubMed Central

    Galperin, Michael Y.

    2008-01-01

    The Nucleic Acids Research online Molecular Biology Database Collection is a public repository that lists more than 1000 databases described in this and previous Nucleic Acids Research annual database issues, as well as a selection of molecular biology databases described in other journals. All databases included in this Collection are freely available to the public. The 2008 update includes 1078 databases, 110 more than the previous one. The links to more than 80 databases have been updated and 25 obsolete databases have been removed from the list. The complete database list and summaries are available online at the Nucleic Acids Research web site, http://nar.oxfordjournals.org/. PMID:18025043

  18. Lectindb: a plant lectin database.

    PubMed

    Chandra, Nagasuma R; Kumar, Nirmal; Jeyakani, Justin; Singh, Desh Deepak; Gowda, Sharan B; Prathima, M N

    2006-10-01

    Lectins, a class of carbohydrate-binding proteins, are now widely recognized to play a range of crucial roles in many cell-cell recognition events triggering several important cellular processes. They encompass different members that are diverse in their sequences, structures, binding site architectures, quaternary structures, carbohydrate affinities, and specificities as well as their larger biological roles and potential applications. It is not surprising, therefore, that the vast amount of experimental data on lectins available in the literature is so diverse that it becomes difficult and time-consuming, if not impossible, to comprehend the advances in various areas and obtain the maximum benefit. To achieve effective use of all the data toward understanding lectin function and possible applications, an organization of these seemingly independent data into a common framework is essential. An integrated knowledge base (Lectindb, http://nscdb.bic.physics.iisc.ernet.in), together with appropriate analytical tools, has therefore been developed initially for plant lectins by collating and integrating diverse data. The database has been implemented using MySQL on a Linux platform and web-enabled using PERL-CGI and Java tools. Data for each lectin pertain to taxonomic, biochemical, domain architecture, molecular sequence, and structural details as well as carbohydrate and hence blood group specificities. Extensive links have also been provided for relevant bioinformatics resources and analytical tools. Availability of diverse data integrated into a common framework is expected to be of high value not only for basic studies in lectin biology but also for pursuing several applications in biotechnology, immunology, and clinical practice, using these molecules.

  19. InterAction Database (IADB)

    Cancer.gov

    The InterAction Database includes demographic and prescription information for more than 500,000 patients in the northern and middle Netherlands and has been integrated with other systems to enhance data collection and analysis.

  20. Marine and Hydrokinetic Technology Database

    DOE Data Explorer

    DOE’s Marine and Hydrokinetic Technology Database provides up-to-date information on marine and hydrokinetic renewable energy, both in the U.S. and around the world. The database includes wave, tidal, current, and ocean thermal energy, and contains information on the various energy conversion technologies, companies active in the field, and development of projects in the water. Depending on the needs of the user, the database can present a snapshot of projects in a given region, assess the progress of a certain technology type, or provide a comprehensive view of the entire marine and hydrokinetic energy industry. Results are displayed as a list of technologies, companies, or projects. Data can be filtered by a number of criteria, including country/region, technology type, generation capacity, and technology or project stage. The database was updated in 2009 to include ocean thermal energy technologies, companies, and projects.

  1. SUPERSITES INTEGRATED RELATIONAL DATABASE (SIRD)

    EPA Science Inventory

    As part of EPA's Particulate Matter (PM) Supersites Program (Program), the University of Maryland designed and developed the Supersites Integrated Relational Database (SIRD). Measurement data in SIRD include comprehensive air quality data from the 7 Supersite program locations f...

  2. Fun Databases: My Top Ten.

    ERIC Educational Resources Information Center

    O'Leary, Mick

    1992-01-01

    Provides reviews of 10 online databases: Consumer Reports; Public Opinion Online; Encyclopedia of Associations; Official Airline Guide Adventure Atlas and Events Calendar; CENDATA; Hollywood Hotline; Fearless Taster; Soap Opera Summaries; and Human Sexuality. (LRW)

  3. LDEF meteoroid and debris database

    NASA Technical Reports Server (NTRS)

    Dardano, C. B.; See, Thomas H.; Zolensky, Michael E.

    1994-01-01

    The Long Duration Exposure Facility (LDEF) Meteoroid and Debris Special Investigation Group (M&D SIG) database is maintained at the Johnson Space Center (JSC), Houston, Texas, and consists of five data tables containing information about individual features, digitized images of selected features, and LDEF hardware (i.e., approximately 950 samples) archived at JSC. About 4000 penetrations (greater than 300 micron in diameter) and craters (greater than 500 micron in diameter) were identified and photodocumented during the disassembly of LDEF at the Kennedy Space Center (KSC), while an additional 4500 or so have subsequently been characterized at JSC. The database also contains some data that have been submitted by various PI's, yet the amount of such data is extremely limited in its extent, and investigators are encouraged to submit any and all M&D-type data to JSC for inclusion within the M&D database. Digitized stereo-image pairs are available for approximately 4500 features through the database.

  4. The NRSub database: update 1997.

    PubMed

    Perrière, G; Moszer, I; Gojobori, T

    1997-01-01

    In the context of the international project aiming at sequencing the whole genome of Bacillus subtilis we have developed NRSub, a non-redundant database of sequences from this organism. Starting from the B.subtilis sequences available in the repository collections we have removed all encountered duplications, then we have added extra annotations to the sequences (e.g. accession numbers for the genes, locations on the genetic map, codon usage index). We have also added cross-references with EMBL/GenBank/DDBJ, MEDLINE, SWISS-PROT and ENZYME databases. NRSub is distributed through anonymous FTP as a text file in EMBL format and as an ACNUC database. It is also possible to access the database through two dedicated World Wide Web servers located in France (http://acnuc.univ-lyon1.fr/nrsub/nrsub.html) and in Japan (http://ddbjs4h.genes.nig.ac.jp/). PMID:9016504

  5. An Examination of Job Skills Posted on Internet Databases: Implications for Information Systems Degree Programs.

    ERIC Educational Resources Information Center

    Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June

    2003-01-01

    Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…

  6. Small Business Innovations (Integrated Database)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Because of the diversity of NASA's information systems, it was necessary to develop DAVID as a central database management system. Under a Small Business Innovation Research (SBIR) grant, Ken Wanderman and Associates, Inc. designed software tools enabling scientists to interface with DAVID and commercial database management systems, as well as artificial intelligence programs. The software has been installed at a number of data centers and is commercially available.

  7. Exploiting relational database technology in a GIS

    NASA Astrophysics Data System (ADS)

    Batty, Peter

    1992-05-01

    All systems for managing data face common problems such as backup, recovery, auditing, security, data integrity, and concurrent update. Other challenges include the ability to share data easily between applications and to distribute data across several computers, while continuing to manage the problems already mentioned. Geographic information systems are no exception, and need to tackle all these issues. Standard relational database-management systems (RDBMSs) provide many features to help solve the issues mentioned so far. This paper describes how the IBM geoManager product approaches these issues by storing all its geographic data in a standard RDBMS in order to take advantage of such features. Areas in which standard RDBMS functions need to be extended are highlighted, and the way in which geoManager does this is explained. The performance implications of storing all data in the relational database are discussed. An important distinction, which needs to be made when considering the applicability of relational database technology to GIS, is drawn between the storage and management of geographic data and the manipulation and analysis of geographic data.

  8. Astronomical Surveys, Catalogs, Databases, and Archives

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.

    2016-06-01

    All-sky and large-area astronomical surveys and their cataloged data over the whole range of the electromagnetic spectrum are reviewed, from γ-ray to radio, such as Fermi-GLAST and INTEGRAL in γ-ray, ROSAT, XMM and Chandra in X-ray, GALEX in UV, SDSS and several POSS I and II based catalogues (APM, MAPS, USNO, GSC) in the optical range, 2MASS in NIR, WISE and AKARI IRC in MIR, IRAS and AKARI FIS in FIR, NVSS and FIRST in radio and many others, as well as the most important surveys giving optical images (DSS I and II, SDSS, etc.), proper motions (Tycho, USNO, Gaia), variability (GCVS, NSVS, ASAS, Catalina, Pan-STARRS) and spectroscopic data (FBS, SBS, Case, HQS, HES, SDSS, CALIFA, GAMA). The most important astronomical databases and archives are reviewed as well, including the Wide-Field Plate DataBase (WFPDB), ESO, HEASARC, IRSA and MAST archives, CDS SIMBAD, VizieR and Aladin, NED and HyperLEDA extragalactic databases, ADS and astro-ph services. They are powerful resources for efficient, many-sided research using Virtual Observatory tools. The use and analysis of the Big Data accumulated in astronomy lead to many new discoveries.

  9. Database Reports Over the Internet

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    2002-01-01

    Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer that is supported by Adobe Acrobat Reader. The data is stored in a DBMS (Database Management System). The client asks for the information from the database using an HTML (Hypertext Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page; others are reported to the server. Access to the databases was restricted since the data are being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (Personal Computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
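
    The query-then-fill-template flow described above can be sketched as follows; SQLite stands in for the Access database and a plain-text template for the PDF form, and all table, column, and file names are assumptions for illustration.

```python
# Minimal sketch of the report flow: query the database, then fill a template.
import sqlite3
from string import Template

conn = sqlite3.connect("test_reports.db")     # hypothetical database file
row = conn.execute(
    "SELECT test_name, result, run_date FROM test_results WHERE test_id = ?",
    (42,),
).fetchone()

report = Template("Test: $name\nResult: $result\nDate: $date\n").substitute(
    name=row[0], result=row[1], date=row[2]
)
print(report)   # in the original system the completed form is returned to the browser
```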

  10. The new international GLE database

    NASA Astrophysics Data System (ADS)

    Duldig, M. L.; Watts, D. J.

    2001-08-01

    The Australian Antarctic Division has agreed to host the international GLE database. Access to the database is via a world-wide-web interface and initially covers all GLEs since the start of the 22nd solar cycle. Access restriction for recent events is controlled by password protection and these data are available only to those groups contributing data to the database. The restrictions to data will be automatically removed for events older than 2 years, in accordance with the data exchange provisions of the Antarctic Treaty. Use of the data requires acknowledgment of the database as the source of the data and acknowledgment of the specific groups that provided the data used. Furthermore, some groups that provide data to the database have specific acknowledgment requirements or wording. A new submission format has been developed that will allow easier exchange of data, although the old format will be acceptable for some time. Data download options include direct web based download and email. Data may also be viewed as listings or plots with web browsers. Search options have also been incorporated. Development of the database will be ongoing with extension to viewing and delivery options, addition of earlier data and the development of mirror sites. It is expected that two mirror sites, one in North America and one in Europe, will be developed to enable fast access for the whole cosmic ray community.

  11. Rice Glycosyltransferase (GT) Phylogenomic Database

    DOE Data Explorer

    Ronald, Pamela

    The Ronald Laboratory staff at the University of California-Davis has a primary research focus on the genes of the rice plant. They study the role that genetics plays in the way rice plants respond to their environment. They created the Rice GT Database in order to integrate functional genomic information for putative rice Glycosyltransferases (GTs). This database contains information on nearly 800 putative rice GTs (gene models) identified by sequence similarity searches based on the Carbohydrate Active enZymes (CAZy) database. The Rice GT Database provides a platform to display user-selected functional genomic data on a phylogenetic tree. This includes sequence information, mutant line information, expression data, etc. An interactive chromosomal map shows the position of all rice GTs, and links to rice annotation databases are included. The format is intended to "facilitate the comparison of closely related GTs within different families, as well as perform global comparisons between sets of related families." [From http://ricephylogenomics.ucdavis.edu/cellwalls/gt/genInfo.shtml] See also the primary paper discussing this work: Peijian Cao, Laura E. Bartley, Ki-Hong Jung and Pamela C. Ronald. Construction of a Rice Glycosyltransferase Phylogenomic Database and Identification of Rice-Diverged Glycosyltransferases. Molecular Plant, 2008, 1(5): 858-877.

  12. Electron Inelastic-Mean-Free-Path Database

    National Institute of Standards and Technology Data Gateway

    SRD 71 NIST Electron Inelastic-Mean-Free-Path Database (PC database, no charge)   This database provides values of electron inelastic mean free paths (IMFPs) for use in quantitative surface analyses by AES and XPS.

  13. PEP725 Pan European Phenological Database

    NASA Astrophysics Data System (ADS)

    Koch, Elisabeth; Adler, Silke; Ungersböck, Markus; Zach-Hermann, Susanne

    2010-05-01

    Europe is in the fortunate situation that it has a long tradition in phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751 when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting and leaf fall together with climatological observations "so as to show how areas differ". The Societas Meteorologicae Palatinae at Mannheim well known for its first European wide meteorological network also established a phenological network which was active from 1781 to 1792. Recently in most European countries, phenological observations have been carried out routinely for more than 50 years by different governmental and non governmental organisations and following different observation guidelines, the data stored at different places in different formats. This has been really hampering pan European studies, as one has to address many National Observations Programs (NOP) to get access to the data before one can start to bring them in a uniform style. From 2004 to 2005 the COST-action 725 was running with the main objective to establish a European reference data set of phenological observations that can be used for climatological purposes, especially climate monitoring, and detection of changes. So far the common database/reference data set of COST725 comprises 7687248 data from 7285 observation sites in 15 countries and International Phenological Gardens (IPG) spanning the timeframe from 1951 to 2000. ZAMG is hosting the database. In January 2010 PEP725 has started and will take over not only the part of maintaining, updating the database, but also to bring in phenological data from the time before 1951, developing better quality checking procedures and ensuring an open access to the database. An attractive webpage will make phenology and climate impacts on vegetation more visible in the public enabling a monitoring of

  14. PEP725 Pan European Phenological Database

    NASA Astrophysics Data System (ADS)

    Koch, E.; Adler, S.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2010-09-01

    Europe is in the fortunate situation that it has a long tradition in phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting and leaf fall together with climatological observations "so as to show how areas differ". In most European countries, phenological observations have been carried out routinely for more than 50 years by different governmental and non-governmental organisations following different observation guidelines, with the data stored at different places in different formats. This has been severely hampering pan-European studies, as one has to contact many network operators to get access to the data before one can start to bring them into a uniform format. From 2004 to 2009 the COST action 725 established a Europe-wide data set of phenological observations. The deliverables of this COST action were not only the common phenological database and common observation guidelines - COST725 also helped to trigger a revival of some old networks and to establish new ones, as for instance in Sweden. At the end of the COST action in 2009, the database comprised about 8 million records in total from 15 European countries plus the data from the International Phenological Gardens (IPG). In January 2010 PEP725 began its work as a follow-up project with funding from EUMETNET, the network of European meteorological services, and from ZAMG, the Austrian national meteorological service. PEP725 will not only maintain and update the COST725 database, but will also bring in phenological data from the time before 1951, develop better quality-checking procedures and ensure open access to the database. An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling monitoring of vegetation development.

  15. Flybrain neuron database: a comprehensive database system of the Drosophila brain neurons.

    PubMed

    Shinomiya, Kazunori; Matsuda, Keiji; Oishi, Takao; Otsuna, Hideo; Ito, Kei

    2011-04-01

    The long history of neuroscience has accumulated information about numerous types of neurons in the brain of various organisms. Because such neurons have been reported in diverse publications without controlled format, it is not easy to keep track of all the known neurons in a particular nervous system. To address this issue, we constructed an online database called Flybrain Neuron Database (Flybrain NDB), which serves as a platform to collect and provide information about all the types of neurons published so far in the brain of Drosophila melanogaster. Projection patterns of the identified neurons in diverse areas of the brain were recorded in a unified format, with text-based descriptions as well as images and movies wherever possible. In some cases projection sites and the distribution of the post- and presynaptic sites were determined with greater detail than described in the original publication. Information about the labeling patterns of various antibodies and expression driver strains to visualize identified neurons is provided as a separate sub-database. We also implemented a novel visualization tool with which users can interactively examine three-dimensional reconstruction of the confocal serial section images with desired viewing angles and cross sections. Comprehensive collection and versatile search function of the anatomical information reported in diverse publications make it possible to analyze possible connectivity between different brain regions. We analyzed the preferential connectivity among optic lobe layers and the plausible olfactory sensory map in the lateral horn to show the usefulness of such a database.

  16. A database of macromolecular motions.

    PubMed Central

    Gerstein, M; Krebs, W

    1998-01-01

    We describe a database of macromolecular motions meant to be of general use to the structural community. The database, which is accessible on the World Wide Web with an entry point at http://bioinfo.mbb.yale.edu/MolMovDB , attempts to systematize all instances of protein and nucleic acid movement for which there is at least some structural information. At present it contains >120 motions, most of which are of proteins. Protein motions are further classified hierarchically into a limited number of categories, first on the basis of size (distinguishing between fragment, domain and subunit motions) and then on the basis of packing. Our packing classification divides motions into various categories (shear, hinge, other) depending on whether or not they involve sliding over a continuously maintained and tightly packed interface. In addition, the database provides some indication about the evidence behind each motion (i.e. the type of experimental information or whether the motion is inferred based on structural similarity) and attempts to describe many aspects of a motion in terms of a standardized nomenclature (e.g. the maximum rotation, the residue selection of a fixed core, etc.). Currently, we use a standard relational design to implement the database. However, the complexity and heterogeneity of the information kept in the database makes it an ideal application for an object-relational approach, and we are moving it in this direction. Specifically, in terms of storing complex information, the database contains plausible representations for motion pathways, derived from restrained 3D interpolation between known endpoint conformations. These pathways can be viewed in a variety of movie formats, and the database is associated with a server that can automatically generate these movies from submitted coordinates. PMID:9722650

  17. Human genotype-phenotype databases: aims, challenges and opportunities.

    PubMed

    Brookes, Anthony J; Robinson, Peter N

    2015-12-01

    Genotype-phenotype databases provide information about genetic variation, its consequences and its mechanisms of action for research and health care purposes. Existing databases vary greatly in type, areas of focus and modes of operation. Despite ever larger and more intricate datasets--made possible by advances in DNA sequencing, omics methods and phenotyping technologies--steady progress is being made towards integrating these databases rather than using them as separate entities. The consequential shift in focus from single-gene variants towards large gene panels, exomes, whole genomes and myriad observable characteristics creates new challenges and opportunities in database design, interpretation of variant pathogenicity and modes of data representation and use. PMID:26553330

  18. Historical hydrology and database on flood events (Apulia, southern Italy)

    NASA Astrophysics Data System (ADS)

    Lonigro, Teresa; Basso, Alessia; Gentile, Francesco; Polemio, Maurizio

    2014-05-01

    Historical data about floods are an important tool for understanding hydrological processes, estimating hazard scenarios for Civil Protection purposes, and supporting rational land-use management, especially in karstic areas, where time series of river flows are not available and river drainage is rare. The research shows the importance of improving existing flood databases with a historical approach aimed at collecting past flood events, in order to better assess the occurrence trend of floods, in this case for the Apulia region (southern Italy). The main source of records of flood events for Apulia was the AVI database (the acronym stands for Italian damaged areas), an existing Italian database that collects data concerning damaging floods from 1918 to 1996. The database was expanded by consulting newspapers, publications, and technical reports from 1996 to 2006. In order to further expand the temporal range, data were collected by searching the archives of regional libraries. About 700 useful news items from 17 different local newspapers were found for the period 1876 to 1951. From a critical analysis of the 700 news items collected from 1876 to 1952, only 437 were useful for the implementation of the Apulia database. The screening of these news items showed the occurrence of about 122 flood events in the entire region. The district of Bari, the regional capital, is the area in which the greatest number of events occurred; the historical analysis confirms this area as flood-prone. There is an overlapping period (from 1918 to 1952) between the old AVI database and the new historical dataset obtained from newspapers. With regard to this period, the historical research has highlighted new flood events not reported in the existing AVI database and has also allowed more details to be added to the events already recorded. This study shows that the database is a dynamic instrument, which allows a continuous implementation of data, even in real time

  19. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative

    PubMed Central

    Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-01-01

    Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof of principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather it discusses the development and utilization of the database involved. Methods:  We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Out of the 70 patients that were tested, 55 had mutations while 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved

  20. Reference ballistic imaging database performance.

    PubMed

    De Kinder, Jan; Tulleners, Frederic; Thiebaut, Hugues

    2004-03-10

    Ballistic imaging databases allow law enforcement to link recovered cartridge cases to other crime scenes and to firearms. The success of these databases has led many to propose that all firearms in circulation be entered into a reference ballistic image database (RBID). To assess the performance of an RBID, we fired 4200 cartridge cases from 600 9mm Para Sig Sauer model P226 series pistols. Each pistol fired two Remington cartridges, one of which was imaged in the RBID, and five additional cartridges, consisting of Federal, Speer, Winchester, Wolf, and CCI brands. Randomly selected samples from the second series of Remington cartridge cases and from the five additional brands were then correlated against the RBID. Of the 32 cartridges of the same make correlated against the RBID, 72% ranked in the top 10 positions. Likewise, of the 160 cartridges of the five different brands correlated against the database, 21% ranked in the top 10 positions. Generally, the ranking position increased as the size of the RBID increased. We obtained similar results when we expanded the RBID to include firearms with the same class characteristics for breech face marks, firing pin impressions, and extractor marks. The results of our six queries against the RBID indicate that a reference ballistics image database of new guns is currently fraught with too many difficulties to be an effective and efficient law enforcement tool.

  1. REDIdb: the RNA editing database.

    PubMed

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

    The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed with the aim to allocate RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat-file containing a description of the main characteristics of the entry, a feature table with the editing events and related details and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities to either graphically display genomic or cDNA sequences or to show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be directly submitted to REDIdb after a user-specific registration to obtain authorized secure access. This first version of REDIdb database stores 9964 editing events and can be freely queried at http://biologia.unical.it/py_script/search.html.

  2. YCRD: Yeast Combinatorial Regulation Database

    PubMed Central

    Wu, Wei-Sheng; Hsieh, Yen-Chen; Lai, Fu-Jou

    2016-01-01

    In eukaryotes, the precise transcriptional control of gene expression is typically achieved through combinatorial regulation using cooperative transcription factors (TFs). Therefore, a database which provides regulatory associations between cooperative TFs and their target genes is helpful for biologists to study the molecular mechanisms of transcriptional regulation of gene expression. Because no such database existed in the public domain, this prompted us to construct one, called the Yeast Combinatorial Regulation Database (YCRD), which deposits 434,197 regulatory associations between 2535 cooperative TF pairs and 6243 genes. The comprehensive collection of more than 2500 cooperative TF pairs was retrieved from 17 existing algorithms in the literature. The target genes of a cooperative TF pair (e.g. TF1-TF2) are defined as the common target genes of TF1 and TF2, where a TF’s experimentally validated target genes were downloaded from the YEASTRACT database. In YCRD, users can (i) search the target genes of a cooperative TF pair of interest, (ii) search the cooperative TF pairs which regulate a gene of interest and (iii) identify important cooperative TF pairs which regulate a given set of genes. We believe that YCRD will be a valuable resource for yeast biologists to study combinatorial regulation of gene expression. YCRD is available at http://cosbi.ee.ncku.edu.tw/YCRD/ or http://cosbi2.ee.ncku.edu.tw/YCRD/. PMID:27392072
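
    The definition of a pair's target genes quoted above is simply a set intersection, as the small sketch below shows; the TF and gene names are placeholders.

```python
# Common target genes of a cooperative TF pair = intersection of the two target sets.
def pair_targets(targets_by_tf: dict, tf1: str, tf2: str) -> set:
    return targets_by_tf[tf1] & targets_by_tf[tf2]

targets_by_tf = {
    "TF1": {"GENE_A", "GENE_B", "GENE_C"},
    "TF2": {"GENE_B", "GENE_C", "GENE_D"},
}
print(pair_targets(targets_by_tf, "TF1", "TF2"))   # {'GENE_B', 'GENE_C'}
```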

  3. YCRD: Yeast Combinatorial Regulation Database.

    PubMed

    Wu, Wei-Sheng; Hsieh, Yen-Chen; Lai, Fu-Jou

    2016-01-01

    In eukaryotes, the precise transcriptional control of gene expression is typically achieved through combinatorial regulation using cooperative transcription factors (TFs). Therefore, a database which provides regulatory associations between cooperative TFs and their target genes is helpful for biologists to study the molecular mechanisms of transcriptional regulation of gene expression. Because no such database existed in the public domain, this prompted us to construct one, called the Yeast Combinatorial Regulation Database (YCRD), which deposits 434,197 regulatory associations between 2535 cooperative TF pairs and 6243 genes. The comprehensive collection of more than 2500 cooperative TF pairs was retrieved from 17 existing algorithms in the literature. The target genes of a cooperative TF pair (e.g. TF1-TF2) are defined as the common target genes of TF1 and TF2, where a TF's experimentally validated target genes were downloaded from the YEASTRACT database. In YCRD, users can (i) search the target genes of a cooperative TF pair of interest, (ii) search the cooperative TF pairs which regulate a gene of interest and (iii) identify important cooperative TF pairs which regulate a given set of genes. We believe that YCRD will be a valuable resource for yeast biologists to study combinatorial regulation of gene expression. YCRD is available at http://cosbi.ee.ncku.edu.tw/YCRD/ or http://cosbi2.ee.ncku.edu.tw/YCRD/. PMID:27392072

  4. Searching NCBI databases using Entrez.

    PubMed

    Gibney, Gretchen; Baxevanis, Andreas D

    2011-06-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two basic protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An alternate protocol builds upon the first basic protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The support protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.
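
    As an illustration of the text-based searches the protocols describe, here is a minimal sketch using Biopython's Entrez module; Biopython is our assumption here (the record describes the web interface rather than any particular client library), and the query term is arbitrary.

```python
# Minimal text-based Entrez query via Biopython (an illustrative assumption).
from Bio import Entrez

Entrez.email = "you@example.org"           # NCBI asks for a contact address
handle = Entrez.esearch(db="pubmed", term="macromolecular motions database", retmax=5)
record = Entrez.read(handle)
handle.close()
print(record["Count"], record["IdList"])   # number of hits and the first few PMIDs
```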

  5. Searching NCBI databases using Entrez.

    PubMed

    Baxevanis, Andreas D

    2008-12-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two Basic Protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An Alternate Protocol builds upon the first Basic Protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The Support Protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.

  6. Stratospheric emissions effects database development

    NASA Technical Reports Server (NTRS)

    Baughcum, Steven L.; Henderson, Stephen C.; Hertel, Peter S.; Maggiora, Debra R.; Oncina, Carlos A.

    1994-01-01

    This report describes the development of a stratospheric emissions effects database (SEED) of aircraft fuel burn and emissions from projected Year 2015 subsonic aircraft fleets and from projected fleets of high-speed civil transports (HSCT's). This report also describes the development of a similar database of emissions from Year 1990 scheduled commercial passenger airline and air cargo traffic. The objective of this work was to initiate, develop, and maintain an engineering database for use by atmospheric scientists conducting the Atmospheric Effects of Stratospheric Aircraft (AESA) modeling studies. Fuel burn and emissions of nitrogen oxides (NO(x) as NO2), carbon monoxide, and hydrocarbons (as CH4) have been calculated on a 1-degree latitude x 1-degree longitude x 1-kilometer altitude grid and delivered to NASA as electronic files. This report describes the assumptions and methodology for the calculations and summarizes the results of these calculations.

  7. National Residential Efficiency Measures Database

    DOE Data Explorer

    The National Residential Efficiency Measures Database is a publicly available, centralized resource of residential building retrofit measures and costs for the U.S. building industry. With support from the U.S. Department of Energy, NREL developed this tool to help users determine the most cost-effective retrofit measures for improving energy efficiency of existing homes. Software developers who require residential retrofit performance and cost data for applications that evaluate residential efficiency measures are the primary audience for this database. In addition, home performance contractors and manufacturers of residential materials and equipment may find this information useful. The database offers the following types of retrofit measures: 1) Appliances, 2) Domestic Hot Water, 3) Enclosure, 4) Heating, Ventilating, and Air Conditioning (HVAC), 5) Lighting, 6) Miscellaneous.

  8. DOE Global Energy Storage Database

    DOE Data Explorer

    The DOE International Energy Storage Database has more than 400 documented energy storage projects from 34 countries around the world. The database provides free, up-to-date information on grid-connected energy storage projects and relevant state and federal policies. More than 50 energy storage technologies are represented worldwide, including multiple battery technologies, compressed air energy storage, flywheels, gravel energy storage, hydrogen energy storage, pumped hydroelectric, superconducting magnetic energy storage, and thermal energy storage. The policy section of the database shows 18 federal and state policies addressing grid-connected energy storage, from rules and regulations to tariffs and other financial incentives. It is funded through DOE’s Sandia National Laboratories, and has been operating since January 2012.

  9. Stratospheric emissions effects database development

    SciTech Connect

    Baughcum, S.L.; Henderson, S.C.; Hertel, P.S.; Maggiora, D.R.; Oncina, C.A.

    1994-07-01

    This report describes the development of a stratospheric emissions effects database (SEED) of aircraft fuel burn and emissions from projected Year 2015 subsonic aircraft fleets and from projected fleets of high-speed civil transports (HSCT's). This report also describes the development of a similar database of emissions from Year 1990 scheduled commercial passenger airline and air cargo traffic. The objective of this work was to initiate, develop, and maintain an engineering database for use by atmospheric scientists conducting the Atmospheric Effects of Stratospheric Aircraft (AESA) modeling studies. Fuel burn and emissions of nitrogen oxides (NO(x) as NO2), carbon monoxide, and hydrocarbons (as CH4) have been calculated on a 1-degree latitude x 1-degree longitude x 1-kilometer altitude grid and delivered to NASA as electronic files. This report describes the assumptions and methodology for the calculations and summarizes the results of these calculations.

  10. The Giardia genome project database.

    PubMed

    McArthur, A G; Morrison, H G; Nixon, J E; Passamaneck, N Q; Kim, U; Hinkle, G; Crocker, M K; Holder, M E; Farr, R; Reich, C I; Olsen, G E; Aley, S B; Adam, R D; Gillin, F D; Sogin, M L

    2000-08-15

    The Giardia genome project database provides an online resource for Giardia lamblia (WB strain, clone C6) genome sequence information. The database includes edited single-pass reads, the results of BLASTX searches, and details of progress towards sequencing the entire 12 million-bp Giardia genome. Pre-sorted BLASTX results can be retrieved based on keyword searches and BLAST searches of the high throughput Giardia data can be initiated from the web site or through NCBI. Descriptions of the genomic DNA libraries, project protocols and summary statistics are also available. Although the Giardia genome project is ongoing, new sequences are made available on a bi-monthly basis to ensure that researchers have access to information that may assist them in the search for genes and their biological function. The current URL of the Giardia genome project database is www.mbl.edu/Giardia.

  11. Polymorphix: a sequence polymorphism database.

    PubMed

    Bazin, Eric; Duret, Laurent; Penel, Simon; Galtier, Nicolas

    2005-01-01

    Within-species sequence variation data are of special interest since they contain information about recent population/species history, and the molecular evolutionary forces currently in action in natural populations. These data, however, are presently dispersed within generalist databases, and are difficult to access. To solve this problem, we have developed Polymorphix, a database dedicated to sequence polymorphism. It contains within-species homologous sequence families built using EMBL/GenBank under suitable similarity and bibliographic criteria. Polymorphix is an ACNUC structured database allowing both simple and complex queries for population genomic studies. Alignments within families as well as phylogenetic trees can be downloaded. When available, outgroups are included in the alignment. Polymorphix contains sequences from the nuclear, mitochondrial and chloroplastic genomes of every eukaryote species represented in EMBL. It can be accessed by a web interface (http://pbil.univ-lyon1.fr/polymorphix/query.php).

  12. The new IAGOS Database Portal

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Fontaine, Alain

    2016-04-01

    IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims at the provision of long-term, regular and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data. The IAGOS Database Portal (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). The new IAGOS Database Portal was released in December 2015. The main improvement is the implementation of interoperability with international portals and other databases in order to improve IAGOS data discovery. In the framework of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been set up. It is composed of three data centers: the IAGOS database in Toulouse; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the CAMS data center in Jülich (http://join.iek.fz-juelich.de). The CAMS (Copernicus Atmospheric Monitoring Service) project is a prominent user of the IGAS data network. The new portal provides improved and new services such as downloads in NetCDF or NASA Ames formats, plotting tools (maps, time series, vertical profiles, etc.) and user management. Added-value products are available on the portal: back trajectories, origin of air masses, co-location with satellite data, etc. The link with the CAMS data center, through JOIN (Jülich OWS Interface), allows model outputs to be combined with IAGOS data for inter-comparison. Finally, IAGOS metadata has been standardized (ISO 19115) and now provides complete information about data traceability and quality.
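    Since the portal offers downloads in NetCDF format, a downloaded file can be inspected with standard NetCDF tooling. The short sketch below is a generic illustration using the third-party netCDF4 package; the file name is a placeholder, and the variable names depend on the IAGOS product actually downloaded.

```python
# Generic inspection of a NetCDF file such as an IAGOS download
# (file name is hypothetical; requires the third-party netCDF4 package).

from netCDF4 import Dataset

def summarize(path):
    """Print the dimensions and variables of a NetCDF file."""
    with Dataset(path) as nc:
        print("dimensions:", {name: len(dim) for name, dim in nc.dimensions.items()})
        for name, var in nc.variables.items():
            print(f"{name}: shape={var.shape}, units={getattr(var, 'units', 'n/a')}")

if __name__ == "__main__":
    summarize("iagos_core_flight.nc")  # placeholder for a downloaded file
```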

  13. EPA U.S. NATIONAL MARKAL DATABASE: DATABASE DOCUMENTATION

    EPA Science Inventory

    This document describes in detail the U.S. Energy System database developed by EPA's Integrated Strategic Assessment Work Group for use with the MARKAL model. The group is part of the Office of Research and Development and is located in the National Risk Management Research Labor...

  14. A Sandia telephone database system

    SciTech Connect

    Nelson, S.D.; Tolendino, L.F.

    1991-08-01

    Sandia National Laboratories, Albuquerque, may soon have more responsibility for the operation of its own telephone system. The processes that constitute providing telephone service can all be improved through the use of a central data information system. We studied these processes, determined the requirements for a database system, then designed the first stages of a system that meets our needs for work order handling, trouble reporting, and ISDN hardware assignments. The design was based on an extensive set of applications that have been used for five years to manage the Sandia secure data network. The system utilizes an Ingres database management system and is programmed using the Application-By-Forms tools.

  15. A comparison of biomedical databases.

    PubMed Central

    Mychko-Megrin, A Y

    1991-01-01

    Various published bibliographic and abstract services covering the period 1970-1988 were compared to analyze scope and coverage. A total of 7,281 articles and book titles (1,655 Soviet and 5,626 foreign) were selected on forty-one topics in different medical fields. The titles originated from three different samples but included all Soviet medical literature on the subjects. A distribution of biomedical serials from five databases is given by country, and twelve indices to assess the quality of biomedical databases are suggested. PMID:1884085

  16. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Masuyama, Keiichi

    CD-ROM has rapidly evolved as a new information medium with large capacity. In the U.S. it is predicted that it will become a two-hundred-billion-yen market within three years, and thus CD-ROM is a strategic target of the database industry. Here in Japan, the movement toward its commercialization has been active since this year. Will the CD-ROM business ever conquer the information market as an on-disk database or electronic publication? Referring to some application cases in the U.S., the author reviews the marketability and future trends of this new optical disk medium.

  17. Coal quality databases: Practical applications

    SciTech Connect

    Finkelman, R.B.; Gross, P.M.K.

    1999-07-01

    Domestic and worldwide coal use will be influenced by concerns about the effects of coal combustion on the local, regional and global environment. Reliable coal quality data can help decision-makers to better assess risks and determine impacts of coal constituents on technological behavior, economic byproduct recovery, and environmental and human health issues. The US Geological Survey (USGS) maintains an existing coal quality database (COALQUAL) that contains analyses of approximately 14,000 coal samples from every major coal-producing basin in the US. For each sample, the database contains results of proximate and ultimate analyses; sulfur form data; and major, minor, and trace element concentrations for approximately 70 elements.

  18. Online Petroleum Industry Bibliographic Databases: A Review.

    ERIC Educational Resources Information Center

    Anderson, Margaret B.

    This paper discusses the present status of the bibliographic database industry, reviews the development of online databases of interest to the petroleum industry, and considers future developments in online searching and their effect on libraries and information centers. Three groups of databases are described: (1) databases developed by the…

  19. Federal Register Document Image Database, Volume 1

    National Institute of Standards and Technology Data Gateway

    NIST Federal Register Document Image Database, Volume 1 (PC database for purchase)   NIST has produced a new document image database for evaluating document analysis and recognition technologies and information retrieval systems. NIST Special Database 25 contains page images from the 1994 Federal Register and much more.

  20. European Community Databases: Online to Europe.

    ERIC Educational Resources Information Center

    Hensley, Colin

    1989-01-01

    Describes three groups of databases sponsored by the European Communities Commission: Eurobases, a textual database of the contents of the "Official Journal" of the European Community; the European Community Host Organization (ECHO) databases, which offer multilingual information about Europe; and statistical databases. Information on access and…

  1. WMC Database Evaluation. Case Study Report

    SciTech Connect

    Palounek, Andrea P. T

    2015-10-29

    The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.

  2. NLTE4 Plasma Population Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 159 NLTE4 Plasma Population Kinetics Database (Web database for purchase)   This database contains benchmark results for simulation of plasma population kinetics and emission spectra. The data were contributed by the participants of the 4th Non-LTE Code Comparison Workshop who have unrestricted access to the database. The only limitation for other users is in hidden labeling of the output results. Guest users can proceed to the database entry page without entering userid and password.

  3. KID, a Kinase Inhibitor Database project.

    PubMed

    Collin, O; Meijer, L

    1999-01-01

    The Kinase Inhibitor Database is a small specialized database dedicated to the gathering of information on protein kinase inhibitors. The database is accessible through the World Wide Web system and gives access to structural and bibliographic information on protein kinase inhibitors. The data in the database will be collected and submitted by researchers working in the kinase inhibitor field. The submitted data will be checked by the curator of the database before entry.

  4. The NASA Fireball Network Database

    NASA Technical Reports Server (NTRS)

    Moser, Danielle E.

    2011-01-01

    The NASA Meteoroid Environment Office (MEO) has been operating an automated video fireball network since late-2008. Since that time, over 1,700 multi-station fireballs have been observed. A database containing orbital data and trajectory information on all these events has recently been compiled and is currently being mined for information. Preliminary results are presented here.

  5. LDEF meteoroid and debris database

    NASA Astrophysics Data System (ADS)

    Dardano, C. B.; See, Thomas H.; Zolensky, Michael E.

    The Long Duration Exposure Facility (LDEF) Meteoroid and Debris Special Investigation Group (M&D SIG) database is maintained at the Johnson Space Center (JSC), Houston, Texas, and consists of five data tables containing information about individual features, digitized images of selected features, and LDEF hardware (i.e., approximately 950 samples) archived at JSC. About 4000 penetrations (greater than 300 micron in diameter) and craters (greater than 500 micron in diameter) were identified and photo-documented during the disassembly of LDEF at the Kennedy Space Center (KSC), while an additional 4500 or so have subsequently been characterized at JSC. The database also contains some data that have been submitted by various PI's, yet the amount of such data is extremely limited in its extent, and investigators are encouraged to submit any and all M&D-type data to JSC for inclusion within the M&D database. Digitized stereo-image pairs are available for approximately 4500 features through the database.

  6. Pathway Interaction Database (PID) —

    Cancer.gov

    The National Cancer Institute (NCI) in collaboration with Nature Publishing Group has established the Pathway Interaction Database (PID) in order to provide a highly structured, curated collection of information about known biomolecular interactions and key cellular processes assembled into signaling pathways.

  7. Databases in the United Kingdom.

    ERIC Educational Resources Information Center

    Chadwyck-Healey, Charles

    This overview of the status of online databases in the United Kingdom describes online users' attitudes and practices in light of two surveys conducted in the past two years. The Online Information Centre at ASLIB sampled 325 users, and Chadwyck-Healey, Ltd., conducted a face-to-face survey of librarians in a broad cross-section of 76 libraries.…

  8. Database Transformations for Biological Applications

    SciTech Connect

    Overton, C.; Davidson, S. B.; Buneman, P.; Tannen, V.

    2001-04-11

    The goal of this project was to develop tools to facilitate data transformations between heterogeneous data sources found throughout biomedical applications. Such transformations are necessary when sharing data between different groups working on related problems as well as when querying data spread over different databases, files and software analysis packages.

  9. Interactive bibliographical database on color

    NASA Astrophysics Data System (ADS)

    Caivano, Jose L.

    2002-06-01

    The paper describes the methodology and results of a project under development, aimed at the elaboration of an interactive bibliographical database on color in all fields of application: philosophy, psychology, semiotics, education, anthropology, physical and natural sciences, biology, medicine, technology, industry, architecture and design, arts, linguistics, geography, history. The project is initially based upon an already developed bibliography, published in different journals, updated on various occasions, and now available on the Internet, with more than 2,000 entries. The interactive database will expand that bibliography, incorporating hyperlinks and contents (indexes, abstracts, keywords, introductions, or eventually the complete document), and devising mechanisms for information retrieval. The sources to be included are: books, doctoral dissertations, multimedia publications, reference works. The main arrangement will be chronological, but the design of the database will allow rearrangements or selections by different fields: subject, Decimal Classification System, author, language, country, publisher, etc. A further project is to develop another database, including color-specialized journals or newsletters, and articles on color published in international journals, arranged in this case by journal name and date of publication, but allowing also rearrangements or selections by author, subject and keywords.

  10. The Visual Double Star Database

    NASA Astrophysics Data System (ADS)

    Worley, Charles E.

    1997-03-01

    The collection of visual double star data in a systematic way began more than a century ago and has been continued regularly since that time. Thus, it forms the oldest extant database in astronomy. This contribution briefly reviews the history and highlights of the project, describes the current status and future prospects for this endeavor and comments on the current modes of data distribution.

  11. Begin: Online Database Searching Now!

    ERIC Educational Resources Information Center

    Lodish, Erica K.

    1986-01-01

    Because of the increasing importance of online databases, school library media specialists are encouraged to introduce students to online searching. Four books that would help media specialists gain a basic background are reviewed and it is noted that although they are very technical, they can be adapted to individual needs. (EM)

  12. Using Databases in History Teaching.

    ERIC Educational Resources Information Center

    Knight, P.; Timmins, G.

    1986-01-01

    Discusses advantages and limitations of database software in meeting the educational objectives of history instruction; reviews five currently available computer programs (FACTFILE, QUEST, QUARRY BANK 1851, Census Analysis, and Beta Base); highlights major considerations that arise in designing such programs; and describes their classroom use.…

  13. Berlin Emissivity Database (BED) Archive

    NASA Astrophysics Data System (ADS)

    D'Amore, M.; Helbert, J.; Maturilli, A.

    2009-03-01

    The Berlin Emissivity Database covers the spectral range from 3 to 50 µm. BED comprises several minerals at different grain sizes, measured up to high temperatures, and has a modular structure designed to accommodate future Raman measurements, sample pictures, thin-section images, and so on.

  14. The New NRL Crystallographic Database

    NASA Astrophysics Data System (ADS)

    Mehl, Michael; Curtarolo, Stefano; Hicks, David; Toher, Cormac; Levy, Ohad; Hart, Gus

    For many years the Naval Research Laboratory maintained an online graphical database of crystal structures for a wide variety of materials. This database has now been redesigned, updated and integrated with the AFLOW framework for high throughput computational materials discovery (http://materials.duke.edu/aflow.html). For each structure we provide an image showing the atomic positions; the primitive vectors of the lattice and the basis vectors of every atom in the unit cell; the space group and Wyckoff positions; Pearson symbols; common names; and Strukturbericht designations, where available. References for each structure are provided, as well as a Crystallographic Information File (CIF). The database currently includes almost 300 entries and will be continuously updated and expanded. It enables easy search of the various structures based on their underlying symmetries, either by Bravais lattice, Pearson symbol, Strukturbericht designation or commonly used prototypes. The talk will describe the features of the database, and highlight its utility for high throughput computational materials design. Work at NRL is funded by a Contract with the Duke University Department of Mechanical Engineering.

  15. Safeguarding Databases Basic Concepts Revisited.

    ERIC Educational Resources Information Center

    Cardinali, Richard

    1995-01-01

    Discusses issues of database security and integrity, including computer crime and vandalism, human error, computer viruses, employee and user access, and personnel policies. Suggests some precautions to minimize system vulnerability such as careful personnel screening, audit systems, passwords, and building and software security systems. (JKP)

  16. DED: Database of Evolutionary Distances.

    PubMed

    Veeramachaneni, Vamsi; Makalowski, Wojciech

    2005-01-01

    A large database of homologous sequence alignments with good estimates of evolutionary distances can be a valuable resource for molecular evolutionary studies and phylogenetic research in particular. We recently created a database containing 159,921 transcripts from human, mouse, rat, zebrafish and fugu species. Approximately 1,000 homology groups were identified with the help of Ensembl homology evidence. At the macro-level, the database allows us to answer queries of the form: 1. What is the average k-distance between 5' untranslated regions of human and mouse? 2. List the 10 groups with the highest K(a)/K(s) ratio between mouse and rat. 3. List all identical proteins between human and rat. Researchers interested in specific proteins can use a simple web interface to retrieve the homology groups of interest, examine all pairwise distances between members of the group and study the conservation of exon-intron gene structures using a graphical interface. The database is available at http://warta.bio.psu.edu/DED/.
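    The second macro-level query quoted above (the 10 groups with the highest Ka/Ks ratio between mouse and rat) amounts to a sort over per-group distance estimates. The sketch below shows that operation over an invented in-memory structure; it does not reflect DED's actual schema.

```python
# Illustration of a "top Ka/Ks" query over invented homology-group records
# (the real DED schema is not documented here).

def top_ka_ks(records, species_pair, n=10):
    """Return the n homology groups with the highest Ka/Ks for a species pair."""
    subset = [r for r in records if r["pair"] == species_pair and r["ks"] > 0]
    return sorted(subset, key=lambda r: r["ka"] / r["ks"], reverse=True)[:n]

if __name__ == "__main__":
    toy = [
        {"group": "G1", "pair": ("mouse", "rat"), "ka": 0.02, "ks": 0.10},
        {"group": "G2", "pair": ("mouse", "rat"), "ka": 0.08, "ks": 0.09},
    ]
    for r in top_ka_ks(toy, ("mouse", "rat")):
        print(r["group"], round(r["ka"] / r["ks"], 2))
```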

  17. Statistical Profile of Currently Available CD-ROM Database Products.

    ERIC Educational Resources Information Center

    Nicholls, Paul Travis

    1988-01-01

    Survey of currently available CD-ROM products discusses: (1) subject orientation; (2) database type; (3) update frequency; (4) price structure; (5) hardware configuration; (6) retrieval software; and (7) publisher/marketer. Several graphs depict data in these areas. (five references) (MES)

  18. View discovery in OLAP databases through statistical combinatorial optimization

    SciTech Connect

    Hengartner, Nick W; Burke, John; Critchlow, Terence; Joslyn, Cliff; Hogan, Emilie

    2009-01-01

    OnLine Analytical Processing (OLAP) is a relational database technology providing users with rapid access to summary, aggregated views of a single large database, and is widely recognized for knowledge representation and discovery in high-dimensional relational databases. OLAP technologies provide intuitive and graphical access to the massively complex set of possible summary views available in large relational (SQL) structured data repositories. The capability of OLAP database software systems to handle data complexity comes at a high price for analysts, presenting them with a combinatorially vast space of views of a relational database. We respond to the need to deploy technologies sufficient to allow users to guide themselves to areas of local structure by casting the space of 'views' of an OLAP database as a combinatorial object of all projections and subsets, and 'view discovery' as a search process over that lattice. We equip the view lattice with statistical information theoretical measures sufficient to support a combinatorial optimization process. We outline 'hop-chaining' as a particular view discovery algorithm over this object, wherein users are guided across a permutation of the dimensions by searching for successive two-dimensional views, pushing seen dimensions into an increasingly large background filter in a 'spiraling' search process. We illustrate this work in the context of data cubes recording summary statistics for radiation portal monitors at US ports.
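    The core idea of scoring candidate two-dimensional views with an information-theoretic measure can be illustrated compactly. The sketch below ranks every pair of columns of a small flat table by the Shannon entropy of their joint distribution; it is only an illustration of view scoring, not the authors' hop-chaining implementation, and the example rows are invented.

```python
# Rank all 2-D views (pairs of dimensions) of a flat table by the Shannon
# entropy of their joint distribution, a simple stand-in for the
# information-theoretic view scores described above. Toy data only.

import math
from collections import Counter
from itertools import combinations

def view_entropy(rows, dim_a, dim_b):
    """Entropy (bits) of the joint distribution over two dimensions."""
    counts = Counter((r[dim_a], r[dim_b]) for r in rows)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def rank_two_dim_views(rows, dims):
    """Return (dimension pair, entropy) tuples, most informative first."""
    scores = {(a, b): view_entropy(rows, a, b) for a, b in combinations(dims, 2)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    rows = [
        {"port": "A", "monitor": "M1", "alarm": "yes"},
        {"port": "A", "monitor": "M2", "alarm": "no"},
        {"port": "B", "monitor": "M1", "alarm": "no"},
    ]
    for view, h in rank_two_dim_views(rows, ["port", "monitor", "alarm"]):
        print(view, round(h, 3))
```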

  19. FLOPROS: an evolving global database of flood protection standards

    NASA Astrophysics Data System (ADS)

    Scussolini, Paolo; Aerts, Jeroen C. J. H.; Jongman, Brenden; Bouwer, Laurens M.; Winsemius, Hessel C.; de Moel, Hans; Ward, Philip J.

    2016-05-01

    With projected changes in climate, population and socioeconomic activity located in flood-prone areas, the global assessment of flood risk is essential to inform climate change policy and disaster risk management. Whilst global flood risk models exist for this purpose, the accuracy of their results is greatly limited by the lack of information on the current standard of protection to floods, with studies either neglecting this aspect or resorting to crude assumptions. Here we present a first global database of FLOod PROtection Standards, FLOPROS, which comprises information in the form of the flood return period associated with protection measures, at different spatial scales. FLOPROS comprises three layers of information, and combines them into one consistent database. The design layer contains empirical information about the actual standard of existing protection already in place; the policy layer contains information on protection standards from policy regulations; and the model layer uses a validated modelling approach to calculate protection standards. The policy layer and the model layer can be considered adequate proxies for actual protection standards included in the design layer, and serve to increase the spatial coverage of the database. Based on this first version of FLOPROS, we suggest a number of strategies to further extend and increase the resolution of the database. Moreover, as the database is intended to be continually updated, while flood protection standards are changing with new interventions, FLOPROS requires input from the flood risk community. We therefore invite researchers and practitioners to contribute information to this evolving database by corresponding to the authors.
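    One way to picture how the three layers can be merged into a single protection standard per spatial unit is a simple precedence rule: prefer empirical design information, fall back to policy, then to modelled values. The precedence and record layout below are assumptions made for illustration; the paper defines the actual combination rules.

```python
# Hedged sketch: merge three protection-standard layers per spatial unit
# with an assumed precedence of design > policy > model (illustrative only).

LAYER_PRIORITY = ("design", "policy", "model")

def merged_protection(layers, unit_id):
    """Return (return period in years, source layer) for a unit, or None."""
    for layer in LAYER_PRIORITY:
        value = layers.get(layer, {}).get(unit_id)
        if value is not None:
            return value, layer
    return None

if __name__ == "__main__":
    layers = {
        "design": {"NLD-08": 1250},
        "policy": {"NLD-08": 1000, "USA-12": 100},
        "model": {"USA-12": 75, "KHM-03": 10},
    }
    for unit in ("NLD-08", "USA-12", "KHM-03"):
        print(unit, merged_protection(layers, unit))
```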

  20. Open access intrapartum CTG database

    PubMed Central

    2014-01-01

    Background Cardiotocography (CTG) is a monitoring of fetal heart rate and uterine contractions. Since 1960 it has been routinely used by obstetricians to assess fetal well-being. Many attempts to introduce methods of automatic signal processing and evaluation have appeared during the last 20 years; however, no significant progress comparable to that in the domain of adult heart rate variability, where open access databases are available (e.g. MIT-BIH), is yet visible. Based on a thorough review of the relevant publications, presented in this paper, the shortcomings of the current state are obvious. A lack of common ground for clinicians and technicians in the field hinders clinically usable progress. Our open access database of digital intrapartum cardiotocographic recordings aims to change that. Description The intrapartum CTG database consists of 552 intrapartum recordings in total, which were acquired between April 2010 and August 2012 at the obstetrics ward of the University Hospital in Brno, Czech Republic. All recordings were stored in electronic form in the OB TraceVue® system. The recordings were selected from 9164 intrapartum recordings with clinical as well as technical considerations in mind. All recordings are at most 90 minutes long and start a maximum of 90 minutes before delivery. The time relation of CTG to delivery is known, as is the length of the second stage of labor, which does not exceed 30 minutes. The majority of recordings (all but 46 cesarean sections) are, on purpose, from vaginal deliveries. All recordings have available biochemical markers as well as some more general clinical features. A full description of the database and the reasoning behind selection of the parameters is presented in the paper. Conclusion A new open-access CTG database is introduced which should give the research community common ground for comparison of results on a reasonably large database. We anticipate that after reading the paper, the reader will understand the

  1. Mars global digital dune database: MC-30

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2012-01-01

    The Mars Global Digital Dune Database (MGD3) provides data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey Open-File Reports. The first report (Hayward and others, 2007) included dune fields from lat 65° N. to 65° S. (http://pubs.usgs.gov/of/2007/1158/). The second report (Hayward and others, 2010) included dune fields from lat 60° N. to 90° N. (http://pubs.usgs.gov/of/2010/1170/). This report encompasses ~75,000 km2 of mapped dune fields from lat 60° to 90° S. The dune fields included in this global database were initially located using Mars Odyssey Thermal Emission Imaging System (THEMIS) Infrared (IR) images. In the previous two reports, some dune fields may have been unintentionally excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or (2) resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. In this report, mapping is more complete. The Arizona State University THEMIS daytime IR mosaic provided complete IR coverage, and it is unlikely that we missed any large dune fields in the South Pole (SP) region. In addition, the increased availability of higher resolution images resulted in the inclusion of more small (~1 km2) sand dune fields and sand patches. To maintain consistency with the previous releases, we have identified the sand features that would not have been included in earlier releases. While the moderate to large dune fields in MGD3 are likely to constitute the largest compilation of sediment on the planet, we acknowledge that our database excludes numerous small dune fields and some moderate to large dune fields as well. Please note that the absence of mapped dune fields does not mean that dune fields do not exist and is not intended to imply a lack of saltating sand in other areas

  2. Automatic pattern localization across layout database and photolithography mask

    NASA Astrophysics Data System (ADS)

    Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter

    2016-03-01

    Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be spread over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition, it is mandatory to create the complete metrology job quickly and reliably. Combining, on the one hand, software expertise in mask database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the overall performance.

  3. Coal database for Cook Inlet and North Slope, Alaska

    USGS Publications Warehouse

    Stricker, Gary D.; Spear, Brianne D.; Sprowl, Jennifer M.; Dietrich, John D.; McCauley, Michael I.; Kinney, Scott A.

    2011-01-01

    This database is a compilation of published and nonconfidential unpublished coal data from Alaska. Although coal occurs in isolated areas throughout Alaska, this study includes data only from the Cook Inlet and North Slope areas. The data include entries from and interpretations of oil and gas well logs, coal-core geophysical logs (such as density, gamma, and resistivity), seismic shot hole lithology descriptions, measured coal sections, and isolated coal outcrops.

  4. Contaminated sediments database for Long Island Sound and the New York Bight

    USGS Publications Warehouse

    Mecray, Ellen L.; Reid, Jamey M.; Hastings, Mary E.; Buchholtz ten Brink, Marilyn R.

    2003-01-01

    The Contaminated Sediments Database for Long Island Sound and the New York Bight provides a compilation of published and unpublished sediment texture and contaminant data. This report provides maps of several of the contaminants in the database as well as references and a section on using the data to assess the environmental status of these coastal areas. The database contains information collected between 1956 and 1997, providing a historical foundation for future contaminant studies in the region.

  5. Databases in geohazard science: An introduction

    NASA Astrophysics Data System (ADS)

    Klose, Martin; Damm, Bodo; Highland, Lynn M.

    2015-11-01

    The key to understanding hazards is to track, record, and analyse them. Geohazard databases play a critical role in each of these steps. As systematically compiled data archives of past and current hazard events, they generally fall into two categories (Tschoegl et al., 2006; UN-BCPR, 2013): (i) natural disaster databases that cover all types of hazards, most often at a continental or global scale (ADCR, 2015; CRED, 2015; Munich Re, 2015), and (ii) type-specific databases for a certain type of hazard, for example, earthquakes (Schulte and Mooney, 2005; Daniell et al., 2011), tsunami (NGDC/WDC, 2015), or volcanic eruptions (Witham, 2005; Geyer and Martí, 2008). With landslides being among the world's most frequent hazard types (Brabb, 1991; Nadim et al., 2006; Alcántara-Ayala, 2014), symbolizing the complexity of Earth system processes (Korup, 2012), the development of landslide inventories has occupied centre stage for many years, especially in applied geomorphology (Alexander, 1991; Oya, 2001). As regards the main types of landslide inventories, a distinction is made between event-based and historical inventories (Hervás and Bobrowsky, 2009; Hervás, 2013). Inventories providing data on landslides caused by a single triggering event, for instance, an earthquake, a rainstorm, or a rapid snowmelt, are essential for exploring root causes in terms of direct system responses or cascades of hazards (Malamud et al., 2004; Mondini et al., 2014). Alternatively, historical inventories, which are more common than their counterparts, constitute a pool of data on landslides that occurred in a specific area at local, regional, national, or even global scale over time (Dikau et al., 1996; Guzzetti et al., 2012; Wood et al., 2015).

  6. Mars Global Digital Dune Database; MC-1

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    The Mars Global Digital Dune Database presents data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey (USGS) Open-File Reports. The first release (Hayward and others, 2007) included dune fields from 65 degrees N to 65 degrees S (http://pubs.usgs.gov/of/2007/1158/). The current release encompasses ~ 845,000 km2 of mapped dune fields from 65 degrees N to 90 degrees N latitude. Dune fields between 65 degrees S and 90 degrees S will be released in a future USGS Open-File Report. Although we have attempted to include all dune fields, some have likely been excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or (2) resolution of THEMIS IR coverage (100m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~ 1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of sediment of dunes are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore, the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS), Mars Orbiter Camera narrow angle (MOC NA), or Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images allowed, we classified dunes and included some dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. It was beyond the scope of this report to look at the detail needed to discern subtle dune modification. It was also

  7. Antarctic Tephra Database (AntT)

    NASA Astrophysics Data System (ADS)

    Kurbatov, A.; Dunbar, N. W.; Iverson, N. A.; Gerbi, C. C.; Yates, M. G.; Kalteyer, D.; McIntosh, W. C.

    2014-12-01

    Modern paleoclimate research is heavily dependent on establishing accurate timing related to rapid shifts in Earth's climate system. The ability to correlate these events at local and, ideally, intercontinental scales allows assessment, for example, of phasing or changes in atmospheric circulation. Tephra-producing volcanic eruptions are geologically instantaneous events that are largely independent of climate. We have developed a tephrochronological framework for paleoclimate research in Antarctica in a user-friendly, freely accessible online Antarctic tephra (AntT) database (http://cci.um.maine.edu/AntT/). Information about volcanic events, including physical and geochemical characteristics of volcanic products collected from multiple data sources, is integrated into the AntT database. The AntT project establishes a new centralized data repository for Antarctic tephrochronology, which is needed for precise correlation of records between Antarctic ice cores (e.g. WAIS Divide, RICE, Talos Dome, ITASE) and global paleoclimate archives. AntT will help climatologists, paleoclimatologists, atmospheric chemists, geochemists, and climate modelers synchronize paleoclimate archives using volcanic products, establishing the timing of climate events in different geographic areas, climate-forcing mechanisms, and natural threshold levels in the climate system. All these disciplines will benefit from accurate reconstructions of the temporal and spatial distribution of past rapid climate change events in continental, atmospheric, marine and polar realms. Research is funded by NSF grants: ANT-1142007 and 1142069.

  8. The Condensate Database for Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Gallaher, D. W.; Lv, Q.; Grant, G.; Campbell, G. G.; Liu, Q.

    2014-12-01

    Although massive amounts of cryospheric data have been and are being generated at an unprecedented rate, a vast majority of the otherwise valuable data have been "sitting in the dark", with very limited quality assurance or runtime access for higher-level data analytics such as anomaly detection. This has significantly hindered data-driven scientific discovery and advances in the polar research and Earth sciences community. In an effort to solve this problem we have investigated and developed innovative techniques for the construction of a "condensate database", which is much smaller than the original data yet still captures the key characteristics (e.g., spatio-temporal norm and changes). In addition we are taking advantage of parallel databases that make use of low-cost GPU processors. As a result, efficient anomaly detection and quality assurance can be achieved with in-memory data analysis or limited I/O requests. The challenges lie in the fact that cryospheric data are massive and diverse, with normal/abnormal patterns spanning a wide range of spatial and temporal scales. This project consists of investigations in three main areas: (1) adaptive neighborhood-based thresholding in both space and time; (2) compressive-domain pattern detection and change analysis; and (3) hybrid and adaptive condensation of multi-modal, multi-scale cryospheric data.
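    The neighborhood-based thresholding mentioned in item (1) can be sketched with a few lines of array code: a grid cell is flagged when it departs from its local space-time neighborhood by more than a chosen number of standard deviations. The window size, threshold, and data below are illustrative choices, not the project's actual parameters or code.

```python
# Schematic neighborhood-based thresholding on a (time, y, x) data cube:
# flag cells more than n_sigma local standard deviations from the local mean.

import numpy as np

def flag_anomalies(cube, window=1, n_sigma=4.0):
    """Return a boolean mask of anomalous cells in a 3-D (time, y, x) array."""
    mask = np.zeros(cube.shape, dtype=bool)
    t_n, y_n, x_n = cube.shape
    for t in range(t_n):
        for y in range(y_n):
            for x in range(x_n):
                block = cube[max(t - window, 0):t + window + 1,
                             max(y - window, 0):y + window + 1,
                             max(x - window, 0):x + window + 1]
                mu, sigma = block.mean(), block.std()
                if sigma > 0 and abs(cube[t, y, x] - mu) > n_sigma * sigma:
                    mask[t, y, x] = True
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.normal(size=(4, 8, 8))
    cube[2, 4, 4] = 15.0  # implant an obvious outlier
    print(np.argwhere(flag_anomalies(cube)))
```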

  9. An event database for rotational seismology

    NASA Astrophysics Data System (ADS)

    Salvermoser, Johannes; Hadziioannou, Celine; Hable, Sarah; Chow, Bryant; Krischer, Lion; Wassermann, Joachim; Igel, Heiner

    2016-04-01

    The ring laser sensor (G-ring) located at Wettzell, Germany, routinely observes earthquake-induced rotational ground motions around a vertical axis since its installation in 2003. Here we present results from a recently installed event database which is the first that will provide ring laser event data in an open access format. Based on the GCMT event catalogue and some search criteria, seismograms from the ring laser and the collocated broadband seismometer are extracted and processed. The ObsPy-based processing scheme generates plots showing waveform fits between rotation rate and transverse acceleration and extracts characteristic wavefield parameters such as peak ground motions, noise levels, Love wave phase velocities and waveform coherence. For each event, these parameters are stored in a text file (json dictionary) which is easily readable and accessible on the website. The database contains >10000 events starting in 2007 (Mw>4.5). It is updated daily and therefore provides recent events at a time lag of max. 24 hours. The user interface allows to filter events for epoch, magnitude, and source area, whereupon the events are displayed on a zoomable world map. We investigate how well the rotational motions are compatible with the expectations from the surface wave magnitude scale. In addition, the website offers some python source code examples for downloading and processing the openly accessible waveforms.
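    Because the extracted wavefield parameters are stored as one JSON dictionary per event, they are straightforward to post-process. The sketch below loads such files from a local directory and filters them by magnitude; the directory name and field names ("magnitude", "peak_rotation_rate", "origin_time") are hypothetical, since the actual keys are defined by the database's processing scheme.

```python
# Hedged sketch: load per-event JSON parameter files and filter by magnitude.
# Directory and key names are hypothetical placeholders.

import json
from pathlib import Path

def load_events(directory):
    """Load every per-event JSON dictionary found in a directory."""
    return [json.loads(p.read_text()) for p in Path(directory).glob("*.json")]

def select_events(events, min_magnitude=6.0):
    """Keep events at or above a magnitude threshold."""
    return [e for e in events if e.get("magnitude", 0.0) >= min_magnitude]

if __name__ == "__main__":
    events = load_events("ring_laser_events")
    for e in select_events(events, min_magnitude=7.0):
        print(e.get("origin_time"), e.get("magnitude"), e.get("peak_rotation_rate"))
```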

  10. Using XML technology for the ontology-based semantic integration of life science databases.

    PubMed

    Philippi, Stephan; Köhler, Jacob

    2004-06-01

    Several hundred internet accessible life science databases with constantly growing contents and varying areas of specialization are publicly available via the internet. Database integration, consequently, is a fundamental prerequisite to be able to answer complex biological questions. Due to the presence of syntactic, schematic, and semantic heterogeneities, large scale database integration at present takes considerable effort. As there is a growing acceptance of extensible markup language (XML) as a means for data exchange in the life sciences, this article focuses on the impact of XML technology on database integration in this area. In detail, a general architecture for ontology-driven data integration based on XML technology is introduced, which overcomes some of the traditional problems in this area. As a proof of concept, a prototypical implementation of this architecture based on a native XML database and an expert system shell is described for the realization of a real world integration scenario.
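    The kind of schematic transformation such an XML-based integration layer performs can be shown with the standard library alone: records from one source schema are mapped onto a shared target schema. The element and attribute names below are invented for illustration; real life-science sources each define their own formats.

```python
# Toy schematic transformation with xml.etree.ElementTree: map records from
# an invented source schema onto an invented unified schema.

import xml.etree.ElementTree as ET

SOURCE = """\
<proteins>
  <entry acc="P12345"><name>Example kinase</name><organism>S. cerevisiae</organism></entry>
</proteins>
"""

def transform(source_xml):
    """Map <entry> records from the source schema into a unified <protein> schema."""
    unified = ET.Element("unified")
    for entry in ET.fromstring(source_xml).findall("entry"):
        protein = ET.SubElement(unified, "protein", id=entry.get("acc"))
        ET.SubElement(protein, "label").text = entry.findtext("name")
        ET.SubElement(protein, "species").text = entry.findtext("organism")
    return ET.tostring(unified, encoding="unicode")

if __name__ == "__main__":
    print(transform(SOURCE))
```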

  11. Database for Assessment Unit-Scale Analogs (Exclusive of the United States)

    USGS Publications Warehouse

    Charpentier, Ronald R.; Klett, T.R.; Attanasi, E.D.

    2008-01-01

    This publication presents a database of geologic analogs useful for the assessment of undiscovered oil and gas resources. Particularly in frontier areas, where few oil and gas fields have been discovered, assessment methods such as discovery process models may not be usable. In such cases, comparison of the assessment area to geologically similar but more maturely explored areas may be more appropriate. This analog database consists of 246 assessment units, based on the U.S. Geological Survey 2000 World Petroleum Assessment. Besides geologic data to facilitate comparisons, the database includes data pertaining to numbers and sizes of oil and gas fields and the properties of their produced fluids.

  12. Construction of an integrated database to support genomic sequence analysis

    SciTech Connect

    Gilbert, W.; Overbeek, R.

    1994-11-01

    The central goal of this project is to develop an integrated database to support comparative analysis of genomes including DNA sequence data, protein sequence data, gene expression data and metabolism data. In developing the logic-based system GenoBase, a broader integration of available data was achieved due to assistance from collaborators. Current goals are to easily include new forms of data as they become available and to easily navigate through the ensemble of objects described within the database. This report comments on progress made in these areas.

  13. The Ribosomal Database Project (RDP).

    PubMed Central

    Maidak, B L; Olsen, G J; Larsen, N; Overbeek, R; McCaughey, M J; Woese, C R

    1996-01-01

    The Ribosomal Database Project (RDP) is a curated database that offers ribosome-related data, analysis services and associated computer programs. The offerings include phylogenetically ordered alignments of ribosomal RNA (rRNA) sequences, derived phylogenetic trees, rRNA secondary structure diagrams and various software for handling, analyzing and displaying alignments and trees. The data are available via anonymous ftp (rdp.life.uiuc.edu), electronic mail (server@rdp.life.uiuc.edu), gopher (rdpgopher.life.uiuc.edu) and World Wide Web (WWW)(http://rdpwww.life.uiuc.edu/). The electronic mail and WWW servers provide ribosomal probe checking, screening for possible chimeric rRNA sequences, automated alignment and approximate phylogenetic placement of user-submitted sequences on an existing phylogenetic tree. PMID:8594608

  14. The MAJORANA Parts Tracking Database

    NASA Astrophysics Data System (ADS)

    Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O`Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.

    2015-04-01

    The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
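    As a concrete picture of the schema-free approach, a parts-tracking record in CouchDB is simply a JSON document, written and read through CouchDB's HTTP API. The database name, document fields, server address, and credentials below are illustrative assumptions, not the experiment's actual schema or deployment.

```python
# Hedged sketch: store one parts-tracking document in CouchDB via its HTTP
# API. Server address, credentials, database name, and fields are invented.

import base64
import json
import urllib.request

COUCHDB = "http://localhost:5984"                  # hypothetical server
AUTH = base64.b64encode(b"admin:secret").decode()  # illustrative credentials

part_doc = {
    "_id": "part-000123",
    "type": "copper_shield_plate",
    "processes": [
        {"step": "machining", "date": "2013-05-02", "site": "underground shop"},
        {"step": "acid etch", "date": "2013-05-10", "site": "clean room"},
    ],
    "locations": [
        {"place": "surface storage", "start": "2013-04-01", "end": "2013-05-01"},
        {"place": "underground lab", "start": "2013-05-01", "end": None},
    ],
}

def store_part(doc):
    """Create or update a part document (PUT /<db>/<doc id>)."""
    req = urllib.request.Request(
        f"{COUCHDB}/parts/{doc['_id']}",
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {AUTH}"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(store_part(part_doc))
```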

  15. Geologic Map Database of Texas

    USGS Publications Warehouse

    Stoeser, Douglas B.; Shock, Nancy; Green, Gregory N.; Dumonceaux, Gayle M.; Heran, William D.

    2005-01-01

    The purpose of this report is to release a digital geologic map database for the State of Texas. This database was compiled for the U.S. Geological Survey (USGS) Minerals Program, National Surveys and Analysis Project, whose goal is a nationwide assemblage of geologic, geochemical, geophysical, and other data. This release makes the geologic data from the Geologic Map of Texas available in digital format. Original clear film positives provided by the Texas Bureau of Economic Geology were photographically enlarged onto Mylar film. These films were scanned, georeferenced, digitized, and attributed by Geologic Data Systems (GDS), Inc., Denver, Colorado. Project oversight and quality control was the responsibility of the U.S. Geological Survey. ESRI ArcInfo coverages, AMLs, and shapefiles are provided.

  16. The National Land Cover Database

    USGS Publications Warehouse

    Homer, Collin H.; Fry, Joyce A.; Barnes, Christopher A.

    2012-01-01

    The National Land Cover Database (NLCD) serves as the definitive Landsat-based, 30-meter resolution, land cover database for the Nation. NLCD provides spatial reference and descriptive data for characteristics of the land surface such as thematic class (for example, urban, agriculture, and forest), percent impervious surface, and percent tree canopy cover. NLCD supports a wide variety of Federal, State, local, and nongovernmental applications that seek to assess ecosystem status and health, understand the spatial patterns of biodiversity, predict effects of climate change, and develop land management policy. NLCD products are created by the Multi-Resolution Land Characteristics (MRLC) Consortium, a partnership of Federal agencies led by the U.S. Geological Survey. All NLCD data products are available for download at no charge to the public from the MRLC Web site: http://www.mrlc.gov.

  17. Aero/fluids database system

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Violett, Duane L., Jr.

    1991-01-01

    The AFAS Database System was developed to provide the basic structure of a comprehensive database system for the Marshall Space Flight Center (MSFC) Structures and Dynamics Laboratory Aerophysics Division. The system is intended to handle all of the Aerophysics Division Test Facilities as well as data from other sources. The system was written for the DEC VAX family of computers in FORTRAN-77 and utilizes the VMS indexed file system and screen management routines. Various aspects of the system are covered, including a description of the user interface, lists of all code structure elements, descriptions of the file structures, a description of the security system operation, a detailed description of the data retrieval tasks, a description of the session log, and a description of the archival system.

  18. The Majorana Parts Tracking Database

    SciTech Connect

    Abgrall, N.

    2015-01-16

    The Majorana Demonstrator is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The Majorana Parts Tracking Database is used to record the history of components used in the construction of the Demonstrator. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. In summary, a web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.

  19. The Majorana Parts Tracking Database

    DOE PAGES

    Abgrall, N.

    2015-01-16

    The Majorana Demonstrator is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The Majorana Parts Tracking Database is used to record the history of components used in the construction of the Demonstrator. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. In summary, a web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
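
    As an illustration of the schema-free approach described above, the sketch below shows how a part record with its process and location history might be represented as a document, together with a rough surface-exposure tally of the kind an exposure calculator would need. The field names and the helper function are hypothetical, not taken from the Majorana system; in a CouchDB deployment each such record would be stored as a JSON document.

        import json
        from datetime import date

        # Hypothetical, schema-free part record of the kind a document store such as
        # CouchDB could hold: free-form fields plus a list of process/location events.
        part = {
            "_id": "PART-000123",
            "type": "copper_mount",
            "history": [
                {"event": "machining", "location": "surface_shop",
                 "start": "2013-01-10", "end": "2013-01-15"},
                {"event": "cleaning", "location": "surface_lab",
                 "start": "2013-01-16", "end": "2013-01-18"},
                {"event": "storage", "location": "underground_cache",
                 "start": "2013-01-19", "end": "2013-06-01"},
            ],
        }

        def surface_exposure_days(record):
            """Sum the days the part spent at above-ground locations.

            A rough stand-in for the exposure estimate mentioned in the abstract:
            time above ground is a proxy for cosmogenic activation.
            """
            total = 0
            for ev in record["history"]:
                if ev["location"].startswith("surface"):
                    start = date.fromisoformat(ev["start"])
                    end = date.fromisoformat(ev["end"])
                    total += (end - start).days
            return total

        print(json.dumps(part, indent=2))
        print("surface exposure:", surface_exposure_days(part), "days")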

  20. The Ribosomal Database Project (RDP).

    PubMed

    Maidak, B L; Olsen, G J; Larsen, N; Overbeek, R; McCaughey, M J; Woese, C R

    1996-01-01

    The Ribosomal Database Project (RDP) is a curated database that offers ribosome-related data, analysis services and associated computer programs. The offerings include phylogenetically ordered alignments of ribosomal RNA (rRNA) sequences, derived phylogenetic trees, rRNA secondary structure diagrams and various software for handling, analyzing and displaying alignments and trees. The data are available via anonymous ftp (rdp.life.uiuc.edu), electronic mail (server@rdp.life.uiuc.edu), gopher (rdpgopher.life.uiuc.edu) and World Wide Web (WWW)(http://rdpwww.life.uiuc.edu/). The electronic mail and WWW servers provide ribosomal probe checking, screening for possible chimeric rRNA sequences, automated alignment and approximate phylogenetic placement of user-submitted sequences on an existing phylogenetic tree.

  1. The RECONS 25 Parsec Database

    NASA Astrophysics Data System (ADS)

    Henry, Todd J.; Jao, Wei-Chun; Pewett, Tiffany; Riedel, Adric R.; Silverstein, Michele L.; Slatten, Kenneth J.; Winters, Jennifer G.; Recons Team

    2015-01-01

    The REsearch Consortium On Nearby Stars (RECONS, www.recons.org) Team has been mapping the solar neighborhood since 1994. Nearby stars provide the fundamental framework upon which all of stellar astronomy is based, both for individual stars and stellar populations. The nearest stars are also the primary targets for extrasolar planet searches, and will undoubtedly play key roles in understanding the prevalence and structure of solar systems, and ultimately, in our search for life elsewhere. We have built the RECONS 25 Parsec Database to encourage and enable exploration of the Sun's nearest neighbors. The Database, slated for public release in 2015, contains 3088 stars, brown dwarfs, and exoplanets in 2184 systems as of October 1, 2014. All of these systems have accurate trigonometric parallaxes in the refereed literature placing them closer than 25.0 parsecs, i.e., parallaxes greater than 40 mas with errors less than 10 mas. Carefully vetted astrometric, photometric, and spectroscopic data are incorporated into the Database from reliable sources, including significant original data collected by members of the RECONS Team. Current exploration of the solar neighborhood by RECONS, enabled by the Database, focuses on the ubiquitous red dwarfs, including: assessing the stellar companion population of ~1200 red dwarfs (Winters), investigating the astrophysical causes that spread red dwarfs of similar temperatures by a factor of 16 in luminosity (Pewett), and canvassing ~3000 red dwarfs for excess emission due to unseen companions and dust (Silverstein). In addition, a decade long astrometric survey of ~500 red dwarfs in the southern sky has begun, in an effort to understand the stellar, brown dwarf, and planetary companion populations for the stars that make up at least 75% of all stars in the Universe. This effort has been supported by the NSF through grants AST-0908402, AST-1109445, and AST-1412026, and via observations made possible by the SMARTS Consortium.
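
    The 25-parsec horizon quoted above follows directly from the parallax cut, since distance in parsecs is 1000 divided by the parallax in milliarcseconds. The sketch below is a minimal illustration of that selection rule with illustrative numbers; it is not code from the RECONS project.

        def distance_pc(parallax_mas):
            """Distance in parsecs from a trigonometric parallax in milliarcseconds."""
            return 1000.0 / parallax_mas

        def within_25pc(parallax_mas, error_mas):
            """Apply the stated inclusion cut: parallax > 40 mas with error < 10 mas."""
            return parallax_mas > 40.0 and error_mas < 10.0

        # Proxima Centauri has a parallax of roughly 768 mas, i.e. about 1.3 pc.
        print(distance_pc(768.5))        # ~1.30 pc
        print(within_25pc(768.5, 0.2))   # True
        print(within_25pc(41.0, 12.0))   # False: parallax qualifies, but error too large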

  2. Stockpile Dismantlement Database Training Materials

    SciTech Connect

    Not Available

    1993-11-01

    This document, the Stockpile Dismantlement Database (SDDB) training materials, is designed to familiarize the user with the SDDB windowing system and the data entry steps for Component Characterization for Disposition. The foundation of information required for every part is depicted by using numbered graphic and text steps. The individual entering data is led step by step through generic and specific examples. These training materials are intended to be supplements to individual on-the-job training.

  3. GOLD: The Genomes Online Database

    DOE Data Explorer

    Kyrpides, Nikos; Liolios, Dinos; Chen, Amy; Tavernarakis, Nektarios; Hugenholtz, Philip; Markowitz, Victor; Bernal, Alex

    Since its inception in 1997, GOLD has continuously monitored genome sequencing projects worldwide and has provided the community with a unique centralized resource that integrates diverse information related to Archaea, Bacteria, Eukaryotic and more recently Metagenomic sequencing projects. As of September 2007, GOLD recorded 639 completed genome projects. These projects have their complete sequence deposited into the public archival sequence databases such as GenBank, EMBL, and DDBJ. From the total of 639 complete and published genome projects as of 9/2007, 527 were bacterial, 47 were archaeal and 65 were eukaryotic. In addition to the complete projects, there were 2158 ongoing sequencing projects. 1328 of those were bacterial, 59 archaeal and 771 eukaryotic projects. Two types of metadata are provided by GOLD: (i) project metadata and (ii) organism/environment metadata. GOLD CARD pages for every project are available from the link of every GOLD_STAMP ID. The information in every one of these pages is organized into three tables: (a) Organism information, (b) Genome project information and (c) External links. [The Genomes On Line Database (GOLD) in 2007: Status of genomic and metagenomic projects and their associated metadata, Konstantinos Liolios, Konstantinos Mavromatis, Nektarios Tavernarakis and Nikos C. Kyrpides, Nucleic Acids Research Advance Access published online on November 2, 2007, Nucleic Acids Research, doi:10.1093/nar/gkm884]

    The basic tables in the GOLD database that can be browsed or searched include the following information:

    • Gold Stamp ID
    • Organism name
    • Domain
    • Links to information sources
    • Size and link to a map, when available
    • Chromosome number, Plas number, and GC content
    • A link for downloading the actual genome data
    • Institution that did the sequencing
    • Funding source
    • Database where information resides
    • Publication status and information

    • Central Asia Active Fault Database

      NASA Astrophysics Data System (ADS)

      Mohadjer, Solmaz; Ehlers, Todd A.; Kakar, Najibullah

      2014-05-01

      The ongoing collision of the Indian subcontinent with Asia controls active tectonics and seismicity in Central Asia. This motion is accommodated by faults that have historically caused devastating earthquakes and continue to pose serious threats to the population at risk. Despite international and regional efforts to assess seismic hazards in Central Asia, little attention has been given to the development of a comprehensive database for active faults in the region. To address this issue and to better understand the distribution and level of seismic hazard in Central Asia, we are developing a publicly available database for active faults of Central Asia (including but not limited to Afghanistan, Tajikistan, Kyrgyzstan, northern Pakistan and western China) using ArcGIS. The database is designed to allow users to store, map and query important fault parameters such as fault location, displacement history, rate of movement, and other data relevant to seismic hazard studies including fault trench locations, geochronology constraints, and seismic studies. Data sources integrated into the database include previously published maps and scientific investigations as well as strain rate measurements and historic and recent seismicity. In addition, high resolution Quickbird, Spot, and Aster imagery are used for selected features to locate and measure offset of landforms associated with Quaternary faulting. These features are individually digitized and linked to attribute tables that provide a description for each feature. Preliminary observations include inconsistent and sometimes inaccurate information for faults documented in different studies. For example, the Darvaz-Karakul fault, which roughly defines the western margin of the Pamir, has been mapped with differences in location of up to 12 kilometers. The sense of motion for this fault ranges from unknown to thrust and strike-slip in three different studies despite documented left-lateral displacements of Holocene and late

    • Capabilities of the HYPERLEDA database

      NASA Astrophysics Data System (ADS)

      Vauglin, I.; Prugniel, P.; Courtois, H.; Makarov, D.; Petit, C.; Mamon, G.; Paturel, G.

      2006-06-01

      Born in 1999 from the convergence between LEDA and HYPERCAT, the extragalactic database HyperLeda is the result of a collaboration between the Observatoire de Lyon, the Observatoire de Paris, the Sternberg Astronomical Institute of Moscow State University and the Department of Astronomy of Sofia University. The project has been supported by the Programme National Galaxies (PNG). Available through 8 automatically maintained mirror interfaces, HyperLeda is distributed worldwide and is now integrated in the Virtual Observatory.

    • FORMIDABEL: The Belgian Ants Database

      PubMed Central

      Brosens, Dimitri; Vankerkhoven, François; Ignace, David; Wegnez, Philippe; Noé, Nicolas; Heughebaert, André; Bortels, Jeannine; Dekoninck, Wouter

      2013-01-01

      FORMIDABEL is a database of Belgian ants containing more than 27,000 occurrence records. These records originate from collections, field sampling and literature. The database gives information on 76 native and 9 introduced ant species found in Belgium. The collection records originated mainly from the ants collection of the Royal Belgian Institute of Natural Sciences (RBINS), the ‘Gaspar’ ants collection in Gembloux and the zoological collection of the University of Liège (ULG). The oldest occurrences date back to May 1866, the most recent refer to August 2012. FORMIDABEL is a work in progress and the database is updated twice a year. The latest version of the dataset is publicly and freely accessible through this url: http://ipt.biodiversity.be/resource.do?r=formidabel. The dataset is also retrievable via the GBIF data portal through this link: http://data.gbif.org/datasets/resource/14697. A dedicated geo-portal, developed by the Belgian Biodiversity Platform, is accessible at: http://www.formicidae-atlas.be. Purpose: FORMIDABEL is a joint cooperation of the Flemish ants working group “Polyergus” (http://formicidae.be) and the Walloon ants working group “FourmisWalBru” (http://fourmiswalbru.be). The original database was created in 2002 in the context of the preliminary red data book of Flemish ants (Dekoninck et al. 2003). Later, in 2005, data from the southern part of Belgium (Wallonia and Brussels) were added. In 2012 this dataset was again updated for the creation of the first Belgian Ants Atlas (Figure 1) (Dekoninck et al. 2012). The main purpose of this atlas was to generate maps for all outdoor-living ant species in Belgium using an overlay of the standard Belgian ecoregions. By using this overlay for most species, we can discern a clear and often restricted distribution pattern in Belgium, mainly based on vegetation and soil types. PMID:23794918

    • A database for propagation models

      NASA Technical Reports Server (NTRS)

      Kantak, Anil V.; Suwitra, Krisjani; Le, Chuong

      1995-01-01

      A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC using Excel software under the Windows environment or a Macintosh using Excel software for the Macintosh. All the steps necessary to use the software are easy and many times self-explanatory.

    • CD-ROM-aided Databases

      NASA Astrophysics Data System (ADS)

      Kitamura, Masami

      Nichigai Associates Inc. has begun information services to publish text databases on CD-ROM. In chapter 2, an outline of these services and the publication plan for this fiscal year are described. In chapter 3, the CD-ROM logical file format common to these services, software to generate files conforming to the format, and software to retrieve CD-ROM files on personal computers are also described.

    • The EcoCyc Database

      PubMed Central

      Karp, Peter D.; Riley, Monica; Saier, Milton; Paulsen, Ian T.; Collado-Vides, Julio; Paley, Suzanne M.; Pellegrini-Toole, Alida; Bonavides, César; Gama-Castro, Socorro

      2002-01-01

      EcoCyc is an organism-specific pathway/genome database that describes the metabolic and signal-transduction pathways of Escherichia coli, its enzymes, its transport proteins and its mechanisms of transcriptional control of gene expression. EcoCyc is queried using the Pathway Tools graphical user interface, which provides a wide variety of query operations and visualization tools. EcoCyc is available at http://ecocyc.org/. PMID:11752253

    • BSD: the Biodegradative Strain Database.

      PubMed

      Urbance, John W; Cole, James; Saxman, Paul; Tiedje, James M

      2003-01-01

      The Biodegradative Strain Database (BSD) is a freely-accessible, web-based database providing detailed information on degradative bacteria and the hazardous substances that they degrade, including corresponding literature citations, relevant patents and links to additional web-based biological and chemical data. The BSD (http://bsd.cme.msu.edu) is being developed within the phylogenetic framework of the Ribosomal Database Project II (RDPII: http://rdp.cme.msu.edu/html) to provide a biological complement to the chemical and degradative pathway data of the University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD: http://umbbd.ahc.umn.edu). Data is accessible through a series of strain, chemical and reference lists or by keyword search. The web site also includes on-line data submission and user survey forms to solicit user contributions and suggestions. The current release contains information on over 250 degradative bacterial strains and 150 hazardous substances. The transformation of xenobiotics and other environmentally toxic compounds by microorganisms is central to strategies for biocatalysis and the bioremediation of contaminated environments. However, practical, comprehensive, strain-level information on biocatalytic/biodegradative microbes is not readily available and is often difficult to compile. Similarly, for any given environmental contaminant, there is no single resource that can provide comparative information on the array of identified microbes capable of degrading the chemical. A web site that consolidates and cross-references strain, chemical and reference data related to biocatalysis, biotransformation, biodegradation and bioremediation would be an invaluable tool for academic and industrial researchers and environmental engineers.

    • The relational clinical database: a possible solution to the star wars in registry systems.

      PubMed

      Michels, D K; Zamieroski, M

      1990-12-01

      In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.

    • Data-Based Decisions Guidelines for Teachers of Students with Severe Intellectual and Developmental Disabilities

      ERIC Educational Resources Information Center

      Jimenez, Bree A.; Mims, Pamela J.; Browder, Diane M.

      2012-01-01

      Effective practices in student data collection and implementation of data-based instructional decisions are needed for all educators, but are especially important when students have severe intellectual and developmental disabilities. Although research in the area of data-based instructional decisions for students with severe disabilities shows…

    • Using an Online Games-Based Learning Approach to Teach Database Design Concepts

      ERIC Educational Resources Information Center

      Connolly, Thomas M.; Stansfield, Mark; McLellan, Evelyn

      2006-01-01

      The study of database systems is typically core in undergraduate and postgraduate programmes related to computer science and information systems. However, one component of this curriculum that many learners have difficulty with is database analysis and design, an area that is critical to the development of modern information systems. This paper…

    • HMDB: the Human Metabolome Database

      PubMed Central

      Wishart, David S.; Tzur, Dan; Knox, Craig; Eisner, Roman; Guo, An Chi; Young, Nelson; Cheng, Dean; Jewell, Kevin; Arndt, David; Sawhney, Summit; Fung, Chris; Nikolai, Lisa; Lewis, Mike; Coutouly, Marie-Aude; Forsythe, Ian; Tang, Peter; Shrivastava, Savita; Jeroncic, Kevin; Stothard, Paul; Amegbey, Godwin; Block, David; Hau, David. D.; Wagner, James; Miniaci, Jessica; Clements, Melisa; Gebremedhin, Mulu; Guo, Natalie; Zhang, Ying; Duggan, Gavin E.; MacInnis, Glen D.; Weljie, Alim M.; Dowlatabadi, Reza; Bamforth, Fiona; Clive, Derrick; Greiner, Russ; Li, Liang; Marrie, Tom; Sykes, Brian D.; Vogel, Hans J.; Querengesser, Lori

      2007-01-01

      The Human Metabolome Database (HMDB) is currently the most complete and comprehensive curated collection of human metabolite and human metabolism data in the world. It contains records for more than 2180 endogenous metabolites with information gathered from thousands of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the HMDB also contains an extensive collection of experimental metabolite concentration data compiled from hundreds of mass spectrometric (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed on urine, blood and cerebrospinal fluid samples. This is further supplemented with thousands of NMR and MS spectra collected on purified, reference metabolites. Each metabolite entry in the HMDB contains an average of 90 separate data fields including a comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, biofluid concentrations, disease associations, pathway information, enzyme data, gene sequence data, SNP and mutation data as well as extensive links to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided. The HMDB is designed to address the broad needs of biochemists, clinical chemists, physicians, medical geneticists, nutritionists and members of the metabolomics community. The HMDB is available at: PMID:17202168

    • MINT: a Molecular INTeraction database.

      PubMed

      Zanzoni, Andreas; Montecchi-Palazzi, Luisa; Quondam, Michele; Ausiello, Gabriele; Helmer-Citterich, Manuela; Cesareni, Gianni

      2002-02-20

      Protein interaction databases represent unique tools to store, in a computer readable form, the protein interaction information disseminated in the scientific literature. Well organized and easily accessible databases permit the easy retrieval and analysis of large interaction data sets. Here we present MINT, a database (http://cbm.bio.uniroma2.it/mint/index.html) designed to store data on functional interactions between proteins. Beyond cataloguing binary complexes, MINT was conceived to store other types of functional interactions, including enzymatic modifications of one of the partners. Release 1.0 of MINT focuses on experimentally verified protein-protein interactions. Both direct and indirect relationships are considered. Furthermore, MINT aims at being exhaustive in the description of the interaction and, whenever available, information about kinetic and binding constants and about the domains participating in the interaction is included in the entry. MINT consists of entries extracted from the scientific literature by expert curators assisted by 'MINT Assistant', a software that targets abstracts containing interaction information and presents them to the curator in a user-friendly format. The interaction data can be easily extracted and viewed graphically through 'MINT Viewer'. Presently MINT contains 4568 interactions, 782 of which are indirect or genetic interactions.

    • Italian Rett database and biobank.

      PubMed

      Sampieri, Katia; Meloni, Ilaria; Scala, Elisa; Ariani, Francesca; Caselli, Rossella; Pescucci, Chiara; Longo, Ilaria; Artuso, Rosangela; Bruttini, Mirella; Mencarelli, Maria Antonietta; Speciale, Caterina; Causarano, Vincenza; Hayek, Giuseppe; Zappella, Michele; Renieri, Alessandra; Mari, Francesca

      2007-04-01

      Rett syndrome is the second most common cause of severe mental retardation in females, with an incidence of approximately 1 out of 10,000 live female births. In addition to the classic form, a number of Rett variants have been described. MECP2 gene mutations are responsible for about 90% of classic cases and for a lower percentage of variant cases. Recently, CDKL5 mutations have been identified in the early onset seizures variant and other atypical Rett patients. While the high percentage of MECP2 mutations in classic patients supports the hypothesis of a single disease gene, the low frequency of mutated variant cases suggests genetic heterogeneity. Since 1998, we have performed clinical evaluation and molecular analysis of a large number of Italian Rett patients. The Italian Rett Syndrome (RTT) database has been developed to share data and samples of our RTT collection with the scientific community (http://www.biobank.unisi.it). This is the first RTT database that has been connected with a biobank. It allows the user to immediately visualize the list of available RTT samples and, using the "Search by" tool, to rapidly select those with specific clinical and molecular features. By contacting bank curators, users can request the samples of interest for their studies. This database encourages collaboration projects with clinicians and researchers from around the world and provides important resources that will help to better define the pathogenic mechanisms underlying Rett syndrome.

    • The RIKEN integrated database of mammals

      PubMed Central

      Masuya, Hiroshi; Makita, Yuko; Kobayashi, Norio; Nishikata, Koro; Yoshida, Yuko; Mochizuki, Yoshiki; Doi, Koji; Takatsuki, Terue; Waki, Kazunori; Tanaka, Nobuhiko; Ishii, Manabu; Matsushima, Akihiro; Takahashi, Satoshi; Hijikata, Atsushi; Kozaki, Kouji; Furuichi, Teiichi; Kawaji, Hideya; Wakana, Shigeharu; Nakamura, Yukio; Yoshiki, Atsushi; Murata, Takehide; Fukami-Kobayashi, Kaoru; Mohan, Sujatha; Ohara, Osamu; Hayashizaki, Yoshihide; Mizoguchi, Riichiro; Obata, Yuichi; Toyoda, Tetsuro

      2011-01-01

      The RIKEN integrated database of mammals (http://scinets.org/db/mammal) is the official undertaking to integrate its mammalian databases produced from multiple large-scale programs that have been promoted by the institute. The database integrates not only RIKEN’s original databases, such as FANTOM, the ENU mutagenesis program, the RIKEN Cerebellar Development Transcriptome Database and the Bioresource Database, but also imported data from public databases, such as Ensembl, MGI and biomedical ontologies. Our integrated database has been implemented on the infrastructure of publication medium for databases, termed SciNetS/SciNeS, or the Scientists’ Networking System, where the data and metadata are structured as a semantic web and are downloadable in various standardized formats. The top-level ontology-based implementation of mammal-related data directly integrates the representative knowledge and individual data records in existing databases to ensure advanced cross-database searches and reduced unevenness of the data management operations. Through the development of this database, we propose a novel methodology for the development of standardized comprehensive management of heterogeneous data sets in multiple databases to improve the sustainability, accessibility, utility and publicity of the data of biomedical information. PMID:21076152

    • The RIKEN integrated database of mammals.

      PubMed

      Masuya, Hiroshi; Makita, Yuko; Kobayashi, Norio; Nishikata, Koro; Yoshida, Yuko; Mochizuki, Yoshiki; Doi, Koji; Takatsuki, Terue; Waki, Kazunori; Tanaka, Nobuhiko; Ishii, Manabu; Matsushima, Akihiro; Takahashi, Satoshi; Hijikata, Atsushi; Kozaki, Kouji; Furuichi, Teiichi; Kawaji, Hideya; Wakana, Shigeharu; Nakamura, Yukio; Yoshiki, Atsushi; Murata, Takehide; Fukami-Kobayashi, Kaoru; Mohan, Sujatha; Ohara, Osamu; Hayashizaki, Yoshihide; Mizoguchi, Riichiro; Obata, Yuichi; Toyoda, Tetsuro

      2011-01-01

      The RIKEN integrated database of mammals (http://scinets.org/db/mammal) is the official undertaking to integrate its mammalian databases produced from multiple large-scale programs that have been promoted by the institute. The database integrates not only RIKEN's original databases, such as FANTOM, the ENU mutagenesis program, the RIKEN Cerebellar Development Transcriptome Database and the Bioresource Database, but also imported data from public databases, such as Ensembl, MGI and biomedical ontologies. Our integrated database has been implemented on the infrastructure of publication medium for databases, termed SciNetS/SciNeS, or the Scientists' Networking System, where the data and metadata are structured as a semantic web and are downloadable in various standardized formats. The top-level ontology-based implementation of mammal-related data directly integrates the representative knowledge and individual data records in existing databases to ensure advanced cross-database searches and reduced unevenness of the data management operations. Through the development of this database, we propose a novel methodology for the development of standardized comprehensive management of heterogeneous data sets in multiple databases to improve the sustainability, accessibility, utility and publicity of the data of biomedical information.

    • Karst database development in Minnesota: Design and data assembly

      USGS Publications Warehouse

      Gao, Y.; Alexander, E.C.; Tipping, R.G.

      2005-01-01

      The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to represent a comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces. © Springer-Verlag 2005.

    • Current research status, databases and application of single nucleotide polymorphism.

      PubMed

      Javed, R; Mukesh

      2010-07-01

      Single Nucleotide Polymorphisms (SNPs) are the most frequent form of DNA variation in the genome. SNPs are bi-allelic genetic markers, and the number catalogued is growing at a very fast rate. Current genomic databases contain information on several million SNPs. More than 6 million SNPs have been identified, and the information is publicly available through the efforts of the SNP Consortium and other databases. The NCBI plays a major role in facilitating the identification and cataloging of SNPs through the creation and maintenance of the public SNP database (dbSNP), which is used by the biomedical community worldwide and stimulates many areas of biological research, including the identification of the genetic components of disease. In this review article, we compile the existing SNP databases, their research status and their applications. PMID:21717869

  1. View Discovery in OLAP Databases through Statistical Combinatorial Optimization

    SciTech Connect

    Joslyn, Cliff A.; Burke, Edward J.; Critchlow, Terence J.

    2009-05-01

    The capability of OLAP database software systems to handle data complexity comes at a high price for analysts, presenting them with a combinatorially vast space of views of a relational database. We respond to the need to deploy technologies sufficient to allow users to guide themselves to areas of local structure by casting the space of "views" of an OLAP database as a combinatorial object of all projections and subsets, and "view discovery" as a search process over that lattice. We equip the view lattice with statistical information theoretical measures sufficient to support a combinatorial optimization process. We outline "hop-chaining" as a particular view discovery algorithm over this object, wherein users are guided across a permutation of the dimensions by searching for successive two-dimensional views, pushing seen dimensions into an increasingly large background filter in a "spiraling" search process. We illustrate this work in the context of data cubes recording summary statistics for radiation portal monitors at US ports.
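
    The sketch below is a minimal illustration, not the authors' implementation, of how an information-theoretic measure can rank two-dimensional views of a data cube: each candidate pair of dimensions is scored by the mutual information between them, and one "hop" selects the highest-scoring pair among dimensions not yet seen.

        import itertools
        import math
        from collections import Counter

        def entropy(counts):
            """Shannon entropy (in bits) of a frequency table."""
            n = sum(counts)
            return -sum(c / n * math.log2(c / n) for c in counts if c)

        def score_2d_view(rows, dim_a, dim_b):
            """Mutual information I(A;B) between two dimensions of the records."""
            joint = Counter((r[dim_a], r[dim_b]) for r in rows)
            a = Counter(r[dim_a] for r in rows)
            b = Counter(r[dim_b] for r in rows)
            return entropy(a.values()) + entropy(b.values()) - entropy(joint.values())

        def best_2d_view(rows, dims, seen=()):
            """One 'hop': pick the most informative 2D projection over unseen dimensions."""
            candidates = [d for d in dims if d not in seen]
            pairs = itertools.combinations(candidates, 2)
            return max(pairs, key=lambda p: score_2d_view(rows, *p))

        # Toy records with hypothetical dimensions; real cubes would be far larger.
        rows = [
            {"port": "A", "vehicle": "truck", "alarm": "gamma"},
            {"port": "A", "vehicle": "car", "alarm": "none"},
            {"port": "B", "vehicle": "truck", "alarm": "gamma"},
            {"port": "B", "vehicle": "car", "alarm": "none"},
        ]
        print(best_2d_view(rows, ["port", "vehicle", "alarm"]))  # ('vehicle', 'alarm')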

  2. Reef Ecosystem Services and Decision Support Database

    EPA Science Inventory

    This scientific and management information database utilizes systems thinking to describe the linkages between decisions, human activities, and provisioning of reef ecosystem goods and services. This database provides: (1) Hierarchy of related topics - Click on topics to navigat...

  3. Diet History Questionnaire: Database Revision History

    Cancer.gov

    The following details all additions and revisions made to the DHQ nutrient and food database. This revision history is provided as a reference for investigators who may have performed analyses with a previous release of the database.

  4. Quantum search of a real unstructured database

    NASA Astrophysics Data System (ADS)

    Broda, Bogusław

    2016-02-01

    A simple circuit implementation of the oracle for Grover's quantum search of a real unstructured classical database is proposed. The oracle contains a kind of quantumly accessible classical memory, which stores the database.
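
    For context, the speedup such an oracle enables can be seen with a purely classical simulation of Grover amplitude amplification. The sketch below, a generic illustration rather than the circuit proposed in the paper, tracks the marked and unmarked amplitudes for a single marked item and shows that roughly (pi/4)*sqrt(N) oracle calls bring the success probability close to one.

        import math

        def grover_success_probability(n_items, iterations):
            """Simulate Grover iterations for one marked item out of n_items.

            Tracking only two amplitudes (the marked item and each unmarked item)
            is exact in this single-marked-item case.
            """
            a = 1.0 / math.sqrt(n_items)   # amplitude of the marked item
            b = 1.0 / math.sqrt(n_items)   # amplitude of each unmarked item
            for _ in range(iterations):
                a = -a                              # oracle flips the marked phase
                mean = (a + (n_items - 1) * b) / n_items
                a, b = 2 * mean - a, 2 * mean - b   # inversion about the mean
            return a * a

        N = 1024
        k = round(math.pi / 4 * math.sqrt(N))        # ~25 iterations for N = 1024
        print(k, grover_success_probability(N, k))   # success probability near 1.0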

  5. Integrated Primary Care Information Database (IPCI)

    Cancer.gov

    The Integrated Primary Care Information Database is a longitudinal observational database that was created specifically for pharmacoepidemiological and pharmacoeconomic studies, including data from computer-based patient records supplied voluntarily by general practitioners.

  6. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTIC ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  7. Active fault database of Japan: Its construction and search system

    NASA Astrophysics Data System (ADS)

    Yoshioka, T.; Miyamoto, F.

    2011-12-01

    The Active fault database of Japan was constructed by the Active Fault and Earthquake Research Center, GSJ/AIST, and opened to the public on the Internet in 2005 to enable a probabilistic evaluation of future faulting events and earthquake occurrence on major active faults in Japan. The database consists of three sub-databases: 1) a sub-database on individual sites, which includes long-term slip data and paleoseismicity data with error ranges and reliability; 2) a sub-database on details of paleoseismicity, which includes the excavated geological units and faulting event horizons with age control; 3) a sub-database on characteristics of behavioral segments, which includes the fault length, long-term slip rate, recurrence intervals, most recent event, slip per event and the best estimate of the cascade earthquake. Major seismogenic faults, which are approximately the best-estimate segments of cascade earthquakes, are included in the database; each has a length of 20 km or longer and a slip rate of 0.1 m/ky or larger, and is composed of about two behavioral segments on average. This database contains information on active faults in Japan, sorted by the concept of "behavioral segments" (McCalpin, 1996). The faults are subdivided into 550 behavioral segments based on surface trace geometry and rupture history revealed by paleoseismic studies. Behavioral segments can be searched on Google Maps. You can select one behavioral segment directly or search segments in a rectangular area on the map. The result of a search is shown on a fixed map or on Google Maps with information on geologic and paleoseismic parameters, including slip rate, slip per event, recurrence interval, and the calculated rupture probability in the future. Behavioral segments can also be searched by name or by a combination of fault parameters. All those data are compiled from journal articles, theses, and other documents. We are currently developing a revised edition, which is based on an improved database system. More than ten
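
    A minimal sketch of a parameter search of the kind described, selecting segments with a length of 20 km or longer and a slip rate of 0.1 m/ky or larger, is shown below; the record fields and values are hypothetical, not drawn from the actual database.

        # Hypothetical records mimicking behavioral-segment attributes
        # (lengths in km, slip rates in m/ky).
        segments = [
            {"name": "segment-001", "length_km": 35.0, "slip_rate_m_per_ky": 0.8},
            {"name": "segment-002", "length_km": 12.0, "slip_rate_m_per_ky": 0.3},
            {"name": "segment-003", "length_km": 48.0, "slip_rate_m_per_ky": 0.05},
        ]

        def major_seismogenic(segs, min_length_km=20.0, min_slip_rate=0.1):
            """Select segments meeting the stated major-fault criteria."""
            return [s for s in segs
                    if s["length_km"] >= min_length_km
                    and s["slip_rate_m_per_ky"] >= min_slip_rate]

        print([s["name"] for s in major_seismogenic(segments)])  # ['segment-001']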

  8. Semantic models in medical record data-bases.

    PubMed

    Cerutti, S

    1980-01-01

    A great effort has recently been made in the area of data-base design in a number of application fields (banking, insurance, travel, etc.). Yet, it is the current experience of computer scientists in the medical field that medical record information-processing requires a less rigid and more complete definition of data-base specifications for a much more heterogeneous set of data, for different users who have different aims. Hence, it is important to state that the data-base in the medical field ought to be a model of the environment for which it was created, rather than just a collection of data. New, more powerful and more flexible data-base models are now being designed, particularly in the USA, where the current trend in medicine is to implement, in the same structure, the connection between the data-base and many different and specific users (for administrative aims, medical care control, treatments, statistical and epidemiological results, etc.). In this way the individual users are able to talk with the data-base without interfering with one another. The present paper outlines how this multi-purpose flexibility can be achieved by improving mainly the capabilities of the data-base model. This approach allows the creation of procedures for semantic integrity control, which will certainly have a dramatic impact in the future on important management features, ranging from data-quality checking and non-physiological state detection to more medically oriented procedures such as drug interaction checking, record surveillance and medical care review. That is especially true when a large amount of data is to be processed and the classical hierarchical and network data models are no longer sufficient for developing satisfactory and reliable automatic procedures. In this regard, particular emphasis is dedicated to the relational model and, at the highest level, to the semantic data model.
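
    As a small illustration of database-level semantic integrity control of the kind discussed, the sketch below places CHECK constraints on a hypothetical vital-signs table in SQLite so that non-physiological values are rejected by the database itself rather than by application code; the table and limits are assumptions, not taken from the paper.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE vitals (
                patient_id   INTEGER NOT NULL,
                heart_rate   INTEGER CHECK (heart_rate BETWEEN 20 AND 250),
                temp_celsius REAL    CHECK (temp_celsius BETWEEN 30.0 AND 45.0)
            )
        """)
        conn.execute("INSERT INTO vitals VALUES (1, 72, 36.8)")       # accepted

        try:
            conn.execute("INSERT INTO vitals VALUES (1, 700, 36.8)")  # non-physiological
        except sqlite3.IntegrityError as err:
            print("semantic check failed:", err)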

  9. NORPERM, the Norwegian Permafrost Database - a TSP NORWAY IPY legacy

    NASA Astrophysics Data System (ADS)

    Juliussen, H.; Christiansen, H. H.; Strand, G. S.; Iversen, S.; Midttømme, K.; Rønning, J. S.

    2010-10-01

    NORPERM, the Norwegian Permafrost Database, was developed at the Geological Survey of Norway during the International Polar Year (IPY) 2007-2009 as the main data legacy of the IPY research project Permafrost Observatory Project: A Contribution to the Thermal State of Permafrost in Norway and Svalbard (TSP NORWAY). Its structural and technical design is described in this paper along with the ground temperature data infrastructure in Norway and Svalbard, focussing on the TSP NORWAY permafrost observatory installations in the North Scandinavian Permafrost Observatory and Nordenskiöld Land Permafrost Observatory, being the primary data providers of NORPERM. Further developments of the database, possibly towards a regional database for the Nordic area, are also discussed. The purpose of NORPERM is to store ground temperature data safely and in a standard format for use in future research. The IPY data policy of open, free, full and timely release of IPY data is followed, and the borehole metadata description follows the Global Terrestrial Network for Permafrost (GTN-P) standard. NORPERM is purely a temperature database, and the data is stored in a relational database management system and made publicly available online through a map-based graphical user interface. The datasets include temperature time series from various depths in boreholes and from the air, snow cover, ground-surface or upper ground layer recorded by miniature temperature data-loggers, and temperature profiles with depth in boreholes obtained by occasional manual logging. All the temperature data from the TSP NORWAY research project is included in the database, totalling 32 temperature time series from boreholes, 98 time series of micrometeorological temperature conditions, and 6 temperature depth profiles obtained by manual logging in boreholes. The database content will gradually increase as data from previous and future projects are added. Links to near real-time permafrost temperatures, obtained

  10. Electron Effective-Attenuation-Length Database

    National Institute of Standards and Technology Data Gateway

    SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge)   This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-electron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).

  11. An Internet enabled impact limiter material database

    SciTech Connect

    Wix, S.; Kanipe, F.; McMurtry, W.

    1998-09-01

    This paper presents a detailed explanation of the construction of an internet enabled database, also known as a database driven web site. The data contained in the internet enabled database are impact limiter material and seal properties. The techniques used in constructing the internet enabled database presented in this paper are applicable when information that changes in content needs to be disseminated to a wide audience.
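
    A minimal sketch of the database driven web site technique follows, using a hypothetical materials table with illustrative values: each HTTP request is answered from a fresh query, so updating the database changes what every visitor sees without editing any pages.

        import json
        import sqlite3
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Illustrative data only; table and field names are assumptions.
        conn = sqlite3.connect(":memory:", check_same_thread=False)
        conn.execute("CREATE TABLE materials (name TEXT, crush_strength_mpa REAL)")
        conn.executemany("INSERT INTO materials VALUES (?, ?)",
                         [("aluminum honeycomb", 2.4), ("balsa wood", 9.8)])
        conn.commit()

        class MaterialHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # Query the database on every request and return the rows as JSON.
                rows = conn.execute(
                    "SELECT name, crush_strength_mpa FROM materials").fetchall()
                body = json.dumps([{"name": n, "crush_strength_mpa": s}
                                   for n, s in rows]).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), MaterialHandler).serve_forever()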

  12. High-Performance Secure Database Access Technologies for HEP Grids

    SciTech Connect

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, who states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  13. Speech Databases of Typical Children and Children with SLI

    PubMed Central

    Grill, Pavel; Tučková, Jana

    2016-01-01

    The extent of research on children’s speech in general and on disordered speech specifically is very limited. In this article, we describe the process of creating databases of children’s speech and the possibilities for using such databases, which have been created by the LANNA research group in the Faculty of Electrical Engineering at Czech Technical University in Prague. These databases have been principally compiled for medical research but also for use in other areas, such as linguistics. Two databases were recorded: one for healthy children’s speech (recorded in kindergarten and in the first level of elementary school) and the other for pathological speech of children with a Specific Language Impairment (recorded at a surgery of speech and language therapists and at the hospital). Both databases were sub-divided according to specific demands of medical research. Their utilization can be exoteric, specifically for linguistic research and pedagogical use as well as for studies of speech-signal processing. PMID:26963508

  14. Database Systems. Course Three. Information Systems Curriculum.

    ERIC Educational Resources Information Center

    O'Neil, Sharon Lund; Everett, Donna R.

    This course is the third of seven in the Information Systems curriculum. The purpose of the course is to familiarize students with database management concepts and standard database management software. Databases and their roles, advantages, and limitations are explained. An overview of the course sets forth the condition and performance standard…

  15. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  16. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  17. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  18. Information Literacy Skills: Comparing and Evaluating Databases

    ERIC Educational Resources Information Center

    Grismore, Brian A.

    2012-01-01

    The purpose of this database comparison is to express the importance of teaching information literacy skills and to apply those skills to commonly used Internet-based research tools. This paper includes a comparison and evaluation of three databases (ProQuest, ERIC, and Google Scholar). It includes strengths and weaknesses of each database based…

  19. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  20. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  1. Annual Review of Database Developments 1991.

    ERIC Educational Resources Information Center

    Basch, Reva

    1991-01-01

    Review of developments in databases highlights a new emphasis on accessibility. Topics discussed include the internationalization of databases; databases that deal with finance, drugs, and toxic waste; access to public records, both personal and corporate; media online; reducing large files of data to smaller, more manageable files; and…

  2. Automated database design technology and tools

    NASA Technical Reports Server (NTRS)

    Shen, Stewart N. T.

    1988-01-01

    The Automated Database Design Technology and Tools research project results are summarized in this final report. Comments on the state of the art in various aspects of database design are provided, and recommendations made for further research for SNAP and NAVMASSO future database applications.

  3. Conceptual Design of a Prototype LSST Database

    SciTech Connect

    Nikolaev, S; Huber, M E; Cook, K H; Abdulla, G; Brase, J

    2004-10-07

    This document describes a preliminary design for the Prototype LSST Database (LSST DB). The authors identify key components and data structures and provide an expandable conceptual schema for the database. They discuss the potential user applications and post-processing algorithms that interact with the database, and give a set of example queries.

  4. Frame-Based Approach To Database Management

    NASA Astrophysics Data System (ADS)

    Voros, Robert S.; Hillman, Donald J.; Decker, D. Richard; Blank, Glenn D.

    1989-03-01

    Practical knowledge-based systems need to reason in terms of knowledge that is already available in databases. This type of knowledge is usually represented as tables acquired from external databases and published reports. Knowledge based systems provide a means for reasoning about entities at a higher level of abstraction. What is needed in many of today's expert systems is a link between the knowledge base and external databases. One such approach is a frame-based database management system. Package Expert (PEx) designs packages for integrated circuits. The thrust of our work is to bring together diverse technologies, data and design knowledge in a coherent system. PEx uses design rules to reason about properties of chips and potential packages, including dimensions, possible materials and packaging requirements. This information is available in existing databases. PEx needs to deal with the following types of information consistently: material databases which are in several formats; technology databases, also in several formats; and parts files which contain dimensional information. It is inefficient and inelegant to have rules access the database directly. Instead, PEx uses a frame-based hierarchical knowledge management approach to databases. Frames serve as the interface between rule-based knowledge and databases. We describe PEx and the use of frames in database retrieval. We first give an overview and the design evolution of the expert system. Next, we describe the system implementation. Finally, we describe how the rules in the expert system access the databases via frames.
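
    A minimal sketch of the frame-as-interface idea follows: a design rule reads slots on a frame object, and the frame lazily fills those slots from an underlying database table, so the rule never issues SQL directly. The table, slots, and rule are assumptions for illustration, not those of PEx.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE packages (name TEXT, pin_count INTEGER, material TEXT)")
        conn.execute("INSERT INTO packages VALUES ('DIP-40', 40, 'ceramic')")

        class PackageFrame:
            """Frame whose slots are filled on demand from the packages table."""
            def __init__(self, name, db):
                self._db = db
                self.name = name
                self._slots = None

            def __getattr__(self, slot):
                # Called only for attributes not already set; fetch the row once.
                if self._slots is None:
                    row = self._db.execute(
                        "SELECT pin_count, material FROM packages WHERE name = ?",
                        (self.name,)).fetchone()
                    self._slots = {"pin_count": row[0], "material": row[1]}
                if slot in self._slots:
                    return self._slots[slot]
                raise AttributeError(slot)

        frame = PackageFrame("DIP-40", conn)

        # A design rule reasons over frame slots, not over SQL.
        if frame.pin_count > 32 and frame.material == "ceramic":
            print("use a high-pin-count ceramic socket")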

  5. Emission Database for Global Atmospheric Research (EDGAR).

    ERIC Educational Resources Information Center

    Olivier, J. G. J.; And Others

    1994-01-01

    Presents the objective and methodology chosen for the construction of a global emissions source database called EDGAR and the structural design of the database system. The database estimates, on a regional and grid basis, 1990 annual emissions of greenhouse gases and of ozone-depleting compounds from all known sources. (LZ)

  6. Development of soybean gene expression database (SGED)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Large volumes of microarray expression data are a challenge for analysis. To address this problem, a web-based database, the Soybean Gene Expression Database (SGED), was built using PERL/CGI, C and an ORACLE database management system. SGED contains three components. The Data Mining component serves as a repos...

  7. Automating Relational Database Design for Microcomputer Users.

    ERIC Educational Resources Information Center

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  8. Implementing a Web-Accessible Database.

    ERIC Educational Resources Information Center

    Draffan, E. A. B.; Corbett, Robbie

    2001-01-01

    Discusses the development and implementation of the United Kingdom's National Internet Accessibility Database (NIAD), how the design of the database was based on ease of use by both its target audience in higher education and those working on the database, and the approaches taken to ensure the successful implementation of the NIAD. (Author/LRW)

  9. Online Search Patterns: NLM CATLINE Database.

    ERIC Educational Resources Information Center

    Tolle, John E.; Hah, Sehchang

    1985-01-01

    Presents an analysis of online search patterns within user searching sessions of the National Library of Medicine ELHILL system and examines user search patterns on the CATLINE database. Data previously analyzed on the MEDLINE database for the same period are used to compare the performance parameters of different databases within the same information system.…

  10. Enhancing the DNA Patent Database

    SciTech Connect

    Walters, LeRoy B.

    2008-02-18

    Final Report on Award No. DE-FG0201ER63171 Principal Investigator: LeRoy B. Walters February 18, 2008 This project successfully completed its goal of surveying and reporting on the DNA patenting and licensing policies at 30 major U.S. academic institutions. The report of survey results was published in the January 2006 issue of Nature Biotechnology under the title “The Licensing of DNA Patents by US Academic Institutions: An Empirical Survey.” Lori Pressman was the lead author on this feature article. A PDF reprint of the article will be submitted to our Program Officer under separate cover. The project team has continued to update the DNA Patent Database on a weekly basis since the conclusion of the project. The database can be accessed at dnapatents.georgetown.edu. This database provides a valuable research tool for academic researchers, policymakers, and citizens. A report entitled Reaping the Benefits of Genomic and Proteomic Research: Intellectual Property Rights, Innovation, and Public Health was published in 2006 by the Committee on Intellectual Property Rights in Genomic and Protein Research and Innovation, Board on Science, Technology, and Economic Policy at the National Academies. The report was edited by Stephen A. Merrill and Anne-Marie Mazza. This report employed and then adapted the methodology developed by our research project and quoted our findings at several points. (The full report can be viewed online at the following URL: http://www.nap.edu/openbook.php?record_id=11487&page=R1). My colleagues and I are grateful for the research support of the ELSI program at the U.S. Department of Energy.

  11. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Hasegawa, Tamae; Osanai, Masaaki

    This paper focuses on practical examples for using the CD-ROM version of Books In Print Plus, a database of book information produced by R. R. Bowker. The paper details the contents, installation and operation procedures, hardware requirements, search functions, search items, print commands, and special features of Books in Print Plus. The paper also includes an evaluation of this product based on four examples from actual office use. The paper concludes with a brief introduction to Ulrich’s Plus, a similar CD-ROM product for periodical information.

  12. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system.

  13. Creating databases for biological information: an introduction.

    PubMed

    Stein, Lincoln

    2013-06-01

    The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. PMID:23749755
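
    As a small illustration of the unit's point that indexed, queryable storage eventually beats plain files and directories, the sketch below loads a few hypothetical strain records into SQLite and answers a lookup through an index rather than a scan.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE strains (strain_id TEXT PRIMARY KEY, "
                     "gene TEXT, phenotype TEXT)")
        conn.execute("CREATE INDEX idx_gene ON strains (gene)")
        conn.executemany("INSERT INTO strains VALUES (?, ?, ?)", [
            ("S001", "rpoB", "rifampicin resistant"),
            ("S002", "recA", "UV sensitive"),
            ("S003", "rpoB", "slow growth"),
        ])

        # Indexed lookup instead of scanning every record in a flat file.
        for row in conn.execute("SELECT strain_id, phenotype FROM strains "
                                "WHERE gene = ?", ("rpoB",)):
            print(row)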

  14. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    SciTech Connect

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L.; Loftis, J.P.; Shipe, P.C.; Truett, L.F.

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  15. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    ERIC Educational Resources Information Center

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  16. PDS: A Performance Database Server

    DOE PAGES

    Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; Letsche, Todd A.

    1994-01-01

    The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.

  17. Reinventing the National Topographic Database

    NASA Astrophysics Data System (ADS)

    Jakobsson, A.; Ilves, R.

    2016-06-01

    The National Land Survey (NLS) has had a digital topographic database (TDB) since 1992. Many of its features are based on the Basic Map created by M. Kajamaa in 1947, whose mapping was first completed in 1977. The basis for the renewal of the TDB was laid by investigating the value of the TDB, in a study made by Aalto University in 2014 and a study on the new TDB system for 2030 published by the Ministry of Agriculture in 2015. As a result of these studies, the NLS set up a programme for creating a new National Topographic Database (NTDB) at the beginning of 2015, with the first new version to be available in 2019. The new NTDB has the following key features: 1) it is based on processes where data is naturally maintained, 2) it is quality managed, 3) it has persistent IDs, 4) it supports 3D and 4D, 5) it is based on standards. The technical architecture is based on interoperable modules. A website for following the development of the NTDB can be accessed for more information: http://kmtk.maanmittauslaitos.fi/.

  18. Cooperative answers in database systems

    NASA Technical Reports Server (NTRS)

    Gaasterland, Terry; Godfrey, Parke; Minker, Jack; Novik, Lev

    1993-01-01

    A major concern of researchers who seek to improve human-computer communication involves how to move beyond literal interpretations of queries to a level of responsiveness that takes the user's misconceptions, expectations, desires, and interests into consideration. At Maryland, we are investigating how to better meet a user's needs within the framework of the cooperative answering system of Gal and Minker. We have been exploring how to use semantic information about the database to formulate coherent and informative answers. The work has two main thrusts: (1) the construction of a logic formula which embodies the content of a cooperative answer; and (2) the presentation of the logic formula to the user in a natural language form. The information that is available in a deductive database system for building cooperative answers includes integrity constraints, user constraints, the search tree for answers to the query, and false presuppositions that are present in the query. The basic cooperative answering theory of Gal and Minker forms the foundation of a cooperative answering system that integrates the new construction and presentation methods. This paper provides an overview of the cooperative answering strategies used in the CARMIN cooperative answering system, an ongoing research effort at Maryland. Section 2 gives some useful background definitions. Section 3 describes techniques for collecting cooperative logical formulae. Section 4 discusses which natural language generation techniques are useful for presenting the logic formula in natural language text. Section 5 presents a diagram of the system.

  19. MPW : the metabolic pathways database.

    SciTech Connect

    Selkov, E., Jr.; Grechkin, Y.; Mikhailova, N.; Selkov, E.; Mathematics and Computer Science; Russian Academy of Sciences

    1998-01-01

    The Metabolic Pathways Database (MPW) (www.biobase.com/emphome.html/homepage.html.pags/pathways.html), a derivative of EMP (www.biobase.com/EMP), plays a fundamental role in the technology of metabolic reconstructions from sequenced genomes under the PUMA (www.mcs.anl.gov/home/compbio/PUMA/Production/ReconstructedMetabolism/reconstruction.html), WIT (www.mcs.anl.gov/home/compbio/WIT/wit.html) and WIT2 (beauty.isdn.msc.anl.gov/WIT2.pub/CGI/user.cgi) systems. In October 1997, it included some 2800 pathway diagrams covering primary and secondary metabolism, membrane transport, signal transduction pathways, intracellular traffic, translation and transcription. In the current public release of MPW (beauty.isdn.mcs.anl.gov/MPW), the encoding is based on the logical structure of the pathways and is represented by the objects commonly used in electronic circuit design. This facilitates drawing and editing the diagrams and makes possible automation of the basic simulation operations such as deriving stoichiometric matrices, rate laws, and, ultimately, dynamic models of metabolic pathways. Individual pathway diagrams, automatically derived from the original ASCII records, are stored as SGML instances supplemented by relational indices. An auxiliary database of compound names and structures, encoded in the SMILES format, is maintained to unambiguously connect the pathways to the chemical structures of their intermediates.
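
    The abstract notes that the pathway encoding makes it possible to derive stoichiometric matrices automatically. The sketch below is an illustrative reconstruction of that basic operation, not MPW code: it builds the metabolite-by-reaction matrix for an invented two-step pathway.

```python
import numpy as np

# Toy pathway: R1: A -> B, R2: B -> C + D (negative coefficients = consumption).
reactions = {
    "R1": {"A": -1, "B": 1},
    "R2": {"B": -1, "C": 1, "D": 1},
}
metabolites = sorted({m for stoich in reactions.values() for m in stoich})

# Rows = metabolites, columns = reactions; entries are stoichiometric coefficients.
S = np.zeros((len(metabolites), len(reactions)))
for j, (_, stoich) in enumerate(sorted(reactions.items())):
    for met, coeff in stoich.items():
        S[metabolites.index(met), j] = coeff

print(metabolites)
print(S)
```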

  20. Splatalogue: Database for Astronomical Spectroscopy

    NASA Astrophysics Data System (ADS)

    Remijan, Anthony J.; Markwick-Kemper, A.; ALMA Working Group on Spectral Line Frequencies

    2007-12-01

    The next generation of powerful radio and millimeter/submillimeter observatories (e.g., EVLA, ALMA, and Herschel) requires extensive resources to help identify spectral line transitions. We describe the compilation of the most complete spectral line database currently assembled for this purpose. The Splatalogue is a comprehensive, transition-resolved compilation of observed, measured and calculated spectral lines. In addition to the JPL and CDMS spectral line lists, 229,221 new or updated lines from the Spectral Line Atlas of Interstellar Molecules (SLAIM) were included. Of these, 12,332 lines (an addition of 2000 lines) were added to the Lovas/NIST Recommended Rest Frequencies of known astronomical transitions. For these added lines, we have run diagnostics across the four lists for overlaps in transitions, frequencies, formulae and chemical names, and have arrived at a common way to display and designate each individual species. Splatalogue also contains atomic and recombination lines and template spectra, and it is completely VO-compliant and queryable under the IVOA SLAP standard. The details of the database and how it will be used for the ALMA archive, observing tool and data reduction packages are discussed.
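
    Since the entry states that Splatalogue is queryable under the IVOA SLAP standard, the hedged sketch below shows what a generic SLAP query looks like. The endpoint URL is a placeholder (the real service address should be taken from the Splatalogue site); SLAP services accept a REQUEST=queryData parameter plus a WAVELENGTH range in meters and return a VOTable.

```python
import requests

# Placeholder SLAP endpoint; substitute the actual Splatalogue service URL.
SLAP_URL = "https://splatalogue.example.org/slap"

# A wavelength range of roughly 2.59-2.61 mm brackets the CO J=1-0 line.
params = {"REQUEST": "queryData", "WAVELENGTH": "2.59e-3/2.61e-3"}
response = requests.get(SLAP_URL, params=params, timeout=30)
response.raise_for_status()

# The service returns a VOTable document; print the first few lines as a sanity check.
print("\n".join(response.text.splitlines()[:10]))
```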

  1. The Gene Expression Omnibus database

    PubMed Central

    Clough, Emily; Barrett, Tanya

    2016-01-01

    The Gene Expression Omnibus (GEO) database is an international public repository that archives and freely distributes high-throughput gene expression and other functional genomics data sets. Created in 2000 as a worldwide resource for gene expression studies, GEO has evolved with rapidly changing technologies and now accepts high-throughput data for many other data applications, including those that examine genome methylation, chromatin structure, and genome–protein interactions. GEO supports community-derived reporting standards that specify provision of several critical study elements including raw data, processed data, and descriptive metadata. The database not only provides access to data for tens of thousands of studies, but also offers various Web-based tools and strategies that enable users to locate data relevant to their specific interests, as well as to visualize and analyze the data. This chapter includes detailed descriptions of methods to query and download GEO data and use the analysis and visualization tools. The GEO homepage is at http://www.ncbi.nlm.nih.gov/geo/. PMID:27008011
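
    As a small illustration of the kind of programmatic querying the chapter describes, the sketch below uses Biopython's Entrez utilities to search the GEO DataSets ("gds") index and fetch record summaries. The e-mail address and the search term are placeholders, and the summary field names are accessed defensively because they can differ between GEO record types.

```python
from Bio import Entrez

# NCBI's E-utilities require a contact e-mail; the address below is a placeholder.
Entrez.email = "you@example.org"

# Search the GEO DataSets ("gds") Entrez index; the term is just an example query.
handle = Entrez.esearch(db="gds", term="breast cancer[Title] AND gse[Entry Type]", retmax=5)
ids = Entrez.read(handle)["IdList"]
handle.close()
print("matching record ids:", ids)

# Fetch short summaries for the matching records.
if ids:
    handle = Entrez.esummary(db="gds", id=",".join(ids))
    for docsum in Entrez.read(handle):
        print(docsum.get("Accession"), "-", docsum.get("title"))
    handle.close()
```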

  2. The Gene Expression Omnibus Database.

    PubMed

    Clough, Emily; Barrett, Tanya

    2016-01-01

    The Gene Expression Omnibus (GEO) database is an international public repository that archives and freely distributes high-throughput gene expression and other functional genomics data sets. Created in 2000 as a worldwide resource for gene expression studies, GEO has evolved with rapidly changing technologies and now accepts high-throughput data for many other data applications, including those that examine genome methylation, chromatin structure, and genome-protein interactions. GEO supports community-derived reporting standards that specify provision of several critical study elements including raw data, processed data, and descriptive metadata. The database not only provides access to data for tens of thousands of studies, but also offers various Web-based tools and strategies that enable users to locate data relevant to their specific interests, as well as to visualize and analyze the data. This chapter includes detailed descriptions of methods to query and download GEO data and use the analysis and visualization tools. The GEO homepage is at http://www.ncbi.nlm.nih.gov/geo/. PMID:27008011

  3. CMPD: cancer mutant proteome database.

    PubMed

    Huang, Po-Jung; Lee, Chi-Ching; Tan, Bertrand Chin-Ming; Yeh, Yuan-Ming; Julie Chu, Lichieh; Chen, Ting-Wen; Chang, Kai-Ping; Lee, Cheng-Yang; Gan, Ruei-Chi; Liu, Hsuan; Tang, Petrus

    2015-01-01

    Whole-exome sequencing, which centres on the protein coding regions of disease/cancer associated genes, represents the most cost-effective method to date for deciphering the association between genetic alterations and diseases. Large-scale whole exome/genome sequencing projects have been launched by various institutions, such as NCI, Broad Institute and TCGA, to provide a comprehensive catalogue of coding variants in diverse tissue samples and cell lines. Further functional and clinical interrogation of these sequence variations must rely on extensive cross-platform integration of sequencing information and a proteome database that explicitly and comprehensively archives the corresponding mutated peptide sequences. While such a data resource is critical for the mass spectrometry-based proteomic analysis of exomic variants, no database is currently available for the collection of mutant protein sequences that correspond to recent large-scale genomic data. To address this issue and serve as a bridge to integrate genomic and proteomic datasets, CMPD (http://cgbc.cgu.edu.tw/cmpd) has collected over 2 million genetic alterations, which not only facilitates the confirmation and examination of potential cancer biomarkers but also provides an invaluable resource for translational medicine research and opportunities to identify mutated proteins encoded by mutated genes.

  4. CMPD: cancer mutant proteome database

    PubMed Central

    Huang, Po-Jung; Lee, Chi-Ching; Tan, Bertrand Chin-Ming; Yeh, Yuan-Ming; Julie Chu, Lichieh; Chen, Ting-Wen; Chang, Kai-Ping; Lee, Cheng-Yang; Gan, Ruei-Chi; Liu, Hsuan; Tang, Petrus

    2015-01-01

    Whole-exome sequencing, which centres on the protein coding regions of disease/cancer associated genes, represents the most cost-effective method to date for deciphering the association between genetic alterations and diseases. Large-scale whole exome/genome sequencing projects have been launched by various institutions, such as NCI, Broad Institute and TCGA, to provide a comprehensive catalogue of coding variants in diverse tissue samples and cell lines. Further functional and clinical interrogation of these sequence variations must rely on extensive cross-platform integration of sequencing information and a proteome database that explicitly and comprehensively archives the corresponding mutated peptide sequences. While such a data resource is critical for the mass spectrometry-based proteomic analysis of exomic variants, no database is currently available for the collection of mutant protein sequences that correspond to recent large-scale genomic data. To address this issue and serve as a bridge to integrate genomic and proteomic datasets, CMPD (http://cgbc.cgu.edu.tw/cmpd) has collected over 2 million genetic alterations, which not only facilitates the confirmation and examination of potential cancer biomarkers but also provides an invaluable resource for translational medicine research and opportunities to identify mutated proteins encoded by mutated genes. PMID:25398898

  5. Pfam: the protein families database

    PubMed Central

    Finn, Robert D.; Bateman, Alex; Clements, Jody; Coggill, Penelope; Eberhardt, Ruth Y.; Eddy, Sean R.; Heger, Andreas; Hetherington, Kirstie; Holm, Liisa; Mistry, Jaina; Sonnhammer, Erik L. L.; Tate, John; Punta, Marco

    2014-01-01

    Pfam, available via servers in the UK (http://pfam.sanger.ac.uk/) and the USA (http://pfam.janelia.org/), is a widely used database of protein families, containing 14 831 manually curated entries in the current release, version 27.0. Since the last update article 2 years ago, we have generated 1182 new families and maintained sequence coverage of the UniProt Knowledgebase (UniProtKB) at nearly 80%, despite a 50% increase in the size of the underlying sequence database. Since our 2012 article describing Pfam, we have also undertaken a comprehensive review of the features that are provided by Pfam over and above the basic family data. For each feature, we determined the relevance, computational burden, usage statistics and the functionality of the feature in a website context. As a consequence of this review, we have removed some features, enhanced others and developed new ones to meet the changing demands of computational biology. Here, we describe the changes to Pfam content. Notably, we now provide family alignments based on four different representative proteome sequence data sets and a new interactive DNA search interface. We also discuss the mapping between Pfam and known 3D structures. PMID:24288371

  6. Data mining in forensic image databases

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien

    2002-07-01

    Forensic image databases appear in a wide variety. The oldest computerized database holds fingerprints. Other examples of databases are shoeprints, handwriting, cartridge cases, toolmarks, drug tablets and faces. In these databases, searches are conducted on shape, color and other forensic features. A wide variety of methods exists for searching the images in these databases. The result is a list of candidates that should be compared manually. The challenge in forensic science is to combine the information acquired. Combining the shape of a partial shoe print with information on a cartridge case can result in stronger evidence. It is expected that by searching a combination of these databases together with other databases (e.g., network traffic information), more crimes will be solved. Searching in image databases is still difficult, as can be seen in databases of faces. Due to lighting conditions and alteration of the face by aging, it is nearly impossible for an image-searching method to rank the right face in top position in a database of one million faces without using other information. The methods for data mining images in databases (e.g., the MPEG-7 framework) are discussed, and the expectations for future developments are presented in this study.
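
    To make the idea of shape- and colour-based searching more concrete, here is an illustrative (not forensic-grade) sketch that ranks candidate images against a query image by colour-histogram intersection; the images are synthetic stand-ins generated within the script.

```python
import numpy as np

def colour_histogram(image, bins=32):
    """Concatenate per-channel histograms, normalised to unit sum."""
    hist = np.concatenate(
        [np.histogram(image[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    ).astype(float)
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, 0.0 for disjoint ones."""
    return float(np.minimum(h1, h2).sum())

# Synthetic stand-ins for a query image and two database images.
rng = np.random.default_rng(0)
query = rng.integers(0, 256, size=(64, 64, 3))
candidates = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(2)]

hq = colour_histogram(query)
ranked = sorted(candidates, key=lambda img: similarity(hq, colour_histogram(img)), reverse=True)
print("best match similarity:", similarity(hq, colour_histogram(ranked[0])))
```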

  7. The PIR-International Protein Sequence Database.

    PubMed

    Barker, W C; Garavelli, J S; McGarvey, P B; Marzec, C R; Orcutt, B C; Srinivasarao, G Y; Yeh, L S; Ledley, R S; Mewes, H W; Pfeiffer, F; Tsugita, A; Wu, C

    1999-01-01

    The Protein Information Resource (PIR; http://www-nbrf.georgetown.edu/pir/) supports research on molecular evolution, functional genomics, and computational biology by maintaining a comprehensive, non-redundant, well-organized and freely available protein sequence database. Since 1988 the database has been maintained collaboratively by PIR-International, an international association of data collection centers cooperating to develop this resource during a period of explosive growth in new sequence data and new computer technologies. The PIR Protein Sequence Database entries are classified into superfamilies, families and homology domains, for which sequence alignments are available. Full-scale family classification supports comparative genomics research, aids sequence annotation, assists database organization and improves database integrity. The PIR WWW server supports direct on-line sequence similarity searches, information retrieval, and knowledge discovery by providing the Protein Sequence Database and other supplementary databases. Sequence entries are extensively cross-referenced and hypertext-linked to major nucleic acid, literature, genome, structure, sequence alignment and family databases. The weekly release of the Protein Sequence Database can be accessed through the PIR Web site. The quarterly release of the database is freely available from our anonymous FTP server and is also available on CD-ROM with the accompanying ATLAS database search program.

  8. Overview of selected molecular biological databases

    SciTech Connect

    Rayl, K.D.; Gaasterland, T.

    1994-11-01

    This paper presents an overview of the purpose, content, and design of a subset of the currently available biological databases, with an emphasis on protein databases. Databases included in this summary are 3D-ALI, Berlin RNA databank, Blocks, DSSP, EMBL Nucleotide Database, EMP, ENZYME, FSSP, GDB, GenBank, HSSP, LiMB, PDB, PIR, PKCDD, ProSite, and SWISS-PROT. The goal is to provide a starting point for researchers who wish to take advantage of the myriad available databases. Rather than providing a complete explanation of each database, we present its content and form by explaining the details of typical entries. Pointers to more complete "user guides" are included, along with general information on where to search for a new database.

  9. Interactive Database of Pulsar Flux Density Measurements

    NASA Astrophysics Data System (ADS)

    Koralewska, O.; Krzeszowski, K.; Kijak, J.; Lewandowski, W.

    2012-12-01

    The number of astronomical observations is steadily growing, giving rise to the need to catalogue the obtained results. Many databases have been created to store different types of data and serve a variety of purposes, e.g., databases providing basic data for astronomical objects (SIMBAD Astronomical Database), databases devoted to one type of astronomical object (ATNF Pulsar Database) or to a set of values of a specific parameter (Lorimer 1995, a database of flux density measurements for 280 pulsars at frequencies up to 1606 MHz), etc. We found that creating an online database of pulsar flux measurements, provided with facilities for plotting diagrams and histograms, calculating mean values for a chosen set of data, filtering parameter values and adding new measurements by registered users, could be useful in further studies of pulsar spectra.
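
    As an example of the mean-value and spectral analyses such a database is meant to support, the sketch below computes a mean flux density and fits a power-law spectral index to a handful of invented flux measurements.

```python
import numpy as np

# Invented example measurements: frequency in MHz, mean flux density in mJy.
freq_mhz = np.array([400.0, 600.0, 925.0, 1400.0])
flux_mjy = np.array([95.0, 52.0, 28.0, 14.0])

print("mean flux density: %.1f mJy" % flux_mjy.mean())

# Pulsar spectra are usually modelled as a power law S ~ nu**alpha, so the spectral
# index alpha is the slope of a straight-line fit in log-log space.
alpha, log_s0 = np.polyfit(np.log10(freq_mhz), np.log10(flux_mjy), 1)
print("spectral index alpha: %.2f" % alpha)
```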

  10. The Molecular Biology Database Collection: 2002 update

    PubMed Central

    Baxevanis, Andreas D.

    2002-01-01

    The Molecular Biology Database Collection is an online resource listing key databases of value to the biological community. This Collection is intended to bring fellow scientists’ attention to high-quality databases that are available throughout the world, rather than just be a lengthy listing of all available databases. As such, this up-to-date listing is intended to serve as the initial point from which to find specialized databases that may be of use in biological research. The databases included in this Collection provide new value to the underlying data by virtue of curation, new data connections or other innovative approaches. Short, searchable summaries and updates for each of the databases included in the Collection are available through the Nucleic Acids Research Web site at http://nar.oupjournals.org. PMID:11752241

  11. The Molecular Biology Database Collection: 2003 update

    PubMed Central

    Baxevanis, Andreas D.

    2003-01-01

    The Molecular Biology Database Collection is an online resource listing key databases of value to the biological community. This Collection is intended to bring fellow scientists' attention to high-quality databases that are available throughout the world, rather than just be a lengthy listing of all available databases. As such, this up-to-date listing is intended to serve as the jumping-off point from which to find specialized databases that may be of use in advancing biological research. The databases included in this Collection provide new value to the underlying data by virtue of curation, new data connections or other innovative approaches. Short, searchable summaries and updates for each of the databases included in this Collection are available through the Nucleic Acids Research Web site at http://nar.oupjournals.org. PMID:12519937

  12. Geologic map and map database of western Sonoma, northernmost Marin, and southernmost Mendocino counties, California

    USGS Publications Warehouse

    Blake, M.C.; Graymer, R.W.; Stamski, R.E.

    2002-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (wsomf.ps, wsomf.pdf, wsomf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  13. Nucleic Acids Research annual Database Issue and the NAR online Molecular Biology Database Collection in 2009

    PubMed Central

    Galperin, Michael Y.; Cochrane, Guy R.

    2009-01-01

    The current issue of Nucleic Acids Research includes descriptions of 179 databases, of which 95 are new. These databases (along with several molecular biology databases described in other journals) have been included in the Nucleic Acids Research online Molecular Biology Database Collection, bringing the total number of databases in the collection to 1170. In this introductory comment, we briefly describe some of these new databases and review the principles guiding the selection of databases for inclusion in the Nucleic Acids Research annual Database Issue and the Nucleic Acids Research online Molecular Biology Database Collection. The complete database list and summaries are available online at the Nucleic Acids Research web site (http://nar.oxfordjournals.org/). PMID:19033364

  14. Database interfaces on NASA's heterogeneous distributed database system

    NASA Technical Reports Server (NTRS)

    Huang, Shou-Hsuan Stephen

    1987-01-01

    The purpose of the Distributed Access View Integrated Database (DAVID) interface module (Module 9: Resident Primitive Processing Package) is to provide data transfer between local DAVID systems and resident Data Base Management Systems (DBMSs). The results of the current research are summarized, and a detailed description of the interface module is provided. Several Pascal templates were constructed, and the Resident Processor program was also developed. Even though it is designed for the Pascal templates, it can be modified for templates in other languages, such as C, without much difficulty. The Resident Processor itself can be written in any programming language. Since the Module 5 routines are not ready yet, there is no way to test the interface module. However, simulation shows that the database access programs produced by the Resident Processor do work according to the specifications.

  15. The National Geochemical Survey; database and documentation

    USGS Publications Warehouse

    ,

    2004-01-01

    The USGS, in collaboration with other federal and state government agencies, industry, and academia, is conducting the National Geochemical Survey (NGS) to produce a body of geochemical data for the United States based primarily on stream sediments, analyzed using a consistent set of methods. These data will compose a complete, national-scale geochemical coverage of the US, and will enable construction of geochemical maps, refine estimates of baseline concentrations of chemical elements in the sampled media, and provide context for a wide variety of studies in the geological and environmental sciences. The goal of the NGS is to analyze at least one stream-sediment sample in every 289 km2 area by a single set of analytical methods across the entire nation, with other solid sample media substituted where necessary. The NGS incorporates geochemical data from a variety of sources, including existing analyses in USGS databases, reanalyses of samples in USGS archives, and analyses of newly collected samples. At the present time, the NGS includes data covering ~71% of the land area of the US, including samples in all 50 states. This version of the online report provides complete access to NGS data, describes the history of the project, the methodology used, and presents preliminary geochemical maps for all analyzed elements. Future editions of this and other related reports will include the results of analysis of variance studies, as well as interpretive products related to the NGS data.

  16. Development of a national, dynamic reservoir-sedimentation database

    USGS Publications Warehouse

    Gray, J.R.; Bernard, J.M.; Stewart, D.W.; McFaul, E.J.; Laurent, K.W.; Schwarz, G.E.; Stinson, J.T.; Jonas, M.M.; Randle, T.J.; Webb, J.W.

    2010-01-01

    The importance of dependable, long-term water supplies, coupled with the need to quantify rates of capacity loss of the Nation's reservoirs due to sediment deposition, was the most compelling reason for developing the REServoir-SEDimentation survey information (RESSED) database and website. Created under the auspices of the Advisory Committee on Water Information's Subcommittee on Sedimentation by the U.S. Geological Survey and the Natural Resources Conservation Service, the RESSED database is the most comprehensive compilation of data from reservoir bathymetric and dry-basin surveys in the United States. As of March 2010, the database, which contains data compiled on the 1950s vintage Soil Conservation Service's Form SCS-34 data sheets, contained results from 6,616 surveys on 1,823 reservoirs in the United States and two surveys on one reservoir in Puerto Rico. The data span the period 1755–1997, with 95 percent of the surveys performed from 1930–1990. The reservoir surface areas range from sub-hectare-scale farm ponds to 658 km2 Lake Powell. The data in the RESSED database can be useful for a number of purposes, including calculating changes in reservoir-storage characteristics, quantifying sediment budgets, and estimating erosion rates in a reservoir's watershed. The March 2010 version of the RESSED database has a number of deficiencies, including a cryptic and out-of-date database architecture; some geospatial inaccuracies (although most have been corrected); other data errors; an inability to store all data in a readily retrievable manner; and an inability to store all data types that currently exist. Perhaps most importantly, the March 2010 version of the RESSED database provides no publicly available means to submit new data and corrections to existing data. To address these and other deficiencies, the Subcommittee on Sedimentation, through the U.S. Geological Survey and the U.S. Army Corps of Engineers, began a collaborative project in
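
    A typical use of such survey pairs is to calculate capacity-loss rates. The short sketch below works through that arithmetic for an invented reservoir with two surveys; the numbers are placeholders, not RESSED data.

```python
# Invented example: two bathymetric surveys of the same reservoir.
capacity_1955_m3 = 12_400_000.0   # storage capacity at the first survey
capacity_1985_m3 = 11_150_000.0   # storage capacity at the resurvey
years_between = 1985 - 1955

loss_m3 = capacity_1955_m3 - capacity_1985_m3
annual_loss_m3 = loss_m3 / years_between
annual_loss_pct = 100.0 * annual_loss_m3 / capacity_1955_m3

print(f"total capacity loss: {loss_m3:,.0f} m^3")
print(f"average annual loss: {annual_loss_m3:,.0f} m^3/yr ({annual_loss_pct:.2f} %/yr)")
```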

  17. PEP725 Pan European Phenological Database

    NASA Astrophysics Data System (ADS)

    Koch, E.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2012-04-01

    PEP725 is a 5-year project with the main objective of promoting and facilitating phenological research by delivering a pan-European phenological database with open, unrestricted data access for science, research and education. PEP725 is funded by EUMETNET (the network of European meteorological services), ZAMG and the Austrian ministry for science & research bm:w_f. So far 16 European national meteorological services and 7 partners from different national phenological network operators have joined PEP725. Data access is very easy via web access from the homepage www.pep725.eu. Having accepted the PEP725 data policy and registered, users can download data by different criteria, for instance the selection of a specific plant or all data from one country. At present more than 300 000 new records are available in the PEP725 database, coming from 31 European countries and from 8150 stations. For some additional stations (154), META data (location and data holder) are provided. Links to the network operators and data owners are also on the webpage in case you have more sophisticated questions about the data. Another objective of PEP725 is to bring together network operators and scientists by organizing workshops. In April 2012 the second of these workshops will take place on the premises of ZAMG. Invited speakers will give presentations spanning the whole study area of phenology, starting from observations and extending to modelling. Quality checking is also a big issue; at the moment we are studying the literature to find appropriate methods.

  18. Information persistence using XML database technology

    NASA Astrophysics Data System (ADS)

    Clark, Thomas A.; Lipa, Brian E. G.; Macera, Anthony R.; Staskevich, Gennady R.

    2005-05-01

    The Joint Battlespace Infosphere (JBI) Information Management (IM) services provide information exchange and persistence capabilities that support tailored, dynamic, and timely access to required information, enabling near real-time planning, control, and execution for DoD decision making. JBI IM services will be built on a substrate of network centric core enterprise services and when transitioned, will establish an interoperable information space that aggregates, integrates, fuses, and intelligently disseminates relevant information to support effective warfighter business processes. This virtual information space provides individual users with information tailored to their specific functional responsibilities and provides a highly tailored repository of, or access to, information that is designed to support a specific Community of Interest (COI), geographic area or mission. Critical to effective operation of JBI IM services is the implementation of repositories, where data, represented as information, is represented and persisted for quick and easy retrieval. This paper will address information representation, persistence and retrieval using existing database technologies to manage structured data in Extensible Markup Language (XML) format as well as unstructured data in an IM services-oriented environment. Three basic categories of database technologies will be compared and contrasted: Relational, XML-Enabled, and Native XML. These technologies have diverse properties such as maturity, performance, query language specifications, indexing, and retrieval methods. We will describe our application of these evolving technologies within the context of a JBI Reference Implementation (RI) by providing some hopefully insightful anecdotes and lessons learned along the way. This paper will also outline future directions, promising technologies and emerging COTS products that can offer more powerful information management representations, better persistence mechanisms and
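
    As a toy illustration of the "XML-enabled" storage pattern contrasted in the paper, the sketch below persists a small invented XML document in a relational table and then queries it with XPath-style predicates; the element and attribute names are made up for the example.

```python
import sqlite3
import xml.etree.ElementTree as ET

doc = """<messages>
  <msg coi="logistics" area="north"><body>fuel status</body></msg>
  <msg coi="intel" area="north"><body>sensor report</body></msg>
</messages>"""

# "XML-enabled" style: persist the raw document in a relational table ...
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE repository (id INTEGER PRIMARY KEY, doc TEXT)")
con.execute("INSERT INTO repository (doc) VALUES (?)", (doc,))

# ... then parse it back out and query with XPath-style predicates.
(stored,) = con.execute("SELECT doc FROM repository").fetchone()
root = ET.fromstring(stored)
for msg in root.findall(".//msg[@coi='intel']"):
    print(msg.get("area"), "-", msg.findtext("body"))
con.close()
```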

  19. The Pfam protein families database

    PubMed Central

    Finn, Robert D.; Mistry, Jaina; Tate, John; Coggill, Penny; Heger, Andreas; Pollington, Joanne E.; Gavin, O. Luke; Gunasekaran, Prasad; Ceric, Goran; Forslund, Kristoffer; Holm, Liisa; Sonnhammer, Erik L. L.; Eddy, Sean R.; Bateman, Alex

    2010-01-01

    Pfam is a widely used database of protein families and domains. This article describes a set of major updates that we have implemented in the latest release (version 24.0). The most important change is that we now use HMMER3, the latest version of the popular profile hidden Markov model package. This software is ∼100 times faster than HMMER2 and is more sensitive due to the routine use of the forward algorithm. The move to HMMER3 has necessitated numerous changes to Pfam that are described in detail. Pfam release 24.0 contains 11 912 families, of which a large number have been significantly updated during the past two years. Pfam is available via servers in the UK (http://pfam.sanger.ac.uk/), the USA (http://pfam.janelia.org/) and Sweden (http://pfam.sbc.su.se/). PMID:19920124
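
    Because the release described here moved Pfam to HMMER3, a common way to use the data locally is to scan query proteins against the Pfam profile HMMs with hmmscan. The sketch below assumes an hmmpress-ed copy of Pfam-A.hmm and a FASTA file of queries are available; both file names are placeholders.

```python
import subprocess

# Placeholder inputs: a pressed Pfam-A.hmm profile database and a FASTA file of queries.
cmd = [
    "hmmscan",
    "--tblout", "pfam_hits.tbl",   # one-line-per-hit summary table
    "--cut_ga",                    # use Pfam's curated gathering thresholds
    "Pfam-A.hmm",
    "query_proteins.fasta",
]
subprocess.run(cmd, check=True)

# Parse the table, skipping comment lines, to list the matched family per query.
with open("pfam_hits.tbl") as fh:
    for line in fh:
        if line.startswith("#"):
            continue
        fields = line.split()
        print(fields[2], "->", fields[0])   # query name -> Pfam family name
```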

  20. Sandia Wind Turbine Loads Database

    DOE Data Explorer

    The Sandia Wind Turbine Loads Database is divided into six files, each corresponding to approximately 16 years of simulation. The files are text files with data in columnar format. The 424MB zipped file containing six data files can be downloaded by the public. The files simulate 10-minute maximum loads for the NREL 5MW wind turbine. The details of the loads simulations can be found in the paper: “Decades of Wind Turbine Loads Simulations”, M. Barone, J. Paquette, B. Resor, and L. Manuel, AIAA2012-1288 (3.69MB PDF). Note that the site-average wind speed is 10 m/s (class I-B), not the 8.5 m/s reported in the paper.
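
    The data files are described as columnar text, so a minimal way to work with them is sketched below; the file name and the assumed column layout (wind speed in column 0, a 10-minute maximum load in column 1) are placeholders that should be checked against the dataset's own documentation.

```python
import numpy as np

# Placeholder file name; columns assumed: [0] mean wind speed (m/s),
# [1] 10-minute maximum blade-root bending moment (kN*m).
data = np.loadtxt("loads_file_01.txt", comments="#")
wind_speed = data[:, 0]
max_moment = data[:, 1]

# Bin the 10-minute maxima by wind speed to see how extreme loads scale with wind.
bins = np.arange(4, 26, 2)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (wind_speed >= lo) & (wind_speed < hi)
    if sel.any():
        print(f"{lo:2d}-{hi:2d} m/s: n={sel.sum():5d}  max={max_moment[sel].max():10.1f} kN*m")
```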

  1. Catalog of databases and reports

    SciTech Connect

    Burtis, M.D.

    1997-04-01

    This catalog provides information about the many reports and materials made available by the US Department of Energy's (DOE's) Global Change Research Program (GCRP) and the Carbon Dioxide Information Analysis Center (CDIAC). The catalog is divided into nine sections plus the author and title indexes: Section A--US Department of Energy Global Change Research Program Research Plans and Summaries; Section B--US Department of Energy Global Change Research Program Technical Reports; Section C--US Department of Energy Atmospheric Radiation Measurement (ARM) Program Reports; Section D--Other US Department of Energy Reports; Section E--CDIAC Reports; Section F--CDIAC Numeric Data and Computer Model Distribution; Section G--Other Databases Distributed by CDIAC; Section H--US Department of Agriculture Reports on Response of Vegetation to Carbon Dioxide; and Section I--Other Publications.

  2. The Comprehensive Antibiotic Resistance Database

    PubMed Central

    McArthur, Andrew G.; Waglechner, Nicholas; Nizam, Fazmin; Yan, Austin; Azad, Marisa A.; Baylay, Alison J.; Bhullar, Kirandeep; Canova, Marc J.; De Pascale, Gianfranco; Ejim, Linda; Kalan, Lindsay; King, Andrew M.; Koteva, Kalinka; Morar, Mariya; Mulvey, Michael R.; O'Brien, Jonathan S.; Pawlowski, Andrew C.; Piddock, Laura J. V.; Spanogiannopoulos, Peter; Sutherland, Arlene D.; Tang, Irene; Taylor, Patricia L.; Thaker, Maulik; Wang, Wenliang; Yan, Marie; Yu, Tennison

    2013-01-01

    The field of antibiotic drug discovery and the monitoring of new antibiotic resistance elements have yet to fully exploit the power of the genome revolution. Despite the fact that the first genomes sequenced of free-living organisms were those of bacteria, there have been few specialized bioinformatic tools developed to mine the growing amount of genomic data associated with pathogens. In particular, there are few tools to study the genetics and genomics of antibiotic resistance and how it impacts bacterial populations, ecology, and the clinic. We have initiated development of such tools in the form of the Comprehensive Antibiotic Resistance Database (CARD; http://arpcard.mcmaster.ca). The CARD integrates disparate molecular and sequence data, provides a unique organizing principle in the form of the Antibiotic Resistance Ontology (ARO), and can quickly identify putative antibiotic resistance genes in new unannotated genome sequences. This unique platform provides an informatic tool that bridges antibiotic resistance concerns in health care, agriculture, and the environment. PMID:23650175

  3. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Fujiwara, Yuzuru

    CD-ROM is regarded as an epoch-making medium because of its advantages, such as large capacity, compact size, mass reproducibility, read-only memory and cost-performance ratio. Some large dictionaries and online databases have already been converted to CD-ROM versions, and more recently information on publications and machine parts has been converted as well. Moreover, various CD-ROM-aided products, such as support systems for R&D and decision making, are being produced. Still, many problems remain in the sophisticated utilization of CD-ROM and in the machinery for distributing information. The author reviews this mini-series and describes the prospects for the development of CD-ROM.

  4. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Nagatsuka, Takashi

    This paper introduces CD-ROM-aided products and their utilization in foreign countries, mainly in the U.S.A., where CD-ROM has recently been used in various fields. The author classifies these products into four groups: 1. CD-ROM that substitutes for printed matter such as encyclopedias and dictionaries (e.g., Grolier's Electronic Encyclopedia); 2. CD-ROM that substitutes for online databases (e.g., Disclosure, Medline); 3. CD-ROM that offers additional functions, such as placing orders for books, besides information retrieval (e.g., Books in Print Plus); 4. CD-ROM that contains literature including pictures and figures (e.g., ADONIS). Future trends in CD-ROM utilization are also suggested.

  5. SRS Waste Characterization System Database

    SciTech Connect

    Hester, J.R.

    2000-03-02

    Information relating to the contents of Savannah River Site high level waste tanks is available from numerous and scattered sources. The Waste Characterization System (WCS) database consolidates this information and provides for the orderly weighing of contradictory information. WCS is a computerized system that tracks the inventory of selected chemicals and radionuclides in high level waste tanks. WCS resides on an SRS local network server. WCS inventories cover approximately ninety non-radioactive chemicals and forty radionuclides and are based on concentration estimates from sample analyses, process histories, composition studies, and theoretical relationships. The intent of WCS is to consolidate waste characterization information, which now exists in multiple locations, and to estimate the most likely values of waste constituent inventories, which are based on the best-estimate compositions.

  6. The Majorana Parts Tracking Database

    SciTech Connect

    Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O׳Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C. -H.; Yumatov, V.; Zhitnikov, I.

    2015-04-01

    The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radiopurity required for this rare decay search.
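
    To illustrate how a location history can translate into a cosmic-ray exposure estimate, here is a simplified sketch with invented numbers; the actual calculator in the parts database is more detailed, and the relative-flux values below are placeholders.

```python
# Invented location history for one part: (location, days stored, relative cosmic-ray
# flux at that location, with 1.0 = sea level and ~0 for deep underground storage).
history = [
    ("vendor, surface",       45, 1.00),
    ("surface lab",           10, 1.20),
    ("underground cleanroom", 300, 0.01),
]

# Simple activation proxy: exposure accumulates in proportion to flux x time.
exposure_days_sea_level = sum(days * flux for _, days, flux in history)
print(f"sea-level-equivalent exposure: {exposure_days_sea_level:.1f} days")
```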

  7. Models And Results Database System.

    2001-03-27

    Version 00 MAR-D 4.16 is a program that is used primarily for Probabilistic Risk Assessment (PRA) data loading. This program defines a common relational database structure that is used by other PRA programs. This structure allows all of the software to access and manipulate data created by other software in the system without performing a lengthy conversion. The MAR-D program also provides the facilities for loading and unloading of PRA data from the relational database structure used to store the data to an ASCII format for interchange with other PRA software. The primary function of MAR-D is to create a data repository for NUREG-1150 and other permanent data by providing input, conversion, and output capabilities for data used by IRRAS, SARA, SETS and FRANTIC.

  8. The NIST Quantitative Infrared Database

    PubMed Central

    Chu, P. M.; Guenther, F. R.; Rhoderick, G. C.; Lafferty, W. J.

    1999-01-01

    With the recent developments in Fourier transform infrared (FTIR) spectrometers it is becoming more feasible to place these instruments in field environments. As a result, there has been an enormous increase in the use of FTIR techniques for a variety of qualitative and quantitative chemical measurements. These methods offer the possibility of fully automated real-time quantitation of many analytes; therefore FTIR has great potential as an analytical tool. Recently, the U.S. Environmental Protection Agency (U.S.EPA) has developed protocol methods for emissions monitoring using both extractive and open-path FTIR measurements. Depending upon the analyte, the experimental conditions and the analyte matrix, approximately 100 of the hazardous air pollutants (HAPs) listed in the 1990 U.S.EPA Clean Air Act amendment (CAAA) can be measured. The National Institute of Standards and Technology (NIST) has initiated a program to provide quality-assured infrared absorption coefficient data based on NIST-prepared primary gas standards. Currently, absorption coefficient data have been acquired for approximately 20 of the HAPs. For each compound, the absorption coefficient spectrum was calculated using nine transmittance spectra at 0.12 cm−1 resolution and the Beer's law relationship. The uncertainties in the absorption coefficient data were estimated from the linear regressions of the transmittance data and considerations of other error sources such as the nonlinear detector response. For absorption coefficient values greater than 1 × 10−4 (μmol/mol)−1 m−1 the average relative expanded uncertainty is 2.2 %. This quantitative infrared database is currently an ongoing project at NIST. Additional spectra will be added to the database as they are acquired. Our current plans include continued data acquisition of the compounds listed in the CAAA, as well as the compounds that contribute to global warming and ozone depletion.
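
    The absorption coefficients described here follow from the Beer's law relationship A = -log10(T) = alpha·c·L. The sketch below reproduces that regression step on synthetic transmittance data (the concentrations, path length and noise level are invented), recovering the absorption coefficient as the fitted slope.

```python
import numpy as np

# Invented transmittance measurements at one wavenumber for nine gas standards.
conc_umol_per_mol = np.array([5, 10, 20, 30, 40, 50, 60, 80, 100], dtype=float)
path_length_m = 10.0
true_alpha = 2.5e-4                     # (umol/mol)^-1 m^-1, used only to fake the data
rng = np.random.default_rng(1)
transmittance = 10 ** (-true_alpha * conc_umol_per_mol * path_length_m)
transmittance *= 1 + rng.normal(0, 0.002, transmittance.size)   # measurement noise

# Beer's law: A = -log10(T) = alpha * c * L, so alpha is the slope of A against c*L.
absorbance = -np.log10(transmittance)
alpha, intercept = np.polyfit(conc_umol_per_mol * path_length_m, absorbance, 1)
print(f"fitted absorption coefficient: {alpha:.2e} (umol/mol)^-1 m^-1")
```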

  9. Who's Gonna Pay the Piper for Free Online Databases?

    ERIC Educational Resources Information Center

    Jacso, Peter

    1996-01-01

    Discusses new pricing models for some online services and considers the possibilities for the traditional online database market. Topics include multimedia music databases, including copyright implications; other retail-oriented databases; and paying for free databases with advertising. (LRW)

  10. A comprehensive database of Martian landslides

    NASA Astrophysics Data System (ADS)

    Battista Crosta, Giovanni; Vittorio De Blasio, Fabio; Frattini, Paolo; Valbuzzi, Elena

    2016-04-01

    During a long-term project, we have identified and classified a large number (> 3000) of Martian landslides, especially but not exclusively from Valles Marineris. This database provides a more complete basis for a statistical study of landslides on Mars and their relationship with geographical and environmental conditions. Landslides have been mapped according to standard geomorphological criteria, delineating both the landslide scar and accumulation limits, associating each scarp with a deposit, and using the program ArcGis for generation of a complete digital dataset. Multiple accumulations from the same source area or from different sources have been differentiated, where possible, to obtain a more complete dataset and to allow more refined analyses. Each landslide has been classified according to a set of criteria including: type, degree of confinement, possible trigger, elevation with respect to datum, geomorphological features, degree of multiplicity, and so on. The runout, fall height, and volume have been measured for each deposit. In fact, the database is revealing a series of trends that may assist in understanding landform processes on Mars and its past climatic conditions. One of the most interesting aspects of our dataset is the presence of a population of landslides whose particularly long mobility deviates from average behavior. While some landslides have travelled unimpeded on a usually flat area, others have travelled against obstacles or mounds. Therefore, landslides are also studied in relation to i) morphologies created by the landslide itself, ii) presence of mounds, barriers or elevations that have affected the movement of the landslide mass. In some extreme cases, the landslide was capable of travelling for several tens of km along the whole valley and upon reaching the opposite side it travelled upslope for several hundreds of meters, which is an indication of high travelling speed. In other cases, the high speed is revealed by dynamic deformations
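
    Runout and fall height are reported for each deposit because their ratio H/L is the standard mobility index used to flag the anomalously long-runout events mentioned above. A minimal worked example with invented values:

```python
import math

# Invented values for one hypothetical Valles Marineris landslide.
fall_height_m = 6000.0    # drop from scar crown to deposit toe
runout_m = 60000.0        # horizontal travel distance

# H/L is a standard mobility index; its arctangent is the "apparent friction angle",
# which is strikingly low for long-runout Martian landslides.
h_over_l = fall_height_m / runout_m
apparent_friction_deg = math.degrees(math.atan(h_over_l))
print(f"H/L = {h_over_l:.2f}, apparent friction angle = {apparent_friction_deg:.1f} deg")
```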

  11. BIOSPIDA: A Relational Database Translator for NCBI.

    PubMed

    Hagen, Matthew S; Lee, Eva K

    2010-01-01

    As the volume and availability of biological databases continue their widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. Retrieving all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, Pubmed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools allow research scientists to locally integrate databases from NCBI without significant workload or development time. PMID:21347013

  12. Hydrologic database user's manual

    SciTech Connect

    Champman, J.B.; Gray, K.J.; Thompson, C.B.

    1993-09-01

    The Hydrologic Database is an electronic filing cabinet containing water-related data for the Nevada Test Site (NTS). The purpose of the database is to enhance research on hydrologic issues at the NTS by providing efficient access to information gathered by a variety of scientists. Data are often generated for specific projects and are reported to DOE in the context of specific project goals. The originators of the database recognized that much of this information has a general value that transcends project-specific requirements. Allowing researchers access to information generated by a wide variety of projects can prevent needless duplication of data-gathering efforts and can augment new data collection and interpretation. In addition, collecting this information in the database ensures that the results are not lost at the end of discrete projects as long as the database is actively maintained. This document is a guide to using the database.

  13. Database Research for Pediatric Infectious Diseases.

    PubMed

    Kronman, Matthew P; Gerber, Jeffrey S; Newland, Jason G; Hersh, Adam L

    2015-06-01

    Multiple electronic and administrative databases are available for the study of pediatric infectious diseases. In this review, we identify research questions well suited to investigations using these databases and highlight their advantages, including their relatively low cost, efficiency, and ability to detect rare outcomes. We discuss important limitations, including those inherent in observational study designs and the potential for misclassification of exposures and outcomes, and identify strategies for addressing these limitations. We provide examples of commonly used databases and discuss methodologic considerations in undertaking studies using large databases. Last, we propose a checklist for use in planning or evaluating studies of pediatric infectious diseases that employ electronic databases, and we outline additional practical considerations regarding the cost of and how to access commonly used databases. PMID:26407414

  14. Comparative Analyses of Plant Transcription Factor Databases

    PubMed Central

    Ramirez, Silvia R; Basu, Chhandak

    2009-01-01

    Transcription factors (TFs) are proteinaceous complexes that bind to promoter regions in the DNA and affect transcription initiation. Plant TFs control gene expression, and genes control many physiological processes, which in turn trigger cascades of biochemical reactions in plant cells. The databases available for plant TFs are fairly abundant, but they convey different information in different formats. Some of the publicly available plant TF databases are narrow in scope, while others are broad. For example, some of the best TF databases are very specific to just one plant species, while other databases cover up to 20 different plant species. In this review, plant TF databases ranging from a single species to many are assessed and described. Comparative analyses of all the databases and their advantages and disadvantages are also discussed. PMID:19721806

  15. bioDBnet: the biological database network

    PubMed Central

    Mudunuri, Uma; Che, Anney; Yi, Ming; Stephens, Robert M.

    2009-01-01

    Summary: bioDBnet is an online web resource that provides interconnected access to many types of biological databases. It has integrated many of the most commonly used biological databases and in its current state has 153 database identifiers (nodes) covering all aspects of biology including genes, proteins, pathways and other biological concepts. bioDBnet offers various ways to work with these databases including conversions, extensive database reports, custom navigation and has various tools to enhance the quality of the results. Importantly, the access to bioDBnet is updated regularly, providing access to the most recent releases of each individual database. Availability: http://biodbnet.abcc.ncifcrf.gov Contact: stephensr@mail.nih.gov Supplementary information: Supplementary data are available at Bioinformatics online PMID:19129209

  16. Nuclear Concrete Materials Database Phase I Development

    SciTech Connect

    Ren, Weiju; Naus, Dan J

    2012-05-01

    The FY 2011 accomplishments in Phase I development of the Nuclear Concrete Materials Database to support the Light Water Reactor Sustainability Program are summarized. The database has been developed using the ORNL materials database infrastructure established for the Gen IV Materials Handbook to achieve cost reduction and development efficiency. In this Phase I development, the database has been successfully designed and constructed to manage documents in the Portable Document Format generated from the Structural Materials Handbook that contains nuclear concrete materials data and related information. The completion of the Phase I database has established a solid foundation for Phase II development, in which a digital database will be designed and constructed to manage nuclear concrete materials data in various digitized formats to facilitate electronic and mathematical processing for analysis, modeling, and design applications.

  17. Heterogeneous distributed databases: A case study

    NASA Technical Reports Server (NTRS)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first data base, ICMS, resides on a VAX11/780 and has been implemented using VAX DBMS, a CODASYL based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems are common to both ships and submarines.

  18. Synthesized Population Databases: A Geospatial Database of US Poultry Farms

    PubMed Central

    Bruhn, Mark C.; Munoz, Breda; Cajka, James; Smith, Gary; Curry, Ross J.; Wagener, Diane K.; Wheaton, William D.

    2013-01-01

    The pervasive and potentially severe economic, social, and public health consequences of infectious disease in farmed animals require that plans be in place for a rapid response. Increasingly, agent-based models are being used to analyze the spread of animal-borne infectious disease outbreaks and derive policy alternatives to control future outbreaks. Although the locations, types, and sizes of animal farms are essential model inputs, no public domain nationwide geospatial database of actual farm locations and characteristics currently exists in the United States. This report describes a novel method to develop a synthetic dataset that replicates the spatial distribution of poultry farms, as well as the type and number of birds raised on them. It combines county-aggregated poultry farm counts, land use/land cover, transportation, business, and topographic data to generate locations in the conterminous United States where poultry farms are likely to be found. Simulation approaches used to evaluate the accuracy of this method when compared to that of a random placement alternative found this method to be superior. The results suggest the viability of adapting this method to simulate other livestock farms of interest to infectious disease researchers. PMID:25364787

  19. Mars Global Digital Dune Database: MC2-MC29

    USGS Publications Warehouse

    Hayward, Rosalyn K.; Mullins, Kevin F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, Anthony; Christensen, P.R.

    2007-01-01

    Introduction The Mars Global Digital Dune Database presents data and describes the methodology used in creating the database. The database provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields from 65° N to 65° S latitude and encompasses ~ 550 dune fields. The database will be expanded to cover the entire planet in later versions. Although we have attempted to include all dune fields between 65° N and 65° S, some have likely been excluded for two reasons: 1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or 2) resolution of THEMIS IR coverage (100m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~ 1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of sediment of dunes are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow angle (MOC NA) images allowed, we classified dunes and included dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid was calculated. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes over 1800 selected Thermal Emission Imaging System (THEMIS) infrared (IR), THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA
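
    One derived quantity mentioned above is the azimuth from a crater centroid to the dune-field centroid. The sketch below computes an initial great-circle bearing between two invented centroid coordinates; it treats Mars as a sphere, which is an assumption made purely for the illustration.

```python
import math

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Great-circle initial bearing from point 1 to point 2, in degrees east of north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

# Invented coordinates for a crater centroid and the dune-field centroid inside it.
crater_lat, crater_lon = -42.10, 38.55
dune_lat, dune_lon = -42.35, 38.70
azimuth = initial_bearing_deg(crater_lat, crater_lon, dune_lat, dune_lon)
print(f"centroid-to-centroid azimuth: {azimuth:.1f} deg")
```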

  20. Integrating Variances into an Analytical Database

    NASA Technical Reports Server (NTRS)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make them easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance had been bypassed many times already, and so the requirement might not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g., Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part that contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.
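
    A minimal sketch of the repeated-entry analysis and spreadsheet export described above, assuming a simple list of variance records; the field names are hypothetical, not the actual form fields.

    # Count how often each targeted requirement recurs across variance records
    # and write the tallies to a CSV that Excel can sort and filter.
    import csv
    from collections import Counter

    records = [
        {"variance_id": "V-001", "requirement": "REQ-12"},
        {"variance_id": "V-002", "requirement": "REQ-12"},
        {"variance_id": "V-003", "requirement": "REQ-07"},
    ]

    counts = Counter(r["requirement"] for r in records)

    with open("variance_counts.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["requirement", "times_bypassed"])
        for req, n in counts.most_common():
            writer.writerow([req, n])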

  1. An automated system for terrain database construction

    NASA Technical Reports Server (NTRS)

    Johnson, L. F.; Fretz, R. K.; Logan, T. L.; Bryant, N. A.

    1987-01-01

    An automated Terrain Database Preparation System (TDPS) for the construction and editing of terrain databases used in computerized wargaming simulation exercises has been developed. The TDPS system operates under the TAE executive, and it integrates VICAR/IBIS image processing and Geographic Information System software with CAD/CAM data capture and editing capabilities. The terrain database includes such features as roads, rivers, vegetation, and terrain roughness.

  2. Spectroscopic data for an astronomy database

    NASA Technical Reports Server (NTRS)

    Parkinson, W. H.; Smith, Peter L.

    1995-01-01

    Very few of the atomic and molecular data used in analyses of astronomical spectra are currently available in World Wide Web (WWW) databases that are searchable with hypertext browsers. We have begun to rectify this situation by making extensive atomic data files available with simple search procedures. We have also established links to other on-line atomic and molecular databases. All can be accessed from our database homepage with URL: http://cfa-www.harvard.edu/amp/data/amdata.html.

  3. A database for coconut crop improvement

    PubMed Central

    Rajagopal, Velamoor; Manimekalai, Ramaswamy; Devakumar, Krishnamurthy; Rajesh; Karun, Anitha; Niral, Vittal; Gopal, Murali; Aziz, Shamina; Gunasekaran, Marimuthu; Kumar, Mundappurathe Ramesh; Chandrasekar, Arumugam

    2005-01-01

    Coconut crop improvement requires a number of biotechnology and bioinformatics tools. A database containing information on CG (coconut germplasm), CCI (coconut cultivar identification), CD (coconut disease), MIFSPC (microbial information systems in plantation crops) and VO (vegetable oils) is described. The database was developed using MySQL and PostgreSQL running on the Linux operating system. The database interface was developed in PHP, HTML and Java. Availability: http://www.bioinfcpcri.org PMID:17597858

  4. Optics Toolbox: An Intelligent Relational Database System For Optical Designers

    NASA Astrophysics Data System (ADS)

    Weller, Scott W.; Hopkins, Robert E.

    1986-12-01

    Optical designers were among the first to use the computer as an engineering tool. Powerful programs have been written to do ray-trace analysis, third-order layout, and optimization. However, newer computing techniques such as database management and expert systems have not been adopted by the optical design community. For the purpose of this discussion we will define a relational database system as a database which allows the user to specify his requirements using logical relations. For example, to search for all lenses in a lens database with an F/number less than two, and a half field of view near 28 degrees, you might enter the following: "FNO < 2.0 and FOV of 28 degrees ± 5%". Again for the purpose of this discussion, we will define an expert system as a program which contains expert knowledge, can ask intelligent questions, and can form conclusions based on the answers given and the knowledge which it contains. Most expert systems store this knowledge in the form of rules-of-thumb, which are written in an English-like language, and which are easily modified by the user. An example rule is: "IF require microscope objective in air and require NA > 0.9 THEN suggest the use of an oil immersion objective". The heart of the expert system is the rule interpreter, sometimes called an inference engine, which reads the rules and forms conclusions based on them. The use of a relational database system containing lens prototypes seems to be a viable prospect. However, it is not clear that expert systems have a place in optical design. In domains such as medical diagnosis and petrology, expert systems are flourishing. These domains are quite different from optical design, however, because optical design is a creative process, and the rules are difficult to write down. We do think that an expert system is feasible in the area of first order layout, which is sufficiently diagnostic in nature to permit useful rules to be written. This first-order expert would emulate an expert
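
    The logical-relation query quoted above maps naturally onto SQL. The sketch below is illustrative only; the table and column names (lenses, fno, fov_deg) are assumptions, not the Optics Toolbox schema.

    # Express "FNO < 2.0 and FOV of 28 degrees +/- 5%" against a toy lens table.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE lenses (name TEXT, fno REAL, fov_deg REAL)")
    con.executemany("INSERT INTO lenses VALUES (?, ?, ?)",
                    [("proto_a", 1.8, 28.5), ("proto_b", 2.4, 28.0), ("proto_c", 1.4, 40.0)])

    rows = con.execute(
        "SELECT name, fno, fov_deg FROM lenses "
        "WHERE fno < 2.0 AND fov_deg BETWEEN 28.0 * 0.95 AND 28.0 * 1.05"
    ).fetchall()
    print(rows)  # only proto_a satisfies both relations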

  5. General database for ground water site information.

    PubMed

    de Dreuzy, Jean-Raynald; Bodin, Jacques; Le Grand, Hervé; Davy, Philippe; Boulanger, Damien; Battais, Annick; Bour, Olivier; Gouze, Philippe; Porel, Gilles

    2006-01-01

    In most cases, analysis and modeling of flow and transport dynamics in ground water systems require long-term, high-quality, and multisource data sets. This paper discusses the structure of a multisite database (the H+ database) developed within the scope of the ERO program (French Environmental Research Observatory, http://www.ore.fr). The database provides an interface between field experimentalists and modelers, which can be used on a daily basis. The database structure enables the storage of large volumes and many types of data collected from a given site or a multiple-site network. The database is well suited to the integration, backup, and retrieval of data for flow and transport modeling in heterogeneous aquifers. It relies on the definition of standards and uses a templated structure, such that any type of geolocalized data obtained from wells, hydrological stations, and meteorological stations can be handled. New types of platforms other than wells, hydrological stations, and meteorological stations, and new types of experiments and/or parameters could easily be added without modifying the database structure. Thus, we propose that the database structure could be used as a template for designing databases for complex sites. An example application is the H+ database, which gathers data collected from a network of hydrogeological sites associated with the French Environmental Research Observatory.
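
    A minimal sketch of the kind of templated, platform-generic structure described above. The table and column names are assumptions for illustration, not the actual H+ schema: any geolocalized platform feeds time-stamped values of arbitrary parameters, so new platform kinds or parameters become new rows rather than schema changes.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE platform  (id INTEGER PRIMARY KEY, site TEXT, kind TEXT,
                            lon REAL, lat REAL);
    CREATE TABLE parameter (id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
    CREATE TABLE measurement (platform_id INTEGER REFERENCES platform(id),
                              parameter_id INTEGER REFERENCES parameter(id),
                              t TEXT, value REAL);
    """)
    # Placeholder site and coordinates.
    con.execute("INSERT INTO platform VALUES (1, 'well_01', 'well', -3.40, 47.70)")
    con.execute("INSERT INTO parameter VALUES (1, 'hydraulic_head', 'm')")
    con.execute("INSERT INTO measurement VALUES (1, 1, '2006-01-01T00:00', 12.3)")

    print(con.execute("""SELECT p.site, par.name, m.t, m.value
                         FROM measurement m
                         JOIN platform p ON p.id = m.platform_id
                         JOIN parameter par ON par.id = m.parameter_id""").fetchall())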

  6. Construction of Database for Pulsating Variable Stars

    NASA Astrophysics Data System (ADS)

    Chen, B. Q.; Yang, M.; Jiang, B. W.

    2011-07-01

    A database of pulsating variable stars has been constructed so that Chinese astronomers can study variable stars conveniently. The database currently includes about 230,000 variable stars in the Galactic bulge, the LMC, and the SMC, observed by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux+Apache+MySQL+PHP. A web page is provided to search the photometric data and light curves in the database by the right ascension and declination of the object. More data will be incorporated into the database.
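
    Illustrative only: a coordinate box search of the kind the web page offers, written against a hypothetical table of variable stars (the schema is assumed, not the actual LAMP back end).

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE star (id TEXT, ra_deg REAL, dec_deg REAL, survey TEXT)")
    con.executemany("INSERT INTO star VALUES (?, ?, ?, ?)",
                    [("OGLE-0001", 270.1, -29.9, "OGLE"),
                     ("MACHO-0002", 81.0, -69.8, "MACHO")])

    ra0, dec0, box = 270.0, -30.0, 0.5   # search centre and half-width in degrees
    rows = con.execute(
        "SELECT id, ra_deg, dec_deg FROM star "
        "WHERE ra_deg BETWEEN ? AND ? AND dec_deg BETWEEN ? AND ?",
        (ra0 - box, ra0 + box, dec0 - box, dec0 + box)).fetchall()
    print(rows)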

  7. Applicability of large databases in outcomes research.

    PubMed

    Malay, Sunitha; Shauver, Melissa J; Chung, Kevin C

    2012-07-01

    Outcomes research serves as a mechanism to assess the quality of care, cost effectiveness of treatment, and other aspects of health care. The use of administrative databases in outcomes research is increasing in all medical specialties, including hand surgery. However, the real value of databases can be maximized with a thorough understanding of their contents, advantages, and limitations. We performed a literature review pertaining to databases in medical, surgical, and epidemiologic research, with special emphasis on orthopedic and hand surgery. This article provides an overview of the available database resources for outcomes research, their potential value to hand surgeons, and suggestions to improve their effective use. PMID:22522104

  8. DBGC: A Database of Human Gastric Cancer.

    PubMed

    Wang, Chao; Zhang, Jun; Cai, Mingdeng; Zhu, Zhenggang; Gu, Wenjie; Yu, Yingyan; Zhang, Xiaoyan

    2015-01-01

    The Database of Human Gastric Cancer (DBGC) is a comprehensive database that integrates various human gastric cancer-related data resources. Human gastric cancer-related transcriptomics projects, proteomics projects, mutations, biomarkers and drug-sensitive genes from different sources were collected and unified in this database. Moreover, epidemiological statistics of gastric cancer patients in China and clinicopathological information annotated with gastric cancer cases were also integrated into the DBGC. We believe that this database will greatly facilitate research regarding human gastric cancer in many fields. DBGC is freely available at http://bminfor.tongji.edu.cn/dbgc/index.do. PMID:26566288

  9. DEPOT database: Reference manual and user's guide

    SciTech Connect

    Clancey, P.; Logg, C.

    1991-03-01

    DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information.

  10. Applicability of large databases in outcomes research.

    PubMed

    Malay, Sunitha; Shauver, Melissa J; Chung, Kevin C

    2012-07-01

    Outcomes research serves as a mechanism to assess the quality of care, cost effectiveness of treatment, and other aspects of health care. The use of administrative databases in outcomes research is increasing in all medical specialties, including hand surgery. However, the real value of databases can be maximized with a thorough understanding of their contents, advantages, and limitations. We performed a literature review pertaining to databases in medical, surgical, and epidemiologic research, with special emphasis on orthopedic and hand surgery. This article provides an overview of the available database resources for outcomes research, their potential value to hand surgeons, and suggestions to improve their effective use.

  11. DBGC: A Database of Human Gastric Cancer.

    PubMed

    Wang, Chao; Zhang, Jun; Cai, Mingdeng; Zhu, Zhenggang; Gu, Wenjie; Yu, Yingyan; Zhang, Xiaoyan

    2015-01-01

    The Database of Human Gastric Cancer (DBGC) is a comprehensive database that integrates various human gastric cancer-related data resources. Human gastric cancer-related transcriptomics projects, proteomics projects, mutations, biomarkers and drug-sensitive genes from different sources were collected and unified in this database. Moreover, epidemiological statistics of gastric cancer patients in China and clinicopathological information annotated with gastric cancer cases were also integrated into the DBGC. We believe that this database will greatly facilitate research regarding human gastric cancer in many fields. DBGC is freely available at http://bminfor.tongji.edu.cn/dbgc/index.do.

  12. TWRS technical baseline database manager definition document

    SciTech Connect

    Acree, C.D.

    1997-08-13

    This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager.

  13. Footprint Database and web services for the Herschel space observatory

    NASA Astrophysics Data System (ADS)

    Verebélyi, Erika; Dobos, László; Kiss, Csaba

    2015-08-01

    Using all telemetry and observational meta-data, we created a searchable database of Herschel observation footprints. Data from the Herschel space observatory is freely available for everyone but no uniformly processed catalog of all observations has been published yet. As a first step, we unified the data model for all three Herschel instruments in all observation modes and compiled a database of sky coverage information. As opposed to methods using a pixellation of the sphere, in our database, sky coverage is stored in exact geometric form allowing for precise area calculations. Indexing of the footprints allows for very fast search among observations based on pointing, time, sky coverage overlap and meta-data. This enables us, for example, to find moving objects easily in Herschel fields. The database is accessible via a web site and also as a set of REST web service functions which makes it usable from program clients like Python or IDL scripts. Data is available in various formats including Virtual Observatory standards.
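
    As a purely hypothetical usage sketch of the REST access mentioned above: the base URL, endpoint path, and parameter names below are invented for illustration and are not the service's documented API; only the general pattern of querying such a footprint service from a Python client is shown.

    import requests

    BASE = "http://example.org/herschel-footprints"   # placeholder URL, not the real service

    params = {
        "ra": 83.822,       # degrees (assumed parameter name)
        "dec": -5.391,      # degrees (assumed parameter name)
        "radius": 0.5,      # search radius in degrees (assumed parameter name)
        "format": "json",
    }
    resp = requests.get(f"{BASE}/search", params=params, timeout=30)
    resp.raise_for_status()
    for obs in resp.json():
        print(obs.get("obsid"), obs.get("instrument"))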

  14. A global, open-source database of flood protection standards

    NASA Astrophysics Data System (ADS)

    Scussolini, Paolo; Aerts, Jeroen; Jongman, Brenden; Bouwer, Laurens; Winsemius, Hessel; de Moel, Hans; Ward, Philip

    2016-04-01

    Accurate flood risk estimation is pivotal in that it enables risk-informed policies in disaster risk reduction, as emphasized in the recent Sendai Framework for Disaster Risk Reduction. To improve our understanding of flood risk, models are now capable of providing actionable risk information on the (sub)global scale. Still, the accuracy of their results is greatly limited by the lack of information on the flood protection standards actually in place, and researchers thus make large assumptions about the extent of protection. With our work we propose a first global, open-source database of FLOod PROtection Standards, FLOPROS, covering a range of spatial scales. FLOPROS is structured in three layers of information and merges them into one consistent database: 1) the Design layer contains empirical information about the standard of protection presently in place; 2) the Policy layer contains intended protection standards from normative documents; 3) the Model layer uses a validated numerical approach to calculate protection standards for areas not covered in the other layers. The FLOPROS database can be used for more accurate risk assessment exercises across scales. As the database should be continually updated to reflect new interventions, we invite researchers and practitioners to contribute information. Further, we look for partners within the risk community to participate in additional strategies to increase the amount and accuracy of information contained in this first version of FLOPROS.

  15. Database Relation Watermarking Resilient against Secondary Watermarking Attacks

    NASA Astrophysics Data System (ADS)

    Gupta, Gaurav; Pieprzyk, Josef

    There has been tremendous interest in watermarking multimedia content during the past two decades, mainly for proving ownership and detecting tampering. Digital fingerprinting, which deals with identifying malicious user(s), has also received significant attention. While extensive work has been carried out in watermarking of images, other multimedia objects still have enormous research potential. Watermarking database relations is one of the several areas which demand research focus owing to the commercial implications of database theft. Recently, there has been little progress in database watermarking, with most of the watermarking schemes modeled after the irreversible database watermarking scheme proposed by Agrawal and Kiernan. Reversibility is the ability to re-generate the original (unmarked) relation from the watermarked relation using a secret key. As explained in our paper, reversible watermarking schemes provide greater security against secondary watermarking attacks, in which an attacker watermarks an already marked relation in an attempt to erase the original watermark. This paper proposes an improvement over the reversible and blind watermarking scheme presented in [5], identifying and eliminating a critical problem with the previous model. Experiments show that the average watermark detection rate is around 91% even with the attacker distorting half of the attributes. The current scheme provides security against secondary watermarking attacks.
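
    As a rough, heavily simplified illustration of keyed tuple selection in database watermarking (in the spirit of the Agrawal-Kiernan family mentioned above, not the reversible scheme of the paper), the sketch below uses a keyed hash of a tuple's primary key to decide which tuples carry a mark bit in the least significant bit of a numeric attribute. All names and parameters are invented for illustration.

    import hmac, hashlib

    SECRET = b"watermark-key"
    GAMMA = 2          # roughly 1 in GAMMA tuples is marked

    def keyed_hash(pk: str) -> int:
        return int.from_bytes(hmac.new(SECRET, pk.encode(), hashlib.sha256).digest()[:8], "big")

    def mark(pk: str, value: int) -> int:
        h = keyed_hash(pk)
        if h % GAMMA != 0:
            return value                  # tuple not selected for marking
        bit = (h >> 8) & 1                # key-derived mark bit
        return (value & ~1) | bit         # embed in the least significant bit

    relation = {"row1": 1042, "row2": 987, "row3": 553}
    marked = {pk: mark(pk, v) for pk, v in relation.items()}
    print(marked)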

  16. Windshear database for forward-looking systems certification

    NASA Technical Reports Server (NTRS)

    Switzer, G. F.; Proctor, F. H.; Hinton, D. A.; Aanstoos, J. V.

    1993-01-01

    This document contains a description of a comprehensive database that is to be used for certification testing of airborne forward-look windshear detection systems. The database was developed by NASA Langley Research Center, at the request of the Federal Aviation Administration (FAA), to support the industry initiative to certify and produce forward-look windshear detection equipment. The database contains high resolution, three dimensional fields for meteorological variables that may be sensed by forward-looking systems. The database is made up of seven case studies which have been generated by the Terminal Area Simulation System, a state-of-the-art numerical system for the realistic modeling of windshear phenomena. The selected cases represent a wide spectrum of windshear events. General descriptions and figures from each of the case studies are included, as well as equations for F-factor, radar-reflectivity factor, and rainfall rate. The document also describes scenarios and paths through the data sets, jointly developed by NASA and the FAA, to meet FAA certification testing objectives. Instructions for reading and verifying the data from tape are included.

  17. East-China Geochemistry Database (ECGD):A New Networking Database for North China Craton

    NASA Astrophysics Data System (ADS)

    Wang, X.; Ma, W.

    2010-12-01

    The North China Craton is one of the best natural laboratories for investigating key questions of Earth dynamics [1]. Scientists have made much progress in research on this area and have accumulated vast amounts of geochemical data, which are essential for answering many fundamental questions about the age, composition, structure, and evolution of the East China area. But the geochemical data have long been accessible only through the scientific literature and theses, where they are widely dispersed, making it difficult for the broad geosciences community to find, access and efficiently use the full range of available data [2]. How can the existing geochemical data in the North China Craton area be effectively stored, managed, shared and reused? The East-China Geochemistry Database (ECGD) is a networked geochemical scientific database system, designed on the basis of WebGIS and a relational database, for the structured storage and retrieval of geochemical data and geological map information. It integrates the functions of data retrieval, spatial visualization and online analysis. ECGD focuses on three areas: 1) Storage and retrieval of geochemical data and geological map information. Based on the characteristics of geochemical data, including their composition and interrelations, we designed a relational database, built on a geochemical relational data model, to store a variety of geological sample information such as sampling locality, age, sample characteristics, reference, major elements, rare earth elements, trace elements and isotope systems. A user-friendly web-based interface is provided for constructing queries. 2) Data view. ECGD supports online data visualization in several ways, especially dynamic viewing of data on digital maps. Because ECGD integrates WebGIS technology, query results can be mapped on a digital map that supports zooming, panning and point selection. Besides viewing and exporting query results in HTML, TXT or XLS formats, researchers also can

  18. Database for the east half of "Preliminary Geologic Map of the Blythe 30' by 60' quadrangle, California and Arizona"

    USGS Publications Warehouse

    Stone, Paul

    2006-01-01

    This digital map database was prepared from the published Preliminary Geologic Map of the Blythe 30' by 60' Quadrangle, California and Arizona (U.S. Geological Survey Open-File Report 90-497). This database represents the east half of the original published map. The database contains exactly the same scientific content as the original map; no data have been added to, subtracted from, or modified from the original map. Like the original paper map, this database represents the general distribution of bedrock and surficial deposits in the mapped area. It provides information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the original published map limits the spatial resolution (scale) of the database to 1:100,000 or smaller.

  19. Landslide databases for applied landslide impact research: the example of the landslide database for the Federal Republic of Germany

    NASA Astrophysics Data System (ADS)

    Damm, Bodo; Klose, Martin

    2014-05-01

    This contribution presents an initiative to develop a national landslide database for the Federal Republic of Germany. It highlights the structure and contents of the landslide database and outlines its major data sources and the strategy of information retrieval. Furthermore, the contribution exemplifies the database's potential in applied landslide impact research, including statistics of landslide damage, repair, and mitigation. Owing to systematic regional data compilation, the landslide database offers a differentiated data pool of more than 5,000 data sets and over 13,000 individual data files. It dates back to 1137 AD and covers landslide sites throughout Germany. In seven main data blocks, the landslide database stores information on landslide types, dimensions, and processes, along with additional data on soil and bedrock properties, geomorphometry, and climatic or other major triggering events. A peculiarity of this landslide database is its storage of data sets on land use effects, damage impacts, hazard mitigation, and landslide costs. Compilation of landslide data is based on a two-tier strategy of data collection. The first step of information retrieval includes systematic web content mining and exploration of online archives of emergency agencies, fire and police departments, and news organizations. Using web and RSS feeds, and soon also a focused web crawler, this step enables effective nationwide data collection for recent landslides. On the basis of this information, in-depth data mining is performed to deepen and diversify the data pool in key landslide areas. This makes it possible to gather detailed landslide information from, among other sources, agency records, geotechnical reports, climate statistics, maps, and satellite imagery. Landslide data are extracted from these information sources using a mix of methods, including statistical techniques, imagery analysis, and qualitative text interpretation. The landslide database is currently being migrated to a spatial database system
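
    The first-tier retrieval step described above (web content mining via RSS feeds) can be sketched with the Python standard library. The feed URL and keyword list below are placeholders, not the project's actual sources or crawler.

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://example.org/news/feed.rss"          # placeholder
    KEYWORDS = ("landslide", "rockfall", "debris flow", "erdrutsch", "hangrutschung")

    def landslide_items(feed_url):
        """Yield (title, link) for RSS 2.0 items whose text mentions a keyword."""
        with urllib.request.urlopen(feed_url, timeout=30) as resp:
            root = ET.fromstring(resp.read())
        for item in root.iter("item"):
            title = item.findtext("title") or ""
            desc = item.findtext("description") or ""
            text = f"{title} {desc}".lower()
            if any(k in text for k in KEYWORDS):
                yield title, item.findtext("link")

    # for title, link in landslide_items(FEED_URL):
    #     print(title, link)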

  20. BSDB: the Biomolecule Stretching Database

    NASA Astrophysics Data System (ADS)

    Cieplak, Marek; Sikora, Mateusz; Sulkowska, Joanna I.; Witkowski, Bartlomiej

    2011-03-01

    Despite more than a decade of experiments on single biomolecule manipulation, mechanical properties of only several scores of proteins have been measured. A characteristic scale of the force of resistance to stretching, Fmax, has been found to range between ~10 and 480 pN. The Biomolecule Stretching Data Base (BSDB) described here provides information about expected values of Fmax for, currently, 17 134 proteins. The values and other characteristics of the unfolding process, including the nature of identified mechanical clamps, are available at http://info.ifpan.edu.pl/BSDB/. They have been obtained through simulations within a structure-based model which correlates satisfactorily with the available experimental data on stretching. BSDB also lists experimental data and results of the existing all-atom simulations. The database offers a Protein-Data-Bank-wide guide to mechano-stability of proteins. Its description is provided by a forthcoming Nucleic Acids Research paper. Supported by EC FUNMOL project FP7-NMP-2007-SMALL-1, and European Regional Development Fund: Innovative Economy (POIG.01.01.02-00-008/08).

  1. Web interfaces to relational databases

    NASA Technical Reports Server (NTRS)

    Carlisle, W. H.

    1996-01-01

    This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC - a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory consists of the major hardware platforms, SUN, Intel, and Motorola processors and their most common operating systems UNIX, Windows NT, Windows for Workgroups, and Macintosh. The SPARC 20 runs SUN Solaris 2.4, an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory, a Pentium PC runs Windows for Workgroups, two Intel 386 machines run Windows 3.1, and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.

  2. National Spill Test Technology Database

    DOE Data Explorer

    Sheesley, David [Western Research Institute

    Western Research Institute established, and ACRC continues to maintain, the National Spill Technology database to provide support to the Liquified Gaseous Fuels Spill Test Facility (now called the National HAZMAT Spill Center) as directed by Congress in Section 118(n) of the Superfund Amendments and Reauthorization Act of 1986 (SARA). The Albany County Research Corporation (ACRC) was established to make publicly funded data developed from research projects available to benefit public safety. The founders since 1987 have been investigating the behavior of toxic chemicals that are deliberately or accidentally spilled, educating emergency response organizations, and maintaining funding to conduct the research at the DOE's HAZMAT Spill Center (HSC) located on the Nevada Test Site. ACRC also supports DOE in collaborative research and development efforts mandated by Congress in the Clean Air Act Amendments. The data files are results of spill tests conducted at various times by the Silicones Environmental Health and Safety Council (SEHSC) and DOE, ANSUL, Dow Chemical, the Center for Chemical Process Safety (CCPS) and DOE, Lawrence Livermore National Laboratory (LLNL), OSHA, and DOT; DuPont, and the Western Research Institute (WRI), Desert Research Institute (DRI), and EPA. Each test data page contains one executable file for each test in the test series as well as a file named DOC.EXE that contains information documenting the test series. These executable files are actually self-extracting zip files that, when executed, create one or more comma separated value (CSV) text files containing the actual test data or other test information.

  3. Wide-Field Plate Database

    NASA Astrophysics Data System (ADS)

    Tsvetkov, M. K.; Stavrev, K. Y.; Tsvetkova, K. P.; Semkov, E. H.; Mutatov, A. S.

    The Wide-Field Plate Database (WFPDB) and the possibilities for its application as a research tool in observational astronomy are presented. Currently the WFPDB comprises the descriptive data for 400 000 archival wide field photographic plates obtained with 77 instruments, from a total of 1 850 000 photographs stored in 269 astronomical archives all over the world since the end of last century. The WFPDB is already accessible for the astronomical community, now only in batch mode through user requests sent by e-mail. We are working on on-line interactive access to the data via INTERNET from Sofia and in parallel from the Centre de Donnees Astronomiques de Strasbourg. (Initial information can be found on World Wide Web homepage URL http://www.wfpa.acad.bg.) The WFPDB may be useful in studies of a variety of astronomical objects and phenomena, and especially for long-term investigations of variable objects and for multi-wavelength research. We have analysed the data in the WFPDB in order to derive the overall characteristics of the totality of wide-field observations, such as the sky coverage, the distributions by observation time and date, by spectral band, and by object type. We have also examined the totality of wide-field observations from the point of view of their quality, availability and digitisation. The usefulness of the WFPDB is demonstrated by the results of identification and investigation of the photometrical behaviour of optical analogues of gamma-ray bursts.

  4. PCMdb: Pancreatic Cancer Methylation Database

    NASA Astrophysics Data System (ADS)

    Nagpal, Gandharva; Sharma, Minakshi; Kumar, Shailesh; Chaudhary, Kumardeep; Gupta, Sudheer; Gautam, Ankur; Raghava, Gajendra P. S.

    2014-02-01

    Pancreatic cancer is the fifth most aggressive malignancy and urgently requires new biomarkers to facilitate early detection. For providing impetus to the biomarker discovery, we have developed Pancreatic Cancer Methylation Database (PCMDB, http://crdd.osdd.net/raghava/pcmdb/), a comprehensive resource dedicated to methylation of genes in pancreatic cancer. Data was collected and compiled manually from published literature. PCMdb has 65907 entries for methylation status of 4342 unique genes. In PCMdb, data was compiled for both cancer cell lines (53565 entries for 88 cell lines) and cancer tissues (12342 entries for 3078 tissue samples). Among these entries, 47.22% entries reported a high level of methylation for the corresponding genes while 10.87% entries reported low level of methylation. PCMdb covers five major subtypes of pancreatic cancer; however, most of the entries were compiled for adenocarcinomas (88.38%) and mucinous neoplasms (5.76%). A user-friendly interface has been developed for data browsing, searching and analysis. We anticipate that PCMdb will be helpful for pancreatic cancer biomarker discovery.

  5. The 2014 Nucleic Acids Research Database Issue and an updated NAR online Molecular Biology Database Collection

    PubMed Central

    Fernández-Suárez, Xosé M.; Rigden, Daniel J.; Galperin, Michael Y.

    2014-01-01

    The 2014 Nucleic Acids Research Database Issue includes descriptions of 58 new molecular biology databases and recent updates to 123 databases previously featured in NAR or other journals. For convenience, the issue is now divided into eight sections that reflect major subject categories. Among the highlights of this issue are six databases of the transcription factor binding sites in various organisms and updates on such popular databases as CAZy, Database of Genomic Variants (DGV), dbGaP, DrugBank, KEGG, miRBase, Pfam, Reactome, SEED, TCDB and UniProt. There is a strong block of structural databases, which includes, among others, the new RNA Bricks database, updates on PDBe, PDBsum, ArchDB, Gene3D, ModBase, Nucleic Acid Database and the recently revived iPfam database. An update on the NCBI’s MMDB describes VAST+, an improved tool for protein structure comparison. Two articles highlight the development of the Structural Classification of Proteins (SCOP) database: one describes SCOPe, which automates assignment of new structures to the existing SCOP hierarchy; the other one describes the first version of SCOP2, with its more flexible approach to classifying protein structures. This issue also includes a collection of articles on bacterial taxonomy and metagenomics, which includes updates on the List of Prokaryotic Names with Standing in Nomenclature (LPSN), Ribosomal Database Project (RDP), the Silva/LTP project and several new metagenomics resources. The NAR online Molecular Biology Database Collection, http://www.oxfordjournals.org/nar/database/c/, has been expanded to 1552 databases. The entire Database Issue is freely available online on the Nucleic Acids Research website (http://nar.oxfordjournals.org/). PMID:24316579

  6. The 2014 Nucleic Acids Research Database Issue and an updated NAR online Molecular Biology Database Collection.

    PubMed

    Fernández-Suárez, Xosé M; Rigden, Daniel J; Galperin, Michael Y

    2014-01-01

    The 2014 Nucleic Acids Research Database Issue includes descriptions of 58 new molecular biology databases and recent updates to 123 databases previously featured in NAR or other journals. For convenience, the issue is now divided into eight sections that reflect major subject categories. Among the highlights of this issue are six databases of the transcription factor binding sites in various organisms and updates on such popular databases as CAZy, Database of Genomic Variants (DGV), dbGaP, DrugBank, KEGG, miRBase, Pfam, Reactome, SEED, TCDB and UniProt. There is a strong block of structural databases, which includes, among others, the new RNA Bricks database, updates on PDBe, PDBsum, ArchDB, Gene3D, ModBase, Nucleic Acid Database and the recently revived iPfam database. An update on the NCBI's MMDB describes VAST+, an improved tool for protein structure comparison. Two articles highlight the development of the Structural Classification of Proteins (SCOP) database: one describes SCOPe, which automates assignment of new structures to the existing SCOP hierarchy; the other one describes the first version of SCOP2, with its more flexible approach to classifying protein structures. This issue also includes a collection of articles on bacterial taxonomy and metagenomics, which includes updates on the List of Prokaryotic Names with Standing in Nomenclature (LPSN), Ribosomal Database Project (RDP), the Silva/LTP project and several new metagenomics resources. The NAR online Molecular Biology Database Collection, http://www.oxfordjournals.org/nar/database/c/, has been expanded to 1552 databases. The entire Database Issue is freely available online on the Nucleic Acids Research website (http://nar.oxfordjournals.org/).

  7. Database Applications to Integrate Beam Line Optics Changes with the Engineering Databases

    SciTech Connect

    Chan, A.; Bellomo, P.; Crane, G.R.; Emma, P.; Grunhaus, E.; Luchini, K.; MacGregor, I.A.; Marsh, D.S.; Pope, R.; Prickett, P.; Rago, C.; Ratcliffe, K.; Shab, T.; /SLAC

    2007-07-06

    The LCLS project databases provide key nomenclature information while integrating many engineering and physics processes in the building of an accelerator. Starting with the elements existing in the beam line optics files, the engineers add non-beam-line elements, and controls engineers assign "Formal Device Names" to these elements. Inventory, power supplies, racks, crates and cable plants are databases that are being integrated into the project database. This approach replaces individual spreadsheets and/or integrates standalone existing institutional databases.

  8. An authoritative global database for active submarine hydrothermal vent fields

    NASA Astrophysics Data System (ADS)

    Beaulieu, Stace E.; Baker, Edward T.; German, Christopher R.; Maffei, Andrew

    2013-11-01

    The InterRidge Vents Database is available online as the authoritative reference for locations of active submarine hydrothermal vent fields. Here we describe the revision of the database to an open source content management system and conduct a meta-analysis of the global distribution of known active vent fields. The number of known active vent fields has almost doubled in the past decade (521 as of year 2009), with about half visually confirmed and others inferred active from physical and chemical clues. Although previously known mainly from mid-ocean ridges (MORs), active vent fields at MORs now comprise only half of the total known, with about a quarter each now known at volcanic arcs and back-arc spreading centers. Discoveries in arc and back-arc settings resulted in an increase in known vent fields within exclusive economic zones, consequently reducing the proportion known in high seas to one third. The increase in known vent fields reflects a number of factors, including increased national and commercial interests in seafloor hydrothermal deposits as mineral resources. The purpose of the database now extends beyond academic research and education and into marine policy and management, with at least 18% of known vent fields in areas granted or pending applications for mineral prospecting and 8% in marine protected areas.

  9. FLOPROS: A global database of flood protection standards

    NASA Astrophysics Data System (ADS)

    Scussolini, Paolo; Aerts, Jeroen; Jongman, Brenden; Bouwer, Laurens; Winsemius, Hessel; de Moel, Hans; Ward, Philip

    2016-04-01

    Flood risk is increasing due to denser population and socioeconomic activity in flood-prone areas, and to ongoing changes in climate. As emphasized in the Sendai Framework for Disaster Risk Reduction, we need to improve our understanding of risk in order to develop risk-informed policies in disaster risk reduction (priority 3). While (sub)global flood risk models provide applicable risk information, the accuracy of their results is greatly limited by the lack of information on the flood protection standards currently in place. Studies therefore either neglect this aspect or apply crude assumptions. Here we present a first global database of FLOod PROtection Standards, FLOPROS, that includes information at different spatial scales. It comprises three layers of information, combining them into one consistent database: 1) the Design layer contains empirical information about the actual standard of protection in place; 2) the Policy layer contains intended protection standards from normative documents; 3) the Model layer uses a validated numerical approach to calculate protection standards for the areas not otherwise covered. FLOPROS can be used by entities conducting risk assessment across scales to produce more reliable results, and also to monitor progress in flood protection standards, as required by the Sendai Framework. We invite the risk community to participate in strategies to further extend and increase the resolution and accuracy of this first version of FLOPROS. As the database should be continually updated to reflect new interventions, we invite researchers and practitioners to contribute information.

  10. Diet History Questionnaire II: Database Utility Program

    Cancer.gov

    If you need to modify the standard nutrient database, a single nutrient value must be provided by gender and portion size. If you have modified the database to have fewer or more demographic groups, nutrient values must be included for each group.

  11. Diet History Questionnaire: Database Utility Program

    Cancer.gov

    If you need to modify the standard nutrient database, a single nutrient value must be provided by gender and portion size. If you have modified the database to have fewer or more demographic groups, nutrient values must be included for each group.

  12. An Introduction to Database Management Systems.

    ERIC Educational Resources Information Center

    Warden, William H., III; Warden, Bette M.

    1984-01-01

    Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…

  13. Database Security: What Students Need to Know

    ERIC Educational Resources Information Center

    Murray, Meg Coffin

    2010-01-01

    Database security is a growing concern evidenced by an increase in the number of reported incidents of loss of or unauthorized exposure to sensitive data. As the amount of data collected, retained and shared electronically expands, so does the need to understand database security. The Defense Information Systems Agency of the US Department of…

  14. Challenges in Database Design with Microsoft Access

    ERIC Educational Resources Information Center

    Letkowski, Jerzy

    2014-01-01

    Design, development and exploration of databases are popular topics covered in introductory courses taught at business schools. Microsoft Access is the most popular software used in those courses. Despite the quite high complexity of Access, it is considered to be one of the friendliest database programs for beginners. A typical Access textbook…

  15. Annual Review of Database Development: 1992.

    ERIC Educational Resources Information Center

    Basch, Reva

    1992-01-01

    Reviews recent trends in databases and online systems. Topics discussed include new access points for established databases; acquisitions, consolidations, and competition between vendors; European coverage; international services; online reference materials, including telephone directories; political and legal materials and public records;…

  16. BIBLIOGRAPHIC DATABASES IN SUPPORT OF NSDD EVALUATIONS.

    SciTech Connect

    BURROWS, T.

    2005-04-04

    Bibliographic databases useful to nuclear structure and decay data (NSDD) evaluators are briefly described, along with examples of their usage. Authors' reference listings are also discussed. Nuclear Science References is recognized as the major bibliographic resource, and therefore most of the presentation is devoted to this database.

  17. Annual Review of Database Developments: 1994.

    ERIC Educational Resources Information Center

    Basch, Reva

    1994-01-01

    This introduction to Database's annual review issue highlights trends covered in the 1991-93 reviews; lists 13 trends described as a continuation of the trends of the past 2 or 3 years; and discusses the World Wide Web and America Online's innovative development of a database that combines text, images, sound, and commentary. (KRN)

  18. Database Cancellation: The "Hows" and "Whys"

    ERIC Educational Resources Information Center

    Shapiro, Steven

    2012-01-01

    Database cancellation is one of the most difficult tasks performed by a librarian. This may seem counter-intuitive but, psychologically, it is certainly true. When a librarian or a team of librarians has invested a great deal of time doing research, talking to potential users, and conducting trials before deciding to subscribe to a database, they…

  19. 24 CFR 902.24 - Database adjustment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    Title 24 (Housing and Urban Development), Regulations Relating to Housing and Urban Development (Continued), Public Housing Assessment System, Physical Condition Indicator, § 902.24 Database adjustment.

  20. Designing Corporate Databases to Support Technology Innovation

    ERIC Educational Resources Information Center

    Gultz, Michael Jarett

    2012-01-01

    Based on a review of the existing literature on database design, this study proposed a unified database model to support corporate technology innovation. This study assessed potential support for the model based on the opinions of 200 technology industry executives, including Chief Information Officers, Chief Knowledge Officers and Chief Learning…