Science.gov

Sample records for area database rogad

  1. Acquisition of CD-ROM Databases for Local Area Networks.

    ERIC Educational Resources Information Center

    Davis, Trisha L.

    1993-01-01

    Discusses the acquisition of CD-ROM products for local area networks based on experiences at the Ohio State University libraries. Topics addressed include the historical development of CD-ROM acquisitions; database selection, including pricing and subscription options; the ordering process; and network licensing issues. (six references) (LRW)

  2. Protocol for the E-Area Low Level Waste Facility Disposal Limits Database

    SciTech Connect

    Swingle, R

    2006-01-31

    A database has been developed to contain the disposal limits for the E-Area Low Level Waste Facility (ELLWF). The database originates as an Excel workbook, and the pertinent sheets are translated to PDF format using Adobe Acrobat. The PDF version of the database is accessible from the Solid Waste Division web page on SHRINE. In addition to the various disposal unit limits, the database contains hyperlinks to the original references for all limits. It is anticipated that the database will be revised each time there is an addition, deletion, or revision of any of the ELLWF radionuclide disposal limits.

  3. Teaching Database Modeling and Design: Areas of Confusion and Helpful Hints

    ERIC Educational Resources Information Center

    Philip, George C.

    2007-01-01

    This paper identifies several areas of database modeling and design that have been problematic for students and even are likely to confuse faculty. Major contributing factors are the lack of clarity and inaccuracies that persist in the presentation of some basic database concepts in textbooks. The paper analyzes the problems and discusses ways to…

  4. Database of groundwater levels and hydrograph descriptions for the Nevada Test Site area, Nye County, Nevada

    USGS Publications Warehouse

    Elliott, Peggy E.; Fenelon, Joseph M.

    2010-01-01

    A database containing water levels measured from wells in and near areas of underground nuclear testing at the Nevada Test Site was developed. The water-level measurements were collected from 1941 to 2016. The database provides information for each well including well construction, borehole lithology, units contributing water to the well, and general site remarks. Water-level information provided in the database includes measurement source, status, method, accuracy, and specific water-level remarks. Additionally, the database provides hydrograph narratives that document the water-level history and describe and interpret the water-level hydrograph for each well. Water levels in the database were quality assured and analyzed. Multiple conditions were assigned to each water-level measurement to describe the hydrologic conditions at the time of measurement. General quality, temporal variability, regional significance, and hydrologic conditions are attributed to each water-level measurement.

  5. Conversion of environmental data to a digital-spatial database, Puget Sound area, Washington

    USGS Publications Warehouse

    Uhrich, M.A.; McGrath, T.S.

    1997-01-01

    Data and maps from the Puget Sound Environmental Atlas, compiled for the U.S. Environmental Protection Agency, the Puget Sound Water Quality Authority, and the U.S. Army Corps of Engineers, have been converted into a digital-spatial database using a geographic information system. Environmental data for the Puget Sound area, collected from sources other than the Puget Sound Environmental Atlas by different Federal, State, and local agencies, also have been converted into this digital-spatial database. Background on the geographic-information-system planning process, the design and implementation of the geographic-information-system database, and the reasons for conversion to this digital-spatial database are included in this report. The Puget Sound Environmental Atlas data layers include information about seabird nesting areas, eelgrass and kelp habitat, marine mammal and fish areas, and shellfish resources and bed certification. Data layers from sources other than the Puget Sound Environmental Atlas include the Puget Sound shoreline, the water-body system, shellfish growing areas, recreational shellfish beaches, sewage-treatment outfalls, upland hydrography, watershed and political boundaries, and geographic names. The sources of data, descriptions of the data layers, and the steps and errors of processing associated with conversion to a digital-spatial database used in development of the Puget Sound Geographic Information System also are included in this report. The appendixes contain data dictionaries for each of the resource layers and error values for the conversion of Puget Sound Environmental Atlas data.

  6. Geologic map database of the El Mirage Lake area, San Bernardino and Los Angeles Counties, California

    USGS Publications Warehouse

    Miller, David M.; Bedford, David R.

    2000-01-01

    This geologic map database for the El Mirage Lake area describes geologic materials for the dry lake, parts of the adjacent Shadow Mountains and Adobe Mountain, and much of the piedmont extending south from the lake upward toward the San Gabriel Mountains. This area lies within the western Mojave Desert of San Bernardino and Los Angeles Counties, southeastern California. The area is traversed by a few paved highways that service the community of El Mirage, and by numerous dirt roads that lead to outlying properties. An off-highway vehicle area established by the Bureau of Land Management encompasses the dry lake and much of the land north and east of the lake. The physiography of the area consists of the dry lake, flanking mud and sand flats and alluvial piedmonts, and a few sharp craggy mountains. This digital geologic map database, intended for use at 1:24,000-scale, describes and portrays the rock units and surficial deposits of the El Mirage Lake area. The map database was prepared to aid in a water-resource assessment of the area by providing surface geologic information with which deeper groundwater-bearing units may be understood. The area mapped covers the Shadow Mountains SE and parts of the Shadow Mountains, Adobe Mountain, and El Mirage 7.5-minute quadrangles. The map includes detailed geology of surface and bedrock deposits, which represent a significant update from previous bedrock geologic maps by Dibblee (1960) and Troxel and Gunderson (1970), and the surficial geologic map of Ponti and Burke (1980); it incorporates a fringe of the detailed bedrock mapping in the Shadow Mountains by Martin (1992). The map data were assembled as a digital database using ARC/INFO to enable wider applications than traditional paper-product geologic maps and to provide for efficient meshing with other digital databases prepared by the U.S. Geological Survey's Southern California Areal Mapping Project.

  7. Geothermal resource areas database for monitoring the progress of development in the United States

    NASA Astrophysics Data System (ADS)

    Lawrence, J. D.; Lepman, S. R.; Leung, K. N.; Phillips, S. L.

    1981-01-01

    The Geothermal Resource Areas Database (GRAD) and associated data system provide broad coverage of information on the development of geothermal resources in the United States. The system is designed to serve the information requirements of the National Progress Monitoring System. GRAD covers development from the initial exploratory phase through plant construction and operation. Emphasis is on actual facts or events rather than projections and scenarios. The selection and organization of data are based on a model of geothermal development. Subjects in GRAD include: names and addresses, leases, area descriptions, geothermal wells, power plants, direct use facilities, and environmental and regulatory aspects of development. Data collected in the various subject areas are critically evaluated, and then entered into an on-line interactive computer system. The system is publicly available for retrieval and use. The background of the project, conceptual development, software development, and data collection are described as well as the structure of the database.

  8. Geothermal resource areas database for monitoring the progress of development in the United States

    SciTech Connect

    Lawrence, J.D.; Lepman, S.R.; Leung, K.; Phillips, S.L.

    1981-01-01

    The Geothermal Resource Areas Database (GRAD) and associated data system provide broad coverage of information on the development of geothermal resources in the United States. The system is designed to serve the information requirements of the National Progress Monitoring System. GRAD covers development from the initial exploratory phase through plant construction and operation. Emphasis is on actual facts or events rather than projections and scenarios. The selection and organization of data are based on a model of geothermal development. Subjects in GRAD include: names and addresses, leases, area descriptions, geothermal wells, power plants, direct use facilities, and environmental and regulatory aspects of development. Data collected in the various subject areas are critically evaluated, and then entered into an on-line interactive computer system. The system is publicly available for retrieval and use. The background of the project, conceptual development, software development, and data collection are described here. Appendices describe the structure of the database in detail.

  9. Database assessment of CMIP5 and hydrological models to determine flood risk areas

    NASA Astrophysics Data System (ADS)

    Limlahapun, Ponthip; Fukui, Hiromichi

    2016-11-01

    Water-related disasters may not be addressed by a single scientific method. Based on this premise, we combined logical conceptions, sequential results passed between associated models, and database applications in an attempt to analyse historical and future scenarios in the context of flooding. The three main models used in this study are (1) the fifth phase of the Coupled Model Intercomparison Project (CMIP5), to derive precipitation; (2) the Integrated Flood Analysis System (IFAS), to extract the amount of discharge; and (3) the Hydrologic Engineering Center (HEC) model, to generate inundated areas. This research notably focused on integrating data regardless of system-design complexity; database approaches are flexible, manageable, and well suited to system data transfer, which makes them suitable for monitoring floods. The resulting flood map, together with real-time stream data, can help local communities identify areas at risk of flooding in advance.

  10. A Sediment Testing Reference Area Database for the San Francisco Deep Ocean Disposal Site (SF-DODS)

    EPA Pesticide Factsheets

    EPA established and maintains a SF-DODS reference area database of previously-collected sediment test data. Several sets of sediment test data have been successfully collected from the SF-DODS reference area.

  11. Analysis on the flood vulnerability in the Seoul and Busan metropolitan area, Korea using spatial database

    NASA Astrophysics Data System (ADS)

    Lee, Mung-Jin

    2015-04-01

    In the future, temperature rises and precipitation increases are expected from climate change due to global warming. Concentrated heavy rain, typhoons, flooding, and other weather phenomena bring hydrologic variations. In this study, the flood susceptibility of the Seoul and Busan metropolitan areas was analyzed and validated using a GIS based on a frequency ratio model and a logistic regression model, with training and validation datasets of the flooded area. The flooded area in 2010 was used to train the models, and the flooded area in 2011 was used to validate them. Topographic, geological, and soil data from the study areas were collected, processed, and digitized for use in a GIS. Maps relevant to the specific capacity were assembled in a spatial database, and flood susceptibility maps were created. Finally, the flood susceptibility maps were validated using the flooded area in 2011, which was not used for training. To represent flood-susceptible areas, this study used the probability frequency ratio: the frequency ratio is the probability of occurrence of a certain attribute. Logistic regression allows investigation of multivariate regression relations between one dependent and several independent variables. Logistic regression has the limitation that its calculation process cannot be traced, because it iterates to find the optimized regression equation for the probability that the dependent variable will occur. In the case of Seoul, the frequency ratio and logistic regression models showed 79.61% and 79.05% accuracy, respectively; in the case of Busan, the logistic regression model showed 82.30% accuracy. This information and the maps generated from it could be applied to flood prevention and management. In addition, the susceptibility maps provide meaningful information for decision-makers regarding priority areas for implementing flood mitigation policies.
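
    The frequency-ratio step described in this abstract can be sketched in a few lines. The class labels and flood flags below are toy inputs, not data from the study:

```python
from collections import Counter

def frequency_ratio(factor_classes, flooded):
    """Frequency ratio per factor class: the share of flooded cells falling
    in the class divided by the share of all cells in the class.  A ratio
    above 1 marks the class as more flood-prone than average."""
    total = len(factor_classes)
    n_flooded = sum(flooded)
    class_counts = Counter(factor_classes)
    flooded_counts = Counter(c for c, f in zip(factor_classes, flooded) if f)
    return {c: (flooded_counts[c] / n_flooded) / (class_counts[c] / total)
            for c in class_counts}

# toy raster flattened to lists: low-lying cells flood more often
classes = ["low"] * 4 + ["high"] * 6
wet     = [1, 1, 1, 0] + [0, 0, 0, 0, 0, 1]
print(frequency_ratio(classes, wet))  # 'low' ~ 1.875, 'high' ~ 0.417
```

    In practice each conditioning factor (slope, soil type, elevation class) gets its own ratio table, and the per-cell susceptibility index is the sum of the ratios of the classes the cell falls in.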

  12. Database of well and areal data, South San Francisco Bay and Peninsula area, California

    USGS Publications Warehouse

    Leighton, D.A.; Fio, J.L.; Metzger, L.F.

    1995-01-01

    A database was developed to organize and manage data compiled for a regional assessment of geohydrologic and water-quality conditions in the south San Francisco Bay and Peninsula area in California. Available data provided by local, State, and Federal agencies and private consultants was utilized in the assessment. The database consists of geographic information system data layers and related tables and American Standard Code for Information Interchange files. Documentation of the database is necessary to avoid misinterpretation of the data and to make users aware of potential errors and limitations. Most of the data compiled were collected from wells and boreholes (collectively referred to as wells in this report). This point-specific data, including construction, water-level, water-quality, pumping test, and lithologic data, are contained in tables and files that are related to a geographic information system data layer that contains the locations of the wells. There are 1,014 wells in the data layer and the related tables contain 35,845 water-level measurements (from 293 of the wells) and 9,292 water-quality samples (from 394 of the wells). Calculation of hydraulic heads and gradients from the water levels can be affected adversely by errors in the determination of the altitude of land surface at the well. Cation and anion balance computations performed on 396 of the water-quality samples indicate high cation and anion balance errors for 51 (13 percent) of the samples. Well drillers' reports were interpreted for 762 of the wells, and digital representations of the lithology of the formations are contained in files following the American Standard Code for Information Interchange. The usefulness of drillers' descriptions of the formation lithology is affected by the detail and thoroughness of the drillers' descriptions, as well as the knowledge, experience, and vocabulary of the individual who described the drill cuttings. Additional data layers were created that

  13. Monitoring of equine health in Denmark: the importance, purpose, research areas and content of a future database.

    PubMed

    Hartig, Wendy; Houe, Hans; Andersen, Pia Haubro

    2013-04-01

    The plentiful data on Danish horses are currently neither organized nor easily accessible, impeding register-based epidemiological studies on Danish horses. A common database could be beneficial. In principle, databases can contain a wealth of information, but no single database can serve every purpose. Hence the establishment of a Danish equine health database should be preceded by careful consideration of its purpose and content, and stakeholder attitudes should be investigated. The objectives of the present study were to identify stakeholder attitudes to the importance, purpose, research areas and content of a health database for horses in Denmark. A cross-sectional study was conducted with 13 horse-related stakeholder groups in Denmark. The groups surveyed included equine veterinarians, researchers, veterinary students, representatives from animal welfare organizations, horse owners, trainers, farriers, authority representatives, ordinary citizens, and representatives from laboratories, insurance companies, medical equipment companies and pharmaceutical companies. Supplementary attitudes were inferred from qualitative responses. The overall response rate for all stakeholder groups was 45%. Stakeholder group-specific response rates were 27-80%. Sixty-eight percent of questionnaire respondents thought a national equine health database was important. Most respondents wanted the database to contribute to improved horse health and welfare, to be used for research into durability and disease heritability, and to serve as a basis for health declarations for individual horses. The generally preferred purpose of the database was thus that it should focus on horse health and welfare rather than on performance or food safety, and that it should be able to function both at a population and an individual horse level. In conclusion, there is a positive attitude to the establishment of a health database for Danish horses. These results could enrich further reflection on the

  14. Measuring impact of protected area management interventions: current and future use of the Global Database of Protected Area Management Effectiveness.

    PubMed

    Coad, Lauren; Leverington, Fiona; Knights, Kathryn; Geldmann, Jonas; Eassom, April; Kapos, Valerie; Kingston, Naomi; de Lima, Marcelo; Zamora, Camilo; Cuardros, Ivon; Nolte, Christoph; Burgess, Neil D; Hockings, Marc

    2015-11-05

    Protected areas (PAs) are at the forefront of conservation efforts, and yet despite considerable progress towards the global target of having 17% of the world's land area within protected areas by 2020, biodiversity continues to decline. The discrepancy between increasing PA coverage and negative biodiversity trends has resulted in renewed efforts to enhance PA effectiveness. The global conservation community has conducted thousands of assessments of protected area management effectiveness (PAME), and interest in the use of these data to help measure the conservation impact of PA management interventions is high. Here, we summarize the status of PAME assessment, review the published evidence for a link between PAME assessment results and the conservation impacts of PAs, and discuss the limitations and future use of PAME data in measuring the impact of PA management interventions on conservation outcomes. We conclude that PAME data, while designed as a tool for local adaptive management, may also help to provide insights into the impact of PA management interventions from the local-to-global scale. However, the subjective and ordinal characteristics of the data present significant limitations for their application in rigorous scientific impact evaluations, a problem that should be recognized and mitigated where possible.

  15. Measuring impact of protected area management interventions: current and future use of the Global Database of Protected Area Management Effectiveness

    PubMed Central

    Coad, Lauren; Leverington, Fiona; Knights, Kathryn; Geldmann, Jonas; Eassom, April; Kapos, Valerie; Kingston, Naomi; de Lima, Marcelo; Zamora, Camilo; Cuardros, Ivon; Nolte, Christoph; Burgess, Neil D.; Hockings, Marc

    2015-01-01

    Protected areas (PAs) are at the forefront of conservation efforts, and yet despite considerable progress towards the global target of having 17% of the world's land area within protected areas by 2020, biodiversity continues to decline. The discrepancy between increasing PA coverage and negative biodiversity trends has resulted in renewed efforts to enhance PA effectiveness. The global conservation community has conducted thousands of assessments of protected area management effectiveness (PAME), and interest in the use of these data to help measure the conservation impact of PA management interventions is high. Here, we summarize the status of PAME assessment, review the published evidence for a link between PAME assessment results and the conservation impacts of PAs, and discuss the limitations and future use of PAME data in measuring the impact of PA management interventions on conservation outcomes. We conclude that PAME data, while designed as a tool for local adaptive management, may also help to provide insights into the impact of PA management interventions from the local-to-global scale. However, the subjective and ordinal characteristics of the data present significant limitations for their application in rigorous scientific impact evaluations, a problem that should be recognized and mitigated where possible. PMID:26460133

  16. The construction and periodicity analysis of natural disaster database of Alxa area based on Chinese local records

    NASA Astrophysics Data System (ADS)

    Yan, Zheng; Mingzhong, Tian; Hengli, Wang

    2010-05-01

    Chinese hand-written local records originated in the first century. Generally, these local records cover the geography, history, customs, education, products, people, historical sites, and writings of an area. Thanks to such endeavors, China's record of natural events has had almost no "dark ages" over its 5000-year civilization. A compilation of all meaningful historical data on natural disasters in Alxa, Inner Mongolia, home to the second largest desert in China, is used here to construct a 500-year high-resolution database. The database is divided into subsets according to the type of natural disaster, such as sand-dust storms, drought events, and cold waves. By applying trend, correlation, wavelet, and spectral analysis to these data, we can estimate the statistical periodicity of each type of natural disaster, detect and quantify similarities and patterns among the periodicities of these records, and finally take these results in aggregate to identify a strong and coherent cyclicity through the last 500 years that serves as the driving mechanism of these geological hazards. Based on the periodicity obtained from this analysis, the paper discusses the prospects for forecasting natural disasters from historical records and suitable measures for reducing disaster losses. Keywords: Chinese local records; Alxa; natural disasters; database; periodicity analysis

  17. Soil Characterization Database for the Area 5 Radioactive Waste Management Site, Nevada Test Site, Nye County, Nevada

    SciTech Connect

    Y. J. Lee; R. D. Van Remortel; K. E. Snyder

    2005-01-01

    Soils were characterized in an investigation at the Area 5 Radioactive Waste Management Site at the U.S. Department of Energy Nevada Test Site in Nye County, Nevada. Data from the investigation are presented in four parameter groups: sample and site characteristics, U.S. Department of Agriculture (USDA) particle size fractions, chemical parameters, and American Society for Testing Materials-Unified Soil Classification System (ASTM-USCS) particle size fractions. Spreadsheet workbooks based on these parameter groups are presented to evaluate data quality, conduct database updates, and set data structures and formats for later extraction and analysis. This document does not include analysis or interpretation of presented data.

  18. Soil Characterization Database for the Area 3 Radioactive Waste Management Site, Nevada Test Site, Nye County, Nevada

    SciTech Connect

    R. D. Van Remortel; Y. J. Lee; K. E. Snyder

    2005-01-01

    Soils were characterized in an investigation at the Area 3 Radioactive Waste Management Site at the U.S. Department of Energy Nevada Test Site in Nye County, Nevada. Data from the investigation are presented in four parameter groups: sample and site characteristics, U.S. Department of Agriculture (USDA) particle size fractions, chemical parameters, and American Society for Testing Materials-Unified Soil Classification System (ASTM-USCS) particle size fractions. Spreadsheet workbooks based on these parameter groups are presented to evaluate data quality, conduct database updates, and set data structures and formats for later extraction and analysis. This document does not include analysis or interpretation of presented data.

  19. [HPA distribution characteristics of platelet donor population in Mudanjiang area of China and establishment of its database].

    PubMed

    Liu, Bing-Xian; Gao, Guang-Ping; Wang, Dan; Zhang, Yan; Yu, Xiu-Qing; Xia, Dong-Mei; Zhou, Rui-Hua; Zhang, Hua; Ma, Qiang; Liu, Jie

    2012-06-01

    This study aimed to explore the distribution characteristics of the human platelet antigen (HPA) genes of platelet donors and their polymorphism in the Mudanjiang area of Heilongjiang Province, China, to determine the platelet antigen systems with clinical significance by judging the rate of HPA incompatibility, and to establish a database of donors' HPA. Genotyping of 154 unrelated platelet donors was performed by PCR-SSP. Gene and genotype frequencies were calculated and compared with those in other areas. The results showed that genes 1a-17a of HPA-a were all expressed in the 154 healthy, unrelated platelet donors. Only genes 1b, 2b, 3b, 5b, 6b and 15b of HPA-b were expressed, while genes 4b, 7b-14b and 16b were not. Among the genotypes, aa homozygosity was predominant; HPA15 had the greatest heterozygosity, while HPA3 had lower heterozygosity. There were 23 combined types of HPA, 5 of which had a rate higher than 10%, while the frequencies of the other 18 were lower than 8%. HPA genotype frequencies showed good consistency with Hardy-Weinberg equilibrium. It is concluded that the distribution of allele polymorphism of HPA1-HPA17 in the Mudanjiang area has its own characteristics compared with other areas and some countries. A local HPA genotype database of platelet donors has been established in the Mudanjiang area, which can provide matched donors for clinical use with immunological significance.
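
    The Hardy-Weinberg consistency check mentioned in this abstract amounts to a chi-square comparison of observed genotype counts against the expected p^2 : 2pq : q^2 proportions for a biallelic marker. A minimal sketch follows; the genotype counts used in the example are illustrative, not the study's data:

```python
def hardy_weinberg_chisq(n_aa, n_ab, n_bb):
    """Chi-square statistic comparing observed genotype counts for a
    biallelic marker against Hardy-Weinberg expected counts p^2, 2pq, q^2."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of the 'a' allele
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# illustrative counts for 154 donors; a statistic below the 5% critical
# value of 3.84 (1 degree of freedom) is consistent with equilibrium
chi2 = hardy_weinberg_chisq(40, 77, 37)
print(round(chi2, 4))
```

    In a real analysis the test would be run per HPA system, with a correction for the multiple comparisons across systems.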

  20. Planting the SEED: Towards a Spatial Economic Ecological Database for a shared understanding of the Dutch Wadden area

    NASA Astrophysics Data System (ADS)

    Daams, Michiel N.; Sijtsma, Frans J.

    2013-09-01

    In this paper we address the characteristics of a publicly accessible Spatial Economic Ecological Database (SEED) and its ability to support a shared understanding among planners and experts of the economy and ecology of the Dutch Wadden area. Theoretical building blocks for a Wadden SEED are discussed. Our SEED contains a comprehensive set of stakeholder validated spatially explicit data on key economic and ecological indicators. These data extend over various spatial scales. Spatial issues relevant to the specification of a Wadden-SEED and its data needs are explored in this paper and illustrated using empirical data for the Dutch Wadden area. The purpose of the SEED is to integrate basic economic and ecologic information in order to support the resolution of specific (policy) questions and to facilitate connections between project level and strategic level in the spatial planning process. Although modest in its ambitions, we will argue that a Wadden SEED can serve as a valuable element in the much debated science-policy interface. A Wadden SEED is valuable since it is a consensus-based common knowledge base on the economy and ecology of an area rife with ecological-economic conflict, including conflict in which scientific information is often challenged and disputed.

  1. Geologic Map and Map Database of the Oakland Metropolitan Area, Alameda, Contra Costa, and San Francisco Counties, California

    USGS Publications Warehouse

    Graymer, R.W.

    2000-01-01

    Introduction This report contains a new geologic map at 1:50,000 scale, derived from a set of geologic map databases containing information at a resolution associated with 1:24,000 scale, and a new description of geologic map units and structural relationships in the mapped area. The map database represents the integration of previously published reports and new geologic mapping and field checking by the author (see Sources of Data index map on the map sheet or the Arc-Info coverage pi-so and the text file pi-so.txt). The descriptive text (below) contains new ideas about the Hayward fault and other faults in the East Bay fault system, as well as new ideas about the geologic units and their relations. These new data are released in digital form in conjunction with the Federal Emergency Management Agency Project Impact in Oakland. The goal of Project Impact is to use geologic information in land-use and emergency services planning to reduce the losses occurring during earthquakes, landslides, and other hazardous geologic events. The USGS, California Division of Mines and Geology, FEMA, California Office of Emergency Services, and City of Oakland participated in the cooperative project. The geologic data in this report were provided in pre-release form to other Project Impact scientists, and served as one of the basic data layers for the analysis of hazard related to earthquake shaking, liquefaction, earthquake-induced landsliding, and rainfall-induced landsliding. The publication of these data provides an opportunity for regional planners, local, state, and federal agencies, teachers, consultants, and others outside Project Impact who are interested in geologic data to have the new data long before a traditional paper map could be published. 
Because the database contains information about both the bedrock and surficial deposits, it has practical applications in the study of groundwater and engineering of hillside materials, as well as the study of geologic hazards and

  2. Cortical thinning in cognitively normal elderly cohort of 60 to 89 year old from AIBL database and vulnerable brain areas

    NASA Astrophysics Data System (ADS)

    Lin, Zhongmin S.; Avinash, Gopal; Yan, Litao; McMillan, Kathryn

    2014-03-01

    Age-related cortical thinning has been studied by many researchers using quantitative MR images for the past three decades, and vastly differing results have been reported. Although results have shown age-related cortical thickening in elderly cohorts statistically in some brain regions under certain conditions, cortical thinning in elderly cohorts requires further systematic investigation. This paper leverages our previously reported brain surface intensity model (BSIM)-based technique for measuring cortical thickness to study cortical changes due to normal aging. We measured cortical thickness of cognitively normal persons from 60 to 89 years old using Australian Imaging Biomarkers and Lifestyle Study (AIBL) data. MRI brain scans of 56 healthy people, including 29 women and 27 men, were selected. We measured the average cortical thickness of each individual in eight brain regions: parietal, frontal, temporal, occipital, visual, sensory motor, medial frontal and medial parietal. Unlike previously published studies, our results showed consistent age-related thinning of the cerebral cortex in all brain regions. The parietal, medial frontal and medial parietal regions showed the fastest thinning rates of 0.14, 0.12 and 0.10 mm/decade respectively, while the visual region showed the slowest thinning rate of 0.05 mm/decade. In the sensorimotor and parietal areas, women showed greater thinning (0.09 and 0.16 mm/decade) than men, while in all other regions men showed greater thinning than women. We also created high-resolution cortical thinning rate maps of the cohort and compared them to typical patterns of PET metabolic reduction in moderate AD and frontotemporal dementia (FTD). The results seemed to indicate vulnerable areas of cortical deterioration that may lead to brain dementia. These results validate our cortical thickness measurement technique by demonstrating the consistency of the cortical thinning and prediction of the cortical deterioration trend with the AIBL database.
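
    The per-decade thinning rates quoted in this abstract are, in essence, regression slopes of thickness on age. A minimal sketch of that computation, with made-up ages and thicknesses rather than AIBL data, is:

```python
def thinning_rate_per_decade(ages, thicknesses):
    """Ordinary least-squares slope of cortical thickness (mm) versus age
    (years), scaled to mm per decade.  Negative values indicate thinning."""
    n = len(ages)
    mean_a = sum(ages) / n
    mean_t = sum(thicknesses) / n
    cov = sum((a - mean_a) * (t - mean_t) for a, t in zip(ages, thicknesses))
    var = sum((a - mean_a) ** 2 for a in ages)
    return 10 * cov / var

# toy cohort thinning by exactly 0.1 mm per decade
ages = [60, 70, 80, 90]
thickness_mm = [2.5, 2.4, 2.3, 2.2]
print(thinning_rate_per_decade(ages, thickness_mm))  # -0.1
```

    In the study's setting this slope would be computed per region (and per sex), one data point per subject.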

  3. Digital database architecture and delineation methodology for deriving drainage basins, and a comparison of digitally and non-digitally derived numeric drainage areas

    USGS Publications Warehouse

    Dupree, Jean A.; Crowfoot, Richard M.

    2012-01-01

    The drainage basin is a fundamental hydrologic entity used for studies of surface-water resources and during planning of water-related projects. Numeric drainage areas published by the U.S. Geological Survey water science centers in Annual Water Data Reports and on the National Water Information Systems (NWIS) Web site are still primarily derived from hard-copy sources and by manual delineation of polygonal basin areas on paper topographic map sheets. To expedite numeric drainage area determinations, the Colorado Water Science Center developed a digital database structure and a delineation methodology based on the hydrologic unit boundaries in the National Watershed Boundary Dataset. This report describes the digital database architecture and delineation methodology and also presents the results of a comparison of the numeric drainage areas derived using this digital methodology with those derived using traditional, non-digital methods. (Please see report for full Abstract)

  4. SMALL-SCALE AND GLOBAL DYNAMOS AND THE AREA AND FLUX DISTRIBUTIONS OF ACTIVE REGIONS, SUNSPOT GROUPS, AND SUNSPOTS: A MULTI-DATABASE STUDY

    SciTech Connect

    Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.; Longcope, Dana W.; Senkpeil, Ryan R.; Tlatov, Andrey G.; Nagovitsyn, Yury A.; Pevtsov, Alexei A.; Chapman, Gary A.; Cookson, Angela M.; Yeates, Anthony R.; Watson, Fraser T.; Balmaceda, Laura A.; DeLuca, Edward E.; Martens, Petrus C. H.

    2015-02-10

In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up of a linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10²¹ Mx (10²² Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other to the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
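The composite distribution described above can be written as a two-component mixture density. A minimal sketch in normalized flux units (flux in units of 10²¹ Mx); the mixture weight and shape parameters below are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def weibull_pdf(x, k, lam):
    # Weibull density: characterizes the small-flux end of the composite
    return (k / lam) * (x / lam) ** (k - 1) * np.exp(-((x / lam) ** k))

def lognormal_pdf(x, mu, sigma):
    # Log-normal density: characterizes the large-flux end
    return np.exp(-((np.log(x) - mu) ** 2) / (2 * sigma ** 2)) / (
        x * sigma * np.sqrt(2 * np.pi)
    )

def composite_pdf(x, w=0.6, k=0.5, lam=1.0, mu=np.log(10.0), sigma=1.0):
    # Linear combination; weights w and 1 - w keep the mixture normalized
    return w * weibull_pdf(x, k, lam) + (1 - w) * lognormal_pdf(x, mu, sigma)
```

A fit to real databases would estimate `w`, `k`, `lam`, `mu`, and `sigma` (plus the per-database proportionality constants) by maximum likelihood over the observed fluxes.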

  5. Comparison of ASTER Global Emissivity Database (ASTER-GED) With In-Situ Measurement In Italian Vulcanic Areas

    NASA Astrophysics Data System (ADS)

    Silvestri, M.; Musacchio, M.; Buongiorno, M. F.; Amici, S.; Piscini, A.

    2015-12-01

LP DAAC released the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Emissivity Database (GED) datasets on April 2, 2014. The database was developed by the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL), California Institute of Technology. The database includes land surface emissivities derived from ASTER data acquired over the contiguous United States, Africa, the Arabian Peninsula, Australia, Europe, and China. In this work we compare ground measurements of emissivity, acquired by means of a Micro-FTIR (Fourier transform infrared) spectrometer, with the emissivity map extracted from ASTER-GED and with the emissivity obtained from single ASTER scenes. Through this analysis we investigate the differences between the ASTER-GED dataset (an average over 2000-2008, independent of season) and in-situ emissivity measurements collected in the fall, as well as the role of the different spatial resolutions of ASTER and MODIS (90 m and 1 km, respectively) when comparing them with in-situ measurements. Possible differences may also be due to the different algorithms used for emissivity estimation: the Temperature and Emissivity Separation algorithm for the ASTER TIR bands (Gillespie et al., 1998) and the classification-based emissivity method (Snyder et al., 1998) for MODIS. In-situ emissivity measurements were collected during dedicated field campaigns on Mt. Etna volcano and the Solfatara of Pozzuoli. Gillespie, A. R., Matsunaga, T., Rokugawa, S., & Hook, S. J. (1998). Temperature and emissivity separation from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images. IEEE Transactions on Geoscience and Remote Sensing, 36, 1113-1125. Snyder, W. C., Wan, Z., Zhang, Y., & Feng, Y.-Z. (1998). Classification-based emissivity for land surface temperature measurement from space. International Journal of Remote Sensing, 19, 2753-2774.

  6. Mercury concentrations in fish from Canadian Great Lakes areas of concern: an analysis of data from the Canadian Department of Environment database.

    PubMed

    Weis, I Michael

    2004-07-01

    The tissue mercury concentrations in six species of fish collected at the 17 Areas of Concern identified by the International Joint Commission on the Canadian side of the Great Lakes were analyzed using an Environment Canada database. A linear increase in mercury concentration with fish length was found, but slopes differed among locations. The temporal pattern over the period 1971-1997 differed across species in fish collected in Lake St. Clair; in at least two species there was evidence of increased mercury concentration during the 1990s that had been suggested in an earlier analysis. Areas of Concern differed significantly in observed tissue concentrations. Differences observed did not consistently parallel expectations associated with the historical presence of chlor-alkali plants in the vicinities of some locations. An attempt to correlate the fish tissue mercury concentration with the frequency of occurrence of infantile cerebral palsy at Areas of Concern was unsuccessful.
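The length-dependence reported above amounts to fitting a separate linear regression of tissue mercury concentration on fish length for each location and comparing the slopes. A hedged sketch on toy numbers (the Environment Canada data are not reproduced here):

```python
import numpy as np

def hg_length_slopes(lengths, hg, locations):
    """Per-location least-squares slope of Hg concentration vs. fish length."""
    lengths = np.asarray(lengths, dtype=float)
    hg = np.asarray(hg, dtype=float)
    locations = np.asarray(locations)
    slopes = {}
    for loc in np.unique(locations):
        mask = locations == loc
        slope, _intercept = np.polyfit(lengths[mask], hg[mask], 1)
        slopes[loc] = slope
    return slopes

# Toy data: two Areas of Concern where Hg rises with length at different rates
lengths = [30, 40, 50, 30, 40, 50]
hg = [0.10, 0.20, 0.30, 0.15, 0.35, 0.55]  # site B accumulates faster
sites = ["A", "A", "A", "B", "B", "B"]
print(hg_length_slopes(lengths, hg, sites))
```

A formal test for differing slopes would use an ANCOVA-style model with a length-by-location interaction term rather than eyeballing the per-site fits.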

  7. Corpus Callosum Area and Brain Volume in Autism Spectrum Disorder: Quantitative Analysis of Structural MRI from the ABIDE Database

    ERIC Educational Resources Information Center

    Kucharsky Hiess, R.; Alter, R.; Sojoudi, S.; Ardekani, B. A.; Kuzniecky, R.; Pardoe, H. R.

    2015-01-01

    Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial…

  8. Are we safe? A tool to improve the knowledge of the risk areas: high-resolution floods database (MEDIFLOOD) for Spanish Mediterranean coast (1960 -2014)

    NASA Astrophysics Data System (ADS)

    Gil-Guirado, Salvador; Perez-Morales, Alfredo; Lopez-Martinez, Francisco; Barriendos-Vallve, Mariano

    2016-04-01

The Mediterranean coast of the Iberian Peninsula concentrates an important part of the population and economic activity of Spain. Intensive agriculture, industry in the major urban centers, trade, and tourism make this region the main center of economic dynamism in Spain, with one of the highest rates of population and economic growth in southern Europe. This process accelerated after the Franco regime began to open to the outside world in the early sixties of the last century. The main factor driving this process is the climate: warm temperatures and a large number of sunny days, which have become the economic slogan of the area. However, this growth has happened without proper planning to reduce the impact of another climatic feature of the area: floods, the natural hazard that generates the greatest impacts in the region. One of the factors behind this lack of strategic planning is the absence of a correct chronology of flood episodes. In this situation, land use plans are based on inadequate chronologies that do not reflect the real risk to the population of this area. To reduce this deficit and contribute to a more efficient zoning of the Mediterranean coast according to flood risk, we have prepared a high-resolution flood database (MEDIFLOOD) covering all the municipalities of the Spanish Mediterranean coast from 1960 to 2013. The methodology consists of exploring the archives of all newspapers with a presence in the area. The searches were made by typing the name of each of the 180 municipalities of the Spanish coast followed by 5 key terms. Each identified flood has been classified by date and according to its level of intensity and type of damage. Additionally, we have consulted the specific bibliography to rule out any data gaps. The results are surprising and worrying. We have identified more than 3,600 cases in which a municipality has been affected by floods. These cases are grouped into more than 700

  9. Corpus callosum area and brain volume in autism spectrum disorder: quantitative analysis of structural MRI from the ABIDE database.

    PubMed

    Kucharsky Hiess, R; Alter, R; Sojoudi, S; Ardekani, B A; Kuzniecky, R; Pardoe, H R

    2015-10-01

Reduced corpus callosum area and increased brain volume are two commonly reported findings in autism spectrum disorder (ASD). We investigated these two correlates in ASD and healthy controls using T1-weighted MRI scans from the Autism Brain Imaging Data Exchange (ABIDE). Automated methods were used to segment the corpus callosum and intracranial region. No difference in the corpus callosum area was found between ASD participants and healthy controls (ASD 598.53 ± 109 mm²; control 596.82 ± 102 mm²; p = 0.76). The ASD participants had increased intracranial volume (ASD 1,508,596 ± 170,505 mm³; control 1,482,732 ± 150,873.5 mm³; p = 0.042). No evidence was found for overall ASD differences in the corpus callosum subregions.

  10. Map and map database of susceptibility to slope failure by sliding and earthflow in the Oakland area, California

    USGS Publications Warehouse

    Pike, R.J.; Graymer, R.W.; Roberts, Sebastian; Kalman, N.B.; Sobieszczyk, Steven

    2001-01-01

    Map data that predict the varying likelihood of landsliding can help public agencies make informed decisions on land use and zoning. This map, prepared in a geographic information system from a statistical model, estimates the relative likelihood of local slopes to fail by two processes common to an area of diverse geology, terrain, and land use centered on metropolitan Oakland. The model combines the following spatial data: (1) 120 bedrock and surficial geologic-map units, (2) ground slope calculated from a 30-m digital elevation model, (3) an inventory of 6,714 old landslide deposits (not distinguished by age or type of movement and excluding debris flows), and (4) the locations of 1,192 post-1970 landslides that damaged the built environment. The resulting index of likelihood, or susceptibility, plotted as a 1:50,000-scale map, is computed as a continuous variable over a large area (872 km2) at a comparatively fine (30 m) resolution. This new model complements landslide inventories by estimating susceptibility between existing landslide deposits, and improves upon prior susceptibility maps by quantifying the degree of susceptibility within those deposits. Susceptibility is defined for each geologic-map unit as the spatial frequency (areal percentage) of terrain occupied by old landslide deposits, adjusted locally by steepness of the topography. Susceptibility of terrain between the old landslide deposits is read directly from a slope histogram for each geologic-map unit, as the percentage (0.00 to 0.90) of 30-m cells in each one-degree slope interval that coincides with the deposits. Susceptibility within landslide deposits (0.00 to 1.33) is this same percentage raised by a multiplier (1.33) derived from the comparative frequency of recent failures within and outside the old deposits. Positive results from two evaluations of the model encourage its extension to the 10-county San Francisco Bay region and elsewhere. A similar map could be prepared for any area
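The susceptibility recipe described above (deposit frequency per one-degree slope bin, raised by the 1.33 multiplier inside old deposits) can be sketched for a toy grid of 30-m cells. This is an illustrative reconstruction, not the authors' code; a real implementation would compute the bin frequencies separately for each geologic-map unit:

```python
import numpy as np

def susceptibility(slope_deg, in_old_deposit, multiplier=1.33):
    """Susceptibility per cell for one geologic-map unit (toy version).

    Outside old landslide deposits, susceptibility in each one-degree slope
    bin is the fraction of that bin's cells coinciding with deposits; inside
    deposits that fraction is raised by the multiplier (1.33 in the report).
    """
    bins = slope_deg.astype(int)             # one-degree slope intervals
    out = np.zeros_like(slope_deg, dtype=float)
    for b in np.unique(bins):
        cells = bins == b
        frac = in_old_deposit[cells].mean()  # areal fraction of deposits in bin
        out[cells] = frac
    out[in_old_deposit] *= multiplier        # raise susceptibility within deposits
    return out

slope = np.array([10.2, 10.7, 10.9, 20.1])       # slope of four cells, degrees
deposit = np.array([True, False, True, False])   # old-landslide overlap
print(susceptibility(slope, deposit))
```

For the 10-degree bin above, two of three cells lie in deposits, so the non-deposit cell scores 2/3 and the deposit cells score 2/3 × 1.33; the 20-degree bin has no deposits and scores 0.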

  11. Atomic Databases

    NASA Astrophysics Data System (ADS)

    Mendoza, Claudio

    2000-10-01

Atomic and molecular data are required in a variety of fields ranging from traditional astronomy, atmospheric science, and fusion research to fast-growing technologies such as lasers, lighting, low-temperature plasmas, plasma-assisted etching, and radiotherapy. In this context, research groups, both theoretical and experimental, scattered around the world attend to most of this data demand, but the implementation of atomic databases has grown independently out of sheer necessity. In some cases the latter has been associated with the data production process or with data centers involved in data collection and evaluation; in others it has been the result of individual initiatives that have been quite successful. In any case, the development and maintenance of atomic databases call for a number of skills and an entrepreneurial spirit that are not usually associated with most physics researchers. In this report we present some of the highlights in this area from the past five years and discuss what we think are the main issues that have to be addressed.

  12. Development and Validation of a Data-Based Food Frequency Questionnaire for Adults in Eastern Rural Area of Rwanda

    PubMed Central

    Yanagisawa, Ayumi; Sudo, Noriko; Amitani, Yukiko; Caballero, Yuko; Sekiyama, Makiko; Mukamugema, Christine; Matsuoka, Takuya; Imanishi, Hiroaki; Sasaki, Takayo; Matsuda, Hirotaka

    2016-01-01

    This study aimed to develop and evaluate the validity of a food frequency questionnaire (FFQ) for rural Rwandans. Since our FFQ was developed to assess malnutrition, it measured energy, protein, vitamin A, and iron intakes only. We collected 260 weighed food records (WFRs) from a total of 162 Rwandans. Based on the WFR data, we developed a tentative FFQ and examined the food list by percent contribution to energy and nutrient intakes. To assess the validity, nutrient intakes estimated from the FFQ were compared with those calculated from three-day WFRs by correlation coefficient and cross-classification for 17 adults. Cumulative contributions of the 18-item FFQ to the total intakes of energy and nutrients reached nearly 100%. Crude and energy-adjusted correlation coefficients ranged from −0.09 (vitamin A) to 0.58 (protein) and from −0.19 (vitamin A) to 0.68 (iron), respectively. About 50%–60% of the participants were classified into the same tertile. Our FFQ provided acceptable validity for energy and iron intakes and could rank Rwandan adults in eastern rural area correctly according to their energy and iron intakes. PMID:27429558
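The cross-classification check mentioned above (the share of participants placed in the same intake tertile by the FFQ and by the weighed food records) is a standard FFQ validation step. A generic sketch, not the authors' code:

```python
import numpy as np

def same_tertile_pct(ffq, wfr):
    """Percent of participants ranked into the same intake tertile by the
    FFQ and by the weighed food records (WFRs)."""
    def tertile(x):
        x = np.asarray(x, dtype=float)
        cuts = np.quantile(x, [1 / 3, 2 / 3])        # tertile boundaries
        return np.searchsorted(cuts, x, side="right")  # 0, 1, or 2
    return 100.0 * np.mean(tertile(ffq) == tertile(wfr))

# Hypothetical intakes that rank participants identically on both methods
ffq = [1.1, 2.0, 2.9, 4.2, 5.0, 6.3]
wfr = [12, 19, 31, 40, 52, 61]
print(same_tertile_pct(ffq, wfr))  # 100.0
```

With the roughly 50%-60% agreement reported in the abstract, about half the participants land in the same tertile, versus the ~33% expected by chance.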

  13. A spatial database of bedding attitudes to accompany Geologic map of the greater Denver area, Front Range Urban Corridor, Colorado

    USGS Publications Warehouse

    Trimble, Donald E.; Machette, Michael N.; Brandt, Theodore R.; Moore, David W.; Murray, Kyle E.

    2003-01-01

This digital map shows bedding attitude symbols displayed over the geographic extent of surficial deposits and rock stratigraphic units (formations) as compiled by Trimble and Machette during 1973-1977 and published in 1979 (U.S. Geological Survey Map I-856-H) under the Front Range Urban Corridor Geology Program. Trimble and Machette compiled their geologic map from published geologic maps and unpublished geologic mapping having varied map unit schemes. A convenient feature of the compiled map is its uniform classification of geologic units, which mostly matches those of companion maps to the north (USGS I-855-G) and to the south (USGS I-857-F). Published as a color paper map, the Trimble and Machette map was intended for land-use planning in the Front Range Urban Corridor. This map was recently (1997-1999) digitized under the USGS Front Range Infrastructure Resources Project (see cross-reference). In general, the mountainous areas in the west part of the map exhibit various igneous and metamorphic bedrock units of Precambrian age, major faults, and fault brecciation zones at the east margin (5-20 km wide) of the Front Range. The eastern and central parts of the map (Colorado Piedmont) depict a mantle of unconsolidated deposits of Quaternary age and interspersed outcroppings of Cretaceous or Tertiary-Cretaceous sedimentary bedrock. The Quaternary mantle comprises eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and a few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone, shale, and limestone bedrock formations form hogbacks and intervening valleys.

  14. A spatial database of bedding attitudes to accompany Geologic Map of Boulder-Fort Collins-Greeley Area, Colorado

    USGS Publications Warehouse

    Colton, Roger B.; Brandt, Theodore R.; Moore, David W.; Murray, Kyle E.

    2003-01-01

This digital map shows bedding attitude data displayed over the geographic extent of rock stratigraphic units (formations) as compiled by Colton in 1976 (U.S. Geological Survey Map I-855-G) under the Front Range Urban Corridor Geology Program. Colton used his own mapping and published geologic maps having varied map unit schemes to compile one map with a uniform classification of geologic units. The resulting published color paper map was intended for land-use planning in the Front Range Urban Corridor. In 1997-1999, under the USGS Front Range Infrastructure Resources Project, Colton's map was digitized to provide data at 1:100,000 scale to address urban growth issues (see cross-reference). In general, the west part of the map shows a variety of Precambrian igneous and metamorphic rocks, with major faults and brecciated zones along an eastern strip (5-20 km wide) of the Front Range. The eastern and central part of the map (Colorado Piedmont) depicts a mantle of Quaternary unconsolidated deposits and interspersed Cretaceous or Tertiary-Cretaceous sedimentary rock outcrops. The Quaternary mantle comprises eolian deposits (quartz sand and silt), alluvium (gravel, sand, and silt of variable composition), colluvium, and a few landslides. At the mountain front, north-trending, dipping Paleozoic and Mesozoic sandstone and shale formations (and sparse limestone) form hogbacks and intervening valleys and, in range-front folds, anticlines and fault blocks. Localized dikes and sills of Tertiary rhyodacite and basalt intrude rocks near the range front, mostly in the Boulder area.

  15. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Gaylord, A. G.; Tweedie, C. E.

    2013-12-01

In 2013, the Barrow Area Information Database (BAID, www.baid.utep.edu) project resumed field operations in Barrow, AK. The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken in the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience are diverse and include research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 11,000 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, and save or print maps and query results. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL), where non-proprietary BAID data can be freely downloaded. Highlights for the 2013 season include the addition of more than 2,000 research sites, provision of differential global positioning system (dGPS) support to visiting scientists, surveying of over 80 miles of coastline to document rates of erosion, training of local GIS personnel, deployment of a wireless sensor network, and substantial upgrades to the BAID website and web mapping applications.

  16. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska.

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Kofoed, K. B.; Copenhaver, W.; Laney, C. M.; Gaylord, A. G.; Collins, J. A.; Tweedie, C. E.

    2014-12-01

The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic, and the Barrow Area Information Database (BAID, www.barrowmapped.org) tracks and facilitates a gamut of research, management, and educational activities in the area. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken in the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience are diverse and include research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 12,000 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, save or print maps and query results, and filter or view information by space, time, and/or other tags. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL), where non-proprietary BAID data can be freely downloaded. Recent advances include the addition of more than 2,000 new research sites, provision of differential global positioning system (dGPS) and unmanned aerial vehicle (UAV) support to visiting scientists, surveying of over 80 miles of coastline to document rates of erosion, training of local GIS personnel to better make use of science in local decision making, deployment of and near-real-time connectivity to a wireless micrometeorological sensor network, links to Barrow area datasets housed at national data archives, and substantial upgrades to the BAID website and web mapping applications.

  17. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  18. Database Administrator

    ERIC Educational Resources Information Center

    Moore, Pam

    2010-01-01

    The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…

  19. BAID: The Barrow Area Information Database - An Interactive Web Mapping Portal and Cyberinfrastructure Showcasing Scientific Activities in the Vicinity of Barrow, Arctic Alaska.

    NASA Astrophysics Data System (ADS)

    Escarzaga, S. M.; Cody, R. P.; Kassin, A.; Barba, M.; Gaylord, A. G.; Manley, W. F.; Mazza Ramsay, F. D.; Vargas, S. A., Jr.; Tarin, G.; Laney, C. M.; Villarreal, S.; Aiken, Q.; Collins, J. A.; Green, E.; Nelson, L.; Tweedie, C. E.

    2015-12-01

The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic, and the Barrow Area Information Database (BAID, www.barrowmapped.org) tracks and facilitates a gamut of research, management, and educational activities in the area. BAID is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken in the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience are diverse and include research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 12,000 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, save or print maps and query results, and filter or view information by space, time, and/or other tags. Additionally, data are described with metadata that meet Federal Geographic Data Committee standards. Recent advances include the addition of more than 2,000 new research sites, the addition of a query builder user interface allowing rich and complex queries, and provision of differential global positioning system (dGPS) and high-resolution aerial imagery support to visiting scientists. Recent field surveys cover over 80 miles of coastline to document rates of erosion and include the collection of high-resolution sonar data for bathymetric mapping of Elson Lagoon and the near-shore region of the Chukchi Sea. A network of five climate stations has been deployed across the peninsula to serve as a wireless net for the research community and to deliver near-real-time climatic data to the user community. Local GIS personnel have also been trained to better make use of scientific data for local decision making. Links to Barrow area datasets are housed at national data archives and substantial upgrades have

  20. BAID: The Barrow Area Information Database - an interactive web mapping portal and cyberinfrastructure for scientific activities in the vicinity of Barrow, Alaska

    NASA Astrophysics Data System (ADS)

    Cody, R. P.; Kassin, A.; Gaylord, A.; Brown, J.; Tweedie, C. E.

    2012-12-01

The Barrow area of northern Alaska is one of the most intensely researched locations in the Arctic. The Barrow Area Information Database (BAID, www.baidims.org) is a cyberinfrastructure (CI) that details much of the historic and extant research undertaken in the Barrow region in a suite of interactive web-based mapping and information portals (geobrowsers). The BAID user community and target audience are diverse and include research scientists, science logisticians, land managers, educators, students, and the general public. BAID contains information on more than 9,600 Barrow area research sites that extend back to the 1940s and more than 640 remote sensing images and geospatial datasets. In a web-based setting, users can zoom, pan, query, measure distance, and save or print maps and query results. Data are described with metadata that meet Federal Geographic Data Committee standards and are archived at the University Corporation for Atmospheric Research Earth Observing Laboratory (EOL), where non-proprietary BAID data can be freely downloaded. BAID has been used to: optimize research site choice; reduce duplication of science effort; discover complementary and potentially detrimental research activities in an area of scientific interest; re-establish historical research sites for resampling efforts assessing change in ecosystem structure and function over time; exchange knowledge across disciplines and generations; facilitate communication between western science and traditional ecological knowledge; provide local residents access to science data that facilitates adaptation to arctic change; and educate the next generation of environmental and computer scientists. This poster describes key activities that will be undertaken over the next three years to provide BAID users with novel software tools to interact with a current and diverse selection of information and data about the Barrow area. Key activities include: 1. Collecting data on research

  1. Analysis of expressed sequence tags from Actinidia: applications of a cross species EST database for gene discovery in the areas of flavor, health, color and ripening

    PubMed Central

    Crowhurst, Ross N; Gleave, Andrew P; MacRae, Elspeth A; Ampomah-Dwamena, Charles; Atkinson, Ross G; Beuning, Lesley L; Bulley, Sean M; Chagne, David; Marsh, Ken B; Matich, Adam J; Montefiori, Mirco; Newcomb, Richard D; Schaffer, Robert J; Usadel, Björn; Allan, Andrew C; Boldingh, Helen L; Bowen, Judith H; Davy, Marcus W; Eckloff, Rheinhart; Ferguson, A Ross; Fraser, Lena G; Gera, Emma; Hellens, Roger P; Janssen, Bart J; Klages, Karin; Lo, Kim R; MacDiarmid, Robin M; Nain, Bhawana; McNeilage, Mark A; Rassam, Maysoon; Richardson, Annette C; Rikkerink, Erik HA; Ross, Gavin S; Schröder, Roswitha; Snowden, Kimberley C; Souleyre, Edwige JF; Templeton, Matt D; Walton, Eric F; Wang, Daisy; Wang, Mindy Y; Wang, Yanming Y; Wood, Marion; Wu, Rongmei; Yauk, Yar-Khing; Laing, William A

    2008-01-01

Background: Kiwifruit (Actinidia spp.) are a relatively new but economically important crop grown in many different parts of the world. Commercial success is driven by the development of new cultivars with novel consumer traits including flavor, appearance, healthful components, and convenience. To increase our understanding of the genetic diversity and gene-based control of these key traits in Actinidia, we have produced a collection of 132,577 expressed sequence tags (ESTs). Results: The ESTs were derived mainly from four Actinidia species (A. chinensis, A. deliciosa, A. arguta, and A. eriantha) and fell into 41,858 non-redundant clusters (18,070 tentative consensus sequences and 23,788 EST singletons). Analysis of flavor- and fragrance-related gene families (acyltransferases and carboxylesterases) and pathways (terpenoid biosynthesis) is presented in comparison with a chemical analysis of the compounds present in Actinidia, including esters, acids, alcohols, and terpenes. ESTs are identified for most genes in color pathways controlling chlorophyll degradation and carotenoid biosynthesis. In the health area, data are presented on the ESTs involved in ascorbic acid and quinic acid biosynthesis, showing not only that genes for many of the steps in these pathways are represented in the database, but also that genes encoding some critical steps are absent. In the convenience area, genes related to different stages of fruit softening are identified. Conclusion: This large EST resource will allow researchers to undertake the tremendous challenge of understanding the molecular basis of genetic diversity in the Actinidia genus, as well as provide an EST resource for comparative fruit genomics. The various bioinformatics analyses we have undertaken demonstrate the extent of coverage of ESTs for genes encoding different biochemical pathways in Actinidia. PMID:18655731

  2. Image Databases.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    Different kinds of pictorial databases are described with respect to aims, user groups, search possibilities, storage, and distribution. Some specific examples are given for databases used for the following purposes: (1) labor markets for artists; (2) document management; (3) telling a story; (4) preservation (archives and museums); (5) research;…

  3. Maize databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This chapter is a succinct overview of maize data held in the species-specific database MaizeGDB (the Maize Genomics and Genetics Database), and selected multi-species data repositories, such as Gramene/Ensembl Plants, Phytozome, UniProt and the National Center for Biotechnology Information (NCBI), ...

  4. Genome databases

    SciTech Connect

    Courteau, J.

    1991-10-11

    Since the Genome Project began several years ago, a plethora of databases have been developed or are in the works. They range from the massive Genome Data Base at Johns Hopkins University, the central repository of all gene mapping information, to small databases focusing on single chromosomes or organisms. Some are publicly available, others are essentially private electronic lab notebooks. Still others limit access to a consortium of researchers working on, say, a single human chromosome. An increasing number incorporate sophisticated search and analytical software, while others operate as little more than data lists. In consultation with numerous experts in the field, a list has been compiled of some key genome-related databases. The list was not limited to map and sequence databases but also included the tools investigators use to interpret and elucidate genetic data, such as protein sequence and protein structure databases. Because a major goal of the Genome Project is to map and sequence the genomes of several experimental animals, including E. coli, yeast, fruit fly, nematode, and mouse, the available databases for those organisms are listed as well. The author also includes several databases that are still under development - including some ambitious efforts that go beyond data compilation to create what are being called electronic research communities, enabling many users, rather than just one or a few curators, to add or edit the data and tag it as raw or confirmed.

  5. A first database for landslide studies in densely urbanized areas of the intertropical zone: Abidjan, Côte d'Ivoire

    NASA Astrophysics Data System (ADS)

    Gnagne, Frédéric; Demoulin, Alain; Biemi, Jean; Dewitte, Olivier; Kouadio, Hélène; Lasm, Théophile

    2016-04-01

    Landslides, a natural phenomenon often enhanced by human misuse of the land, may pose a considerable threat to urban communities and severely affect urban landscapes, taking a death toll, impacting livelihoods, and causing economic and social damage. Our first results show that, in Abidjan city, Ivory Coast, landslides caused more than fifty casualties in the towns of Attecoube and Abobo during the last twenty years. Although informal landslide reports exist, map information and geomorphological characterization are at best restricted, and often simply lacking. Here, we aim to compile a comprehensive landslide database (localization, nature and morphometry of the slides, slope material, human interference, elements at risk) in the town of Attecoube as a case study, in order to support a first analysis of landslide susceptibility in the area. The field inventory conducted so far contains 56 landslides. These are mainly translational debris and soil slides, plus a few deeper rotational soil slides. Affecting 10-25°-steep, less than 10-m-high slopes in Quaternary sand and mud, they are most often associated with informal constructions either loading the top or cutting the toe of the slopes. They were located by GPS and tentatively dated through inquiries during the survey. While 12 landslides were accurately dated that way to the main rain seasons of 2013 to 2015, newspaper analysis and municipal archive consultation allowed us to assign part of the rest to the last decade. Field inquiries were also used to collect information about fatalities and the local conditions of landsliding. This first landslide inventory in Attecoube provides clues about the main potential controls on landsliding, natural and anthropogenic, and will help define appropriate anthropogenic variables to be used in the susceptibility modelling.

  6. Experiment Databases

    NASA Astrophysics Data System (ADS)

    Vanschoren, Joaquin; Blockeel, Hendrik

    Next to running machine learning algorithms based on inductive queries, much can be learned by immediately querying the combined results of many prior studies. Indeed, all around the globe, thousands of machine learning experiments are being executed on a daily basis, generating a constant stream of empirical information on machine learning techniques. While the information contained in these experiments might have many uses beyond their original intent, results are typically described very concisely in papers and discarded afterwards. If we properly store and organize these results in central databases, they can be immediately reused for further analysis, thus boosting future research. In this chapter, we propose the use of experiment databases: databases designed to collect all the necessary details of these experiments, and to intelligently organize them in online repositories to enable fast and thorough analysis of a myriad of collected results. They constitute an additional, queriable source of empirical meta-data based on principled descriptions of algorithm executions, without reimplementing the algorithms in an inductive database. As such, they engender a very dynamic, collaborative approach to experimentation, in which experiments can be freely shared, linked together, and immediately reused by researchers all over the world. They can be set up for personal use, to share results within a lab or to create open, community-wide repositories. Here, we provide a high-level overview of their design, and use an existing experiment database to answer various interesting research questions about machine learning algorithms and to verify a number of recent studies.

  7. Solubility Database

    National Institute of Standards and Technology Data Gateway

    SRD 106 IUPAC-NIST Solubility Database (Web, free access)   These solubilities are compiled from 18 volumes of the International Union of Pure and Applied Chemistry (IUPAC)-NIST Solubility Data Series. The database includes liquid-liquid, solid-liquid, and gas-liquid systems. Typical solvents and solutes include water, seawater, heavy water, inorganic compounds, and a variety of organic compounds such as hydrocarbons, halogenated hydrocarbons, alcohols, acids, esters and nitrogen compounds. There are over 67,500 solubility measurements and over 1800 references.

  8. GIS for the Gulf: A reference database for hurricane-affected areas: Chapter 4C in Science and the storms-the USGS response to the hurricanes of 2005

    USGS Publications Warehouse

    Greenlee, Dave

    2007-01-01

    A week after Hurricane Katrina made landfall in Louisiana, a collaboration among multiple organizations began building a database called the Geographic Information System for the Gulf, shortened to "GIS for the Gulf," to support the geospatial data needs of people in the hurricane-affected area. Data were gathered from diverse sources and entered into a consistent and standardized data model in a manner that is Web accessible.

  9. Mathematical Notation in Bibliographic Databases.

    ERIC Educational Resources Information Center

    Pasterczyk, Catherine E.

    1990-01-01

    Discusses ways in which using mathematical symbols to search online bibliographic databases in scientific and technical areas can improve search results. The representations used for Greek letters, relations, binary operators, arrows, and miscellaneous special symbols in the MathSci, Inspec, Compendex, and Chemical Abstracts databases are…

  10. Medical database security evaluation.

    PubMed

    Pangalos, G J

    1993-01-01

    Users of medical information systems need confidence in the security of the system they are using. They also need a method to evaluate and compare its security capabilities. Every system has its own requirements for maintaining confidentiality, integrity and availability. In order to meet these requirements a number of security functions must be specified covering areas such as access control, auditing, error recovery, etc. Appropriate confidence in these functions is also required. The 'trust' in trusted computer systems rests on their ability to prove that their secure mechanisms work as advertised and cannot be disabled or diverted. The general framework and requirements for medical database security and a number of parameters of the evaluation problem are presented and discussed. The problem of database security evaluation is then discussed, and a number of specific proposals are presented, based on a number of existing medical database security systems.

  11. Biological Databases for Behavioral Neurobiology

    PubMed Central

    Baker, Erich J.

    2014-01-01

    Databases are, at their core, abstractions of data and their intentionally derived relationships. They serve as a central organizing metaphor and repository, supporting or augmenting nearly all bioinformatics. Behavioral domains provide a unique stage for contemporary databases, as research in this area spans diverse data types, locations, and data relationships. This chapter provides foundational information on the diversity and prevalence of databases and on how data structures support the various needs of behavioral neuroscience analysis and interpretation. The focus is on the classes of databases, data curation, and advanced applications in bioinformatics, using examples largely drawn from research efforts in behavioral neuroscience. PMID:23195119

  12. Stackfile Database

    NASA Technical Reports Server (NTRS)

    deVarvalho, Robert; Desai, Shailen D.; Haines, Bruce J.; Kruizinga, Gerhard L.; Gilmer, Christopher

    2013-01-01

    This software provides storage, retrieval, and analysis functionality for managing satellite altimetry data. It improves on existing database software in efficiency, analysis capability, flexibility, and documentation. It offers flexibility in the type of data that can be stored, and retrieval is efficient across either the spatial domain or the time domain. Built-in analysis tools are provided for frequently performed altimetry tasks. This software package is used for storing and manipulating satellite measurement data. It was developed with a focus on handling the requirements of repeat-track altimetry missions such as TOPEX and Jason. It was, however, designed to work with a wide variety of satellite measurement data (e.g., from the Gravity Recovery And Climate Experiment, GRACE). The software consists of several command-line tools for importing, retrieving, and analyzing satellite measurement data.

  13. Database tomography for commercial application

    NASA Technical Reports Server (NTRS)

    Kostoff, Ronald N.; Eberhart, Henry J.

    1994-01-01

    Database tomography is a method for extracting themes and their relationships from text. The algorithms employed begin with word frequency and word proximity analysis and build upon these results. When the word 'database' is used, think of medical or police records, patents, journals, or papers, etc. (any text information that can be computer stored). Database tomography features a full-text, user-interactive technique enabling the user to identify areas of interest, establish relationships, and map trends for a deeper understanding of an area of interest. Database tomography concepts and applications have been reported in journals and presented at conferences. One important feature of the database tomography algorithm is that it can be used on a database of any size and will facilitate the user's ability to understand the volume of content therein. While employing the process to identify research opportunities it became obvious that this promising technology has potential applications for business, science, engineering, law, and academe. Examples include evaluating marketing trends, strategies, relationships, and associations. The database tomography process would also be a powerful component in the areas of competitive intelligence, national security intelligence, and patent analysis. User interest and involvement cannot be overemphasized.
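    The first two steps described above (word frequency and word proximity analysis) can be sketched in a few lines. This toy version, with an invented sample sentence and window size, only illustrates the idea; it is not the published algorithm.

```python
from collections import Counter

def tomography_sketch(text, window=5):
    """Toy sketch of the starting steps of database tomography."""
    # Step 1: raw word frequencies across the text.
    words = text.lower().split()
    freq = Counter(words)
    # Step 2: word proximity -- count word pairs co-occurring
    # within a sliding window of the given size.
    proximity = Counter()
    for i, w in enumerate(words):
        for other in words[i + 1 : i + 1 + window]:
            proximity[tuple(sorted((w, other)))] += 1
    return freq, proximity

freq, prox = tomography_sketch(
    "themes emerge when frequent words appear near other frequent words"
)
```

    Themes would then be built up from the highest-frequency words, with the proximity counts suggesting which of them are related.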

  14. The CEBAF Element Database

    SciTech Connect

    Theodore Larrieu, Christopher Slominski, Michele Joyce

    2011-03-01

    With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on the fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from original C++ source code into native libraries for scripting languages such as Perl, PHP, and Tcl, making access to the CED easy and ubiquitous.
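    An "introspective schema" in which new elements, types, and properties are defined with no changes to table structure resembles an entity-attribute-value layout. The sketch below illustrates that general pattern using SQLite; the table names and the sample 'Quadrupole' element are invented for illustration and are not the actual CED schema.

```python
import sqlite3

# Hypothetical entity-attribute-value tables; illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE element_type (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE element      (id INTEGER PRIMARY KEY, name TEXT,
                           type_id INTEGER REFERENCES element_type(id));
CREATE TABLE property     (id INTEGER PRIMARY KEY, name TEXT,
                           type_id INTEGER REFERENCES element_type(id));
CREATE TABLE prop_value   (element_id INTEGER REFERENCES element(id),
                           property_id INTEGER REFERENCES property(id),
                           val TEXT);
""")

# Defining a new type, element, or property is an INSERT -- no ALTER TABLE.
conn.execute("INSERT INTO element_type (name) VALUES ('Quadrupole')")
conn.execute("INSERT INTO element (name, type_id) VALUES ('MQB1L02', 1)")
conn.execute("INSERT INTO property (name, type_id) VALUES ('Length', 1)")
conn.execute("INSERT INTO prop_value VALUES (1, 1, '0.3')")

# Querying joins the value rows back to their element and property names.
row = conn.execute("""
    SELECT e.name, p.name, v.val
    FROM prop_value v
    JOIN element  e ON e.id = v.element_id
    JOIN property p ON p.id = v.property_id
""").fetchone()
```

    Versioned history across past and future configurations, handled in the CED by Oracle Workspace Manager, is not shown here.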

  15. Open Geoscience Database

    NASA Astrophysics Data System (ADS)

    Bashev, A.

    2012-04-01

    Currently there is an enormous number of geoscience databases. Unfortunately, the only users of the majority of these databases are their elaborators. There are several reasons for that: incompatibility, specificity of tasks and objects, and so on. However, the main obstacles to wide usage of geoscience databases are complexity for elaborators and complication for users. The complexity of architecture leads to high costs that block public access. The complication prevents users from understanding when and how to use the database. Only databases associated with GoogleMaps don't have these drawbacks, but they could hardly be called "geoscience". Nevertheless, an open and simple geoscience database is necessary at least for educational purposes (see our abstract for ESSI20/EOS12). We developed a database and a web interface to work with it, and it is now accessible at maps.sch192.ru. In this database a result is a value of a parameter (no matter which) at a station with a certain position, associated with metadata: the date when the result was obtained; the type of station (lake, soil, etc.); the contributor that sent the result. Each contributor has their own profile, which allows one to estimate the reliability of the data. The results can be represented on a GoogleMaps space image as a point at a certain position, coloured according to the value of the parameter. There are default colour scales, and each registered user can create their own scale. The results can also be extracted as a *.csv file. For both types of representation one can select the data by date, object type, parameter type, area, and contributor. The data are uploaded in *.csv format: Name of the station; Latitude (dd.dddddd); Longitude (ddd.dddddd); Station type; Parameter type; Parameter value; Date (yyyy-mm-dd). The contributor is recognised while entering. This is the minimal set of features required to connect a value of a parameter with a position and see the results. All the complicated data
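    Since the abstract spells out the upload fields, a minimal parser can be sketched as follows. The comma delimiter and the dictionary keys are assumptions: the abstract lists the fields but not the separator character.

```python
import csv
import io

# Field order follows the upload format listed in the abstract;
# the key names themselves are invented for this sketch.
FIELDS = ["station", "latitude", "longitude", "station_type",
          "parameter_type", "parameter_value", "date"]

def parse_upload(text, delimiter=","):
    """Parse an upload file into a list of dicts, one per station record."""
    rows = []
    for rec in csv.reader(io.StringIO(text), delimiter=delimiter):
        row = dict(zip(FIELDS, (field.strip() for field in rec)))
        row["latitude"] = float(row["latitude"])    # dd.dddddd
        row["longitude"] = float(row["longitude"])  # ddd.dddddd
        rows.append(row)
    return rows

rows = parse_upload("Lake-01,55.751244,37.618423,lake,pH,7.4,2012-04-01")
```

    Each parsed record carries exactly what the database needs: a parameter value tied to a position, a station type, and a date.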

  16. Protein Model Database

    SciTech Connect

    Fidelis, K; Adzhubej, A; Kryshtafovych, A; Daniluk, P

    2005-02-23

    The phenomenal success of the genome sequencing projects reveals the power of completeness in revolutionizing biological science. Currently it is possible to sequence entire organisms at a time, allowing for a systemic rather than fractional view of their organization and the various genome-encoded functions. There is an international plan to move towards a similar goal in the area of protein structure. This will not be achieved by experiment alone, but rather by a combination of efforts in crystallography, NMR spectroscopy, and computational modeling. Only a small fraction of structures are expected to be identified experimentally, the remainder to be modeled. Presently there is no organized infrastructure to critically evaluate and present these data to the biological community. The goal of the Protein Model Database project is to create such infrastructure, including (1) a public database of theoretically derived protein structures; (2) reliable annotation of protein model quality; (3) novel structure analysis tools; and (4) access to the highest quality modeling techniques available.

  17. Preliminary integrated geologic map databases for the United States: Digital data for the reconnaissance bedrock geologic map for the northern Alaska peninsula area, southwest Alaska

    USGS Publications Warehouse

    ,

    2006-01-01

    The growth in the use of Geographic Information Systems (GIS) has highlighted the need for digital geologic maps that have been attributed with information about age and lithology. Such maps can be conveniently used to generate derivative maps for manifold special purposes such as mineral-resource assessment, metallogenic studies, tectonic studies, and environmental research. This report is part of a series of integrated geologic map databases that cover the entire United States. Three national-scale geologic maps that portray most or all of the United States already exist; for the conterminous U.S., King and Beikman (1974a,b) compiled a map at a scale of 1:2,500,000, Beikman (1980) compiled a map for Alaska at 1:2,500,000 scale, and for the entire U.S., Reed and others (2005a,b) compiled a map at a scale of 1:5,000,000. A digital version of the King and Beikman map was published by Schruben and others (1994). Reed and Bush (2004) produced a digital version of the Reed and others (2005a) map for the conterminous U.S. The present series of maps is intended to provide the next step in increased detail. State geologic maps that range in scale from 1:100,000 to 1:1,000,000 are available for most of the country, and digital versions of these state maps are the basis of this product. The digital geologic maps presented here are in a standardized format as ARC/INFO export files and as ArcView shape files. Data tables that relate the map units to detailed lithologic and age information accompany these GIS files. The map is delivered as a set of 1:250,000-scale quadrangle files. To the best of our ability, these quadrangle files are edge-matched with respect to geology. When the maps are merged, the combined attribute tables can be used directly with the merged maps to make derivative maps.

  18. Draft secure medical database standard.

    PubMed

    Pangalos, George

    2002-01-01

    Medical database security is a particularly important issue for all healthcare establishments. Medical information systems are intended to support a wide range of pertinent health issues today, for example: assure the quality of care, support effective management of the health services institutions, monitor and contain the cost of care, implement technology into care without violating social values, ensure the equity and availability of care, and preserve humanity despite the proliferation of technology. In this context, medical database security aims primarily to support: high availability, accuracy and consistency of the stored data, medical professional secrecy and confidentiality, and the protection of the privacy of the patient. These properties, though of a technical nature, basically require that the system is actually helpful for medical care and not harmful to patients. These latter properties require in turn not only that fundamental ethical principles are not violated by employing database systems, but that they are effectively enforced by technical means. This document reviews the existing and emerging work on the security of medical database systems. It presents in detail the problems and requirements of medical database security. It addresses the problems of medical database security policies, secure design methodologies and implementation techniques. It also describes the current legal framework and regulatory requirements for medical database security. The issue of medical database security guidelines is examined in detail, and current national and international efforts in the area are studied, along with an overview of research work in the area. The document also presents in detail what is, to our knowledge, the most complete set of security guidelines for the development and operation of medical database systems.

  19. Database Support for Research in Public Administration

    ERIC Educational Resources Information Center

    Tucker, James Cory

    2005-01-01

    This study examines the extent to which databases support student and faculty research in the area of public administration. A list of journals in public administration, public policy, political science, public budgeting and finance, and other related areas was compared to the journal content list of six business databases. These databases…

  20. Global Cropland Area Database (GCAD) derived from Remote Sensing in Support of Food Security in the Twenty-first Century: Current Achievements and Future Possibilities

    USGS Publications Warehouse

    Teluguntla, Pardhasaradhi G.; Thenkabail, Prasad S.; Xiong, Jun N.; Gumma, Murali Krishna; Giri, Chandra; Milesi, Cristina; Ozdogan, Mutlu; Congalton, Russ; Tilton, James; Sankey, Temuulen Tsagaan; Massey, Richard; Phalke, Aparna; Yadav, Kamini

    2015-01-01

    The precise estimation of global agricultural cropland extents, areas, geographic locations, crop types, cropping intensities, and their watering methods (irrigated or rainfed; type of irrigation) provides a critical scientific basis for the development of water and food security policies (Thenkabail et al., 2012, 2011, 2010). By year 2100, the global human population is expected to grow to 10.4 billion under median fertility variants, or higher under constant or higher fertility variants (Table 1), with over three quarters living in developing countries, in regions that already lack the capacity to produce enough food. With current agricultural practices, the increased demand for food and nutrition would require about 2 billion hectares of additional cropland, roughly twice the land area of the United States, and lead to significant increases in greenhouse gas production (Tillman et al., 2011). For example, during 1960-2010 the world population more than doubled from 3 billion to 7 billion. The nutritional demand of the population also grew swiftly during this period, from an average of about 2000 calories per day per person in 1960 to nearly 3000 calories per day per person in 2010. The food demand of the increased population, along with increased nutritional demand during this period (1960-2010), was met by the “green revolution”, which more than tripled food production, even though croplands decreased from about 0.43 ha/capita to 0.26 ha/capita (FAO, 2009). The increase in food production during the green revolution was the result of factors such as: (a) expansion in irrigated areas, which increased from 130 Mha in the 1960s to 278.4 Mha in year 2000 (Siebert et al., 2006), or 399 Mha when cropping intensity is not considered (Thenkabail et al., 2009a, 2009b, 2009c), or 467 Mha when cropping intensity is considered (Thenkabail et al., 2009a; Thenkabail et al., 2009c); (b) increase in yield and per capita food production (e.g., cereal production

  1. JICST Factual Database JICST DNA Database

    NASA Astrophysics Data System (ADS)

    Shirokizawa, Yoshiko; Abe, Atsushi

    Japan Information Center of Science and Technology (JICST) started the online service of its DNA database in October 1988. This database is composed of the EMBL Nucleotide Sequence Library and the Genetic Sequence Data Bank. The authors outline the database system, data items, and search commands. Examples of retrieval sessions are presented.

  2. Reflective Database Access Control

    ERIC Educational Resources Information Center

    Olson, Lars E.

    2009-01-01

    "Reflective Database Access Control" (RDBAC) is a model in which a database privilege is expressed as a database query itself, rather than as a static privilege contained in an access control list. RDBAC aids the management of database access controls by improving the expressiveness of policies. However, such policies introduce new interactions…
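    A toy rendering of this idea, a privilege that is itself a query over the data rather than a static list entry, might look like the following. The schema and the "managers may read their direct reports' salaries" policy are invented here purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (name TEXT, salary INTEGER, manager TEXT);
INSERT INTO employee VALUES
    ('alice', 90, NULL), ('bob', 60, 'alice'),
    ('carol', 55, 'alice'), ('dan', 50, 'bob');
""")

def readable_salaries(user):
    # The privilege is expressed as a query over the data itself:
    # a user may read the salaries of their direct reports.
    return conn.execute(
        "SELECT name, salary FROM employee WHERE manager = ?", (user,)
    ).fetchall()

rows = readable_salaries("alice")
```

    Because the policy is data-dependent, it updates itself as the employee table changes, which is the expressiveness gain (and the source of the new interactions) that the abstract refers to.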

  3. Databases: Beyond the Basics.

    ERIC Educational Resources Information Center

    Whittaker, Robert

    This presented paper offers an elementary description of database characteristics and then provides a survey of databases that may be useful to the teacher and researcher in Slavic and East European languages and literatures. The survey focuses on commercial databases that are available, usable, and needed. Individual databases discussed include:…

  4. Human Mitochondrial Protein Database

    National Institute of Standards and Technology Data Gateway

    SRD 131 Human Mitochondrial Protein Database (Web, free access)   The Human Mitochondrial Protein Database (HMPDb) provides comprehensive data on mitochondrial and human nuclear encoded proteins involved in mitochondrial biogenesis and function. This database consolidates information from SwissProt, LocusLink, Protein Data Bank (PDB), GenBank, Genome Database (GDB), Online Mendelian Inheritance in Man (OMIM), Human Mitochondrial Genome Database (mtDB), MITOMAP, Neuromuscular Disease Center and Human 2-D PAGE Databases. This database is intended as a tool to aid not only in studying the mitochondrion but also in studying the associated diseases.

  5. UGTA Photograph Database

    SciTech Connect

    NSTec Environmental Restoration

    2009-04-20

    One of the advantages of the Nevada Test Site (NTS) is that most of the geologic and hydrologic features such as hydrogeologic units (HGUs), hydrostratigraphic units (HSUs), and faults, which are important aspects of flow and transport modeling, are exposed at the surface somewhere in the vicinity of the NTS and thus are available for direct observation. However, due to access restrictions and the remote locations of many of the features, most Underground Test Area (UGTA) participants cannot observe these features directly in the field. Fortunately, National Security Technologies, LLC, geologists and their predecessors have photographed many of these features through the years. During fiscal year 2009, work was done to develop an online photograph database for use by the UGTA community. Photographs were organized, compiled, and imported into Adobe® Photoshop® Elements 7. The photographs were then assigned keyword tags such as alteration type, HGU, HSU, location, rock feature, rock type, and stratigraphic unit. Some fully tagged photographs were then selected and uploaded to the UGTA website. This online photograph database provides easy access for all UGTA participants and can help “ground truth” their analytical and modeling tasks. It also provides new participants a resource to more quickly learn the geology and hydrogeology of the NTS.

  6. The Status of Statewide Subscription Databases

    ERIC Educational Resources Information Center

    Krueger, Karla S.

    2012-01-01

    This qualitative content analysis presents subscription databases available to school libraries through statewide purchases. The results may help school librarians evaluate grade and subject-area coverage, make comparisons to recommended databases, and note potential suggestions for their states to include in future contracts or for local…

  7. Developing Database Files for Student Use.

    ERIC Educational Resources Information Center

    Warner, Michael

    1988-01-01

    Presents guidelines for creating student database files that supplement classroom teaching. Highlights include determining educational objectives, planning the database with computer specialists and subject area specialists, data entry, and creating student worksheets. Specific examples concerning elements of the periodic table and…

  8. Electronic Reference Library: Silverplatter's Database Networking Solution.

    ERIC Educational Resources Information Center

    Millea, Megan

    Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…

  9. Online Database Coverage of Forensic Medicine.

    ERIC Educational Resources Information Center

    Snow, Bonnie; Ifshin, Steven L.

    1984-01-01

    Online searches of sample topics in the area of forensic medicine were conducted in the following life science databases: Biosis Previews, Excerpta Medica, Medline, Scisearch, and Chemical Abstracts Search. Search outputs analyzed according to criteria of recall, uniqueness, overlap, and utility reveal the need for a cross-database approach to…

  10. Physiological Information Database (PID)

    EPA Science Inventory

    EPA has developed a physiological information database (created using Microsoft ACCESS) intended to be used in PBPK modeling. The database contains physiological parameter values for humans from early childhood through senescence as well as similar data for laboratory animal spec...

  11. THE ECOTOX DATABASE

    EPA Science Inventory

    The database provides chemical-specific toxicity information for aquatic life, terrestrial plants, and terrestrial wildlife. ECOTOX is a comprehensive ecotoxicology database and is therefore essential for providing and supporting high quality models needed to estimate population...

  12. Aviation Safety Issues Database

    NASA Technical Reports Server (NTRS)

    Morello, Samuel A.; Ricks, Wendell R.

    2009-01-01

    The aviation safety issues database was instrumental in the refinement and substantiation of the National Aviation Safety Strategic Plan (NASSP). The issues database is a comprehensive set of issues from an extremely broad base of aviation functions, personnel, and vehicle categories, both nationally and internationally. Several aviation safety stakeholders, such as the Commercial Aviation Safety Team (CAST), have already used the database. This broader interest was the genesis for making the database publicly accessible and writing this report.

  13. Scopus database: a review.

    PubMed

    Burnham, Judy F

    2006-03-08

    The Scopus database provides access to STM journal articles and the references included in those articles, allowing the searcher to search both forward and backward in time. The database can be used for collection development as well as for research. This review provides information on the key points of the database and compares it to Web of Science. Neither database is all-inclusive; rather, the two complement each other. If a library can afford only one, the choice must be based on institutional needs.

  14. JICST Factual Database

    NASA Astrophysics Data System (ADS)

    Hayase, Shuichi; Okano, Keiko

    Japan Information Center of Science and Technology (JICST) started the online service of the JICST Crystal Structure Database (JICST CR) in January 1990. This database provides information on atomic positions in a crystal and related information about the crystal. The database system and the crystal data in JICST CR are outlined in this manuscript.

  15. Map-Based Querying for Multimedia Database

    DTIC Science & Technology

    2014-09-01

    This report describes querying existing assets in a custom multimedia database based on an area of interest. It also describes the development of a custom Web service and the augmentation of the Android Tactical Assault Kit (ATAK), version 2.0, to gather spatial information from the map engine and to allow for selection and specification of an area of interest. SUBJECT TERMS: android web service client, map based database query, android

  16. NCSL National Measurement Interlaboratory Comparison Database requirements

    SciTech Connect

    WHEELER,JAMES C.; PETTIT,RICHARD B.

    2000-04-20

    With the recent development of an International Comparisons Database, which provides worldwide access to measurement comparison data between National Measurement Institutes, there is renewed interest in developing a database of comparisons for calibration laboratories within a country. For many years, the National Conference of Standards Laboratories (NCSL), through the Measurement Comparison Programs Committee, has sponsored interlaboratory comparisons in a variety of measurement areas. This paper will discuss the need for such a national database, one that catalogues and maintains interlaboratory comparison data. The paper will also discuss future requirements in this area.

  17. Environmental databases and other computerized information tools

    NASA Technical Reports Server (NTRS)

    Clark-Ingram, Marceia

    1995-01-01

    Increasing environmental legislation has brought about the development of many new environmental databases and software application packages to aid in the quest for environmental compliance. These databases and software packages are useful tools and applicable to a wide range of environmental areas from atmospheric modeling to materials replacement technology. The great abundance of such products and services can be very overwhelming when trying to identify the tools which best meet specific needs. This paper will discuss the types of environmental databases and software packages available. This discussion will also encompass the affected environmental areas of concern, product capabilities, and hardware requirements for product utilization.

  18. The NCBI Taxonomy database.

    PubMed

    Federhen, Scott

    2012-01-01

    The NCBI Taxonomy database (http://www.ncbi.nlm.nih.gov/taxonomy) is the standard nomenclature and classification repository for the International Nucleotide Sequence Database Collaboration (INSDC), comprising the GenBank, ENA (EMBL) and DDBJ databases. It includes organism names and taxonomic lineages for each of the sequences represented in the INSDC's nucleotide and protein sequence databases. The taxonomy database is manually curated by a small group of scientists at the NCBI who use the current taxonomic literature to maintain a phylogenetic taxonomy for the source organisms represented in the sequence databases. The taxonomy database is a central organizing hub for many of the resources at the NCBI, and provides a means for clustering elements within other domains of the NCBI web site, for internal linking between domains of the Entrez system, and for linking out to taxon-specific external resources on the web. Our primary purpose is to index the domain of sequences as conveniently as possible for our user community.
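    The lineages such a database serves can be derived from a simple parent-pointer table, with each node storing its parent until the root is reached. A minimal sketch (the node IDs and names below are illustrative, not actual NCBI taxonomy records):

```python
# Minimal parent-pointer taxonomy, as used to derive lineages.
# taxid -> (name, parent taxid); the root points to itself.
NODES = {
    1: ("root", 1),
    2: ("Bacteria", 1),
    1224: ("Proteobacteria", 2),
    28211: ("Alphaproteobacteria", 1224),
}

def lineage(taxid):
    """Walk parent pointers up to the root, returning names root-first."""
    names = []
    while True:
        name, parent = NODES[taxid]
        names.append(name)
        if parent == taxid:  # reached the root
            break
        taxid = parent
    return list(reversed(names))
```

    The real database additionally stores ranks, name classes (synonyms, common names), and merged-node history, but computing a lineage reduces to this upward walk.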

  19. IDPredictor: predict database links in biomedical database.

    PubMed

    Mehlhorn, Hendrik; Lange, Matthias; Scholz, Uwe; Schreiber, Falk

    2012-06-26

    Knowledge found in biomedical databases, in particular in Web information systems, is a major bioinformatics resource. In general, this biological knowledge is represented worldwide in a network of databases. These data are spread among thousands of databases, which overlap in content but differ substantially with respect to content detail, interface, formats, and data structure. To support functional annotation of lab data, such as protein sequences, metabolites, or DNA sequences, as well as semi-automated data exploration in information retrieval environments, an integrated view of databases is essential. Search engines have the potential to assist in data retrieval from these structured sources, but fall short of providing comprehensive knowledge across the interlinked databases. A prerequisite for supporting the concept of an integrated data view is to acquire insight into cross-references among database entities. This is hampered by the fact that only a fraction of all possible cross-references are explicitly tagged in the particular biomedical information systems. In this work, we investigate to what extent an automated construction of an integrated data network is possible. We propose a method that predicts and extracts cross-references from multiple life science databases and possible referenced data targets. We study the retrieval quality of our method and report first, promising results. The method is implemented as the tool IDPredictor, which is published under the DOI 10.5447/IPK/2012/4 and is freely available at the URL: http://dx.doi.org/10.5447/IPK/2012/4.

  20. An Introduction to Database Structure and Database Machines.

    ERIC Educational Resources Information Center

    Detweiler, Karen

    1984-01-01

    Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…

  1. Geochronology Database for Central Colorado

    USGS Publications Warehouse

    Klein, T.L.; Evans, K.V.; deWitt, E.H.

    2010-01-01

    This database is a compilation of published and some unpublished isotopic and fission track age determinations in central Colorado. The compiled area extends from the southern Wyoming border to the northern New Mexico border and from approximately the longitude of Denver on the east to Gunnison on the west. Data for the tephrochronology of Pleistocene volcanic ash, carbon-14, Pb-alpha, common-lead, and U-Pb determinations on uranium ore minerals have been excluded.

  2. The CATDAT damaging earthquakes database

    NASA Astrophysics Data System (ADS)

    Daniell, J. E.; Khazai, B.; Wenzel, F.; Vervaeck, A.

    2011-08-01

    The global CATDAT damaging earthquakes and secondary effects (tsunami, fire, landslides, liquefaction and fault rupture) database was developed to validate, remove discrepancies from, and greatly expand upon existing global databases, and to better understand trends in vulnerability, exposure, and the possible future impacts of such historic earthquakes. In the authors' view, the lack of consistency and the errors in other frequently cited earthquake loss databases were major shortcomings that needed to be improved upon. Over 17,000 sources of information have been utilised, primarily in the last few years, to present data on over 12,200 damaging earthquakes historically, with over 7,000 earthquakes since 1900 examined and validated before insertion into the database. Each validated earthquake includes seismological information, building damage, ranges of social losses to account for varying sources (deaths, injuries, homeless, and affected), and economic losses (direct, indirect, aid, and insured). Globally, the slightly increasing trend in economic damage due to earthquakes is not consistent with the greatly increasing exposure. The 1923 Great Kanto earthquake (214 billion USD damage; 2011 HNDECI-adjusted dollars), compared with the 2011 Tohoku (>300 billion USD at the time of writing), 2008 Sichuan, and 1995 Kobe earthquakes, illustrates the growing concern over economic loss in urban areas, a trend that should be expected to continue. Many economic and social loss values not reported in existing databases have been collected. Historical GDP (Gross Domestic Product), exchange rate, wage information, population, HDI (Human Development Index), and insurance information have been collected globally to allow comparisons. This catalogue is the largest known cross-checked global historic damaging earthquake database and should have far-reaching consequences for earthquake loss estimation, socio-economic analysis, and the global reinsurance field.

  3. Online Bibliographic Searching in the Humanities Databases: An Introduction.

    ERIC Educational Resources Information Center

    Suresh, Raghini S.

    Numerous easily accessible databases cover almost every subject area in the humanities. The principal database resources in the humanities are described. There are two major database vendors for humanities information: BRS (Bibliographic Retrieval Services) and DIALOG Information Services, Inc. As an introduction to online searching, this article…

  4. 2010 Worldwide Gasification Database

    DOE Data Explorer

    The 2010 Worldwide Gasification Database describes the current world gasification industry and identifies near-term planned capacity additions. The database lists gasification projects and includes information (e.g., plant location, number and type of gasifiers, syngas capacity, feedstock, and products). The database reveals that the worldwide gasification capacity has continued to grow for the past several decades and is now at 70,817 megawatts thermal (MWth) of syngas output at 144 operating plants with a total of 412 gasifiers.

  5. ITS-90 Thermocouple Database

    National Institute of Standards and Technology Data Gateway

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).

  6. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  7. Mugshot Identification Database (MID)

    National Institute of Standards and Technology Data Gateway

    NIST Mugshot Identification Database (MID) (PC database for purchase)   NIST Special Database 18 is being distributed for use in development and testing of automated mugshot identification systems. The database consists of three CD-ROMs, containing a total of 3248 images of variable size using lossless compression. A newer version of the compression/decompression software on the CD-ROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.

  8. Databases for Microbiologists

    DOE PAGES

    Zhulin, Igor B.

    2015-05-26

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists.

  9. Databases for Microbiologists

    PubMed Central

    2015-01-01

    Databases play an increasingly important role in biology. They archive, store, maintain, and share information on genes, genomes, expression data, protein sequences and structures, metabolites and reactions, interactions, and pathways. All these data are critically important to microbiologists. Furthermore, microbiology has its own databases that deal with model microorganisms, microbial diversity, physiology, and pathogenesis. Thousands of biological databases are currently available, and it becomes increasingly difficult to keep up with their development. The purpose of this minireview is to provide a brief survey of current databases that are of interest to microbiologists. PMID:26013493

  10. HIV Sequence Databases

    PubMed Central

    Kuiken, Carla; Korber, Bette; Shafer, Robert W.

    2008-01-01

    Two important databases are often used in HIV genetic research, the HIV Sequence Database in Los Alamos, which collects all sequences and focuses on annotation and data analysis, and the HIV RT/Protease Sequence Database in Stanford, which collects sequences associated with the development of viral resistance against anti-retroviral drugs and focuses on analysis of those sequences. The types of data and services these two databases offer, the tools they provide, and the way they are set up and operated are described in detail. PMID:12875108

  11. Common hyperspectral image database design

    NASA Astrophysics Data System (ADS)

    Tian, Lixun; Liao, Ningfang; Chai, Ali

    2009-11-01

    This paper introduces the Common Hyperspectral Image Database (CHIDB), built with a demand-oriented database design method, which brings together ground-based spectra, standardized hyperspectral cubes, and spectral analysis to serve a range of applications. The paper presents an integrated approach to retrieving spectral and spatial patterns from remotely sensed imagery using state-of-the-art data mining and advanced database technologies; data mining concepts and functions were incorporated into CHIDB to make it better suited to agricultural, geological, and environmental applications. A broad range of data from multiple regions of the electromagnetic spectrum is supported, including ultraviolet, visible, near-infrared, thermal infrared, and fluorescence. CHIDB is based on the .NET framework and designed with an MVC architecture comprising five main functional modules: Data Importer/Exporter, Image/Spectrum Viewer, Data Processor, Parameter Extractor, and On-line Analyzer. The original data are stored in SQL Server 2008 for efficient search, query, and update, and advanced spectral image processing techniques, such as parallel processing in C#, are used. Finally, an application case in agricultural disease detection is presented.

  12. Consumer Product Category Database

    EPA Pesticide Factsheets

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  13. BioImaging Database

    SciTech Connect

    David Nix, Lisa Simirenko

    2006-10-25

    The BioImaging Database (BID) is a relational database developed to store the data and metadata for 3D gene expression in early Drosophila embryo development at the cellular level. The schema was written for the MySQL DBMS but, with minor modifications, can be used on any SQL-compliant relational DBMS.
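    As a rough illustration of such a schema, the fragment below stores per-cell expression values keyed to embryos. It is exercised with SQLite for portability, and the table and column names are invented for illustration; they are not the actual BID schema:

```python
import sqlite3

# Hypothetical fragment of a cellular-resolution gene-expression schema;
# table and column names are illustrative, not the actual BID schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE embryo (id INTEGER PRIMARY KEY, stage TEXT);
CREATE TABLE cell (id INTEGER PRIMARY KEY,
                   embryo_id INTEGER REFERENCES embryo(id),
                   x REAL, y REAL, z REAL);
CREATE TABLE expression (cell_id INTEGER REFERENCES cell(id),
                         gene TEXT, level REAL);
""")
conn.execute("INSERT INTO embryo VALUES (1, 'stage 5')")
conn.execute("INSERT INTO cell VALUES (1, 1, 0.1, 0.2, 0.3)")
conn.execute("INSERT INTO expression VALUES (1, 'eve', 0.87)")

# Join cells to their expression values for one embryo.
rows = conn.execute("""
    SELECT c.id, e.gene, e.level
    FROM cell c JOIN expression e ON e.cell_id = c.id
    WHERE c.embryo_id = 1
""").fetchall()
```

    Because only standard SQL features are used, the same statements port to MySQL or any other SQL-compliant DBMS, which is the portability point the abstract makes.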

  14. Biological Macromolecule Crystallization Database

    National Institute of Standards and Technology Data Gateway

    SRD 21 Biological Macromolecule Crystallization Database (Web, free access)   The Biological Macromolecule Crystallization Database and NASA Archive for Protein Crystal Growth Data (BMCD) contains the conditions reported for the crystallization of proteins and nucleic acids used in X-ray structure determinations and archives the results of microgravity macromolecule crystallization studies.

  15. Online Database Searching Workbook.

    ERIC Educational Resources Information Center

    Littlejohn, Alice C.; Parker, Joan M.

    Designed primarily for use by first-time searchers, this workbook provides an overview of online searching. Following a brief introduction which defines online searching, databases, and database producers, five steps in carrying out a successful search are described: (1) identifying the main concepts of the search statement; (2) selecting a…

  16. HIV Structural Database

    National Institute of Standards and Technology Data Gateway

    SRD 102 HIV Structural Database (Web, free access)   The HIV Protease Structural Database is an archive of experimentally determined 3-D structures of Human Immunodeficiency Virus 1 (HIV-1), Human Immunodeficiency Virus 2 (HIV-2) and Simian Immunodeficiency Virus (SIV) Proteases and their complexes with inhibitors or products of substrate cleavage.

  17. Atomic Spectra Database (ASD)

    National Institute of Standards and Technology Data Gateway

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  18. Structural Ceramics Database

    National Institute of Standards and Technology Data Gateway

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  19. Morchella MLST database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Welcome to the Morchella MLST database. This dedicated database was set up at the CBS-KNAW Biodiversity Center by Vincent Robert in February 2012, using BioloMICS software (Robert et al., 2011), to facilitate DNA sequence-based identifications of Morchella species via the Internet. The current datab...

  20. A Quality System Database

    NASA Technical Reports Server (NTRS)

    Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William

    2010-01-01

    A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.

  1. Knowledge Discovery in Databases.

    ERIC Educational Resources Information Center

    Norton, M. Jay

    1999-01-01

    Knowledge discovery in databases (KDD) revolves around the investigation and creation of knowledge, processes, algorithms, and mechanisms for retrieving knowledge from data collections. The article is an introductory overview of KDD. The rationale and environment of its development and applications are discussed. Issues related to database design…

  2. Ionic Liquids Database- (ILThermo)

    National Institute of Standards and Technology Data Gateway

    SRD 147 Ionic Liquids Database- (ILThermo) (Web, free access)   IUPAC Ionic Liquids Database, ILThermo, is a free web research tool that allows users worldwide to access an up-to-date data collection from the publications on experimental investigations of thermodynamic, and transport properties of ionic liquids as well as binary and ternary mixtures containing ionic liquids.

  3. Database Reviews: Legal Information.

    ERIC Educational Resources Information Center

    Seiser, Virginia

    Detailed reviews of two legal information databases--"Laborlaw I" and "Legal Resource Index"--are presented in this paper. Each database review begins with a bibliographic entry listing the title; producer; vendor; cost per hour contact time; offline print cost per citation; time period covered; frequency of updates; and size…

  4. Evolution of Database Replication Technologies for WLCG

    NASA Astrophysics Data System (ADS)

    Baranowski, Zbigniew; Lobato Pardavila, Lorena; Blaszczyk, Marcin; Dimitrov, Gancho; Canali, Luca

    2015-12-01

    In this article we summarize several years of experience with database replication technologies used at WLCG and provide a short review of the available Oracle technologies and their key characteristics. One of the notable changes and improvements in this area in the recent past has been the introduction of Oracle GoldenGate as a replacement for Oracle Streams. We report on the preparation for, and later upgrades of, remote replication done in collaboration with ATLAS and Tier 1 database administrators, including the experience of running Oracle GoldenGate in production. Moreover, we report on another key technology in this area: Oracle Active Data Guard, which has been adopted in several mission-critical use cases for database replication between the online and offline databases of the LHC experiments.

  5. The world bacterial biogeography and biodiversity through databases: a case study of NCBI Nucleotide Database and GBIF Database.

    PubMed

    Selama, Okba; James, Phillip; Nateche, Farida; Wellington, Elizabeth M H; Hacène, Hocine

    2013-01-01

    Databases are an essential tool and resource within the field of bioinformatics. The primary aim of this study was to generate an overview of global bacterial biodiversity and biogeography using available data from the two largest public online databases, NCBI Nucleotide and GBIF. The secondary aim was to highlight the contribution each geographic area makes to each database. The basis for the data analysis was the metadata provided by both databases, mainly the taxonomy and the geographic area of origin of isolation of each microorganism (record). These were obtained directly from GBIF through the online interface, while E-utilities and Python were used in combination with programmatic web service access to obtain data from the NCBI Nucleotide Database. Results indicate that the American continent, and more specifically the USA, is the top contributor, while Africa and Antarctica are less well represented. This highlights an imbalance of exploration within these areas rather than any reduction in biodiversity. This study describes a novel approach to generating global-scale patterns of bacterial biodiversity and biogeography and indicates that the Proteobacteria are the most abundant and widely distributed phylum within both databases.
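    The programmatic NCBI access described above can be sketched with the E-utilities `esearch` endpoint. The snippet below only constructs the request URL (no network call is made), and the query term is illustrative:

```python
from urllib.parse import urlencode

# Sketch of building an NCBI E-utilities "esearch" request for the
# Nucleotide database; the query term here is illustrative.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Return the esearch URL for a database and query term."""
    params = urlencode({"db": db, "term": term, "retmax": retmax})
    return f"{EUTILS}/esearch.fcgi?{params}"

url = esearch_url("nucleotide", "Proteobacteria[Organism]")
```

    Fetching the resulting URL (e.g. with urllib.request.urlopen) returns XML listing matching record IDs, whose full metadata can then be retrieved with the efetch endpoint.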

  6. The NMT-5 criticality database

    SciTech Connect

    Cort, B.; Perkins, B.; Cort, G.

    1995-05-01

    The NMT-5 Criticality Database maintains criticality-related data and documentation to ensure the safety of workers handling special nuclear materials at the Plutonium Facility (TA-55) at Los Alamos National Laboratory. The database contains pertinent criticality safety limit information for more than 150 separate locations at which special nuclear materials are handled. Written in 4th Dimension for the Macintosh, it facilitates the production of signs for posting at these areas, tracks the history of postings and related authorizing documentation, and generates in Microsoft Word a current, comprehensive representation of all signs and supporting documentation, such as standard operating procedures and signature approvals. It facilitates the auditing process and is crucial to full and effective compliance with Department of Energy regulations. It has been recommended for installation throughout the Nuclear Materials Technology Division at Los Alamos.

  7. Comparison of Savannah River Site's meteorological databases

    SciTech Connect

    Weber, A.H.

    1993-07-01

    A five-year meteorological database from the 61-meter H-Area tower for the period 1987-1991 was compared to an earlier database for the period 1982-1986. The amount of invalid data in the newer 87-91 database was one third of that in the earlier database. The data recovery percentage for the last four years of the 87-91 database was well above 90%. Considerable effort was necessary to fill in missing data periods in the newer H-Area tower database. Therefore, additional databases prepared for the remaining SRS meteorological towers have had missing and erroneous data flagged, but not replaced. The F-Area tower's database was used for cross-comparison purposes because of its proximity to H Area. The primary purpose of this report is to compare the H-Tower databases for 82-86 and 87-91. Statistical methods enable probability statements to be made concerning the hypothesis of no difference between the distributions of the two time periods, assuming each database is a random sample from its respective distribution; this assumption is required for the statistical tests to be valid. A number of statistical comparisons can be made between the two data sets, even though the 82-86 database exists only as distributions of frequency and mean speed.

  8. National Database of Geriatrics

    PubMed Central

    Kannegaard, Pia Nimann; Vinding, Kirsten L; Hare-Bruun, Helle

    2016-01-01

    Aim of database The aim of the National Database of Geriatrics is to monitor the quality of interdisciplinary diagnostics and treatment of patients admitted to a geriatric hospital unit. Study population The database population consists of patients who were admitted to a geriatric hospital unit. Geriatric patients cannot be defined by specific diagnoses. A geriatric patient is typically a frail multimorbid elderly patient with decreasing functional ability and social challenges. The database includes 14–15,000 admissions per year, and the database completeness has been stable at 90% during the past 5 years. Main variables An important part of the geriatric approach is the interdisciplinary collaboration. Indicators, therefore, reflect the combined efforts directed toward the geriatric patient. The indicators include Barthel index, body mass index, de Morton Mobility Index, Chair Stand, percentage of discharges with a rehabilitation plan, and the part of cases where an interdisciplinary conference has taken place. Data are recorded by doctors, nurses, and therapists in a database and linked to the Danish National Patient Register. Descriptive data Descriptive patient-related data include information about home, mobility aid, need of fall and/or cognitive diagnosing, and categorization of cause (general geriatric, orthogeriatric, or neurogeriatric). Conclusion The National Database of Geriatrics covers ∼90% of geriatric admissions in Danish hospitals and provides valuable information about a large and increasing patient population in the health care system. PMID:27822120

  9. Hazard Analysis Database Report

    SciTech Connect

    GRAMS, W.H.

    2000-12-28

    The Hazard Analysis Database was developed in conjunction with the hazard analysis activities conducted in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Safety Analysis Reports, for HNF-SD-WM-SAR-067, Tank Farms Final Safety Analysis Report (FSAR). The FSAR is part of the approved Authorization Basis (AB) for the River Protection Project (RPP). This document describes, identifies, and defines the contents and structure of the Tank Farms FSAR Hazard Analysis Database and documents the configuration control changes made to the database. The Hazard Analysis Database contains the collection of information generated during the initial hazard evaluations and the subsequent hazard and accident analysis activities. The Hazard Analysis Database supports the preparation of Chapters 3, 4, and 5 of the Tank Farms FSAR and the Unreviewed Safety Question (USQ) process and consists of two major, interrelated data sets: (1) Hazard Analysis Database: data from the results of the hazard evaluations, and (2) Hazard Topography Database: data from the system familiarization and hazard identification.

  10. Glycoproteomic and glycomic databases.

    PubMed

    Baycin Hizal, Deniz; Wolozny, Daniel; Colao, Joseph; Jacobson, Elena; Tian, Yuan; Krag, Sharon S; Betenbaugh, Michael J; Zhang, Hui

    2014-01-01

    Protein glycosylation serves critical roles in the cellular and biological processes of many organisms. Aberrant glycosylation has been associated with many illnesses such as hereditary and chronic diseases like cancer, cardiovascular diseases, neurological disorders, and immunological disorders. Emerging mass spectrometry (MS) technologies that enable the high-throughput identification of glycoproteins and glycans have accelerated the analysis and made possible the creation of dynamic and expanding databases. Although glycosylation-related databases have been established by many laboratories and institutions, they are not yet widely known in the community. Our study reviews 15 different publicly available databases and identifies their key elements so that users can identify the most applicable platform for their analytical needs. These databases include biological information on the experimentally identified glycans and glycopeptides from various cells and organisms such as human, rat, mouse, fly and zebrafish. The features of these databases (7 for glycoproteomic data, 6 for glycomic data, and 2 for glycan-binding proteins) are summarized, including the enrichment techniques that are used for glycoproteome and glycan identification. Furthermore, databases such as Unipep, GlycoFly, and GlycoFish, recently established by our group, are introduced. The unique features of each database, such as the analytical methods used and the bioinformatic tools available, are summarized. This information will be a valuable resource for the glycobiology community as it presents the analytical methods and glycosylation-related databases together in one compendium. It also represents a step toward the desired long-term goal of integrating the different glycosylation databases in order to better characterize and categorize glycoproteins and glycans for biomedical research.

  11. An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nicholas; Sellis, Timos

    1994-01-01

    We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with that of the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
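    The curve-fitting step for selectivity estimation can be sketched as a least-squares polynomial fit over query-feedback observations. The feedback pairs and the polynomial degree below are assumptions for illustration, not values from the paper:

```python
import numpy as np

# Sketch of query-feedback selectivity estimation: fit a least-squares
# polynomial to (predicate value, observed selectivity) pairs collected
# from prior query executions, then use it to predict new selectivities.
# The feedback data below is synthetic, for illustration only.
feedback_values = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
observed_selectivity = np.array([0.05, 0.12, 0.21, 0.33, 0.48])

# Degree-2 least-squares fit (splines are the other technique named above).
coeffs = np.polyfit(feedback_values, observed_selectivity, deg=2)

def estimate_selectivity(v):
    """Predict the selectivity of a predicate with constant v."""
    return float(np.polyval(coeffs, v))

est = estimate_selectivity(35.0)
```

    Each executed query contributes a new (value, observed selectivity) pair, so refitting periodically keeps the estimates tracking the actual data distribution rather than static off-line statistics.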

  12. Phase Equilibria Diagrams Database

    National Institute of Standards and Technology Data Gateway

    SRD 31 NIST/ACerS Phase Equilibria Diagrams Database (PC database for purchase)   The Phase Equilibria Diagrams Database contains commentaries and more than 21,000 diagrams for non-organic systems, including those published in all 21 hard-copy volumes produced as part of the ACerS-NIST Phase Equilibria Diagrams Program (formerly titled Phase Diagrams for Ceramists): Volumes I through XIV (blue books); Annuals 91, 92, 93; High Tc Superconductors I & II; Zirconium & Zirconia Systems; and Electronic Ceramics I. Materials covered include oxides as well as non-oxide systems such as chalcogenides and pnictides, phosphates, salt systems, and mixed systems of these classes.

  13. JICST Factual Database

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuaki; Shimura, Kazuki; Monma, Yoshio; Sakamoto, Masao; Morishita, Hiroshi; Kanazawa, Kenji

    The Japan Information Center of Science and Technology (JICST) started the on-line service of the JICST/NRIM Materials Strength Database for Engineering Steels and Alloys (JICST ME) in March 1990. This database has been developed under joint research between JICST and the National Research Institute for Metals (NRIM). It provides material strength data (creep, fatigue, etc.) for engineering steels and alloys. Data can be searched and displayed on-line, analyzed statistically, and plotted on a graphic display. The database system and the data in JICST ME are described.

  14. Plant Genome Duplication Database.

    PubMed

    Lee, Tae-Ho; Kim, Junah; Robertson, Jon S; Paterson, Andrew H

    2017-01-01

    Genome duplication, widespread in flowering plants, is a driving force in evolution. Genome alignments between/within genomes facilitate identification of homologous regions and individual genes to investigate evolutionary consequences of genome duplication. PGDD (the Plant Genome Duplication Database), a public web service database, provides intra- or interplant genome alignment information. At present, PGDD contains information for 47 plants whose genome sequences have been released. Here, we describe methods for identification and estimation of dates of genome duplication and speciation by functions of PGDD. The database is freely available at http://chibba.agtec.uga.edu/duplication/.

  15. Numeric Databases in the Sciences.

    ERIC Educational Resources Information Center

    Meschel, S. V.

    1984-01-01

    Provides exploration into types of numeric databases available (also known as source databases, nonbibliographic databases, data-files, data-banks, fact banks); examines differences and similarities between bibliographic and numeric databases; identifies disciplines that utilize numeric databases; and surveys representative examples in the…

  16. THE CTEPP DATABASE

    EPA Science Inventory

    The CTEPP (Children's Total Exposure to Persistent Pesticides and Other Persistent Organic Pollutants) database contains a wealth of data on children's aggregate exposures to pollutants in their everyday surroundings. Chemical analysis data for the environmental media and ques...

  17. Chemical Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 17 NIST Chemical Kinetics Database (Web, free access)   The NIST Chemical Kinetics Database includes essentially all reported kinetics results for thermal gas-phase chemical reactions. The database is designed to be searched for kinetics data based on the specific reactants involved, for reactions resulting in specified products, for all the reactions of a particular species, or for various combinations of these. In addition, the bibliography can be searched by author name or combination of names. The database contains in excess of 38,000 separate reaction records for over 11,700 distinct reactant pairs. These data have been abstracted from over 12,000 papers with literature coverage through early 2000.

  18. Hawaii bibliographic database

    NASA Astrophysics Data System (ADS)

    Wright, Thomas L.; Takahashi, Taeko Jane

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available to upload from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  19. Enhancing medical database security.

    PubMed

    Pangalos, G; Khair, M; Bozios, L

    1994-08-01

    A methodology for the enhancement of database security in a hospital environment is presented in this paper, based on both the discretionary and the mandatory database security policies. In this way the advantages of both approaches are combined to enhance medical database security. The methodology uses an appropriate classification of the different types of users according to their needs and roles, together with a User Role Definition Hierarchy. The experience obtained from the experimental implementation of the proposed methodology in a major general hospital is briefly discussed. The implementation has shown that the combined discretionary and mandatory security enforcement effectively limits unauthorized access to the medical database, without severely restricting the capabilities of the system.
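
    The combined enforcement the abstract describes can be sketched as follows: a read succeeds only if both the mandatory check (role clearance versus record label) and the discretionary check (an explicit grant) pass. This is a hedged illustration of the general MAC-plus-DAC pattern, not the paper's system; the role names and clearance levels are invented for the example.

```python
# Sketch: combining mandatory (label-based) and discretionary (grant-based)
# access checks, as in the combined policy described above. Illustrative only.

CLEARANCE = {"admin": 3, "physician": 2, "nurse": 1, "clerk": 0}

class MedicalDB:
    def __init__(self):
        self.labels = {}  # record_id -> required clearance level (mandatory)
        self.grants = {}  # record_id -> users explicitly allowed (discretionary)

    def add_record(self, rid, level, allowed_users):
        self.labels[rid] = level
        self.grants[rid] = set(allowed_users)

    def can_read(self, user, role, rid):
        # access requires BOTH policies to agree
        mac_ok = CLEARANCE[role] >= self.labels[rid]
        dac_ok = user in self.grants[rid]
        return mac_ok and dac_ok
```

    A physician with an explicit grant can read a level-2 record; a nurse with the same grant cannot (mandatory check fails), and an administrator without the grant cannot either (discretionary check fails).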

  20. Uranium Location Database Compilation

    EPA Pesticide Factsheets

    EPA has compiled mine location information from federal, state, and Tribal agencies into a single database as part of its investigation into the potential environmental hazards of wastes from abandoned uranium mines in the western United States.

  1. Livestock Anaerobic Digester Database

    EPA Pesticide Factsheets

    The Anaerobic Digester Database provides basic information about anaerobic digesters on livestock farms in the United States, organized in Excel spreadsheets. It includes projects that are under construction, operating, or shut down.

  2. Hawaii bibliographic database

    USGS Publications Warehouse

    Wright, T.L.; Takahashi, T.J.

    1998-01-01

    The Hawaii bibliographic database has been created to contain all of the literature, from 1779 to the present, pertinent to the volcanological history of the Hawaiian-Emperor volcanic chain. References are entered in a PC- and Macintosh-compatible EndNote Plus bibliographic database with keywords and abstracts or (if no abstract) with annotations as to content. Keywords emphasize location, discipline, process, identification of new chemical data or age determinations, and type of publication. The database is updated approximately three times a year and is available to upload from an ftp site. The bibliography contained 8460 references at the time this paper was submitted for publication. Use of the database greatly enhances the power and completeness of library searches for anyone interested in Hawaiian volcanism.

  3. Nuclear Science References Database

    SciTech Connect

    Pritychenko, B.; Běták, E.; Singh, B.; Totans, J.

    2014-06-15

    The Nuclear Science References (NSR) database, together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information, covering more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and of Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energy Agency (http://www-nds.iaea.org/nsr).

  4. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1994-05-27

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  5. Querying genomic databases

    SciTech Connect

    Baehr, A.; Hagstrom, R.; Joerg, D.; Overbeek, R.

    1991-09-01

    A natural-language interface has been developed that retrieves genomic information by using a simple subset of English. The interface spares the biologist from the task of learning database-specific query languages and computer programming. Currently, the interface deals with the E. coli genome. It can, however, be readily extended and shows promise as a means of easy access to other sequenced genomic databases as well.

  6. Database computing in HEP

    SciTech Connect

    Day, C.T.; Loken, S.; MacFarlane, J.F.; May, E.; Lifka, D.; Lusk, E.; Price, L.E.; Baden, A.; Grossman, R.; Qin, X.; Cormell, L.; Leibold, P.; Liu, D.

    1992-01-01

    The major SSC experiments are expected to produce up to 1 Petabyte of data per year each. Once the primary reconstruction is completed by farms of inexpensive processors, I/O becomes a major factor in further analysis of the data. We believe that the application of database techniques can significantly reduce the I/O performed in these analyses. We present examples of such I/O reductions in prototypes based on relational and object-oriented databases of CDF data samples.

  7. Human mapping databases.

    PubMed

    Talbot, C; Cuticchia, A J

    2001-05-01

    This unit concentrates on the data contained within two human genome databases, GDB (Genome Database) and OMIM (Online Mendelian Inheritance in Man), and includes discussion of different methods for submitting and accessing data. An understanding of electronic mail, FTP, and the use of a World Wide Web (WWW) navigational tool such as Netscape or Internet Explorer is a prerequisite for utilizing the information in this unit.

  8. Steam Properties Database

    National Institute of Standards and Technology Data Gateway

    SRD 10 NIST/ASME Steam Properties Database (PC database for purchase)   Based upon the International Association for the Properties of Water and Steam (IAPWS) 1995 formulation for the thermodynamic properties of water and the most recent IAPWS formulations for transport and other properties, this updated version provides water properties over a wide range of conditions according to the accepted international standards.

  9. The comprehensive peptaibiotics database.

    PubMed

    Stoppacher, Norbert; Neumann, Nora K N; Burgstaller, Lukas; Zeilinger, Susanne; Degenkolb, Thomas; Brückner, Hans; Schuhmacher, Rainer

    2013-05-01

    Peptaibiotics are nonribosomally biosynthesized peptides, which - according to definition - contain the marker amino acid α-aminoisobutyric acid (Aib) and possess antibiotic properties. Since the first report in 1958, a constantly increasing number of peptaibiotics have been described and investigated, with a particular emphasis on hypocrealean fungi. Starting from the existing online 'Peptaibol Database', first published in 1997, an exhaustive literature survey of all known peptaibiotics was carried out and resulted in a list of 1043 peptaibiotics. The gathered information was compiled and used to create the new 'The Comprehensive Peptaibiotics Database', which is presented here. The database was devised as a software tool based on Microsoft (MS) Access. It is freely available from the internet at http://peptaibiotics-database.boku.ac.at and can easily be installed and operated on any computer offering a Windows XP/7 environment. It provides useful information on characteristic properties of the peptaibiotics included, such as peptide category, group name of the microheterogeneous mixture to which the peptide belongs, amino acid sequence, sequence length, producing fungus, peptide subfamily, molecular formula, and monoisotopic mass. All these characteristics can be used and combined for automated search within the database, which makes The Comprehensive Peptaibiotics Database a versatile tool for the retrieval of valuable information about peptaibiotics. Sequence data have been considered as to December 14, 2012.

  10. Drinking Water Database

    NASA Technical Reports Server (NTRS)

    Murray, ShaTerea R.

    2004-01-01

    This summer I had the opportunity to work in the Environmental Management Office (EMO) under the Chemical Sampling and Analysis Team, or CS&AT. This team's mission is to support Glenn Research Center (GRC) and EMO by providing chemical sampling and analysis services and expert consulting. Services include sampling and chemical analysis of water, soil, fuels, oils, paint, insulation materials, etc. One of this team's major projects is the Drinking Water Project, which covers Glenn's water coolers and ten percent of its sinks every two years. For the past two summers, an intern had been putting together a database for this team to record the tests they had performed. She had successfully created a database but hadn't worked out all the quirks. So this summer William Wilder (an intern from Cleveland State University) and I worked together to perfect her database. We began by finding out exactly what every member of the team thought about the database and what, if anything, they would change. After collecting this data, we both had to take some courses in Microsoft Access in order to fix the problems. Next we looked at exactly how the database worked from the outside inward. Then we began trying to change the database, but we quickly found out that this would be virtually impossible.

  11. The Transporter Classification Database

    PubMed Central

    Saier, Milton H.; Reddy, Vamsee S.; Tamang, Dorjee G.; Västermark, Åke

    2014-01-01

    The Transporter Classification Database (TCDB; http://www.tcdb.org) serves as a common reference point for transport protein research. The database contains more than 10 000 non-redundant proteins that represent all currently recognized families of transmembrane molecular transport systems. Proteins in TCDB are organized in a five level hierarchical system, where the first two levels are the class and subclass, the second two are the family and subfamily, and the last one is the transport system. Superfamilies that contain multiple families are included as hyperlinks to the five tier TC hierarchy. TCDB includes proteins from all types of living organisms and is the only transporter classification system that is both universal and recognized by the International Union of Biochemistry and Molecular Biology. It has been expanded by manual curation, contains extensive text descriptions providing structural, functional, mechanistic and evolutionary information, is supported by unique software and is interconnected to many other relevant databases. TCDB is of increasing usefulness to the international scientific community and can serve as a model for the expansion of database technologies. This manuscript describes an update of the database descriptions previously featured in NAR database issues. PMID:24225317
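
    The five-tier TC hierarchy described above maps directly onto the dotted TC number (e.g. 1.A.1.1.1: class, subclass, family, subfamily, transport system). A minimal parser makes the structure concrete; the field names here are my own shorthand, while the numbering scheme itself is defined by TCDB.

```python
# Sketch: parsing a TC number into the five hierarchy levels the
# abstract describes (field names are illustrative, not from TCDB).

from typing import NamedTuple

class TCNumber(NamedTuple):
    tc_class: str   # e.g. "1" = channels/pores
    subclass: str   # e.g. "A" = alpha-type channels
    family: str
    subfamily: str
    system: str     # the individual transport system

def parse_tc(tc: str) -> TCNumber:
    parts = tc.split(".")
    if len(parts) != 5:
        raise ValueError(f"expected 5 dot-separated components: {tc!r}")
    return TCNumber(*parts)
```

    For example, `parse_tc("2.A.1.1.1")` yields class `2`, subclass `A`, family `1`, subfamily `1`, system `1`.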

  12. Specialist Bibliographic Databases.

    PubMed

    Gasparyan, Armen Yuri; Yessirkepov, Marlen; Voronov, Alexander A; Trukhachev, Vladimir I; Kostyukova, Elena I; Gerasimov, Alexey N; Kitas, George D

    2016-05-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls.

  13. Specialist Bibliographic Databases

    PubMed Central

    2016-01-01

    Specialist bibliographic databases offer essential online tools for researchers and authors who work on specific subjects and perform comprehensive and systematic syntheses of evidence. This article presents examples of the established specialist databases, which may be of interest to those engaged in multidisciplinary science communication. Access to most specialist databases is through subscription schemes and membership in professional associations. Several aggregators of information and database vendors, such as EBSCOhost and ProQuest, facilitate advanced searches supported by specialist keyword thesauri. Searches of items through specialist databases are complementary to those through multidisciplinary research platforms, such as PubMed, Web of Science, and Google Scholar. Familiarizing with the functional characteristics of biomedical and nonbiomedical bibliographic search tools is mandatory for researchers, authors, editors, and publishers. The database users are offered updates of the indexed journal lists, abstracts, author profiles, and links to other metadata. Editors and publishers may find particularly useful source selection criteria and apply for coverage of their peer-reviewed journals and grey literature sources. These criteria are aimed at accepting relevant sources with established editorial policies and quality controls. PMID:27134485

  14. Crude Oil Analysis Database

    DOE Data Explorer

    Shay, Johanna Y.

    The composition and physical properties of crude oil vary widely from one reservoir to another within an oil field, as well as from one field or region to another. Although all oils consist of hydrocarbons and their derivatives, the proportions of various types of compounds differ greatly. This makes some oils more suitable than others for specific refining processes and uses. To take advantage of this diversity, one needs access to information in a large database of crude oil analyses. The Crude Oil Analysis Database (COADB) currently satisfies this need by offering 9,056 crude oil analyses. Of these, 8,500 are United States domestic oils. The database contains results of analysis of the general properties and chemical composition, as well as the field, formation, and geographic location of the crude oil sample. [Taken from the Introduction to COAMDATA_DESC.pdf, part of the zipped software and database file at http://www.netl.doe.gov/technologies/oil-gas/Software/database.html] Save the zipped file to your PC. When opened, it will contain PDF documents and a large Excel spreadsheet. It will also contain the database in Microsoft Access 2002.

  15. Databases: Peter's Picks and Pans.

    ERIC Educational Resources Information Center

    Jacso, Peter

    1995-01-01

    Reviews the best and worst in databases on disk, CD-ROM, and online, and offers judgments and observations on database characteristics. Two databases are praised and three are criticized. (Author/JMV)

  16. Great Basin paleontological database

    USGS Publications Warehouse

    Zhang, N.; Blodgett, R.B.; Hofstra, A.H.

    2008-01-01

    The U.S. Geological Survey has constructed a paleontological database for the Great Basin physiographic province that can be served over the World Wide Web for data entry, queries, displays, and retrievals. It is similar to the web-database solution that we constructed for Alaskan paleontological data (www.alaskafossil.org). The first phase of this effort was to compile a paleontological bibliography for Nevada and portions of adjacent states in the Great Basin, which has recently been completed. In addition, we are also compiling paleontological reports (known as E&R reports) of the U.S. Geological Survey, which are another extensive source of legacy data for this region. Initial population of the database benefited from a recently published conodont data set and is otherwise focused on Devonian and Mississippian localities because strata of this age host important sedimentary exhalative (sedex) Au, Zn, and barite resources and enormous Carlin-type Au deposits. In addition, these strata are the most important petroleum source rocks in the region, and record the transition from extension to contraction associated with the Antler orogeny, the Alamo meteorite impact, and biotic crises associated with global oceanic anoxic events. The finished product will provide an invaluable tool for future geologic mapping, paleontological research, and mineral resource investigations in the Great Basin, making paleontological data acquired over nearly the past 150 yr readily available over the World Wide Web. A description of the structure of the database and the web interface developed for this effort are provided herein. This database is being used as a model for a National Paleontological Database (which we are currently developing for the U.S. Geological Survey) as well as for other paleontological databases now being developed in other parts of the globe. © 2008 Geological Society of America.

  17. NASA Records Database

    NASA Technical Reports Server (NTRS)

    Callac, Christopher; Lunsford, Michelle

    2005-01-01

    The NASA Records Database, comprising a Web-based application program and a database, is used to administer an archive of paper records at Stennis Space Center. The system begins with an electronic form, into which a user enters information about records that the user is sending to the archive. The form is smart: it provides instructions for entering information correctly and prompts the user to enter all required information. Once complete, the form is digitally signed and submitted to the database. The system determines which storage locations are not in use, assigns the user's boxes of records to some of them, and enters these assignments in the database. Thereafter, the software tracks the boxes and can be used to locate them. By use of search capabilities of the software, specific records can be sought by box storage locations, accession numbers, record dates, submitting organizations, or details of the records themselves. Boxes can be marked with such statuses as checked out, lost, transferred, and destroyed. The system can generate reports showing boxes awaiting destruction or transfer. When boxes are transferred to the National Archives and Records Administration (NARA), the system can automatically fill out NARA records-transfer forms. Currently, several other NASA Centers are considering deploying the NASA Records Database to help automate their records archives.

  18. ADANS database specification

    SciTech Connect

    1997-01-16

    The purpose of the Air Mobility Command (AMC) Deployment Analysis System (ADANS) Database Specification (DS) is to describe the database organization and storage allocation and to provide the detailed data model of the physical design and information necessary for the construction of the parts of the database (e.g., tables, indexes, rules, defaults). The DS includes entity relationship diagrams, table and field definitions, reports on other database objects, and a description of the ADANS data dictionary. ADANS is the automated system used by Headquarters AMC and the Tanker Airlift Control Center (TACC) for airlift planning and scheduling of peacetime and contingency operations as well as for deliberate planning. ADANS also supports planning and scheduling of Air Refueling Events by the TACC and the unit-level tanker schedulers. ADANS receives input in the form of movement requirements and air refueling requests. It provides a suite of tools for planners to manipulate these requirements/requests against mobility assets and to develop, analyze, and distribute schedules. Analysis tools are provided for assessing the products of the scheduling subsystems, and editing capabilities support the refinement of schedules. A reporting capability provides formatted screen, print, and/or file outputs of various standard reports. An interface subsystem handles message traffic to and from external systems. The database is an integral part of the functionality summarized above.

  19. The Chandra Bibliography Database

    NASA Astrophysics Data System (ADS)

    Rots, A. H.; Winkelman, S. L.; Paltani, S.; Blecksmith, S. E.; Bright, J. D.

    2004-07-01

    Early in the mission, the Chandra Data Archive started the development of a bibliography database, tracking publications in refereed journals and on-line conference proceedings that are based on Chandra observations, allowing our users to link directly to articles in the ADS from our archive, and to link to the relevant data in the archive from the ADS entries. Subsequently, we have been working closely with the ADS and other data centers, in the context of the ADEC-ITWG, on standardizing literature-data linking. We have also extended our bibliography database to include all Chandra-related articles, and we are keeping track of the number of citations of each paper. Obviously, in addition to providing valuable services to our users, this database allows us to extract a wide variety of statistical information. The project comprises five components: the bibliography database proper, a maintenance database, an interactive maintenance tool, a user browsing interface, and a web services component for exchanging information with the ADS. All of these elements are nearly mission-independent, and we intend to make the package as a whole available for use by other data centers. The capabilities thus provided represent support for an essential component of the Virtual Observatory.

  20. FishTraits Database

    USGS Publications Warehouse

    Angermeier, Paul L.; Frimpong, Emmanuel A.

    2009-01-01

    The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.

  1. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James L.; Christiansen, Eric L.; Lear, Dana M.

    2011-01-01

    With three missions outstanding, the Shuttle Hypervelocity Impact Database has nearly 3000 entries. The data is divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. Details and insights on the contents of the database, including examples of descriptive statistics, will be provided. Post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will also be discussed. Potential enhancements to the database structure and availability of the data for other researchers will be addressed in the Future Work section. A related database of returned surfaces from the International Space Station will also be introduced.

  2. Shuttle Hypervelocity Impact Database

    NASA Technical Reports Server (NTRS)

    Hyde, James I.; Christiansen, Eric I.; Lear, Dana M.

    2011-01-01

    With three flights remaining on the manifest, the Shuttle Hypervelocity Impact Database has over 2800 entries. The data is currently divided into tables for crew module windows, payload bay door radiators and thermal protection system regions, with window impacts comprising just over half the records. In general, the database provides dimensions of hypervelocity impact damage, a component level location (i.e., window number or radiator panel number) and the orbiter mission when the impact occurred. Additional detail on the type of particle that produced the damage site is provided when sampling data and definitive analysis results are available. The paper will provide details and insights on the contents of the database, including examples of descriptive statistics using the impact data. A discussion of post flight impact damage inspection and sampling techniques that were employed during the different observation campaigns will be presented. Future work to be discussed will be possible enhancements to the database structure and availability of the data for other researchers. A related database of ISS returned surfaces that is under development will also be introduced.

  3. Patent Databases. . .A Survey of What Is Available from DIALOG, Questel, SDC, Pergamon and INPADOC.

    ERIC Educational Resources Information Center

    Kulp, Carol S.

    1984-01-01

    Presents survey of two groups of databases covering patent literature: patent literature only and general literature that includes patents relevant to subject area of database. Description of databases and comparison tables for patent and general databases (cost, country coverage, years covered, update frequency, file size, and searchable data…

  4. VIEWCACHE: An incremental database access method for autonomous interoperable databases

    NASA Technical Reports Server (NTRS)

    Roussopoulos, Nick; Sellis, Timoleon

    1991-01-01

    The objective is to illustrate the concept of incremental access to distributed databases. An experimental database management system, ADMS, which has been developed at the University of Maryland, in College Park, uses VIEWCACHE, a database access method based on incremental search. VIEWCACHE is a pointer-based access method that provides a uniform interface for accessing distributed databases and catalogues. The compactness of the pointer structures formed during database browsing and the incremental access method allow the user to search and do inter-database cross-referencing with no actual data movement between database sites. Once the search is complete, the set of collected pointers pointing to the desired data are dereferenced.
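The pointer-based mechanism the abstract describes can be illustrated with a minimal sketch (this is an illustration of the general idea, not the actual ADMS/VIEWCACHE implementation; all class and field names are invented): query results are cached as sets of row pointers, so refinement and cross-referencing manipulate pointers only, and rows are fetched ("dereferenced") only once the search is complete.

```python
class Site:
    """One database site holding rows addressed by integer pointers."""

    def __init__(self, rows):
        self.rows = rows  # pointer -> record

    def select(self, predicate):
        # Return pointers only -- no data is copied out of the site.
        return {ptr for ptr, rec in self.rows.items() if predicate(rec)}

    def dereference(self, pointers):
        # Actual data movement happens only here, at the end of the search.
        return [self.rows[p] for p in sorted(pointers)]


site = Site({1: {"name": "a", "val": 5},
             2: {"name": "b", "val": 9},
             3: {"name": "c", "val": 7}})

cached = site.select(lambda r: r["val"] > 6)                # cached pointers {2, 3}
refined = cached & site.select(lambda r: r["name"] != "b")  # incremental refinement
result = site.dereference(refined)
print(result)  # [{'name': 'c', 'val': 7}]
```

Because the intermediate results are just pointer sets, combining searches is cheap set algebra; the cost of moving data is paid once, at dereference time.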

  5. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1992-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air- conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on R-32, R-123, R-124, R- 125, R-134a, R-141b, R142b, R-143a, R-152a, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses polyalkylene glycol (PAG), ester, and other lubricants. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits.

  6. The PROSITE database

    PubMed Central

    Hulo, Nicolas; Bairoch, Amos; Bulliard, Virginie; Cerutti, Lorenzo; De Castro, Edouard; Langendijk-Genevaux, Petra S.; Pagni, Marco; Sigrist, Christian J. A.

    2006-01-01

    The PROSITE database consists of a large collection of biologically meaningful signatures that are described as patterns or profiles. Each signature is linked to a documentation that provides useful biological information on the protein family, domain or functional site identified by the signature. The PROSITE database is now complemented by a series of rules that can give more precise information about specific residues. During the last 2 years, the documentation and the ScanProsite web pages were redesigned to add more functionalities. The latest version of PROSITE (release 19.11 of September 27, 2005) contains 1329 patterns and 552 profile entries. Over the past 2 years more than 200 domains have been added, and now 52% of UniProtKB/Swiss-Prot entries (release 48.1 of September 27, 2005) have a cross-reference to a PROSITE entry. The database is accessible at . PMID:16381852
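PROSITE patterns use a compact syntax (`x` for any residue, `[...]` for alternatives, `{...}` for exclusions, `(n,m)` for repetition) that maps naturally onto regular expressions. The sketch below shows that mapping for the common syntax elements; it is a simplified illustration, not the official ScanProsite engine, and the example signature is the classic C2H2 zinc-finger pattern.

```python
import re

def prosite_to_regex(pattern):
    """Convert a PROSITE-style pattern string to a Python regex (common cases only)."""
    pattern = pattern.rstrip(".")  # PROSITE patterns end with a period
    regex = ""
    for element in pattern.split("-"):            # elements are '-'-separated
        element = element.replace("x", ".")       # x matches any residue
        element = element.replace("{", "[^").replace("}", "]")  # exclusion set
        element = element.replace("(", "{").replace(")", "}")   # repetition count
        element = element.replace("<", "^").replace(">", "$")   # terminal anchors
        regex += element
    return regex

sig = "C-x(2,4)-C-x(3)-[LIVMFYWC]-x(8)-H-x(3,5)-H."   # zinc finger C2H2 signature
rx = prosite_to_regex(sig)
print(rx)  # C.{2,4}C.{3}[LIVMFYWC].{8}H.{3,5}H
print(bool(re.search(rx, "FQCRICMRNFSRSDHLTTHIRTHT")))  # True
```

A full converter would also need to handle case-sensitivity conventions and a few rarer constructs, but the element-by-element translation above is the core of pattern scanning.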

  7. Mouse genome database 2016

    PubMed Central

    Bult, Carol J.; Eppig, Janan T.; Blake, Judith A.; Kadin, James A.; Richardson, Joel E.

    2016-01-01

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data. PMID:26578600

  8. Mouse genome database 2016.

    PubMed

    Bult, Carol J; Eppig, Janan T; Blake, Judith A; Kadin, James A; Richardson, Joel E

    2016-01-04

    The Mouse Genome Database (MGD; http://www.informatics.jax.org) is the primary community model organism database for the laboratory mouse and serves as the source for key biological reference data related to mouse genes, gene functions, phenotypes and disease models with a strong emphasis on the relationship of these data to human biology and disease. As the cost of genome-scale sequencing continues to decrease and new technologies for genome editing become widely adopted, the laboratory mouse is more important than ever as a model system for understanding the biological significance of human genetic variation and for advancing the basic research needed to support the emergence of genome-guided precision medicine. Recent enhancements to MGD include new graphical summaries of biological annotations for mouse genes, support for mobile access to the database, tools to support the annotation and analysis of sets of genes, and expanded support for comparative biology through the expansion of homology data.

  9. Enhancing medical database semantics.

    PubMed Central

    Leão, B. de F.; Pavan, A.

    1995-01-01

    Medical databases deal with dynamic, heterogeneous and fuzzy data. The modeling of such a complex domain demands powerful semantic data modeling methodologies. This paper describes GSM-Explorer, a CASE tool that allows for the creation of relational databases using semantic data modeling techniques. GSM-Explorer fully incorporates the Generic Semantic Data Model (GSM), enabling knowledge engineers to model the application domain with the abstraction mechanisms of generalization/specialization, association and aggregation. The tool generates a structure that implements persistent database objects through the automatic generation of customized ANSI SQL scripts that sustain the semantics defined at the higher level. This paper emphasizes the system architecture and the mapping of the semantic model into relational tables. The present status of the project and its further developments are discussed in the Conclusions. PMID:8563288
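One common way to map a generalization/specialization hierarchy onto relational tables, as the abstract describes, is "table per subclass": the subclass table shares the parent's primary key and references it with a foreign key. The sketch below illustrates that mapping with a generated SQL script; it is not GSM-Explorer's actual generator, and the entity and attribute names are invented.

```python
def create_table(name, attrs, parent=None):
    """Emit a CREATE TABLE statement; a parent makes this a specialization table."""
    cols = ["id INTEGER PRIMARY KEY"] + [f"{a} TEXT" for a in attrs]
    if parent:
        # Shared primary key: each patient row IS-A person row with the same id.
        cols.append(f"FOREIGN KEY (id) REFERENCES {parent}(id)")
    return f"CREATE TABLE {name} (\n  " + ",\n  ".join(cols) + "\n);"

# A toy medical-domain hierarchy: Patient specializes Person.
print(create_table("person", ["name", "birth_date"]))
print(create_table("patient", ["blood_type"], parent="person"))
```

The foreign key on the shared primary key is what "sustains the semantics defined at the higher level": the database itself enforces that every specialized row has a corresponding general row.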

  10. National Ambient Radiation Database

    SciTech Connect

    Dziuban, J.; Sears, R.

    2003-02-25

    The U.S. Environmental Protection Agency (EPA) recently developed a searchable database and website for the Environmental Radiation Ambient Monitoring System (ERAMS) data. This site contains nationwide radiation monitoring data for air particulates, precipitation, drinking water, surface water, and pasteurized milk. This site provides location-specific as well as national information on environmental radioactivity across several media. It provides high-quality data for assessing public exposure and environmental impacts resulting from nuclear emergencies and provides baseline data during routine conditions. The database and website are accessible at www.epa.gov/enviro/. This site contains (1) a query for the general public, which is easy to use and limits the amount of information provided but includes the ability to graph the data with risk benchmarks; (2) a query for a more technical user, which allows access to all of the data in the database; and (3) background information on ERAMS.

  11. The Neotoma Paleoecology Database

    NASA Astrophysics Data System (ADS)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community.

  12. A Computational Chemistry Database for Semiconductor Processing

    NASA Technical Reports Server (NTRS)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of a 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere if codes did not come with a reliable database of chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capabilities to generate such a database and instead leave the user the task of finding whatever is needed. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area, which focus on the following five topics: (1) thermal CVD reaction mechanisms and rate constants; (2) thermochemical properties; (3) transport properties; (4) electron-molecule collision cross sections; and (5) gas-surface interactions.

  13. Armenian Astronomical Archives and Databases

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.; Astsatryan, H. V.; Knyazyan, A. V.; Mikayelyan, G. A.

    2017-01-01

    interactive sky map and scientific usage of this material. The Armenian Virtual Observatory (ArVO) is based on the DFBS database and other large-area surveys and catalogue data, as well as the data coming from other BAO digitization projects.

  14. Database Management System

    NASA Technical Reports Server (NTRS)

    1990-01-01

    In 1981 Wayne Erickson founded Microrim, Inc., a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.

  15. JICST Factual Database(1)

    NASA Astrophysics Data System (ADS)

    Kurosawa, Shinji

    The outline of the JICST factual database (JOIS-F), whose service JICST started in January 1988, and its online service are described in this paper. First, the author recounts the circumstances from 1973, when its planning was started, to the present, and its relation to the "Project by Special Coordination Funds for Promoting Science and Technology". Secondly, databases now under development, aiming to start their services in fiscal 1988 or fiscal 1989, covering DNA, metallic material strength, crystal structure, chemical substance regulations, and so forth, are described. Lastly, the online service is briefly explained.

  16. Drycleaner Database - Region 7

    EPA Pesticide Factsheets

    THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Drycleaner Database (R7DryClnDB), which tracks all Region 7 drycleaners that notify Region 7 under Maximum Achievable Control Technology (MACT) standards. The Air and Waste Management Division is the primary managing entity for this database. This work falls under objectives for EPA's 2003-2008 Strategic Plan (Goal 4) for Healthy Communities & Ecosystems, which are to reduce chemical and/or pesticide risks at facilities.

  17. The Genopolis Microarray Database

    PubMed Central

    Splendiani, Andrea; Brandizi, Marco; Even, Gael; Beretta, Ottavio; Pavelka, Norman; Pelizzola, Mattia; Mayhaus, Manuel; Foti, Maria; Mauri, Giancarlo; Ricciardi-Castagnoli, Paola

    2007-01-01

    Background: Gene expression databases are key resources for microarray data management and analysis, and the importance of proper annotation of their content is well understood. Public repositories as well as microarray database systems that can be implemented by single laboratories exist. However, there is not yet a tool that can easily support a collaborative environment where different users with different rights of access to data can interact to define a common, highly coherent content. The scope of the Genopolis database is to provide a resource that allows different groups performing microarray experiments related to a common subject to create a common coherent knowledge base and to analyse it. The Genopolis database has been implemented as a dedicated system for the scientific community studying dendritic and macrophage cell functions and host-parasite interactions. Results: The Genopolis Database system allows the community to build an object-based MIAME-compliant annotation of their experiments and to store images, raw and processed data from the Affymetrix GeneChip® platform. It supports dynamical definition of controlled vocabularies and provides automated and supervised steps to control the coherence of data and annotations. It allows precise control of the visibility of the database content to different subgroups in the community and facilitates exports of its content to public repositories. It provides an interactive user interface for data analysis: this allows users to visualize data matrices based on functional lists and sample characterization, and to navigate to other data matrices defined by similarity of expression values as well as functional characterizations of the genes involved. A collaborative environment is also provided for the definition and sharing of functional annotation by users. Conclusion: The Genopolis Database supports a community in building a common coherent knowledge base and analysing it. This fills a gap between a local

  18. Databases for plant phosphoproteomics.

    PubMed

    Schulze, Waltraud X; Yao, Qiuming; Xu, Dong

    2015-01-01

    Phosphorylation is the most studied posttranslational modification involved in signal transduction in stress responses, development, and growth. In recent years, large-scale phosphoproteomic studies were carried out using various model plants and several growth and stress conditions. Here we present an overview of online resources for plant phosphoproteomic databases: PhosPhAt as a resource for Arabidopsis phosphoproteins, P3DB as a resource expanding to crop plants, and the Medicago PhosphoProtein Database as a resource for the model plant Medicago truncatula.

  19. NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL

    EPA Science Inventory

    Current mesoscale weather prediction and microscale dispersion models are limited in their ability to perform accurate assessments in urban areas. A project called the National Urban Database with Access Portal Tool (NUDAPT) is beginning to provide urban data and improve the para...

  20. Bibliographic Databases Outside of the United States.

    ERIC Educational Resources Information Center

    McGinn, Thomas P.; And Others

    1988-01-01

    Eight articles describe the development, content, and structure of databases outside of the United States. Features discussed include library involvement, authority control, shared cataloging services, union catalogs, thesauri, abstracts, and distribution methods. Countries and areas represented are Latin America, Australia, the United Kingdom,…

  1. First Look--The Biobusiness Database.

    ERIC Educational Resources Information Center

    Cunningham, Ann Marie

    1986-01-01

    Presents overview prepared by producer of database newly available in 1985 that covers six broad subject areas: genetic engineering and bioprocessing, pharmaceuticals, medical technology and instrumentation, agriculture, energy and environment, and food and beverages. Background, indexing, record format, use of BioBusiness, and 1986 enhancements…

  2. Proteomics: Protein Identification Using Online Databases

    ERIC Educational Resources Information Center

    Eurich, Chris; Fields, Peter A.; Rice, Elizabeth

    2012-01-01

    Proteomics is an emerging area of systems biology that allows simultaneous study of thousands of proteins expressed in cells, tissues, or whole organisms. We have developed this activity to enable high school or college students to explore proteomic databases using mass spectrometry data files generated from yeast proteins in a college laboratory…

  3. Survey of Machine Learning Methods for Database Security

    NASA Astrophysics Data System (ADS)

    Kamra, Ashish; Ber, Elisa

    Application of machine learning techniques to database security is an emerging area of research. In this chapter, we present a survey of various approaches that use machine learning/data mining techniques to enhance the traditional security mechanisms of databases. There are two key database security areas in which these techniques have found applications, namely, detection of SQL Injection attacks and anomaly detection for defending against insider threats. Apart from the research prototypes and tools, various third-party commercial products are also available that provide database activity monitoring solutions by profiling database users and applications. We present a survey of such products. We end the chapter with a primer on mechanisms for responding to database anomalies.
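A minimal sketch of the profile-based anomaly detection this chapter surveys (all names and thresholds here are invented for illustration, not taken from any surveyed product): build a frequency profile of each user's SQL verbs during training, then flag commands whose verb frequency falls below a threshold.

```python
from collections import Counter

class UserProfile:
    """Frequency profile of one database user's SQL command verbs."""

    def __init__(self, threshold=0.05):
        self.counts = Counter()
        self.total = 0
        self.threshold = threshold  # minimum routine frequency (assumed value)

    def train(self, command):
        verb = command.strip().split()[0].upper()
        self.counts[verb] += 1
        self.total += 1

    def is_anomalous(self, command):
        # A command is anomalous if its verb is rare (or unseen) for this user.
        verb = command.strip().split()[0].upper()
        freq = self.counts[verb] / self.total if self.total else 0.0
        return freq < self.threshold

profile = UserProfile()
for cmd in ["SELECT * FROM orders"] * 95 + ["INSERT INTO orders VALUES (1)"] * 5:
    profile.train(cmd)

print(profile.is_anomalous("SELECT * FROM users"))  # False: SELECT is routine
print(profile.is_anomalous("DROP TABLE orders"))    # True: never seen before
```

Real database activity monitoring systems profile far richer features (tables touched, result sizes, time of day), but the train-then-score structure is the same.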

  4. Weathering Database Technology

    ERIC Educational Resources Information Center

    Snyder, Robert

    2005-01-01

    Collecting weather data is a traditional part of a meteorology unit at the middle level. However, making connections between the data and weather conditions can be a challenge. One way to make these connections clearer is to enter the data into a database. This allows students to quickly compare different fields of data and recognize which…
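The connection-making exercise the article describes can be sketched with a tiny database: load daily readings, then let one query compare fields across days. The column names and readings below are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE weather (
    day TEXT, temp_f REAL, pressure_mb REAL, conditions TEXT)""")
conn.executemany("INSERT INTO weather VALUES (?, ?, ?, ?)", [
    ("Mon", 68.0, 1021.0, "clear"),
    ("Tue", 62.0, 1008.0, "rain"),
    ("Wed", 71.0, 1019.0, "clear"),
])

# Which conditions accompany high pressure? One query makes the pattern visible.
rows = conn.execute(
    "SELECT day, conditions FROM weather WHERE pressure_mb > 1015 ORDER BY day"
).fetchall()
print(rows)  # [('Mon', 'clear'), ('Wed', 'clear')]
```

Students querying the data this way can see relationships (here, high pressure with clear skies) that are hard to spot in a raw log sheet.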

  5. Danish Gynecological Cancer Database

    PubMed Central

    Sørensen, Sarah Mejer; Bjørn, Signe Frahm; Jochumsen, Kirsten Marie; Jensen, Pernille Tine; Thranov, Ingrid Regitze; Hare-Bruun, Helle; Seibæk, Lene; Høgdall, Claus

    2016-01-01

    Aim of database: The Danish Gynecological Cancer Database (DGCD) is a nationwide clinical cancer database whose aim is to monitor the treatment quality of Danish gynecological cancer patients and to generate data for scientific purposes. DGCD also records detailed data on the diagnostic measures for gynecological cancer. Study population: DGCD was initiated January 1, 2005, and includes all patients treated at Danish hospitals for cancer of the ovaries, peritoneum, fallopian tubes, cervix, vulva, vagina, and uterus, including rare histological types. Main variables: DGCD data are organized within separate data forms as follows: clinical data, surgery, pathology, pre- and postoperative care, complications, follow-up visits, and final quality check. DGCD is linked with additional data from the Danish “Pathology Registry”, the “National Patient Registry”, and the “Cause of Death Registry” using the unique Danish personal identification number (CPR number). Descriptive data: Data from DGCD and registers are available online in the Statistical Analysis Software portal. The DGCD forms cover almost all possible clinical variables used to describe gynecological cancer courses. The only limitation is the registration of oncological treatment data, which is incomplete for a large number of patients. Conclusion: The very complete collection of available data from multiple registries forms one of the unique strengths of DGCD compared to many other clinical databases, and provides unique possibilities for validation and completeness of data. The success of the DGCD is illustrated through annual reports, high coverage, and several peer-reviewed DGCD-based publications. PMID:27822089

  6. Uranium Location Database

    EPA Pesticide Factsheets

    A GIS compiled locational database in Microsoft Access of ~15,000 mines with uranium occurrence or production, primarily in the western United States. The metadata was cooperatively compiled from Federal and State agency data sets and enables the user to conduct geographic and analytical studies on mine impacts on the public and environment.

  7. The Exoplanet Orbit Database

    NASA Astrophysics Data System (ADS)

    Wright, J. T.; Fakhouri, O.; Marcy, G. W.; Han, E.; Feng, Y.; Johnson, John Asher; Howard, A. W.; Fischer, D. A.; Valenti, J. A.; Anderson, J.; Piskunov, N.

    2011-04-01

    We present a database of well-determined orbital parameters of exoplanets and their host stars' properties. This database comprises spectroscopic orbital elements measured for 427 planets orbiting 363 stars from radial velocity and transit measurements as reported in the literature. We have also compiled fundamental transit parameters, stellar parameters, and the method used for each planet's discovery. This Exoplanet Orbit Database includes all planets with robust, well-measured orbital parameters reported in peer-reviewed articles. The database is available in a searchable, filterable, and sortable form online through the Exoplanets Data Explorer table, and the data can be plotted and explored through the Exoplanet Data Explorer plotter. We use the Data Explorer to generate publication-ready plots, giving three examples of the signatures of exoplanet migration and dynamical evolution: we illustrate the character of the apparent correlation between mass and period in exoplanet orbits, the different selection biases between radial velocity and transit surveys, and that multiplanet systems show a distinct semimajor-axis distribution from apparently singleton systems.

  8. Patent Family Databases.

    ERIC Educational Resources Information Center

    Simmons, Edlyn S.

    1985-01-01

    Reports on retrieval of patent information online and includes definition of patent family, basic and equivalent patents, "parents and children" applications, designated states, patent family databases--International Patent Documentation Center, World Patents Index, APIPAT (American Petroleum Institute), CLAIMS (IFI/Plenum). A table…

  9. Diatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 114 Diatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 121 diatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty, and reference are given for each transition reported.

  10. High Performance Buildings Database

    DOE Data Explorer

    The High Performance Buildings Database is a shared resource for the building industry, a unique central repository of in-depth information and data on high-performance, green building projects across the United States and abroad. The database includes information on the energy use, environmental performance, design process, finances, and other aspects of each project. Members of the design and construction teams are listed, as are sources for additional information. In total, up to twelve screens of detailed information are provided for each project profile. Projects range in size from small single-family homes or tenant fit-outs within buildings to large commercial and institutional buildings and even entire campuses. The database is a data repository as well. A series of Web-based data-entry templates allows anyone to enter information about a building project into the database. Once a project has been submitted, each of the partner organizations can review the entry and choose whether or not to publish that particular project on its own Web site.

  11. MARC and Relational Databases.

    ERIC Educational Resources Information Center

    Llorens, Jose; Trenor, Asuncion

    1993-01-01

    Discusses the use of MARC format in relational databases and addresses problems of incompatibilities. A solution is presented that is in accordance with Open Systems Interconnection (OSI) standards and is based on experiences at the library of the Universidad Politecnica de Valencia (Spain). (four references) (EA)

  12. Databases and data mining

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Over the course of the past decade, the breadth of information that is made available through online resources for plant biology has increased astronomically, as have the interconnectedness among databases, online tools, and methods of data acquisition and analysis. For maize researchers, the numbe...

  13. Hydrocarbon Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 115 Hydrocarbon Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  14. Danish Urogynaecological Database

    PubMed Central

    Hansen, Ulla Darling; Gradel, Kim Oren; Larsen, Michael Due

    2016-01-01

    The Danish Urogynaecological Database is established in order to ensure high quality of treatment for patients undergoing urogynecological surgery. The database contains details of all women in Denmark undergoing incontinence surgery or pelvic organ prolapse surgery amounting to ~5,200 procedures per year. The variables are collected along the course of treatment of the patient from the referral to a postoperative control. Main variables are prior obstetrical and gynecological history, symptoms, symptom-related quality of life, objective urogynecological findings, type of operation, complications if relevant, implants used if relevant, 3–6-month postoperative recording of symptoms, if any. A set of clinical quality indicators is being maintained by the steering committee for the database and is published in an annual report which also contains extensive descriptive statistics. The database has a completeness of over 90% of all urogynecological surgeries performed in Denmark. Some of the main variables have been validated using medical records as gold standard. The positive predictive value was above 90%. The data are used as a quality monitoring tool by the hospitals and in a number of scientific studies of specific urogynecological topics, broader epidemiological topics, and the use of patient reported outcome measures. PMID:27826217

  15. Food composition databases

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Food composition is the determination of what is in the foods we eat and is the critical bridge between nutrition, health promotion and disease prevention and food production. Compilation of data into useable databases is essential to the development of dietary guidance for individuals and populat...

  16. Redis database administration tool

    SciTech Connect

    Martinez, J. J.

    2013-02-13

    MyRedis is a product of the Lorenz subproject under the ASC Scientific Data Management effort. MyRedis is a web-based utility designed to allow easy administration of instances of Redis databases. It can be used to view and manipulate data as well as run commands directly against a variety of different Redis hosts.

  17. Triatomic Spectral Database

    National Institute of Standards and Technology Data Gateway

    SRD 117 Triatomic Spectral Database (Web, free access)   All of the rotational spectral lines observed and reported in the open literature for 55 triatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.

  18. JDD, Inc. Database

    NASA Technical Reports Server (NTRS)

    Miller, David A., Jr.

    2004-01-01

    JDD, Inc. is a maintenance and custodial contracting company whose mission is to provide their clients in the private and government sectors "quality construction, construction management and cleaning services in the most efficient and cost effective manners" (JDD, Inc. Mission Statement). This company provides facilities support for Fort Riley in Fort Riley, Kansas and the NASA John H. Glenn Research Center at Lewis Field here in Cleveland, Ohio. JDD, Inc. is owned and operated by James Vaughn, who started as a painter at NASA Glenn and has been working here for the past seventeen years. This summer I worked under Devan Anderson, who is the safety manager for JDD, Inc. in the Logistics and Technical Information Division at Glenn Research Center. The LTID provides all transportation, secretarial, and security needs and contract management of these various services for the center. As a safety manager, my mentor provides Occupational Safety and Health Administration (OSHA) compliance to all JDD, Inc. employees and handles all other issues (Environmental Protection Agency issues, workers compensation, safety and health training) involving job safety. My summer assignment was not considered "groundbreaking research" like many other summer interns have done in the past, but it is just as important and beneficial to JDD, Inc. I initially created a database using a Microsoft Excel program to classify and categorize data pertaining to numerous safety training certification courses instructed by our safety manager during the course of the fiscal year. This early portion of the database consisted of only data (training field index, employees who were present at these training courses and who was absent) from the training certification courses. Once I completed this phase of the database, I decided to expand the database and add as many dimensions to it as possible. Throughout the last seven weeks, I have been compiling more data from day to day operations and been adding the

  19. Tautomerism in large databases

    PubMed Central

    Sitzmann, Markus; Ihlenfeldt, Wolf-Dietrich

    2010-01-01

    We have used the Chemical Structure DataBase (CSDB) of the NCI CADD Group, an aggregated collection of over 150 small-molecule databases totaling 103.5 million structure records, to conduct tautomerism analyses on one of the largest currently existing sets of real (i.e. not computer-generated) compounds. This analysis was carried out using calculable chemical structure identifiers developed by the NCI CADD Group, based on hash codes available in the chemoinformatics toolkit CACTVS and a newly developed scoring scheme to define a canonical tautomer for any encountered structure. CACTVS’s tautomerism definition, a set of 21 transform rules expressed in SMIRKS line notation, was used, which takes a comprehensive stance as to the possible types of tautomeric interconversion included. Tautomerism was found to be possible for more than 2/3 of the unique structures in the CSDB. A total of 680 million tautomers were calculated from, and including, the original structure records. Tautomerism overlap within the same individual database (i.e. at least one other entry was present that was really only a different tautomeric representation of the same compound) was found at an average rate of 0.3% of the original structure records, with values as high as nearly 2% for some of the databases in CSDB. Projected onto the set of unique structures (by FICuS identifier), this still occurred in about 1.5% of the cases. Tautomeric overlap across all constituent databases in CSDB was found for nearly 10% of the records in the collection. PMID:20512400
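The overlap analysis described above reduces to a simple idea: once every structure is mapped to a canonical-tautomer identifier, records that collide on that identifier are tautomeric duplicates. The sketch below illustrates this with a stand-in canonicalizer (a lookup of one known enol/keto pair); it is not the CACTVS transform rules or the FICuS identifier scheme.

```python
from collections import defaultdict

def canonical_id(structure):
    # Stand-in canonicalizer: map known tautomer spellings to one canonical key.
    # A real system would apply transform rules and a scoring scheme instead.
    aliases = {"2-hydroxypyridine": "2-pyridone"}  # enol <-> keto pair
    return aliases.get(structure, structure)

def tautomer_overlap(records):
    """Group records by canonical identifier; keep groups with duplicates."""
    groups = defaultdict(list)
    for rec in records:
        groups[canonical_id(rec)].append(rec)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}

db = ["2-pyridone", "2-hydroxypyridine", "benzene", "ethanol"]
print(tautomer_overlap(db))  # {'2-pyridone': ['2-pyridone', '2-hydroxypyridine']}
```

At the scale of the CSDB the same grouping is done over hash codes rather than names, but the collision-counting logic is the same.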

  20. JICST Factual Database: JICST Chemical Substance Safety Regulation Database

    NASA Astrophysics Data System (ADS)

    Abe, Atsushi; Sohma, Tohru

    JICST Chemical Substance Safety Regulation Database is based on the Database of Safety Laws for Chemical Compounds constructed by the Japan Chemical Industry Ecology-Toxicology & Information Center (JETOC), sponsored by the Science and Technology Agency, in 1987. JICST modified the JETOC database system, added data, and started the online service through JOIS-F (JICST Online Information Service - Factual database) in January 1990. The JICST database comprises eighty-three laws and fourteen hundred compounds. The authors outline the database, its data items, files, and search commands. An example online session is presented.

  1. ITER solid breeder blanket materials database

    SciTech Connect

    Billone, M.C.; Dienst, W.; Flament, T.; Lorenzetto, P.; Noda, K.; Roux, N.

    1993-11-01

    The databases for solid breeder ceramics (Li{sub 2}O, Li{sub 4}SiO{sub 4}, Li{sub 2}ZrO{sub 3} and LiAlO{sub 2}) and beryllium multiplier material are critically reviewed and evaluated. Emphasis is placed on physical, thermal, mechanical, chemical stability/compatibility, tritium, and radiation stability properties which are needed to assess the performance of these materials in a fusion reactor environment. Correlations are selected for design analysis and compared to the database. Areas for future research and development in blanket materials technology are highlighted and prioritized.

  2. NASA aerospace database subject scope: An overview

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Outlined here is the subject scope of the NASA Aerospace Database, a publicly available subset of the NASA Scientific and Technical Information (STI) Database. Topics of interest to NASA are outlined and placed within the framework of the following broad aerospace subject categories: aeronautics, astronautics, chemistry and materials, engineering, geosciences, life sciences, mathematical and computer sciences, physics, social sciences, space sciences, and general. A brief discussion of the subject scope is given for each broad area, followed by a similar explanation of each of its narrower subject fields. The subject category code is listed for each entry.

  3. Subject Retrieval from Full-Text Databases in the Humanities

    ERIC Educational Resources Information Center

    East, John W.

    2007-01-01

    This paper examines the problems involved in subject retrieval from full-text databases of secondary materials in the humanities. Ten such databases were studied and their search functionality evaluated, focusing on factors such as Boolean operators, document surrogates, limiting by subject area, proximity operators, phrase searching, wildcards,…

  4. Online Information. Selected Databases at the New York State Library.

    ERIC Educational Resources Information Center

    New York State Library, Albany. Database Services.

    This brochure describes the online information services at the New York State Library, which has online access to over 250 databases covering a broad range of subject areas, including current events, law, science, medicine, public affairs, grants, business, computer technology, education, social welfare, and humanities. Many of these databases are…

  5. ARTI Refrigerant Database

    SciTech Connect

    Calm, J.M.

    1992-11-09

    The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents on compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. A computerized version is available that includes retrieval software.

  6. Surgical research using national databases.

    PubMed

    Alluri, Ram K; Leland, Hyuma; Heckmann, Nathanael

    2016-10-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research.

  7. Surgical research using national databases

    PubMed Central

    Leland, Hyuma; Heckmann, Nathanael

    2016-01-01

    Recent changes in healthcare and advances in technology have increased the use of large-volume national databases in surgical research. These databases have been used to develop perioperative risk stratification tools, assess postoperative complications, calculate costs, and investigate numerous other topics across multiple surgical specialties. The results of these studies contain variable information but are subject to unique limitations. The use of large-volume national databases is increasing in popularity, and thorough understanding of these databases will allow for a more sophisticated and better educated interpretation of studies that utilize such databases. This review will highlight the composition, strengths, and weaknesses of commonly used national databases in surgical research. PMID:27867945

  8. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-07-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  9. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1999-01-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  10. COBE Astronomical Databases

    NASA Astrophysics Data System (ADS)

    Freedman, I.; Raugh, A. C.; Cheng, E. S.

    A project to store and convert external astronomical survey maps to the Cosmic Background Explorer (COBE) spacecraft pixelization is described. Established software is reused in order to reduce development costs. The proposed packages and systems include the Image Reduction and Analysis Facility (IRAF), the Interactive Data Language (IDL) Astronomy Library, the FITSIO data transfer package and the Astronomical Image Processing System (AIPS). The software structure of the astronomical databases, projected conversion schemes, quality assurance procedures and outstanding problems will be discussed.

  11. Developing customer databases.

    PubMed

    Rao, S K; Shenbaga, S

    2000-01-01

    There is a growing consensus among pharmaceutical companies that more product and customer-specific approaches to marketing and selling a new drug can result in substantial increases in sales. Marketers and researchers taking a proactive micro-marketing approach to identifying, profiling, and communicating with target customers are likely to facilitate such approaches and outcomes. This article provides a working framework for creating customer databases that can be effectively mined to achieve a variety of such marketing and sales force objectives.

  12. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1996-11-15

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern.

  13. The Danish Melanoma Database

    PubMed Central

    Hölmich, Lisbet Rosenkrantz; Klausen, Siri; Spaun, Eva; Schmidt, Grethe; Gad, Dorte; Svane, Inge Marie; Schmidt, Henrik; Lorentzen, Henrik Frank; Ibfelt, Else Helene

    2016-01-01

    Aim of database The aim of the database is to monitor and improve the treatment and survival of melanoma patients. Study population All Danish patients with cutaneous melanoma and in situ melanomas must be registered in the Danish Melanoma Database (DMD). In 2014, 2,525 patients with invasive melanoma and 780 with in situ tumors were registered. The coverage is currently 93% compared with the Danish Pathology Register. Main variables The main variables include demographic, clinical, and pathological characteristics, including Breslow’s tumor thickness, ± ulceration, mitoses, and tumor–node–metastasis stage. Information about the date of diagnosis, treatment, type of surgery, including safety margins, results of lymphoscintigraphy in patients for whom this was indicated (tumors > T1a), results of sentinel node biopsy, pathological evaluation hereof, and follow-up information, including recurrence, nature, and treatment hereof is registered. In case of death, the cause and date are included. Currently, all data are entered manually; however, data catchment from the existing registries is planned to be included shortly. Descriptive data The DMD is an old research database, but new as a clinical quality register. The coverage is high, and the performance in the five Danish regions is quite similar due to strong adherence to guidelines provided by the Danish Melanoma Group. The list of monitored indicators is constantly expanding, and annual quality reports are issued. Several important scientific studies are based on DMD data. Conclusion DMD holds unique detailed information about tumor characteristics, the surgical treatment, and follow-up of Danish melanoma patients. Registration and monitoring is currently expanding to encompass even more clinical parameters to benefit both patient treatment and research. PMID:27822097

  14. Electronic Journals as Databases

    NASA Astrophysics Data System (ADS)

    Holl, A.

    2004-07-01

    The Information Bulletin on Variable Stars is a bulletin fully available in electronic form. We are working on converting the text, tables and figures of the papers published into a database, and, at the same time, making them accessible and addressable. IBVS Data Service will provide information on variable stars --- like finding charts, light curves --- and will be VO compatible. Other services could link to individual figures, data files, etc. this way.

  15. Real Time Baseball Database

    NASA Astrophysics Data System (ADS)

    Fukue, Yasuhiro

    The author describes the system outline, features and operations of the "Nikkan Sports Realtime Baseball Database," which was developed and operated by Nikkan Sports Shimbun, K. K. The system enables numerical data of professional baseball games to be input as the games proceed and updates the data in real time, just in time. Besides serving as a supporting tool for preparing newspapers, it is also available to broadcasting media and to general users through NTT Dial Q2 and other services.

  16. The Danish Sarcoma Database

    PubMed Central

    Jørgensen, Peter Holmberg; Lausten, Gunnar Schwarz; Pedersen, Alma B

    2016-01-01

    Aim The aim of the database is to gather information about sarcomas treated in Denmark in order to continuously monitor and improve the quality of sarcoma treatment in a local, a national, and an international perspective. Study population Patients in Denmark diagnosed with a sarcoma, both skeletal and extraskeletal, have been registered since 2009. Main variables The database contains information about appearance of symptoms; date of receiving referral to a sarcoma center; date of first visit; whether surgery has been performed elsewhere before referral, diagnosis, and treatment; tumor characteristics such as location, size, malignancy grade, and growth pattern; details on treatment (kind of surgery, amount of radiation therapy, type and duration of chemotherapy); complications of treatment; local recurrence and metastases; and comorbidity. In addition, several quality indicators are registered in order to measure the quality of care provided by the hospitals and make comparisons between hospitals and with international standards. Descriptive data Demographic patient-specific data such as age, sex, region of living, comorbidity, World Health Organization’s International Classification of Diseases – tenth edition codes and TNM Classification of Malignant Tumours, and date of death (after yearly coupling to the Danish Civil Registration System). Data quality and completeness are currently secured. Conclusion The Danish Sarcoma Database is population based and includes sarcomas occurring in Denmark since 2009. It is a valuable tool for monitoring sarcoma incidence and quality of treatment and its improvement, postoperative complications, and recurrence within 5 years follow-up. The database is also a valuable research tool to study the impact of technical and medical interventions on prognosis of sarcoma patients. PMID:27822116

  17. Unified Database Development Program.

    DTIC Science & Technology

    1984-03-01

    The unified database (UDB) program was to develop an automated system that would be useful to those responsible for the design, development, testing, and ... weapon system design. Background: The Air Force is concerned with the lack of adequate logistics consideration during the weapon system design process. To ... produce a weapon system with optimal cost and mission effectiveness, logistics factors must be considered very early and throughout the system design.

  18. Naval sensor data database (NSDD)

    NASA Astrophysics Data System (ADS)

    Robertson, Candace J.; Tubridy, Lisa H.

    1999-08-01

    The Naval Sensor Data Database (NSDD) is a multi-year effort to archive, catalogue, and disseminate data from all types of sensors to the mine warfare, signal and image processing, and sensor development communities. The purpose is to improve and accelerate research and technology. Providing performers with the data required to develop and validate improvements in hardware, simulation, and processing will foster advances in sensor and system performance. The NSDD will provide a centralized source of sensor data and its associated ground truth, which will support improved understanding and benefit the areas of signal processing, computer-aided detection and classification, data compression, data fusion, and geo-referencing, as well as sensor and sensor system design.

  19. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1997-02-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on various refrigerants. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  20. ARTI refrigerant database

    SciTech Connect

    Calm, J.M.

    1998-08-01

    The Refrigerant Database is an information system on alternative refrigerants, associated lubricants, and their use in air conditioning and refrigeration. It consolidates and facilitates access to property, compatibility, environmental, safety, application and other information. It provides corresponding information on older refrigerants, to assist manufacturers and those using alternative refrigerants, to make comparisons and determine differences. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included, though some may be added at a later date. The database identifies sources of specific information on many refrigerants including propane, ammonia, water, carbon dioxide, propylene, ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, polyolester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents. They are included to accelerate availability of the information and will be completed or replaced in future updates.

  1. The Cambridge Structural Database.

    PubMed

    Groom, Colin R; Bruno, Ian J; Lightfoot, Matthew P; Ward, Suzanna C

    2016-04-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal-organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface.

  2. ARTI Refrigerant Database

    SciTech Connect

    Cain, J.M.

    1993-04-30

    The Refrigerant Database consolidates and facilitates access to information to assist industry in developing equipment using alternative refrigerants. The underlying purpose is to accelerate phase out of chemical compounds of environmental concern. The database provides bibliographic citations and abstracts for publications that may be useful in research and design of air-conditioning and refrigeration equipment. The complete documents are not included. The database identifies sources of specific information on R-32, R-123, R-124, R-125, R-134, R-134a, R-141b, R-142b, R-143a, R-152a, R-245ca, R-290 (propane), R-717 (ammonia), ethers, and others as well as azeotropic and zeotropic blends of these fluids. It addresses lubricants including alkylbenzene, polyalkylene glycol, ester, and other synthetics as well as mineral oils. It also references documents addressing compatibility of refrigerants and lubricants with metals, plastics, elastomers, motor insulation, and other materials used in refrigerant circuits. Incomplete citations or abstracts are provided for some documents to accelerate availability of the information and will be completed or replaced in future updates.

  3. State Analysis Database Tool

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert; Bennett, Matthew

    2006-01-01

    The State Analysis Database Tool software establishes a productive environment for collaboration among software and system engineers engaged in the development of complex interacting systems. The tool embodies State Analysis, a model-based system engineering methodology founded on a state-based control architecture (see figure). A state represents a momentary condition of an evolving system, and a model may describe how a state evolves and is affected by other states. The State Analysis methodology is a process for capturing system and software requirements in the form of explicit models and states, and defining goal-based operational plans consistent with the models. Requirements, models, and operational concerns have traditionally been documented in a variety of system engineering artifacts that address different aspects of a mission's lifecycle. In State Analysis, requirements, models, and operations information are State Analysis artifacts that are consistent and stored in a State Analysis Database. The tool includes a back-end database, a multi-platform front-end client, and Web-based administrative functions. The tool is structured to prompt an engineer to follow the State Analysis methodology, to encourage state discovery and model description, and to make software requirements and operations plans consistent with model descriptions.
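The core bookkeeping the abstract describes, states plus models recording which states affect which, can be sketched as follows. The class names, state names, and consistency check are hypothetical illustrations of the idea, not the tool's actual schema.

```python
# Hypothetical sketch of State Analysis bookkeeping: a state is a named
# condition of the system; a model records which other states affect it.
# A registry refuses models that reference undiscovered states, mirroring
# the consistency the methodology asks for.
from dataclasses import dataclass, field

@dataclass
class Model:
    describes: str                 # state whose evolution this model captures
    affected_by: list = field(default_factory=list)

@dataclass
class StateAnalysisDB:
    states: set = field(default_factory=set)
    models: list = field(default_factory=list)

    def add_state(self, name):
        self.states.add(name)

    def add_model(self, describes, affected_by):
        # every state a model mentions must already have been discovered
        for s in [describes, *affected_by]:
            if s not in self.states:
                raise KeyError(f"undiscovered state: {s}")
        self.models.append(Model(describes, list(affected_by)))

db = StateAnalysisDB()
for s in ("battery_charge", "solar_input", "load_current"):
    db.add_state(s)
db.add_model("battery_charge", ["solar_input", "load_current"])
print(len(db.models))  # 1
```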

  4. The Cambridge Structural Database

    PubMed Central

    Groom, Colin R.; Bruno, Ian J.; Lightfoot, Matthew P.; Ward, Suzanna C.

    2016-01-01

    The Cambridge Structural Database (CSD) contains a complete record of all published organic and metal–organic small-molecule crystal structures. The database has been in operation for over 50 years and continues to be the primary means of sharing structural chemistry data and knowledge across disciplines. As well as structures that are made public to support scientific articles, it includes many structures published directly as CSD Communications. All structures are processed both computationally and by expert structural chemistry editors prior to entering the database. A key component of this processing is the reliable association of the chemical identity of the structure studied with the experimental data. This important step helps ensure that data is widely discoverable and readily reusable. Content is further enriched through selective inclusion of additional experimental data. Entries are available to anyone through free CSD community web services. Linking services developed and maintained by the CCDC, combined with the use of standard identifiers, facilitate discovery from other resources. Data can also be accessed through CCDC and third party software applications and through an application programming interface. PMID:27048719

  5. Fiber pixelated image database

    NASA Astrophysics Data System (ADS)

    Shinde, Anant; Perinchery, Sandeep Menon; Matham, Murukeshan Vadakke

    2016-08-01

    Imaging of physically inaccessible parts of the body such as the colon at micron-level resolution is highly important in diagnostic medical imaging. Though flexible endoscopes based on the imaging fiber bundle are used for such diagnostic procedures, their inherent honeycomb-like structure creates fiber pixelation effects. This impedes the observer from perceiving the information from an image captured and hinders the direct use of image processing and machine intelligence techniques on the recorded signal. Significant efforts have been made by researchers in the recent past in the development and implementation of pixelation removal techniques. However, researchers have often used their own set of images without making source data available which subdued their usage and adaptability universally. A database of pixelated images is the current requirement to meet the growing diagnostic needs in the healthcare arena. An innovative fiber pixelated image database is presented, which consists of pixelated images that are synthetically generated and experimentally acquired. Sample space encompasses test patterns of different scales, sizes, and shapes. It is envisaged that this proposed database will alleviate the current limitations associated with relevant research and development and would be of great help for researchers working on comb structure removal algorithms.

  6. Generalized Database Management System Support for Numeric Database Environments.

    ERIC Educational Resources Information Center

    Dominick, Wayne D.; Weathers, Peggy G.

    1982-01-01

    This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…

  7. SmallSat Database

    NASA Technical Reports Server (NTRS)

    Petropulos, Dolores; Bittner, David; Murawski, Robert; Golden, Bert

    2015-01-01

    The SmallSat has an unrealized potential in both the private industry and in the federal government. Currently over 70 companies, 50 universities and 17 governmental agencies are involved in SmallSat research and development. In 1994, the U.S. Army Missile and Defense mapped the moon using smallSat imagery. Since then Smart Phones have introduced this imagery to the people of the world as diverse industries watched this trend. The deployment cost of smallSats is also greatly reduced compared to traditional satellites due to the fact that multiple units can be deployed in a single mission. Imaging payloads have become more sophisticated, smaller and lighter. In addition, the growth of small technology obtained from private industries has led to the more widespread use of smallSats. This includes greater revisit rates in imagery, significantly lower costs, the ability to update technology more frequently and the ability to decrease vulnerability of enemy attacks. The popularity of smallSats shows a changing mentality in this fast paced world of tomorrow. What impact has this created on the NASA communication networks now and in future years? In this project, we are developing the SmallSat Relational Database which can support a simulation of smallSats within the NASA SCaN Compatibility Environment for Networks and Integrated Communications (SCENIC) Modeling and Simulation Lab. The NASA Space Communications and Networks (SCaN) Program can use this modeling to project required network support needs in the next 10 to 15 years. The SmallSat Relational Database could model smallSats just as the other SCaN databases model the more traditional larger satellites, with a few exceptions. One being that the SmallSat Database is designed to be built-to-order. The SmallSat Database holds various hardware configurations that can be used to model a smallSat. It will require significant effort to develop, as the research material can only be populated by hand to obtain the unique data.

  8. Functional ceramic materials database: an online resource for materials research.

    PubMed

    Scott, D J; Manos, S; Coveney, P V; Rossiny, J C H; Fearn, S; Kilner, J A; Pullar, R C; Alford, N Mc N; Axelsson, A-K; Zhang, Y; Chen, L; Yang, S; Evans, J R G; Sebastian, M T

    2008-02-01

    We present work on the creation of a ceramic materials database which contains data gleaned from literature data sets as well as new data obtained from combinatorial experiments on the London University Search Instrument. At the time of this writing, the database contains data related to two main groups of materials, mainly in the perovskite family. Permittivity measurements of electroceramic materials are the first area of interest, while ion diffusion measurements of oxygen ion conductors are the second. The nature of the database design does not restrict the type of measurements which can be stored; as the available data increase, the database may become a generic, publicly available ceramic materials resource.
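A schema that "does not restrict the type of measurements which can be stored", as the abstract puts it, is commonly achieved with a generic property table: one row per (material, property, value, unit). The sqlite3 sketch below illustrates that pattern; the table layout and example values are hypothetical, not the actual database design.

```python
# Illustrative generic-measurement schema: new property types need no
# schema change, only new rows. Layout and values are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE measurement (
    material TEXT, property TEXT, value REAL, unit TEXT)""")
rows = [
    ("BaTiO3", "relative_permittivity", 1200.0, ""),
    ("La0.8Sr0.2CoO3", "oxygen_diffusion", 1.2e-9, "cm^2/s"),
]
con.executemany("INSERT INTO measurement VALUES (?, ?, ?, ?)", rows)
# Queries filter on the property name, so permittivity data and ion
# diffusion data coexist in one table without separate schemas.
n, = con.execute(
    "SELECT COUNT(*) FROM measurement WHERE property = ?",
    ("relative_permittivity",)).fetchone()
print(n)  # 1
```

The trade-off of this pattern is weaker typing per property (every value is a bare REAL plus a unit string), which is why such databases often pair it with curation rules at ingest time.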

  9. High Temperature Superconducting Materials Database

    National Institute of Standards and Technology Data Gateway

    SRD 149 NIST High Temperature Superconducting Materials Database (Web, free access)   The NIST High Temperature Superconducting Materials Database (WebHTS) provides evaluated thermal, mechanical, and superconducting property data for oxides and other nonconventional superconductors.

  10. Mobile Source Observation Database (MSOD)

    EPA Pesticide Factsheets

    The Mobile Source Observation Database (MSOD) is a relational database developed by the Assessment and Standards Division (ASD) of the U.S. EPA Office of Transportation and Air Quality (formerly the Office of Mobile Sources).

  11. A Case for Database Filesystems

    SciTech Connect

    Adams, P A; Hax, J C

    2009-05-13

    Data intensive science is offering new challenges and opportunities for Information Technology and traditional relational databases in particular. Database filesystems offer the potential to store Level Zero data and analyze Level 1 and Level 3 data within the same database system [2]. Scientific data is typically composed of both unstructured files and scalar data. Oracle SecureFiles is a new database filesystem feature in Oracle Database 11g that is specifically engineered to deliver high performance and scalability for storing unstructured or file data inside the Oracle database. SecureFiles presents the best of both the filesystem and the database worlds for unstructured content. Data stored inside SecureFiles can be queried or written at performance levels comparable to that of traditional filesystems while retaining the advantages of the Oracle database.
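The pattern the abstract describes, unstructured file bytes stored beside scalar columns in one queryable table, can be sketched generically with BLOBs. Here sqlite3 stands in for Oracle SecureFiles purely for illustration, and the table layout is hypothetical.

```python
# Generic sketch of file-in-database storage (sqlite3 standing in for
# Oracle SecureFiles): raw bytes and scalar metadata share one table,
# so files can be filtered with SQL and read back byte-for-byte.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE files (
    name TEXT PRIMARY KEY, level INTEGER, payload BLOB)""")
raw = b"\x00\x01raw instrument frame\x02"
con.execute("INSERT INTO files VALUES (?, ?, ?)", ("frame_0001", 0, raw))
# Select Level Zero products by a scalar column, like any other row...
name, payload = con.execute(
    "SELECT name, payload FROM files WHERE level = 0").fetchone()
assert payload == raw  # ...and the unstructured content round-trips intact
print(name)  # frame_0001
```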

  12. ThermoData Engine Database

    National Institute of Standards and Technology Data Gateway

    SRD 103 NIST ThermoData Engine Database (PC database for purchase)   ThermoData Engine is the first product fully implementing all major principles of the concept of dynamic data evaluation formulated at NIST/TRC.

  13. The Reach Address Database (RAD)

    EPA Pesticide Factsheets

The Reach Address Database (RAD) stores reach address information for each Water Program feature that has been linked to the underlying surface water features (streams, lakes, etc.) in the National Hydrography Dataset (NHD) Plus dataset.

  14. Hydrogen Leak Detection Sensor Database

    NASA Technical Reports Server (NTRS)

    Baker, Barton D.

    2010-01-01

    This slide presentation reviews the characteristics of the Hydrogen Sensor database. The database is the result of NASA's continuing interest in and improvement of its ability to detect and assess gas leaks in space applications. The database specifics and a snapshot of an entry in the database are reviewed. Attempts were made to determine the applicability of each of the 65 sensors for ground and/or vehicle use.

  15. A Forest Vegetation Database for Western Oregon

    USGS Publications Warehouse

    Busing, Richard T.

    2004-01-01

    Data on forest vegetation in western Oregon were assembled for 2323 ecological survey plots. All data were from fixed-radius plots with the standardized design of the Current Vegetation Survey (CVS) initiated in the early 1990s. For each site, the database includes: 1) live tree density and basal area of common tree species, 2) total live tree density, basal area, estimated biomass, and estimated leaf area; 3) age of the oldest overstory tree examined, 4) geographic coordinates, 5) elevation, 6) interpolated climate variables, and 7) other site variables. The data are ideal for ecoregional analyses of existing vegetation.

  16. GMDD: a database of GMO detection methods

    PubMed Central

    Dong, Wei; Yang, Litao; Shen, Kailin; Kim, Banghyun; Kleter, Gijs A; Marvin, Hans JP; Guo, Rong; Liang, Wanqi; Zhang, Dabing

    2008-01-01

    Background Since more than one hundred genetically modified organism (GMO) events have been developed and approved for commercialization worldwide, GMO analysis methods are essential for the enforcement of GMO labelling regulations. Protein- and nucleic acid-based detection techniques have been developed and utilized for GMO identification and quantification. However, information supporting harmonization and standardization of GMO analysis methods at the global level is needed. Results The GMO Detection method Database (GMDD) collects almost all previously developed and reported GMO detection methods, grouped by strategy (screen-, gene-, construct-, and event-specific), and provides a user-friendly search service for detection methods by GMO event name, exogenous gene, protein information, etc. In this database, users can obtain the sequences of exogenous integrations, which will facilitate the design of PCR primers and probes. Information on endogenous genes, certified reference materials, reference molecules, and the validation status of developed methods is also included. Furthermore, registered users can submit new detection methods and sequences to this database, and newly submitted information is released soon after being checked. Conclusion GMDD contains comprehensive information on GMO detection methods. The database will make GMO analysis much easier. PMID:18522755

  17. Microbial Properties Database Editor Tutorial

    EPA Science Inventory

    A Microbial Properties Database Editor (MPDBE) has been developed to help consolidate microbial-relevant data to populate a microbial database and support a database editor by which an authorized user can modify physico-microbial properties related to microbial indicators and pat...

  18. Scientific and Technical Document Database

    National Institute of Standards and Technology Data Gateway

    NIST Scientific and Technical Document Database (PC database for purchase)   The images in NIST Special Database 20 contain a very rich set of graphic elements from scientific and technical documents, such as graphs, tables, equations, two column text, maps, pictures, footnotes, annotations, and arrays of such elements.

  19. Choosing among the physician databases.

    PubMed

    Heller, R H

    1988-04-01

    Prudent examination and knowing how to ask the "right questions" can enable hospital marketers and planners to find the most accurate and appropriate database. The author compares the comprehensive AMA physician database with the less expensive MEDEC database to determine their strengths and weaknesses.

  20. A Spanish American War Database.

    ERIC Educational Resources Information Center

    Hands, Edmund

    1992-01-01

    Discusses a database used by honors high school U.S. history students learning about the Spanish-American War. Reports that the students compiled the database. Includes some of the historical background of the war, questions for study, a database key, and a table showing U.S. senators' votes relating to the War. (SG)

  1. EMU Lessons Learned Database

    NASA Technical Reports Server (NTRS)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool that will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which contain information on past suit failures. FIARs use a system of codes that give more information on the aspects of the failure, but anyone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information, but to present it in a user-friendly, organized, searchable database accessible at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. 
FIARs related to the EMU have been completed in the Excel version, and now focus has shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as

  2. High-integrity databases for helicopter operations

    NASA Astrophysics Data System (ADS)

    Pschierer, Christian; Schiefele, Jens; Lüthy, Juerg

    2009-05-01

    Helicopter Emergency Medical Service (HEMS) missions impose a high workload on pilots due to short preparation time, operations in low-level flight, and landings in unknown areas. The research project PILAS, a cooperation between Eurocopter, Diehl Avionics, DLR, EADS, Euro Telematik, ESG, Jeppesen, and the Universities of Darmstadt and Munich, funded by the German government, approached this problem by researching a pilot assistance system which supports the pilots during all phases of flight. The databases required for the specified helicopter missions include different types of topological and cultural data for graphical display on the SVS system, AMDB data for operations at airports and helipads, and navigation data for IFR segments. The most critical databases for the PILAS system, however, are highly accurate terrain and obstacle data. While RTCA DO-276 specifies high accuracies and integrities only for the areas around airports, HEMS helicopters typically operate outside of these controlled areas and thus require highly reliable terrain and obstacle data for their designated response areas. This data has been generated by a LIDAR scan of the specified test region. Obstacles have been extracted into a vector format. This paper includes a short overview of the complete PILAS system and then focuses on the generation of the required high-quality databases.

  3. DOE technology information management system database study report

    SciTech Connect

    Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L.

    1994-11-01

    To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.

  4. Construction of file database management

    SciTech Connect

    MERRILL,KYLE J.

    2000-03-01

    This work created a database for tracking data analysis files from multiple lab techniques and equipment stored on a central file server. Experimental details appropriate for each file type are pulled from the file header and stored in a searchable database. The database also stores the location and directory structure for each data file. Queries can be run on the database according to file type, sample type, or other experimental parameters. The database was constructed in Microsoft Access, and Visual Basic was used for extraction of information from the file header.
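The header-harvesting workflow described above can be re-sketched in Python with the standard-library `sqlite3` module (the original used Microsoft Access and Visual Basic). The "Key: value" header layout, field names, and file path below are hypothetical illustrations, not the actual lab file format.

```python
import sqlite3

# Sketch: pull key/value fields from a file header, store them with the
# file's location, and query by experimental parameter. The header format
# here is a made-up "Key: value" layout for illustration.
def parse_header(text):
    fields = {}
    for line in text.splitlines():
        if ":" not in line:
            break  # header ends at the first non key/value line
        key, _, value = line.partition(":")
        fields[key.strip().lower()] = value.strip()
    return fields

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (path TEXT, instrument TEXT, sample TEXT)")

header = "Instrument: XRD-7\nSample: quartz-12\nOperator: kjm"
meta = parse_header(header)
conn.execute("INSERT INTO files VALUES (?, ?, ?)",
             ("/data/xrd/run7.raw", meta.get("instrument"), meta.get("sample")))

# Query by sample type, as the abstract describes.
rows = conn.execute("SELECT path FROM files WHERE sample = 'quartz-12'").fetchall()
print(rows)
```

In practice one such `parse_header` would exist per instrument file type, all feeding the same table.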

  5. Databases as an information service

    NASA Technical Reports Server (NTRS)

    Vincent, D. A.

    1983-01-01

    The relationship of databases to information services, and the range of information services users and their needs for information is explored and discussed. It is argued that for database information to be valuable to a broad range of users, it is essential that access methods be provided that are relatively unstructured and natural to information services users who are interested in the information contained in databases, but who are not willing to learn and use traditional structured query languages. Unless this ease of use of databases is considered in the design and application process, the potential benefits from using database systems may not be realized.

  6. NLCD 2011 database

    EPA Pesticide Factsheets

    National Land Cover Database 2011 (NLCD 2011) is the most recent national land cover product created by the Multi-Resolution Land Characteristics (MRLC) Consortium. NLCD 2011 provides - for the first time - the capability to assess wall-to-wall, spatially explicit, national land cover changes and trends across the United States from 2001 to 2011. As with the two previous NLCD land cover products, NLCD 2011 keeps the same 16-class land cover classification scheme that has been applied consistently across the United States at a spatial resolution of 30 meters. NLCD 2011 is based primarily on a decision-tree classification of circa 2011 Landsat satellite data. This dataset is associated with the following publication: Homer, C., J. Dewitz, L. Yang, S. Jin, P. Danielson, G. Xian, J. Coulston, N. Herold, J. Wickham, and K. Megown. Completion of the 2011 National Land Cover Database for the Conterminous United States – Representing a Decade of Land Cover Change Information. PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING. American Society for Photogrammetry and Remote Sensing, Bethesda, MD, USA, 81(0): 345-354, (2015).

  7. Asbestos Exposure Assessment Database

    NASA Technical Reports Server (NTRS)

    Arcot, Divya K.

    2010-01-01

    Exposure to particular hazardous materials in a work environment is dangerous to the employees who work directly with or around the materials as well as those who come in contact with them indirectly. In order to maintain a national standard for safe working environments and protect worker health, the Occupational Safety and Health Administration (OSHA) has set forth numerous precautionary regulations. NASA has been proactive in adhering to these regulations by implementing standards which are often stricter than regulation limits and administering frequent health risk assessments. The primary objective of this project is to create the infrastructure for an Asbestos Exposure Assessment Database specific to NASA Johnson Space Center (JSC) which will compile all of the exposure assessment data into a well-organized, navigable format. The data includes Sample Types, Sample Durations, Crafts of those from whom samples were collected, Job Performance Requirements (JPR) numbers, Phased Contrast Microscopy (PCM) and Transmission Electron Microscopy (TEM) results and qualifiers, Personal Protective Equipment (PPE), and names of industrial hygienists who performed the monitoring. This database will allow NASA to provide OSHA with specific information demonstrating that JSC's work procedures are protective enough to minimize the risk of future disease from the exposures. The data has been collected by the NASA contractors Computer Sciences Corporation (CSC) and Wyle Laboratories. The personal exposure samples were collected from devices worn by laborers working at JSC and by building occupants located in asbestos-containing buildings.

  8. The ITPA disruption database

    NASA Astrophysics Data System (ADS)

    Eidietis, N. W.; Gerhardt, S. P.; Granetz, R. S.; Kawano, Y.; Lehnen, M.; Lister, J. B.; Pautasso, G.; Riccardo, V.; Tanna, R. L.; Thornton, A. J.; ITPA Disruption Database Participants, The

    2015-06-01

    A multi-device database of disruption characteristics has been developed under the auspices of the International Tokamak Physics Activity magneto-hydrodynamics topical group. The purpose of this ITPA disruption database (IDDB) is to find the commonalities between the disruption and disruption mitigation characteristics in a wide variety of tokamaks in order to elucidate the physics underlying tokamak disruptions and to extrapolate toward much larger devices, such as ITER and future burning plasma devices. In contrast to previous smaller disruption data collation efforts, the IDDB aims to provide significant context for each shot provided, allowing exploration of a wide array of relationships between pre-disruption and disruption parameters. The IDDB presently includes contributions from nine tokamaks, including both conventional aspect ratio and spherical tokamaks. An initial parametric analysis of the available data is presented. This analysis includes current quench rates, halo current fraction and peaking, and the effectiveness of massive impurity injection. The IDDB is publicly available, with instruction for access provided herein.

  9. The TIGR Maize Database.

    PubMed

    Chan, Agnes P; Pertea, Geo; Cheung, Foo; Lee, Dan; Zheng, Li; Whitelaw, Cathy; Pontaroli, Ana C; SanMiguel, Phillip; Yuan, Yinan; Bennetzen, Jeffrey; Barbazuk, William Brad; Quackenbush, John; Rabinowicz, Pablo D

    2006-01-01

    Maize is a staple crop of the grass family and also an excellent model for plant genetics. Owing to the large size and repetitiveness of its genome, we previously investigated two approaches to accelerate gene discovery and genome analysis in maize: methylation filtration and high C(0)t selection. These techniques allow the construction of gene-enriched genomic libraries by minimizing repeat sequences due to either their methylation status or their copy number, yielding a 7-fold enrichment in genic sequences relative to a random genomic library. Approximately 900,000 gene-enriched reads from maize were generated and clustered into Assembled Zea mays (AZM) sequences. Here we report the current AZM release, which consists of approximately 298 Mb representing 243,807 sequence assemblies and singletons. In order to provide a repository of publicly available maize genomic sequences, we have created the TIGR Maize Database (http://maize.tigr.org). In this resource, we have assembled and annotated the AZMs and used available sequenced markers to anchor AZMs to maize chromosomes. We have constructed a maize repeat database and generated draft sequence assemblies of 287 maize bacterial artificial chromosome (BAC) clone sequences, which we annotated along with 172 additional publicly available BAC clones. All sequences, assemblies and annotations are available at the project website via web interfaces and FTP downloads.

  10. IPD: the Immuno Polymorphism Database.

    PubMed

    Robinson, James; Marsh, Steven G E

    2007-01-01

    The Immuno Polymorphism Database (IPD) (http://www.ebi.ac.uk/ipd/) is a set of specialist databases related to the study of polymorphic genes in the immune system. IPD currently consists of four databases: IPD-KIR, contains the allelic sequences of killer cell immunoglobulin-like receptors (KIRs); IPD-MHC, a database of sequences of the major histocompatibility complex (MHC) of different species; IPD-HPA, alloantigens expressed only on platelets; and IPD-ESTDAB, which provides access to the European Searchable Tumour Cell Line Database, a cell bank of immunologically characterized melanoma cell lines. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. Those sections with similar data, such as IPD-KIR and IPD-MHC, share the same database structure.

  11. National Geochronological Database

    USGS Publications Warehouse

    Revised by Sloan, Jan; Henry, Christopher D.; Hopkins, Melanie; Ludington, Steve; Original database by Zartman, Robert E.; Bush, Charles A.; Abston, Carl

    2003-01-01

    The National Geochronological Data Base (NGDB) was established by the United States Geological Survey (USGS) to collect and organize published isotopic (also known as radiometric) ages of rocks in the United States. The NGDB (originally known as the Radioactive Age Data Base, RADB) was started in 1974. A committee appointed by the Director of the USGS was given the mission to investigate the feasibility of compiling the published radiometric ages for the United States into a computerized data bank for ready access by the user community. A successful pilot program, which was conducted in 1975 and 1976 for the State of Wyoming, led to a decision to proceed with the compilation of the entire United States. For each dated rock sample reported in published literature, a record containing information on sample location, rock description, analytical data, age, interpretation, and literature citation was constructed and included in the NGDB. The NGDB was originally constructed and maintained on a mainframe computer, and later converted to a Helix Express relational database maintained on an Apple Macintosh desktop computer. The NGDB and a program to search the data files were published and distributed on Compact Disc-Read Only Memory (CD-ROM) in standard ISO 9660 format as USGS Digital Data Series DDS-14 (Zartman and others, 1995). As of May 1994, the NGDB consisted of more than 18,000 records containing over 30,000 individual ages, which is believed to represent approximately one-half the number of ages published for the United States through 1991. Because the organizational unit responsible for maintaining the database was abolished in 1996, and because we wanted to provide the data in more usable formats, we have reformatted the data, checked and edited the information in some records, and provided this online version of the NGDB. This report describes the changes made to the data and formats, and provides instructions for the use of the database in geographic

  12. A Scalable Database Infrastructure

    NASA Astrophysics Data System (ADS)

    Arko, R. A.; Chayes, D. N.

    2001-12-01

    The rapidly increasing volume and complexity of MG&G data, and the growing demand from funding agencies and the user community that it be easily accessible, demand that we improve our approach to data management in order to reach a broader user base and operate more efficiently and effectively. We have chosen an approach based on industry-standard relational database management systems (RDBMS) that use community-wide data specifications, where there is a clear and well-documented external interface that allows use of general-purpose as well as customized clients. Rapid prototypes assembled with this approach show significant advantages over the traditional, custom-built data management systems that often use "in-house" legacy file formats, data specifications, and access tools. We have developed an effective database prototype based on a public-domain RDBMS (PostgreSQL) and metadata standard (FGDC), and used it as a template for several ongoing MG&G database management projects - including ADGRAV (Antarctic Digital Gravity Synthesis), MARGINS, the Community Review system of the Digital Library for Earth Science Education, multibeam swath bathymetry metadata, and the R/V Maurice Ewing onboard acquisition system. By using standard formats and specifications, and working from a common prototype, we are able to reuse code and deploy rapidly. Rather than spend time on low-level details such as storage and indexing (which are built into the RDBMS), we can focus on high-level details such as documentation and quality control. In addition, because many commercial off-the-shelf (COTS) and public domain data browsers and visualization tools have built-in RDBMS support, we can focus on backend development and leave the choice of a frontend client(s) up to the end user. 
While our prototype is running under an open source RDBMS on a single processor host, the choice of standard components allows this implementation to scale to commercial RDBMS products and multiprocessor servers as

  13. Searching gene and protein sequence databases.

    PubMed

    Barsalou, T; Brutlag, D L

    1991-01-01

    A large-scale effort to map and sequence the human genome is now under way. Crucial to the success of this research is a group of computer programs that analyze and compare data on molecular sequences. This article describes the classic algorithms for similarity searching and sequence alignment. Because good performance of these algorithms is critical to searching very large and growing databases, we analyze the running times of the algorithms and discuss recent improvements in this area.
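The classic dynamic-programming alignment the article refers to can be sketched as Needleman-Wunsch global alignment scoring. Its O(m*n) running time is exactly why performance matters on large, growing databases. The scoring values below (match +1, mismatch -1, gap -2) are illustrative choices, not the article's parameters.

```python
# Needleman-Wunsch global alignment score via dynamic programming.
# Only two rows of the DP matrix are kept, so memory is O(n) while
# time remains O(m*n).
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    m, n = len(a), len(b)
    # prev[j] = best score aligning a[:i-1] against b[:j]
    prev = [j * gap for j in range(n + 1)]
    for i in range(1, m + 1):
        cur = [i * gap] + [0] * n
        for j in range(1, n + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            cur[j] = max(diag,            # align a[i-1] with b[j-1]
                         prev[j] + gap,   # gap in b
                         cur[j - 1] + gap)  # gap in a
        prev = cur
    return prev[n]

print(nw_score("GATTACA", "GATTACA"))  # identical sequences: 7 matches -> 7
```

Heuristic tools such as BLAST approximate this exact computation precisely to avoid paying the full O(m*n) cost against every database entry.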

  14. Instruction manual for the Wahoo computerized database

    SciTech Connect

    Lasota, D.; Watts, K.

    1995-05-01

    As part of our research on the Lisburne Group, we have developed a powerful relational computerized database to accommodate the huge amounts of data generated by our multi-disciplinary research project. The Wahoo database has data files on petrographic data, conodont analyses, locality and sample data, well logs and diagenetic (cement) studies. Chapter 5 is essentially an instruction manual that summarizes some of the unique attributes and operating procedures of the Wahoo database. The main purpose of a database is to allow users to manipulate their data and produce reports and graphs for presentation. We present a variety of data tables in appendices at the end of this report, each encapsulating a small part of the data contained in the Wahoo database. All the data are sorted and listed by map index number and stratigraphic position (depth). The Locality data table (Appendix A) lists the stratigraphic sections examined in our study. It gives names of study areas, stratigraphic units studied, locality information, and researchers. Most localities are keyed to a geologic map that shows the distribution of the Lisburne Group and location of our sections in ANWR. Petrographic reports (Appendix B) are detailed summaries of data on the composition and texture of the Lisburne Group carbonates. The relative abundance of different carbonate grains (allochems) and carbonate texture are listed using symbols that portray data in a format similar to stratigraphic columns. This enables researchers to recognize trends in the evolution of the Lisburne carbonate platform and to check their paleoenvironmental interpretations in a stratigraphic context. Some of the figures in Chapter 1 were made using the Wahoo database.

  15. PPD - Proteome Profile Database.

    PubMed

    Sakharkar, Kishore R; Chow, Vincent T K

    2004-01-01

    With the complete sequencing of multiple genomes, methods of sequence analysis have extended from single genes and proteins to multiple genes and proteins analyzed simultaneously. Therefore, there is a demand for user-friendly software tools that allow mining of these enormous datasets. PPD is a WWW-based database for comparative analysis of protein lengths in completely sequenced prokaryotic and eukaryotic genomes. PPD's core objective is to create protein classification tables based on the lengths of proteins by specifying a set of organisms and parameters. The interface can also generate information on changes in proteins of specific length distributions. This feature is of importance when the user's interest is focused on some evolutionarily related organisms or on organisms with similar or related tissue specificity or lifestyle. PPD is available at: PPD Home.
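The length-classification tables at the heart of PPD can be sketched as a simple binning step. The protein lengths and bin width below are invented illustrative data, not PPD's actual schema or parameters.

```python
from collections import Counter

# Sketch of PPD's core idea: classify proteins into length bins per
# organism so distributions can be compared across genomes.
def length_table(lengths, bin_size=100):
    """Map each protein length to a (lo, hi) bin and count occupancy."""
    table = Counter()
    for n in lengths:
        lo = (n // bin_size) * bin_size
        table[(lo, lo + bin_size - 1)] += 1
    return dict(table)

# Made-up protein lengths for one hypothetical organism.
lengths = [95, 150, 180, 310, 445, 460]
table = length_table(lengths)
print(table)
```

Comparing such tables between two organisms is then a matter of aligning bins and differencing counts.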

  16. View generated database

    NASA Technical Reports Server (NTRS)

    Downward, James G.

    1992-01-01

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

  17. Ribosomal Database Project II

    DOE Data Explorer

    The Ribosomal Database Project (RDP) provides ribosome related data and services to the scientific community, including online data analysis and aligned and annotated Bacterial small-subunit 16S rRNA sequences. As of March 2008, RDP Release 10 is available and currently (August 2009) contains 1,074,075 aligned 16S rRNA sequences. Data that can be downloaded include zipped GenBank and FASTA alignment files, a histogram (in Excel) of the number of RDP sequences spanning each base position, data in the Functional Gene Pipeline Repository, and various user submitted data. The RDP-II website also provides numerous analysis tools. [From the RDP-II home page at http://rdp.cme.msu.edu/index.jsp]
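The FASTA alignment files mentioned above are plain text and easy to read programmatically; a minimal parser can be sketched as follows. The record below is invented example data, not a real RDP sequence.

```python
# Minimal FASTA parser: ">" lines start a record, the first whitespace-
# delimited token is the identifier, and subsequent lines are sequence.
def parse_fasta(text):
    records, header, seq = {}, None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                records[header] = "".join(seq)
            header, seq = line[1:].split()[0], []
        elif line.strip():
            seq.append(line.strip())
    if header is not None:
        records[header] = "".join(seq)
    return records

fasta = ">seq1 example 16S fragment\nACGTAC\nGTAA\n>seq2\nTTGCA\n"
recs = parse_fasta(fasta)
print(recs)
```

For production work a maintained library (e.g. Biopython's `SeqIO`) would be the usual choice; the sketch just shows how little structure the format carries.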

  18. Danish Palliative Care Database

    PubMed Central

    Groenvold, Mogens; Adsersen, Mathilde; Hansen, Maiken Bang

    2016-01-01

    Aims The aim of the Danish Palliative Care Database (DPD) is to monitor, evaluate, and improve the clinical quality of specialized palliative care (SPC) (ie, the activity of hospital-based palliative care teams/departments and hospices) in Denmark. Study population The study population is all patients in Denmark referred to and/or in contact with SPC after January 1, 2010. Main variables The main variables in DPD are data about referral for patients admitted and not admitted to SPC, type of the first SPC contact, clinical and sociodemographic factors, multidisciplinary conference, and the patient-reported European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Core-15-Palliative Care questionnaire, assessing health-related quality of life. The data support the estimation of currently five quality of care indicators, ie, the proportions of 1) referred and eligible patients who were actually admitted to SPC, 2) patients who waited <10 days before admission to SPC, 3) patients who died from cancer and who obtained contact with SPC, 4) patients who were screened with European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire-Core-15-Palliative Care at admission to SPC, and 5) patients who were discussed at a multidisciplinary conference. Descriptive data In 2014, all 43 SPC units in Denmark reported their data to DPD, and all 9,434 cancer patients (100%) referred to SPC were registered in DPD. In total, 41,104 unique cancer patients were registered in DPD during the 5 years 2010–2014. Of those registered, 96% had cancer. Conclusion DPD is a national clinical quality database for SPC having clinically relevant variables and high data and patient completeness. PMID:27822111
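The DPD quality indicators are all proportions over subsets of patient records. The sketch below computes two of them on invented example records; the field names are hypothetical, not DPD's actual schema.

```python
# Each indicator is hits/denominator over a filtered patient pool.
def indicator(patients, predicate, denominator=None):
    pool = [p for p in patients if denominator is None or denominator(p)]
    hits = sum(1 for p in pool if predicate(p))
    return hits / len(pool) if pool else 0.0

# Hypothetical records: referral status, admission, and wait in days.
patients = [
    {"referred": True, "admitted": True,  "wait_days": 4},
    {"referred": True, "admitted": True,  "wait_days": 15},
    {"referred": True, "admitted": False, "wait_days": None},
    {"referred": True, "admitted": True,  "wait_days": 7},
]

# Indicator 1: proportion of referred patients actually admitted to SPC.
admitted = indicator(patients, lambda p: p["admitted"],
                     denominator=lambda p: p["referred"])
# Indicator 2: proportion of admitted patients who waited <10 days.
waited_under_10 = indicator(patients, lambda p: p["wait_days"] < 10,
                            denominator=lambda p: p["admitted"])
print(admitted, waited_under_10)
```

Keeping the denominator explicit matters: indicator 2 is computed over admitted patients only, not over all referrals.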

  19. MetaBase—the wiki-database of biological databases

    PubMed Central

    Bolser, Dan M.; Chibon, Pierre-Yves; Palopoli, Nicolas; Gong, Sungsam; Jacob, Daniel; Angel, Victoria Dominguez Del; Swan, Dan; Bassi, Sebastian; González, Virginia; Suravajhala, Prashanth; Hwang, Seungwoo; Romano, Paolo; Edwards, Rob; Bishop, Bryan; Eargle, John; Shtatland, Timur; Provart, Nicholas J.; Clements, Dave; Renfro, Daniel P.; Bhak, Daeui; Bhak, Jong

    2012-01-01

    Biology is generating more data than ever. As a result, there is an ever increasing number of publicly available databases that analyse, integrate and summarize the available data, providing an invaluable resource for the biological community. As this trend continues, there is a pressing need to organize, catalogue and rate these resources, so that the information they contain can be most effectively exploited. MetaBase (MB) (http://MetaDatabase.Org) is a community-curated database containing more than 2000 commonly used biological databases. Each entry is structured using templates and can carry various user comments and annotations. Entries can be searched, listed, browsed or queried. The database was created using the same MediaWiki technology that powers Wikipedia, allowing users to contribute on many different levels. The initial release of MB was derived from the content of the 2007 Nucleic Acids Research (NAR) Database Issue. Since then, approximately 100 databases have been manually collected from the literature, and users have added information for over 240 databases. MB is synchronized annually with the static Molecular Biology Database Collection provided by NAR. To date, there have been 19 significant contributors to the project; each one is listed as an author here to highlight the community aspect of the project. PMID:22139927

  20. ATLAS database application enhancements using Oracle 11g

    NASA Astrophysics Data System (ADS)

    Dimitrov, G.; Canali, L.; Blaszczyk, M.; Sorokoletov, R.

    2012-12-01

    The ATLAS experiment at LHC relies on databases for detector online data-taking, storage and retrieval of configurations, calibrations and alignments, post data-taking analysis, file management over the grid, job submission and management, condition data replication to remote sites. Oracle Relational Database Management System (RDBMS) has been addressing the ATLAS database requirements to a great extent for many years. Ten database clusters are currently deployed for the needs of the different applications, divided into production, integration and standby databases. The data volume, complexity and demands from the users are increasing steadily with time. Nowadays more than 20 TB of data are stored in the ATLAS production Oracle databases at CERN (not including the index overhead), but the most impressive number is the hosted 260 database schemes (for the most common case each schema is related to a dedicated client application with its own requirements). At the beginning of 2012 all ATLAS databases at CERN were upgraded to the newest Oracle version at the time: Oracle 11g Release 2. Oracle 11g comes with several key improvements compared to previous database engine versions. In this work we present our evaluation of the most relevant new features of Oracle 11g of interest for ATLAS applications and use cases. Notably we report on the performance and scalability enhancements obtained in production since the Oracle 11g deployment during Q1 2012 and we outline plans for future work in this area.

  1. Enhanced Worldwide Ocean Optics Database

    DTIC Science & Technology

    2008-01-01

This record describes the Worldwide Ocean Optics Database (WOOD). The database shall be easy to use, Internet accessible, and frequently updated with data from recent at-sea measurements. The database shall be capable of supporting a wide range of applications, such as environmental assessments, sea test planning, and Navy applications.

  2. Inorganic Crystal Structure Database (ICSD)

    National Institute of Standards and Technology Data Gateway

SRD 84 FIZ/NIST Inorganic Crystal Structure Database (ICSD) (PC database for purchase)   The Inorganic Crystal Structure Database (ICSD) is produced cooperatively by the Fachinformationszentrum Karlsruhe (FIZ) and the National Institute of Standards and Technology (NIST). The ICSD is a comprehensive collection of crystal structure data of inorganic compounds containing more than 140,000 entries and covering the literature from 1915 to the present.

  3. Generative engineering databases - Toward expert systems

    NASA Technical Reports Server (NTRS)

    Rasdorf, W. J.; Salley, G. C.

    1985-01-01

Engineering data management, incorporating concepts of optimization with data representation, is receiving increasing attention as the amount and complexity of information needed for engineering operations grow, along with the need to coordinate its representation and use. Research in this area promises advantages for a wide variety of engineering applications, particularly those which seek to use data in innovative ways in the engineering process. This paper presents a framework for a comprehensive, relational database management system that combines a knowledge base of design constraints with a database of engineering data items in order to achieve a 'generative database' - one which automatically generates new engineering design data according to the design constraints stored in the knowledge base. The representation requires a database that is able to store all of the data normally associated with engineering design and to accurately represent the interactions between constraints and the stored data while guaranteeing its integrity. The representation also requires a knowledge base that is able to store all the constraints imposed upon the engineering design process.
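To make the idea of a 'generative database' concrete, here is a minimal sketch that is not drawn from the paper itself: design constraints stored alongside base data derive new engineering values automatically. All field names, rules and numbers are hypothetical.

```python
# Base design data, as a relation might store it (all values invented).
design = {"load_kN": 120.0, "safety_factor": 1.5, "yield_MPa": 250.0}

# "Knowledge base": each constraint derives a new field from existing ones.
constraints = [
    ("design_load_kN", lambda d: d["load_kN"] * d["safety_factor"]),
    ("required_area_mm2", lambda d: d["design_load_kN"] * 1000.0 / d["yield_MPa"]),
]

# Applying the constraints "generates" the derived data, keeping it
# consistent with the stored base values.
for field, rule in constraints:
    design[field] = rule(design)

print(design["design_load_kN"])                # 180.0
print(round(design["required_area_mm2"], 1))   # 720.0
```

In a real generative database the constraints would be stored and versioned alongside the data, so re-running them after a base value changes keeps the derived fields consistent.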

  4. BGDB: a database of bivalent genes.

    PubMed

    Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua

    2013-01-01

    Bivalent gene is a gene marked with both H3K4me3 and H3K27me3 epigenetic modification in the same area, and is proposed to play a pivotal role related to pluripotency in embryonic stem (ES) cells. Identification of these bivalent genes and understanding their functions are important for further research of lineage specification and embryo development. So far, lots of genome-wide histone modification data were generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repositories or analysis tools are available for bivalent genes currently. In this work, we develop BGDB, the database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, which are manually collected from scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of BGDB database were implemented with PHP + MySQL + JavaScript, and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/

  5. The 3XMM spectral fit database

    NASA Astrophysics Data System (ADS)

    Georgantopoulos, I.; Corral, A.; Watson, M.; Carrera, F.; Webb, N.; Rosen, S.

    2016-06-01

I will present the XMMFITCAT database, which is a spectral fit inventory of the sources in the 3XMM catalogue. Spectra are provided by the XMM/SSC for all 3XMM sources which have more than 50 background-subtracted counts per module. This work is funded in the framework of the ESA Prodex project. The 3XMM catalogue currently covers 877 sq. degrees and contains about 400,000 unique sources. Spectra are available for over 120,000 sources. Spectral fits have been performed with various spectral models. The results are available on the web page http://xraygroup.astro.noa.gr/ and also at the University of Leicester LEDAS database webpage ledas-www.star.le.ac.uk/. The database description as well as some science results in the area of overlap with SDSS are presented in two recent papers: Corral et al. 2015, A&A, 576, 61 and Corral et al. 2014, A&A, 569, 71. At least for extragalactic sources, the spectral fits will acquire added value when photometric redshifts become available. In the framework of a new Prodex project we have been funded to derive photometric redshifts for the 3XMM sources using machine learning techniques. I will present the techniques as well as the optical and near-IR databases that will be used.

  6. Relativistic quantum private database queries

    NASA Astrophysics Data System (ADS)

    Sun, Si-Jia; Yang, Yu-Guang; Zhang, Ming-Ou

    2015-04-01

    Recently, Jakobi et al. (Phys Rev A 83, 022301, 2011) suggested the first practical private database query protocol (J-protocol) based on the Scarani et al. (Phys Rev Lett 92, 057901, 2004) quantum key distribution protocol. Unfortunately, the J-protocol is just a cheat-sensitive private database query protocol. In this paper, we present an idealized relativistic quantum private database query protocol based on Minkowski causality and the properties of quantum information. Also, we prove that the protocol is secure in terms of the user security and the database security.

  7. A Database for Propagation Models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Rucker, James

    1997-01-01

The Propagation Models Database is designed to allow scientists and experimenters in the propagation field to process their data through many known and accepted propagation models. The database is Excel 5.0-based software that houses user-callable models of propagation phenomena; it does not contain propagation data generated by the experiments. The database not only provides a powerful software tool to process the data generated by the experiments, but is also a time- and energy-saving tool for plotting results, generating tables and producing impressive and crisp hard copy for presentation and filing.

  8. Speech Databases of Typical Children and Children with SLI.

    PubMed

    Grill, Pavel; Tučková, Jana

    2016-01-01

The extent of research on children's speech in general, and on disordered speech specifically, is very limited. In this article, we describe the process of creating databases of children's speech and the possibilities for using such databases, which have been created by the LANNA research group in the Faculty of Electrical Engineering at Czech Technical University in Prague. These databases have been compiled principally for medical research but also for use in other areas, such as linguistics. Two databases were recorded: one of healthy children's speech (recorded in kindergarten and at the first level of elementary school) and the other of pathological speech of children with a Specific Language Impairment (recorded at speech and language therapists' surgeries and at a hospital). Both databases were subdivided according to the specific demands of medical research. They can also be used outside medicine, specifically for linguistic research and pedagogical purposes, as well as for studies of speech-signal processing.

  9. The BioImage Database Project: organizing multidimensional biological images in an object-relational database.

    PubMed

    Carazo, J M; Stelzer, E H

    1999-01-01

The BioImage Database Project collects and structures multidimensional data sets recorded by various microscopic techniques relevant to the modern life sciences. It provides, as precisely as possible, the circumstances in which the sample was prepared and the data were recorded. It grants access to the actual data and maintains links between related data sets. In order to promote the interdisciplinary approach of modern science, it offers a large set of key words, which covers essentially all aspects of microscopy. Nonspecialists can, therefore, access and retrieve significant information recorded and submitted by specialists in other areas. A key issue of the undertaking is to exploit the available technology and to provide a well-defined yet flexible structure for dealing with data. Its pivotal element is, therefore, a modern object-relational database that structures the metadata and facilitates the provision of a complete service. The BioImage database can be accessed through the Internet.

  10. Public chemical compound databases.

    PubMed

    Williams, Anthony J

    2008-05-01

    The internet has rapidly become the first port of call for all information searches. The increasing array of chemistry-related resources that are now available provides chemists with a direct path to the information that was previously accessed via library services and was limited by commercial and costly resources. The diversity of the information that can be accessed online is expanding at a dramatic rate, and the support for publicly available resources offers significant opportunities in terms of the benefits to science and society. While the data online do not generally meet the quality standards of manually curated sources, there are efforts underway to gather scientists together and 'crowdsource' an improvement in the quality of the available data. This review discusses the types of public compound databases that are available online and provides a series of examples. Focus is also given to the benefits and disruptions associated with the increased availability of such data and the integration of technologies to data mine this information.

  11. The Danish National Quality Database for Births

    PubMed Central

    Andersson, Charlotte Brix; Flems, Christina; Kesmodel, Ulrik Schiøler

    2016-01-01

    Aim of the database The aim of the Danish National Quality Database for Births (DNQDB) is to measure the quality of the care provided during birth through specific indicators. Study population The database includes all hospital births in Denmark. Main variables Anesthesia/pain relief, continuous support for women in the delivery room, lacerations (third and fourth degree), cesarean section, postpartum hemorrhage, establishment of skin-to-skin contact between the mother and the newborn infant, severe fetal hypoxia (proportion of live-born children with neonatal hypoxia), delivery of a healthy child after an uncomplicated birth, and anesthesia in case of cesarean section. Descriptive data Data have been collected since 2010. As of August 2015, data on women and children representing 269,597 births and 274,153 children have been collected. All data for the DNQDB is collected from the Danish Medical Birth Registry. Registration to the Danish Medical Birth Registry is mandatory for all maternity units in Denmark. During the 5 years, performance has improved in the areas covered by the process indicators and for some of the outcome indicators. Conclusion Measuring quality of care during childbirth has inspired and enabled staff to attend to the quality of the care they provide and has led to improvements in most of the areas covered. PMID:27822105

  12. Content Independence in Multimedia Databases.

    ERIC Educational Resources Information Center

    de Vries, Arjen P.

    2001-01-01

    Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content…

  13. Hanford Site technical baseline database

    SciTech Connect

    Porter, P.E., Westinghouse Hanford

    1996-05-10

    This document includes a cassette tape that contains the Hanford specific files that make up the Hanford Site Technical Baseline Database as of May 10, 1996. The cassette tape also includes the delta files that delineate the differences between this revision and revision 3 (April 10, 1996) of the Hanford Site Technical Baseline Database.

  14. XCOM: Photon Cross Sections Database

    National Institute of Standards and Technology Data Gateway

    SRD 8 XCOM: Photon Cross Sections Database (Web, free access)   A web database is provided which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element, compound or mixture (Z <= 100) at energies from 1 keV to 100 GeV.
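The coefficients XCOM tabulates plug into the standard exponential attenuation law. The following small sketch applies that law with an invented mass attenuation coefficient rather than an actual XCOM lookup; all numerical values are assumptions for illustration.

```python
import math

def transmitted_fraction(mu_rho, density, thickness_cm):
    """Beer-Lambert law: I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mu_rho * density * thickness_cm)

# Invented numbers: mu/rho = 0.2 cm^2/g, material density 2.7 g/cm^3, 1 cm slab.
f = transmitted_fraction(0.2, 2.7, 1.0)
print(round(f, 3))  # 0.583
```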

  15. Atomic Spectroscopic Databases at NIST

    NASA Technical Reports Server (NTRS)

    Reader, J.; Kramida, A. E.; Ralchenko, Yu.

    2006-01-01

    We describe recent work at NIST to develop and maintain databases for spectra, transition probabilities, and energy levels of atoms that are astrophysically important. Our programs to critically compile these data as well as to develop a new database to compare plasma calculations for atoms that are not in local thermodynamic equilibrium are also summarized.

  16. The Student-Designed Database.

    ERIC Educational Resources Information Center

    Thomas, Rick

    1988-01-01

    This discussion of the design of data files for databases to be created by secondary school students uses AppleWorks software as an example. Steps needed to create and use a database are explained, the benefits of group activity are described, and other possible projects are listed. (LRW)

  17. Data manipulation in heterogeneous databases

    SciTech Connect

    Chatterjee, A.; Segev, A.

    1991-10-01

Many important information systems applications require access to data stored in multiple heterogeneous databases. This paper examines a problem in inter-database data manipulation within a heterogeneous environment, where conventional techniques are no longer useful. To solve the problem, a broader definition of the join operator is proposed, and a method to probabilistically estimate the accuracy of the join is discussed.
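As one way to picture a broadened join, here is a toy sketch (the paper's actual operator definition is not reproduced here): tuples from two heterogeneous sources are joined on normalized keys, and each matched pair carries a confidence score. The source data and field names are invented.

```python
# Two sources that describe the same entities with different conventions.
db_a = [{"name": "ACME Corp.", "revenue": 10}, {"name": "Globex", "revenue": 7}]
db_b = [{"company": "acme corp", "country": "US"}, {"company": "initech", "country": "DE"}]

def normalize(s):
    """Collapse case and punctuation so near-identical keys compare equal."""
    return "".join(ch for ch in s.lower() if ch.isalnum())

def relaxed_join(left, right, lkey, rkey):
    """Join tuples whose normalized keys agree, tagging each match with a score."""
    out = []
    for l in left:
        for r in right:
            if normalize(l[lkey]) == normalize(r[rkey]):
                out.append({**l, **r, "confidence": 1.0})  # exact after normalization
    return out

result = relaxed_join(db_a, db_b, "name", "company")
print(result[0]["country"])  # US
```

A fuller version would assign confidence below 1.0 for partial matches (e.g. edit distance on the normalized keys), which is where a probabilistic accuracy estimate for the join becomes meaningful.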

  18. Database Licensing: A Future View.

    ERIC Educational Resources Information Center

    Flanagan, Michael

    1993-01-01

    Access to database information in libraries will increase as licenses for tape loading of data onto public access catalogs becomes more widespread. Institutions with adequate storage capacity will have full text databases, and the adoption of the Z39.50 standard, which allows differing computer systems to interface with each other, will increase…

  19. Wind turbine reliability database update.

    SciTech Connect

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  20. The EUVE satellite survey database

    NASA Technical Reports Server (NTRS)

    Craig, N.; Chen, T.; Hawkins, I.; Fruscione, A.

    1993-01-01

    The EUVE survey database contains fundamental science data for 9000 potential source locations (pigeonholes) in the sky. The first release of the Bright Source List is now available to the public through an interface with the NASA Astrophysical Data System. We describe the database schema design and the EUVE source categorization algorithm that compares sources to the ROSAT Wide Field Camera source list.

  1. The Yield of Bibliographic Databases.

    ERIC Educational Resources Information Center

    Kowalski, Kazimierz; Hackett, Timothy P.

    1992-01-01

    Demonstrates a means for estimating the number of retrieved items using well-established selective dissemination of information (SDI) profiles in the SCI, INSPEC, ISMEC, CAS, and PASCAL databases. A correlation between individual database size and number of retrieved documents in technical fields is also examined. (17 references) (LAE)

  2. GOTTCHA Database, Version 1

    SciTech Connect

    Freitas, Tracey; Chain, Patrick; Lo, Chien-Chi; Li, Po-E

    2015-08-03

One major challenge in the field of shotgun metagenomics is the accurate identification of the organisms present within the community, based on classification of short sequence reads. Though microbial community profiling methods have emerged to attempt to rapidly classify the millions of reads output from contemporary sequencers, the combination of incomplete databases, similarity among otherwise divergent genomes, and the large volumes of sequencing data required for metagenome sequencing has led to unacceptably high false discovery rates (FDR). Here we present the application of a novel, gene-independent and signature-based metagenomic taxonomic profiling tool with a significantly smaller FDR, which is also capable of classifying never-before-seen genomes into the appropriate parent taxa. The algorithm is based upon three primary computational phases: (I) genomic decomposition into bit vectors, (II) bit vector intersections to identify shared regions, and (III) bit vector subtractions to remove shared regions and reveal unique, signature regions. In the Decomposition phase, genomic data is first masked to highlight only the valid (non-ambiguous) regions and then decomposed into overlapping 24-mers. The k-mers are sorted along with their start positions, de-replicated, and then prefixed, to minimize data duplication. The prefixes are indexed and an identical data structure is created for the start positions to mimic that of the k-mer data structure. During the Intersection phase, the most computationally intensive phase since an all-vs-all comparison is made, the number of comparisons is first reduced by four methods: (a) Prefix restriction, (b) Overlap detection, (c) Overlap restriction, and (d) Result recording. In Prefix restriction, only k-mers of the same prefix are compared. Within that group, k-mer suffixes whose potential overlap would result in a non-empty set intersection are screened for. 
If such an overlap exists, the region which intersects is
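The three phases described above can be sketched in miniature. This toy version uses short 4-mers and plain Python sets in place of the tool's sorted, prefixed bit vectors, and the two genome strings are invented.

```python
def kmers(seq, k=4):
    """Phase I: decompose a sequence into its set of overlapping k-mers."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

genome_a = "ACGTACGGA"
genome_b = "TTACGTACC"
a, b = kmers(genome_a), kmers(genome_b)

shared = a & b             # Phase II: intersection finds regions common to both
signature_a = a - shared   # Phase III: subtraction leaves unique signature k-mers
signature_b = b - shared

print(sorted(shared))       # ['ACGT', 'CGTA', 'GTAC', 'TACG']
print(sorted(signature_a))  # ['ACGG', 'CGGA']
```

Reads matching a signature k-mer can then be attributed to that genome alone, which is what keeps the false discovery rate low.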

  3. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani S.

    1992-01-01

    In June 1991, a paper at the fifteenth NASA Propagation Experimenters Meeting (NAPEX 15) was presented outlining the development of a database for propagation models. The database is designed to allow the scientists and experimenters in the propagation field to process their data through any known and accepted propagation model. The architecture of the database also incorporates the possibility of changing the standard models in the database to fit the scientist's or the experimenter's needs. The database not only provides powerful software to process the data generated by the experiments, but is also a time- and energy-saving tool for plotting results, generating tables, and producing impressive and crisp hard copy for presentation and filing.

  4. Natural-language access to databases-theoretical/technical issues

    SciTech Connect

    Moore, R.C.

    1982-01-01

Although there have been many experimental systems for natural-language access to databases, with some now going into actual use, many problems in this area remain to be solved. The author presents descriptions of five problem areas that, in his view, are not adequately handled by any existing system.

  5. Rocky Mountain Basins Produced Water Database

    DOE Data Explorer

Historical records for produced water data were collected from multiple sources, including Amoco, British Petroleum, Anadarko Petroleum Corporation, United States Geological Survey (USGS), Wyoming Oil and Gas Commission (WOGC), Denver Earth Resources Library (DERL), Bill Barrett Corporation, Stone Energy, and other operators. In addition, 86 new samples were collected during the summers of 2003 and 2004 from the following areas: Waltman-Cave Gulch, Pinedale, Tablerock and Wild Rose. Samples were tested for standard seven-component "Stiff" analyses, and strontium and oxygen isotopes. 16,035 analyses were winnowed to 8028 unique records for 3276 wells after a data screening process was completed. [Copied from the Readme document in the zipped file available at http://www.netl.doe.gov/technologies/oil-gas/Software/database.html] Save the zipped file to your PC. When opened, it will contain four versions of the database: ACCESS, EXCEL, DBF, and CSV formats. The information consists of detailed water analyses from basins in the Rocky Mountain region.

  6. The methodology of database design in organization management systems

    NASA Astrophysics Data System (ADS)

    Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.

    2017-01-01

The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to design, the conceptual information model and the main principles of developing relational databases are presented, and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes how the results of analyzing users' information needs are applied and gives the rationale for the use of classifiers.
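As a small illustration of the kind of outcome such a methodology targets, the following sketch carries an invented conceptual model (two entities in a one-to-many relationship) into relational tables and answers one user information need. The domain, schema and names are hypothetical, not from the paper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Conceptual entities "Department" and "Employee" with a one-to-many
# relationship become two relations linked by a foreign key.
cur.executescript("""
CREATE TABLE department (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE employee (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    dept_id INTEGER NOT NULL REFERENCES department(id)
);
""")

cur.execute("INSERT INTO department VALUES (1, 'Design')")
cur.execute("INSERT INTO employee VALUES (1, 'Ada', 1)")

# A user information need ("who works in Design?") expressed as a query.
cur.execute("""
    SELECT e.name FROM employee e
    JOIN department d ON e.dept_id = d.id
    WHERE d.name = 'Design'
""")
who = cur.fetchone()[0]
print(who)  # Ada
conn.close()
```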

  7. The Berlin Emissivity Database

    NASA Astrophysics Data System (ADS)

    Helbert, Jorn

Remote sensing infrared spectroscopy is the principal field of investigation for planetary surface composition. Past, present and future missions to solar system bodies include in their payloads instruments measuring the emerging radiation in the infrared range. TES on Mars Global Surveyor and THEMIS on Mars Odyssey have in many ways changed our views of Mars. The PFS instrument on the ESA Mars Express mission has collected spectra since the beginning of 2004. In spring 2006 the VIRTIS experiment started its operation on the ESA Venus Express mission, allowing for the first time mapping of the surface of Venus using the 1 µm emission from the surface. The MERTIS spectrometer is included in the payload of the ESA BepiColombo mission to Mercury, scheduled for 2013. For the interpretation of the measured data, an emissivity spectral library of planetary analogue materials is needed. The Berlin Emissivity Database (BED) presented here is focused on relatively fine-grained size separates, providing a realistic basis for interpretation of thermal emission spectra of planetary regoliths. The BED is therefore complementary to existing thermal emission libraries, like the ASU library for example. The BED currently contains entries for plagioclase and potassium feldspars, low-Ca and high-Ca pyroxenes, olivine, elemental sulphur, common martian analogues (JSC Mars-1, Salten Skov, palagonites, montmorillonite) and a lunar highland soil sample, measured in the wavelength range from 3 to 50 µm as a function of particle size. For each sample, the spectra of four well-defined particle size separates (<25 µm, 25-63 µm, 63-125 µm, 125-250 µm) are measured with a 4 cm⁻¹ spectral resolution. These size separates have been selected as typical representations of most planetary surfaces. Following an ongoing upgrade of the Planetary Emissivity Laboratory (PEL) at DLR in Berlin, measurements can be obtained at temperatures up to 500 °C, realistic for the dayside conditions

  8. Database of Properties of Meteors

    NASA Technical Reports Server (NTRS)

    Suggs, Rob; Anthea, Coster

    2006-01-01

    A database of properties of meteors, and software that provides access to the database, are being developed as a contribution to continuing efforts to model the characteristics of meteors with increasing accuracy. Such modeling is necessary for evaluation of the risk of penetration of spacecraft by meteors. For each meteor in the database, the record will include an identification, date and time, radiant properties, ballistic coefficient, radar cross section, size, density, and orbital elements. The property of primary interest in the present case is density, and one of the primary goals in this case is to derive densities of meteors from their atmospheric decelerations. The database and software are expected to be valid anywhere in the solar system. The database will incorporate new data plus results of meteoroid analyses that, heretofore, have not been readily available to the aerospace community. Taken together, the database and software constitute a model that is expected to provide improved estimates of densities and to result in improved risk analyses for interplanetary spacecraft. It is planned to distribute the database and software on a compact disk.
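One of the stated goals, deriving densities from atmospheric decelerations, can be illustrated with a back-of-envelope drag calculation. This is not the database's actual method; the spherical shape, drag coefficient and every numerical value below are invented assumptions.

```python
def bulk_density(decel, speed, rho_air, diameter, c_d=1.0):
    """Drag gives a = rho_air * v**2 / (2 * beta), so beta = rho_air * v**2 / (2 * a).
    For a sphere, beta = m / (c_d * A) = (2/3) * rho_m * d / c_d,
    hence rho_m = 3 * beta * c_d / (2 * d)."""
    beta = rho_air * speed ** 2 / (2.0 * decel)   # ballistic coefficient, kg/m^2
    return 3.0 * beta * c_d / (2.0 * diameter)    # bulk density, kg/m^3

# Invented values: a 1 cm body at 20 km/s decelerating at 500 m/s^2
# where the atmospheric density is about 3e-6 kg/m^3.
rho = bulk_density(decel=500.0, speed=20_000.0, rho_air=3e-6, diameter=0.01)
print(round(rho))  # 180
```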

  9. Unifying Memory and Database Transactions

    NASA Astrophysics Data System (ADS)

    Dias, Ricardo J.; Lourenço, João M.

Software Transactional Memory is a concurrency control technique gaining increasing popularity, as it provides high-level concurrency control constructs and eases the development of highly multi-threaded applications. But this ease comes at the expense of restricting the operations that can be executed within a memory transaction, and operations such as terminal and file I/O are either not allowed or incur serious performance penalties. Database I/O is another example of an operation that usually is not allowed within a memory transaction. This paper proposes to combine memory and database transactions in a single unified model, benefiting from the ACID properties of database transactions and from the speed of main-memory data processing. The new unified model covers, without differentiating, both memory and database operations. Thus, users are allowed to freely intertwine memory and database accesses within the same transaction, knowing that the memory and database contents will always remain consistent and that the transaction will atomically abort or commit the operations in both memory and database. This approach makes it possible to increase the granularity of the in-memory atomic actions and hence simplifies reasoning about them.
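The atomic commit-or-abort behavior the unified model promises can be mimicked at a much cruder level in ordinary Python. This sketch is not the paper's runtime-level STM integration, only an illustration of the observable semantics, using a deep-copy snapshot for the memory side and a SQLite transaction for the database side.

```python
import copy
import sqlite3
from contextlib import contextmanager

@contextmanager
def unified_txn(state, conn):
    """Commit or roll back an in-memory dict and a DB transaction together."""
    snapshot = copy.deepcopy(state)     # restore point for the "memory" side
    conn.execute("BEGIN")               # open the database transaction
    try:
        yield state
        conn.execute("COMMIT")          # success: keep both memory and DB changes
    except Exception:
        conn.execute("ROLLBACK")        # failure: DB rolls back...
        state.clear()
        state.update(snapshot)          # ...and memory is restored to the snapshot
        raise

# isolation_level=None puts sqlite3 in autocommit mode, so the explicit
# BEGIN/COMMIT/ROLLBACK above control the transaction boundaries.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE log (msg TEXT)")
state = {"count": 0}

try:
    with unified_txn(state, conn) as s:
        s["count"] += 1
        conn.execute("INSERT INTO log VALUES ('hello')")
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass

print(state["count"])  # 0: the memory update was undone
print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 0: the insert was undone
```

A real unified model would track fine-grained memory writes rather than snapshotting the whole state, but the user-visible guarantee is the same: either both sides commit or both abort.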

  10. Databases of the marine metagenomics.

    PubMed

    Mineta, Katsuhiko; Gojobori, Takashi

    2016-02-01

The metagenomic data obtained from marine environments are significantly useful for understanding marine microbial communities. In comparison with the conventional amplicon-based approach to metagenomics, the recent shotgun sequencing-based approach has become a powerful tool that provides an efficient way of grasping the diversity of an entire microbial community at a sampling point in the sea. However, this approach accelerates the accumulation of metagenome data as well as increasing data complexity. Moreover, when the metagenomic approach is used for monitoring temporal change of marine environments at multiple locations in the seawater, metagenomics data will accumulate at an enormous speed. Because this kind of situation has started to become a reality at many marine research institutions and stations all over the world, it is obvious that data management and analysis will be confronted by so-called Big Data issues, such as how the database can be constructed in an efficient way and how useful knowledge should be extracted from a vast amount of data. In this review, we summarize all the major databases of marine metagenomes that are currently publicly available, noting that there is no database devoted exclusively to marine metagenomes and that, unexpectedly, only six metagenome databases include marine metagenome data. We also extend our explanation to what we call reference databases, which will be useful for constructing a marine metagenome database as well as for complementing it with important information. We then point out a number of challenges to be conquered in constructing a marine metagenome database.

  11. USGS Dam Removal Science Database

    USGS Publications Warehouse

    Bellmore, J. Ryan; Vittum, Katherine; Duda, Jeff J.; Greene, Samantha L.

    2015-01-01

This database is the result of an extensive literature search aimed at identifying documents relevant to the emerging field of dam removal science. In total the database contains 179 citations that contain empirical monitoring information associated with 130 different dam removals across the United States and abroad. It includes publications through 2014 and was supplemented with the U.S. Army Corps of Engineers National Inventory of Dams database, the U.S. Geological Survey National Water Information System, and aerial photos used to estimate locations when coordinates were not provided. Publications were located using the Web of Science, Google Scholar, and the Clearinghouse for Dam Removal Information.

  12. Biological databases for human research.

    PubMed

    Zou, Dong; Ma, Lina; Yu, Jun; Zhang, Zhang

    2015-02-01

    The completion of the Human Genome Project lays a foundation for systematically studying the human genome from evolutionary history to precision medicine against diseases. With the explosive growth of biological data, there is an increasing number of biological databases that have been developed in aid of human-related research. Here we present a collection of human-related biological databases and provide a mini-review by classifying them into different categories according to their data types. As human-related databases continue to grow not only in count but also in volume, challenges are ahead in big data storage, processing, exchange and curation.

  13. Can databasing optimise patient care?

    PubMed

    Trojano, Maria

    2004-09-01

    Long-term, prospective databasing of multiple sclerosis (MS) information provides a useful resource for natural history studies. Furthermore, it is the only way to address the question of whether early treatment eliminates or delays the inevitable and irreversible clinical worsening that is the hallmark of the late phase of the illness. Due to the variable nature of MS, it is useful to monitor large numbers of individuals over time. The limitations of single databases may be overcome by regional, national or international pooling of data. In this paper, the Italian Multiple Sclerosis Database Network (MSDN) and the international web-based MSBase registry are described.

  14. Prototyping a genetics deductive database

    SciTech Connect

    Hearne, C.; Cui, Zhan; Parsons, S.; Hajnal, S.

    1994-12-31

    We are developing a laboratory notebook system known as the Genetics Deductive Database. Currently our prototype provides storage for biological facts and rules with flexible access via an interactive graphical display. We have introduced a formal basis for the representation and reasoning necessary to order genome map data and handle the uncertainty inherent in biological data. We aim to support laboratory activities by introducing an experiment planner into our prototype. The Genetics Deductive Database is built using new database technology which provides an object-oriented conceptual model, a declarative rule language, and a procedural update language. This combination of features allows the implementation of consistency maintenance, automated reasoning, and data verification.

  15. International forensic automotive paint database

    NASA Astrophysics Data System (ADS)

    Bishea, Gregory A.; Buckle, Joe L.; Ryland, Scott G.

    1999-02-01

    The Technical Working Group for Materials Analysis (TWGMAT) is supporting an international forensic automotive paint database. The Federal Bureau of Investigation and the Royal Canadian Mounted Police (RCMP) are collaborating on this effort through TWGMAT. This paper outlines the support and further development of the RCMP's Automotive Paint Database, `Paint Data Query'. This cooperative agreement augments and supports a current, validated, searchable, automotive paint database that is used to identify make(s), model(s), and year(s) of questioned paint samples in hit-and-run fatalities and other associated investigations involving automotive paint.

  16. Database of recent tsunami deposits

    USGS Publications Warehouse

    Peters, Robert; Jaffe, Bruce E.

    2010-01-01

    This report describes a database of sedimentary characteristics of tsunami deposits derived from published accounts of tsunami deposit investigations conducted shortly after the occurrence of a tsunami. The database contains 228 entries, each entry containing data from up to 71 categories. It includes data from 51 publications covering 15 tsunamis distributed among 16 countries. The database encompasses a wide range of depositional settings including tropical islands, beaches, coastal plains, river banks, agricultural fields, and urban environments. It includes data from both local tsunamis and teletsunamis. The data are valuable for interpreting prehistorical, historical, and modern tsunami deposits, and for the development of criteria to identify tsunami deposits in the geologic record.

  17. The Automatic Library Tracking Database

    SciTech Connect

    Fahey, Mark R; Jones, Nicholas A; Hadri, Bilel

    2010-01-01

    A library tracking database has been developed and put into production at the National Institute for Computational Sciences and the Oak Ridge Leadership Computing Facility (both located at Oak Ridge National Laboratory). The purpose of the library tracking database is to track which libraries are used at link time on Cray XT5 supercomputers. The database stores the libraries used at link time and also records the executables run in a batch job. With this data, many operationally important questions can be answered, such as which libraries are most frequently used and which users are using deprecated libraries or applications. The infrastructure design and reporting mechanisms are presented along with collected production data.
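
    The query pattern described above — link-time records answering questions such as "which libraries are most frequently used?" — can be sketched with a minimal relational schema. The table and column names below are hypothetical illustrations, not the actual schema used by the tracking database:

```python
import sqlite3

# Minimal sketch of a link-time tracking table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE link_events
                (user TEXT, executable TEXT, library TEXT, link_date TEXT)""")
conn.executemany(
    "INSERT INTO link_events VALUES (?, ?, ?, ?)",
    [("alice", "sim.x", "libfftw3.a", "2010-03-01"),
     ("alice", "sim.x", "libhdf5.a",  "2010-03-01"),
     ("bob",   "md.x",  "libfftw3.a", "2010-03-02")])

# Operational question from the abstract: most frequently used libraries.
most_used = conn.execute(
    """SELECT library, COUNT(*) AS n FROM link_events
       GROUP BY library ORDER BY n DESC""").fetchall()
print(most_used)  # most frequently linked library first
```

A deprecated-library report is the same aggregation with a `WHERE library IN (...)` filter over a list of deprecated names.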

  18. New geothermal database for Utah

    USGS Publications Warehouse

    Blackett, Robert E.; ,

    1993-01-01

    The Utah Geological Survey compiled a preliminary database consisting of over 800 records on thermal wells and springs in Utah with temperatures of 20°C or greater. Each record consists of 35 fields, including location of the well or spring, temperature, depth, flow rate, and chemical analyses of water samples. Developed for applications on personal computers, the database will be useful for geochemical, statistical, and other geothermal-related studies. A preliminary map of thermal wells and springs in Utah, which accompanies the database, could eventually incorporate heat-flow information, bottom-hole temperatures from oil and gas wells, traces of Quaternary faults, and locations of young volcanic centers.

  19. Freshwater Biological Traits Database (Final Report)

    EPA Science Inventory

    EPA announced the release of the final report, Freshwater Biological Traits Database. This report discusses the development of a database of freshwater biological traits. The database combines several existing traits databases into an online format. The database is also...

  20. An Examination of Job Skills Posted on Internet Databases: Implications for Information Systems Degree Programs.

    ERIC Educational Resources Information Center

    Liu, Xia; Liu, Lai C.; Koong, Kai S.; Lu, June

    2003-01-01

    Analysis of 300 information technology job postings in two Internet databases identified the following skill categories: programming languages (Java, C/C++, and Visual Basic were most frequent); website development (57% sought SQL and HTML skills); databases (nearly 50% required Oracle); networks (only Windows NT or wide-area/local-area networks);…

  1. Fun Databases: My Top Ten.

    ERIC Educational Resources Information Center

    O'Leary, Mick

    1992-01-01

    Provides reviews of 10 online databases: Consumer Reports; Public Opinion Online; Encyclopedia of Associations; Official Airline Guide Adventure Atlas and Events Calendar; CENDATA; Hollywood Hotline; Fearless Taster; Soap Opera Summaries; and Human Sexuality. (LRW)

  2. Development, databases and the Internet.

    PubMed

    Bard, J B; Davies, J A

    1995-11-01

    There is now a rapidly expanding population of interlinked developmental biology databases on the World Wide Web that can be readily accessed from a desk-top PC using programs such as Netscape or Mosaic. These databases cover popular organisms (Arabidopsis, Caenorhabditis, Drosophila, zebrafish, mouse, etc.) and include gene and protein sequences, lists of mutants, information on resources and techniques, and teaching aids. More complex are databases relating domains of gene expression to embryonic anatomy and these range from existing text-based systems for specific organs such as kidney, to a massive project under development, that will cover gene expression during the whole of mouse embryogenesis. In this brief article, we review selected examples of databases currently available, look forward to what will be available soon, and explain how to gain access to the World Wide Web.

  3. SUPERSITES INTEGRATED RELATIONAL DATABASE (SIRD)

    EPA Science Inventory

    As part of EPA's Particulate Matter (PM) Supersites Program (Program), the University of Maryland designed and developed the Supersites Integrated Relational Database (SIRD). Measurement data in SIRD include comprehensive air quality data from the 7 Supersite program locations f...

  4. Freshwater Biological Traits Database (Traits)

    EPA Pesticide Factsheets

    The traits database was compiled for a project on climate change effects on river and stream ecosystems. The traits data, gathered from multiple sources, focused on information published or otherwise well-documented by trustworthy sources.

  5. InterAction Database (IADB)

    Cancer.gov

    The InterAction Database includes demographic and prescription information for more than 500,000 patients in the northern and middle Netherlands and has been integrated with other systems to enhance data collection and analysis.

  6. Marine and Hydrokinetic Technology Database

    DOE Data Explorer

    DOE’s Marine and Hydrokinetic Technology Database provides up-to-date information on marine and hydrokinetic renewable energy, both in the U.S. and around the world. The database includes wave, tidal, current, and ocean thermal energy, and contains information on the various energy conversion technologies, companies active in the field, and development of projects in the water. Depending on the needs of the user, the database can present a snapshot of projects in a given region, assess the progress of a certain technology type, or provide a comprehensive view of the entire marine and hydrokinetic energy industry. Results are displayed as a list of technologies, companies, or projects. Data can be filtered by a number of criteria, including country/region, technology type, generation capacity, and technology or project stage. The database was updated in 2009 to include ocean thermal energy technologies, companies, and projects.

  7. Exploiting relational database technology in a GIS

    NASA Astrophysics Data System (ADS)

    Batty, Peter

    1992-05-01

    All systems for managing data face common problems such as backup, recovery, auditing, security, data integrity, and concurrent update. Other challenges include the ability to share data easily between applications and to distribute data across several computers, while continuing to manage the problems already mentioned. Geographic information systems are no exception, and need to tackle all these issues. Standard relational database-management systems (RDBMSs) provide many features to help solve the issues mentioned so far. This paper describes how the IBM geoManager product approaches these issues by storing all its geographic data in a standard RDBMS in order to take advantage of such features. Areas in which standard RDBMS functions need to be extended are highlighted, and the way in which geoManager does this is explained. The performance implications of storing all data in the relational database are discussed. An important distinction, which needs to be made when considering the applicability of relational database technology to GIS, is drawn between the storage and management of geographic data and the manipulation and analysis of geographic data.
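
    Once geometry lives in ordinary tables, spatial selections become plain SQL predicates, and the RDBMS features listed above (integrity, concurrency, backup) apply unchanged. A minimal sketch with invented feature names and a simple bounding-box query — real GIS layers on an RDBMS add spatial indexing on top of this:

```python
import sqlite3

# Geographic point features stored in a standard relational table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE features (id INTEGER PRIMARY KEY, name TEXT, x REAL, y REAL)")
conn.executemany(
    "INSERT INTO features (name, x, y) VALUES (?, ?, ?)",
    [("well A", 1.0, 2.0), ("well B", 5.0, 5.0), ("spring C", 1.5, 2.5)])

# A spatial selection expressed as an ordinary SQL predicate (bounding box).
in_box = conn.execute(
    "SELECT name FROM features WHERE x BETWEEN 0 AND 2 AND y BETWEEN 0 AND 3"
).fetchall()
print(in_box)
```

Because the geometry is just columns, the query benefits from the same transaction and recovery machinery as any other table.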

  8. Astronomical Surveys, Catalogs, Databases, and Archives

    NASA Astrophysics Data System (ADS)

    Mickaelian, A. M.

    2016-06-01

    All-sky and large-area astronomical surveys and their cataloged data over the whole range of the electromagnetic spectrum are reviewed, from γ-ray to radio, such as Fermi-GLAST and INTEGRAL in γ-ray, ROSAT, XMM and Chandra in X-ray, GALEX in UV, SDSS and several POSS I and II based catalogues (APM, MAPS, USNO, GSC) in optical range, 2MASS in NIR, WISE and AKARI IRC in MIR, IRAS and AKARI FIS in FIR, NVSS and FIRST in radio and many others, as well as the most important surveys giving optical images (DSS I and II, SDSS, etc.), proper motions (Tycho, USNO, Gaia), variability (GCVS, NSVS, ASAS, Catalina, Pan-STARRS) and spectroscopic data (FBS, SBS, Case, HQS, HES, SDSS, CALIFA, GAMA). The most important astronomical databases and archives are reviewed as well, including the Wide-Field Plate DataBase (WFPDB), ESO, HEASARC, IRSA and MAST archives, CDS SIMBAD, VizieR and Aladin, NED and HyperLEDA extragalactic databases, ADS and astro-ph services. They are powerful sources for many-sided efficient research using Virtual Observatory tools. The use and analysis of the Big Data accumulated in astronomy lead to many new discoveries.

  9. Small Business Innovations (Integrated Database)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Because of the diversity of NASA's information systems, it was necessary to develop DAVID as a central database management system. Under a Small Business Innovation Research (SBIR) grant, Ken Wanderman and Associates, Inc. designed software tools enabling scientists to interface with DAVID and commercial database management systems, as well as artificial intelligence programs. The software has been installed at a number of data centers and is commercially available.

  10. Atomic and Molecular Databases, VAMDC

    NASA Astrophysics Data System (ADS)

    Dubernet, M. L.; Zwölf, C. M.; Moreau, N.; Ba, Y. A.

    2016-10-01

    The VAMDC Consortium is a worldwide consortium which federates Atomic and Molecular databases through an e-science infrastructure and a political organisation. About 90% of the inter-connected databases handle data that are used for the interpretation of spectra and for the modeling of media of many fields of astrophysics. This paper presents how the VAMDC Consortium is organised in order to publish atomic and molecular data for astrophysics.

  11. Air Compliance Complaint Database (ACCD)

    EPA Pesticide Factsheets

    THIS DATA ASSET NO LONGER ACTIVE: This is metadata documentation for the Region 7 Air Compliance Complaint Database (ACCD) which logs all air pollution complaints received by Region 7. It contains information about the complaint along with how the complaint was addressed. The Air and Waste Management Division is the primary managing entity for this database. This work falls under objectives for EPA's 2003-2008 Strategic Plan (Goal 1) for Clean Air & Global Climate Change, which are to achieve healthier outdoor air.

  12. Ariel Database Rule System Project

    DTIC Science & Technology

    1992-01-14

    Distribution unlimited. The Ariel project has culminated in several advancements in active database... [4] Moez Chaabouni. A top-level discrimination network for database rule systems. Master's thesis, Dept. of Computer Science and Eng., Wright State... Moez Chaabouni. The IBS-tree: A data structure for finding all intervals that overlap a point. Technical Report WSU-CS-90-11, Dept. of Computer...

  13. World electric power plants database

    SciTech Connect

    2006-06-15

    This global database provides records for 104,000 generating units in over 220 countries. These units include installed and projected facilities, central stations and distributed plants operated by utilities, independent power companies and commercial and self-generators. Each record includes information on: geographic location and operating company; technology, fuel and boiler; generator manufacturers; steam conditions; unit capacity and age; turbine/engine; architect/engineer and constructor; and pollution control equipment. The database is issued quarterly.

  14. DMTB: the magnetotactic bacteria database

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Lin, W.

    2012-12-01

    Magnetotactic bacteria (MTB) are of interest in biogeomagnetism, rock magnetism, microbiology, biomineralization, and advanced magnetic materials because of their ability to synthesize highly ordered intracellular nano-sized magnetic minerals, magnetite or greigite. Great strides in MTB studies have been made in the past few decades. More than 600 articles concerning MTB have been published. These rapidly growing data are stimulating cross-disciplinary studies in such fields as biogeomagnetism. We have compiled the first online database for MTB, the Database of Magnetotactic Bacteria (DMTB, http://database.biomnsl.com). It contains useful information on 16S rRNA gene sequences, oligonucleotides, and magnetic properties of MTB, and corresponding ecological metadata of sampling sites. The 16S rRNA gene sequences are collected from the GenBank database, while all other data are collected from the scientific literature. Rock magnetic properties for both uncultivated and cultivated MTB species are also included. In the DMTB database, data are accessible through four main interfaces: Site Sort, Phylo Sort, Oligonucleotides, and Magnetic Properties. References in each entry serve as links to specific pages within public databases. The online comprehensive DMTB will provide a very useful data resource for researchers from various disciplines, e.g., microbiology, rock magnetism and paleomagnetism, biogeomagnetism, magnetic material sciences and others.

  15. The new international GLE database

    NASA Astrophysics Data System (ADS)

    Duldig, M. L.; Watts, D. J.

    2001-08-01

    The Australian Antarctic Division has agreed to host the international GLE database. Access to the database is via a world-wide-web interface and initially covers all GLEs since the start of the 22nd solar cycle. Access restriction for recent events is controlled by password protection and these data are available only to those groups contributing data to the database. The restrictions to data will be automatically removed for events older than 2 years, in accordance with the data exchange provisions of the Antarctic Treaty. Use of the data requires acknowledgment of the database as the source of the data and acknowledgment of the specific groups that provided the data used. Furthermore, some groups that provide data to the database have specific acknowledgment requirements or wording. A new submission format has been developed that will allow easier exchange of data, although the old format will be acceptable for some time. Data download options include direct web based download and email. Data may also be viewed as listings or plots with web browsers. Search options have also been incorporated. Development of the database will be ongoing with extension to viewing and delivery options, addition of earlier data and the development of mirror sites. It is expected that two mirror sites, one in North America and one in Europe, will be developed to enable fast access for the whole cosmic ray community.

  16. Database Reports Over the Internet

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    2002-01-01

    Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer that is supported by Adobe Acrobat Reader. The data is stored in a DBMS (Database Management System). The client asks for the information from the database using an HTML (Hyper Text Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page, others are reported to the server. Access to the databases was restricted since the data are being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (Personal Computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
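
    The pipeline just described — form input, parameterized SQL query, template-filled report — can be outlined compactly. The Python sketch below stands in for the Java servlet, and the table and field names are invented for illustration:

```python
import sqlite3
from string import Template

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tests (test_id TEXT, result TEXT, run_date TEXT)")
conn.execute("INSERT INTO tests VALUES ('T-001', 'PASS', '2002-07-15')")

def build_report(test_id):
    # Parameterized SQL query, as the servlet would issue against the DBMS.
    row = conn.execute(
        "SELECT test_id, result, run_date FROM tests WHERE test_id = ?",
        (test_id,)).fetchone()
    if row is None:
        return "No such test"  # error path, reported back to the browser
    # Fill a report template (standing in for the PDF form template).
    return Template("Report $tid: $res on $d").substitute(
        tid=row[0], res=row[1], d=row[2])

print(build_report("T-001"))  # Report T-001: PASS on 2002-07-15
```

Parameterized placeholders (`?`) also help with the input-verification concern: user-supplied values never reach the SQL text itself.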

  17. Rice Glycosyltransferase (GT) Phylogenomic Database

    DOE Data Explorer

    Ronald, Pamela

    The Ronald Laboratory staff at the University of California-Davis has a primary research focus on the genes of the rice plant. They study the role that genetics plays in the way rice plants respond to their environment. They created the Rice GT Database in order to integrate functional genomic information for putative rice Glycosyltransferases (GTs). This database contains information on nearly 800 putative rice GTs (gene models) identified by sequence similarity searches based on the Carbohydrate Active enZymes (CAZy) database. The Rice GT Database provides a platform to display user-selected functional genomic data on a phylogenetic tree. This includes sequence information, mutant line information, expression data, etc. An interactive chromosomal map shows the position of all rice GTs, and links to rice annotation databases are included. The format is intended to "facilitate the comparison of closely related GTs within different families, as well as perform global comparisons between sets of related families." [From http://ricephylogenomics.ucdavis.edu/cellwalls/gt/genInfo.shtml] See also the primary paper discussing this work: Peijian Cao, Laura E. Bartley, Ki-Hong Jung and Pamela C. Ronald. Construction of a Rice Glycosyltransferase Phylogenomic Database and Identification of Rice-Diverged Glycosyltransferases. Molecular Plant, 2008, 1(5): 858-877.

  18. All Conservation Opportunity Areas (ECO.RES.ALL_OP_AREAS)

    EPA Pesticide Factsheets

    The All_OP_Areas GIS layer contains all the Conservation Opportunity Areas identified by MoRAP (produced for EPA Region 7). They designate areas with potential for forest, grassland and forest/grassland mosaic conservation. These are areas of natural or semi-natural forest land cover that are at least 75 meters away from roads and away from patch edges. The Opportunity Areas (OAs) were modeled by creating distance grids using the National Land Cover Database and the Census Bureau's TIGER roads files.
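
    The distance-grid rule above (keep cells at least 75 meters from roads) amounts to a nearest-road distance computation followed by a threshold. A toy sketch with an assumed 30 m cell size and a single invented road cell — the actual MoRAP workflow used the National Land Cover Database and TIGER rasters, not hand-built arrays:

```python
import numpy as np

cell = 30.0                       # assumed grid resolution in metres
road = np.zeros((5, 5), dtype=bool)
road[2, 0] = True                 # one invented road cell

# Distance from every cell to the nearest road cell, in metres.
ys, xs = np.indices(road.shape)
ry, rx = np.nonzero(road)
d = np.min(np.hypot(ys[..., None] - ry, xs[..., None] - rx), axis=-1) * cell

candidates = d >= 75.0            # the 75 m buffer rule
print(int(candidates.sum()))
```

For full-size rasters, a distance transform (e.g. `scipy.ndimage.distance_transform_edt`) replaces this brute-force minimum over all road cells.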

  19. Electron Inelastic-Mean-Free-Path Database

    National Institute of Standards and Technology Data Gateway

    SRD 71 NIST Electron Inelastic-Mean-Free-Path Database (PC database, no charge)   This database provides values of electron inelastic mean free paths (IMFPs) for use in quantitative surface analyses by AES and XPS.

  20. PEP725 Pan European Phenological Database

    NASA Astrophysics Data System (ADS)

    Koch, Elisabeth; Adler, Silke; Ungersböck, Markus; Zach-Hermann, Susanne

    2010-05-01

    Europe is in the fortunate situation that it has a long tradition in phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting and leaf fall together with climatological observations "so as to show how areas differ". The Societas Meteorologicae Palatinae at Mannheim, well known for its first Europe-wide meteorological network, also established a phenological network which was active from 1781 to 1792. Recently, in most European countries, phenological observations have been carried out routinely for more than 50 years by different governmental and non-governmental organisations, following different observation guidelines, with the data stored at different places in different formats. This has really hampered pan-European studies, as one has to address many National Observation Programs (NOPs) to get access to the data before one can start to bring them into a uniform style. From 2004 to 2005 the COST action 725 was running, with the main objective to establish a European reference data set of phenological observations that can be used for climatological purposes, especially climate monitoring and detection of changes. So far the common database/reference data set of COST725 comprises 7,687,248 records from 7285 observation sites in 15 countries and the International Phenological Gardens (IPG), spanning the timeframe from 1951 to 2000. ZAMG is hosting the database. In January 2010 PEP725 started; it will not only maintain and update the database, but also bring in phenological data from the time before 1951, develop better quality-checking procedures and ensure open access to the database. An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling a monitoring of

  1. PEP725 Pan European Phenological Database

    NASA Astrophysics Data System (ADS)

    Koch, E.; Adler, S.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2010-09-01

    Europe is in the fortunate situation that it has a long tradition in phenological networking: the history of collecting phenological data and using them in climatology has its starting point in 1751, when Carl von Linné outlined in his work Philosophia Botanica methods for compiling annual plant calendars of leaf opening, flowering, fruiting and leaf fall together with climatological observations "so as to show how areas differ". Recently, in most European countries, phenological observations have been carried out routinely for more than 50 years by different governmental and non-governmental organisations, following different observation guidelines, with the data stored at different places in different formats. This has really hampered pan-European studies, as one has to address many network operators to get access to the data before one can start to bring them into a uniform style. From 2004 to 2009 the COST action 725 established a Europe-wide data set of phenological observations. But the deliverables of this COST action were not only the common phenological database and common observation guidelines: COST725 also helped to trigger a revival of some old networks and to establish new ones, as for instance in Sweden. At the end of the COST action in 2009 the database comprised about 8 million records in total from 15 European countries plus the data from the International Phenological Gardens (IPG). In January 2010 PEP725 began its work as a follow-up project with funding from EUMETNET, the network of European meteorological services, and from ZAMG, the Austrian national meteorological service. PEP725 will not only maintain and update the COST725 database, but also bring in phenological data from the time before 1951, develop better quality-checking procedures and ensure open access to the database. An attractive webpage will make phenology and climate impacts on vegetation more visible to the public, enabling a monitoring of vegetation development.

  2. The IPD and IMGT/HLA database: allele variant databases.

    PubMed

    Robinson, James; Halliwell, Jason A; Hayhurst, James D; Flicek, Paul; Parham, Peter; Marsh, Steven G E

    2015-01-01

    The Immuno Polymorphism Database (IPD) was developed to provide a centralized system for the study of polymorphism in genes of the immune system. Through the IPD project we have established a central platform for the curation and publication of locus-specific databases involved either directly or related to the function of the Major Histocompatibility Complex in a number of different species. We have collaborated with specialist groups or nomenclature committees that curate the individual sections before they are submitted to IPD for online publication. IPD consists of five core databases, with the IMGT/HLA Database as the primary database. Through the work of the various nomenclature committees, the HLA Informatics Group and in collaboration with the European Bioinformatics Institute we are able to provide public access to this data through the website http://www.ebi.ac.uk/ipd/. The IPD project continues to develop with new tools being added to address scientific developments, such as Next Generation Sequencing, and to address user feedback and requests. Regular updates to the website ensure that new and confirmatory sequences are dispersed to the immunogenetics community, and the wider research and clinical communities.

  3. The IPD and IMGT/HLA database: allele variant databases

    PubMed Central

    Robinson, James; Halliwell, Jason A.; Hayhurst, James D.; Flicek, Paul; Parham, Peter; Marsh, Steven G. E.

    2015-01-01

    The Immuno Polymorphism Database (IPD) was developed to provide a centralized system for the study of polymorphism in genes of the immune system. Through the IPD project we have established a central platform for the curation and publication of locus-specific databases involved either directly or related to the function of the Major Histocompatibility Complex in a number of different species. We have collaborated with specialist groups or nomenclature committees that curate the individual sections before they are submitted to IPD for online publication. IPD consists of five core databases, with the IMGT/HLA Database as the primary database. Through the work of the various nomenclature committees, the HLA Informatics Group and in collaboration with the European Bioinformatics Institute we are able to provide public access to this data through the website http://www.ebi.ac.uk/ipd/. The IPD project continues to develop with new tools being added to address scientific developments, such as Next Generation Sequencing, and to address user feedback and requests. Regular updates to the website ensure that new and confirmatory sequences are dispersed to the immunogenetics community, and the wider research and clinical communities. PMID:25414341

  4. Flybrain neuron database: a comprehensive database system of the Drosophila brain neurons.

    PubMed

    Shinomiya, Kazunori; Matsuda, Keiji; Oishi, Takao; Otsuna, Hideo; Ito, Kei

    2011-04-01

    The long history of neuroscience has accumulated information about numerous types of neurons in the brain of various organisms. Because such neurons have been reported in diverse publications without controlled format, it is not easy to keep track of all the known neurons in a particular nervous system. To address this issue we constructed an online database called Flybrain Neuron Database (Flybrain NDB), which serves as a platform to collect and provide information about all the types of neurons published so far in the brain of Drosophila melanogaster. Projection patterns of the identified neurons in diverse areas of the brain were recorded in a unified format, with text-based descriptions as well as images and movies wherever possible. In some cases projection sites and the distribution of the post- and presynaptic sites were determined with greater detail than described in the original publication. Information about the labeling patterns of various antibodies and expression driver strains to visualize identified neurons are provided as a separate sub-database. We also implemented a novel visualization tool with which users can interactively examine three-dimensional reconstruction of the confocal serial section images with desired viewing angles and cross sections. Comprehensive collection and versatile search function of the anatomical information reported in diverse publications make it possible to analyze possible connectivity between different brain regions. We analyzed the preferential connectivity among optic lobe layers and the plausible olfactory sensory map in the lateral horn to show the usefulness of such a database.

  5. Finnish radon situation analysed using national measurement database.

    PubMed

    Valmari, T; Mäkeläinen, I; Reisbacka, H; Arvela, H

    2011-05-01

    The Radiation and Nuclear Safety Authority (STUK) maintains the national indoor radon measurement database in Finland. The analysis of the database material supplements information on the radon situation collected by random sampling surveys. The 92,000 dwellings in the database are not a representative sample of the Finnish housing stock. However, the bias is compensated by calculating radon parameters in 1-km(2) cells and weighting the cells by the number of dwellings in the cell. Both the database material and a recent random sampling survey show that radon concentrations in new Finnish houses have been decreasing since the 1990s. This positive trend is clearly stronger in radon-prone areas, where preventive measures are nowadays commonly implemented in new construction. The changeover to mechanical supply and exhaust ventilation together with the increase in crawl-space foundations has also contributed to the decrease in the concentrations.
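
    The bias compensation described above is, in effect, a dwelling-weighted average over cells. A toy illustration with invented numbers (not the Finnish data):

```python
# Each 1-km² cell: (mean measured radon concentration in Bq/m³,
#                   total number of dwellings in the cell) — invented values.
cells = [(300.0, 50), (120.0, 400), (80.0, 1000)]

# A naive mean over cells ignores how many dwellings each cell represents.
naive = sum(c for c, _ in cells) / len(cells)

# Weighting by dwelling count compensates for the non-representative sample:
# over-measured cells no longer dominate the national estimate.
weighted = sum(c * n for c, n in cells) / sum(n for _, n in cells)

print(round(naive, 1), round(weighted, 1))
```

Here the sparsely populated high-radon cell no longer dominates: the weighted mean (≈98.6) sits well below the naive cell mean (≈166.7).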

  6. A database of macromolecular motions.

    PubMed Central

    Gerstein, M; Krebs, W

    1998-01-01

    We describe a database of macromolecular motions meant to be of general use to the structural community. The database, which is accessible on the World Wide Web with an entry point at http://bioinfo.mbb.yale.edu/MolMovDB , attempts to systematize all instances of protein and nucleic acid movement for which there is at least some structural information. At present it contains >120 motions, most of which are of proteins. Protein motions are further classified hierarchically into a limited number of categories, first on the basis of size (distinguishing between fragment, domain and subunit motions) and then on the basis of packing. Our packing classification divides motions into various categories (shear, hinge, other) depending on whether or not they involve sliding over a continuously maintained and tightly packed interface. In addition, the database provides some indication about the evidence behind each motion (i.e. the type of experimental information or whether the motion is inferred based on structural similarity) and attempts to describe many aspects of a motion in terms of a standardized nomenclature (e.g. the maximum rotation, the residue selection of a fixed core, etc.). Currently, we use a standard relational design to implement the database. However, the complexity and heterogeneity of the information kept in the database makes it an ideal application for an object-relational approach, and we are moving it in this direction. Specifically, in terms of storing complex information, the database contains plausible representations for motion pathways, derived from restrained 3D interpolation between known endpoint conformations. These pathways can be viewed in a variety of movie formats, and the database is associated with a server that can automatically generate these movies from submitted coordinates. PMID:9722650

  7. Historical hydrology and database on flood events (Apulia, southern Italy)

    NASA Astrophysics Data System (ADS)

    Lonigro, Teresa; Basso, Alessia; Gentile, Francesco; Polemio, Maurizio

    2014-05-01

    Historical data about floods are an important tool for understanding hydrological processes, estimating hazard scenarios for Civil Protection purposes, and supporting rational land-use management, especially in karstic areas where time series of river flows are not available and river drainage is rare. The research shows the importance of improving an existing flood database with an historical approach, aimed at collecting past or historical flood events in order to better assess the occurrence trend of floods, in this case for the Apulia region (southern Italy). The main source of records of flood events for Apulia was the AVI database (the acronym stands for Italian damaged areas), an existing Italian database that collects data concerning damaging floods from 1918 to 1996. The database was expanded by consulting newspapers, publications, and technical reports from 1996 to 2006. To extend the temporal range further, data were collected from the archives of regional libraries: about 700 useful news items from 17 local newspapers were found for 1876 to 1951. A critical analysis of the roughly 700 items collected for 1876 to 1952 showed that only 437 were useful for the implementation of the Apulia database. Screening these items revealed about 122 flood events in the entire region. The district of Bari, the regional capital, is the area where the greatest number of events occurred; the historical analysis confirms this area as flood-prone. There is an overlapping period (1918 to 1952) between the old AVI database and the new historical dataset obtained from newspapers. For this period, the historical research has highlighted new flood events not reported in the existing AVI database and has also allowed more details to be added to events already recorded. This study shows that the database is a dynamic instrument, which allows a continuous implementation of data, even in real time

  8. The Chicago Thoracic Oncology Database Consortium: A Multisite Database Initiative

    PubMed Central

    Carey, George B; Tan, Yi-Hung Carol; Bokhary, Ujala; Itkonen, Michelle; Szeto, Kyle; Wallace, James; Campbell, Nicholas; Hensing, Thomas; Salgia, Ravi

    2016-01-01

    Objective: An increasing amount of clinical data is available to biomedical researchers, but specifically designed database and informatics infrastructures are needed to handle this data effectively. Multiple research groups should be able to pool and share this data in an efficient manner. The Chicago Thoracic Oncology Database Consortium (CTODC) was created to standardize data collection and facilitate the pooling and sharing of data at institutions throughout Chicago and across the world. We assessed the CTODC by conducting a proof-of-principle investigation on lung cancer patients who took erlotinib. This study does not look into epidermal growth factor receptor (EGFR) mutations and tyrosine kinase inhibitors, but rather discusses the development and utilization of the database involved. Methods: We have implemented the Thoracic Oncology Program Database Project (TOPDP) Microsoft Access, the Thoracic Oncology Research Program (TORP) Velos, and the TORP REDCap databases for translational research efforts. Standard operating procedures (SOPs) were created to document the construction and proper utilization of these databases. These SOPs have been made available freely to other institutions that have implemented their own databases patterned on these SOPs. Results: A cohort of 373 lung cancer patients who took erlotinib was identified. The EGFR mutation statuses of patients were analyzed. Of the 70 patients tested, 55 had mutations and 15 did not. In terms of overall survival and duration of treatment, the cohort demonstrated that EGFR-mutated patients had a longer duration of erlotinib treatment and longer overall survival compared to their EGFR wild-type counterparts who received erlotinib. Discussion: The investigation successfully yielded data from all institutions of the CTODC. While the investigation identified challenges, such as the difficulty of data transfer and potential duplication of patient data, these issues can be resolved

  9. REDIdb: the RNA editing database.

    PubMed

    Picardi, Ernesto; Regina, Teresa Maria Rosaria; Brennicke, Axel; Quagliariello, Carla

    2007-01-01

    The RNA Editing Database (REDIdb) is an interactive, web-based database created and designed with the aim to allocate RNA editing events such as substitutions, insertions and deletions occurring in a wide range of organisms. The database contains both fully and partially sequenced DNA molecules for which editing information is available either by experimental inspection (in vitro) or by computational detection (in silico). Each record of REDIdb is organized in a specific flat-file containing a description of the main characteristics of the entry, a feature table with the editing events and related details and a sequence zone with both the genomic sequence and the corresponding edited transcript. REDIdb is a relational database in which the browsing and identification of editing sites has been simplified by means of two facilities to either graphically display genomic or cDNA sequences or to show the corresponding alignment. In both cases, all editing sites are highlighted in colour and their relative positions are detailed by mousing over. New editing positions can be directly submitted to REDIdb after a user-specific registration to obtain authorized secure access. This first version of REDIdb database stores 9964 editing events and can be freely queried at http://biologia.unical.it/py_script/search.html.

  10. WDDD: Worm Developmental Dynamics Database.

    PubMed

    Kyoda, Koji; Adachi, Eru; Masuda, Eriko; Nagai, Yoko; Suzuki, Yoko; Oguro, Taeko; Urai, Mitsuru; Arai, Ryoko; Furukawa, Mari; Shimada, Kumiko; Kuramochi, Junko; Nagai, Eriko; Onami, Shuichi

    2013-01-01

    During animal development, cells undergo dynamic changes in position and gene expression. A collection of quantitative information about morphological dynamics under a wide variety of gene perturbations would provide a rich resource for understanding the molecular mechanisms of development. Here, we created a database, the Worm Developmental Dynamics Database (http://so.qbic.riken.jp/wddd/), which stores a collection of quantitative information about cell division dynamics in early Caenorhabditis elegans embryos with single genes silenced by RNA-mediated interference. The information contains the three-dimensional coordinate values of the outlines of nuclear regions and the dynamics of the outlines over time. The database provides free access to 50 sets of quantitative data for wild-type embryos and 136 sets of quantitative data for RNA-mediated interference embryos corresponding to 72 of the 97 essential embryonic genes on chromosome III. The database also provides sets of four-dimensional differential interference contrast microscopy images on which the quantitative data were based. The database will provide a novel opportunity for the development of computational methods to obtain fresh insights into the mechanisms of development. The quantitative information and microscopy images can be synchronously viewed through a web browser, which is designed for easy access by experimental biologists.

  11. An Alaska Soil Carbon Database

    NASA Astrophysics Data System (ADS)

    Johnson, Kristofer; Harden, Jennifer

    2009-05-01

    Database Collaborator's Meeting; Fairbanks, Alaska, 4 March 2009; Soil carbon pools in northern high-latitude regions and their response to climate changes are highly uncertain, and collaboration is required from field scientists and modelers to establish baseline data for carbon cycle studies. The Global Change Program at the U.S. Geological Survey has funded a 2-year effort to establish a soil carbon network and database for Alaska based on collaborations from numerous institutions. To initiate a community effort, a workshop for the development of an Alaska soil carbon database was held at the University of Alaska Fairbanks. The database will be a resource for spatial and biogeochemical models of Alaska ecosystems and will serve as a prototype for a nationwide community project: the National Soil Carbon Network (http://www.soilcarb.net). Studies will benefit from the combination of multiple academic and government data sets. This collaborative effort is expected to identify data gaps and uncertainties more comprehensively. Future applications of information contained in the database will identify specific vulnerabilities of soil carbon in Alaska to climate change, disturbance, and vegetation change.

  12. Developing a DNA variant database.

    PubMed

    Fung, David C Y

    2008-01-01

    Disease- and locus-specific variant databases have been a valuable resource to clinical and research geneticists. With the recent rapid developments in technologies, the number of DNA variants detected in a typical molecular genetics laboratory easily exceeds 1,000. To keep track of the growing inventory of DNA variants, many laboratories employ information technology to store the data as well as distributing the data and its associated information to clinicians and researchers via the Web. While it is a valuable resource, the hosting of a web-accessible database requires collaboration between bioinformaticians and biologists and careful planning to ensure its usability and availability. In this chapter, a series of tutorials on building a local DNA variant database out of a sample dataset will be provided. However, this tutorial will not include programming details on building a web interface and on constructing the web application necessary for web hosting. Instead, an introduction to the two commonly used methods for hosting web-accessible variant databases will be described. Apart from the tutorials, this chapter will also consider the resources and planning required for making a variant database project successful.
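    As a concrete illustration of the kind of local store such tutorials build toward, here is a minimal sketch using Python's built-in sqlite3 module. The table layout is hypothetical and far simpler than a production locus-specific database schema; the sample variant is the well-known CFTR F508del deletion.

```python
import sqlite3

# Minimal local DNA variant store (illustrative schema only,
# not the schema used in the chapter's tutorials).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE variant (
        id INTEGER PRIMARY KEY,
        gene TEXT NOT NULL,           -- gene symbol
        hgvs TEXT NOT NULL UNIQUE,    -- variant description (HGVS)
        classification TEXT           -- e.g. 'pathogenic', 'benign'
    )
""")
conn.execute(
    "INSERT INTO variant (gene, hgvs, classification) VALUES (?, ?, ?)",
    ("CFTR", "c.1521_1523delCTT", "pathogenic"),
)
rows = conn.execute(
    "SELECT gene, classification FROM variant WHERE gene = ?", ("CFTR",)
).fetchall()
```

    A real deployment would sit behind a web application with user registration and curation workflows, but the relational core is of this general shape.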

  13. WDDD: Worm Developmental Dynamics Database

    PubMed Central

    Kyoda, Koji; Adachi, Eru; Masuda, Eriko; Nagai, Yoko; Suzuki, Yoko; Oguro, Taeko; Urai, Mitsuru; Arai, Ryoko; Furukawa, Mari; Shimada, Kumiko; Kuramochi, Junko; Nagai, Eriko; Onami, Shuichi

    2013-01-01

    During animal development, cells undergo dynamic changes in position and gene expression. A collection of quantitative information about morphological dynamics under a wide variety of gene perturbations would provide a rich resource for understanding the molecular mechanisms of development. Here, we created a database, the Worm Developmental Dynamics Database (http://so.qbic.riken.jp/wddd/), which stores a collection of quantitative information about cell division dynamics in early Caenorhabditis elegans embryos with single genes silenced by RNA-mediated interference. The information contains the three-dimensional coordinate values of the outlines of nuclear regions and the dynamics of the outlines over time. The database provides free access to 50 sets of quantitative data for wild-type embryos and 136 sets of quantitative data for RNA-mediated interference embryos corresponding to 72 of the 97 essential embryonic genes on chromosome III. The database also provides sets of four-dimensional differential interference contrast microscopy images on which the quantitative data were based. The database will provide a novel opportunity for the development of computational methods to obtain fresh insights into the mechanisms of development. The quantitative information and microscopy images can be synchronously viewed through a web browser, which is designed for easy access by experimental biologists. PMID:23172286

  14. The Giardia genome project database.

    PubMed

    McArthur, A G; Morrison, H G; Nixon, J E; Passamaneck, N Q; Kim, U; Hinkle, G; Crocker, M K; Holder, M E; Farr, R; Reich, C I; Olsen, G E; Aley, S B; Adam, R D; Gillin, F D; Sogin, M L

    2000-08-15

    The Giardia genome project database provides an online resource for Giardia lamblia (WB strain, clone C6) genome sequence information. The database includes edited single-pass reads, the results of BLASTX searches, and details of progress towards sequencing the entire 12 million-bp Giardia genome. Pre-sorted BLASTX results can be retrieved based on keyword searches and BLAST searches of the high throughput Giardia data can be initiated from the web site or through NCBI. Descriptions of the genomic DNA libraries, project protocols and summary statistics are also available. Although the Giardia genome project is ongoing, new sequences are made available on a bi-monthly basis to ensure that researchers have access to information that may assist them in the search for genes and their biological function. The current URL of the Giardia genome project database is www.mbl.edu/Giardia.

  15. Searching NCBI Databases Using Entrez.

    PubMed

    Gibney, Gretchen; Baxevanis, Andreas D

    2011-10-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two basic protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An alternate protocol builds upon the first basic protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The support protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.
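    The text-based queries described in the basic protocols are served by NCBI's E-utilities; a hedged sketch of how such a query URL is assembled (without actually contacting NCBI) follows. The search term is an arbitrary example, and the helper function name is ours, not part of any NCBI client library.

```python
from urllib.parse import urlencode

# Assemble an NCBI E-utilities ESearch URL of the kind that backs
# Entrez text searches. We only build the URL here; fetching it
# would return the IDs of matching records in the chosen database.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(db, term, retmax=20):
    """Return an ESearch query URL for `term` against database `db`."""
    params = urlencode({"db": db, "term": term, "retmax": retmax})
    return f"{BASE}?{params}"

url = esearch_url("pubmed", "BRCA1[Gene] AND human[Organism]")
```

    The same `db` parameter is what lets one query interface span PubMed, nucleotide, protein, and structure records, which is the cross-database linking Entrez is built around.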

  16. Searching NCBI databases using Entrez.

    PubMed

    Baxevanis, Andreas D

    2008-12-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two Basic Protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An Alternate Protocol builds upon the first Basic Protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The Support Protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.

  17. Searching NCBI databases using Entrez.

    PubMed

    Gibney, Gretchen; Baxevanis, Andreas D

    2011-06-01

    One of the most widely used interfaces for the retrieval of information from biological databases is the NCBI Entrez system. Entrez capitalizes on the fact that there are pre-existing, logical relationships between the individual entries found in numerous public databases. The existence of such natural connections, mostly biological in nature, argued for the development of a method through which all the information about a particular biological entity could be found without having to sequentially visit and query disparate databases. Two basic protocols describe simple, text-based searches, illustrating the types of information that can be retrieved through the Entrez system. An alternate protocol builds upon the first basic protocol, using additional, built-in features of the Entrez system, and providing alternative ways to issue the initial query. The support protocol reviews how to save frequently issued queries. Finally, Cn3D, a structure visualization tool, is also discussed.

  18. Stratospheric emissions effects database development

    NASA Technical Reports Server (NTRS)

    Baughcum, Steven L.; Henderson, Stephen C.; Hertel, Peter S.; Maggiora, Debra R.; Oncina, Carlos A.

    1994-01-01

    This report describes the development of a stratospheric emissions effects database (SEED) of aircraft fuel burn and emissions from projected Year 2015 subsonic aircraft fleets and from projected fleets of high-speed civil transports (HSCT's). This report also describes the development of a similar database of emissions from Year 1990 scheduled commercial passenger airline and air cargo traffic. The objective of this work was to initiate, develop, and maintain an engineering database for use by atmospheric scientists conducting the Atmospheric Effects of Stratospheric Aircraft (AESA) modeling studies. Fuel burn and emissions of nitrogen oxides (NO(x) as NO2), carbon monoxide, and hydrocarbons (as CH4) have been calculated on a 1-degree latitude x 1-degree longitude x 1-kilometer altitude grid and delivered to NASA as electronic files. This report describes the assumptions and methodology for the calculations and summarizes the results of these calculations.
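    To make the gridding concrete: on a 1-degree latitude × 1-degree longitude × 1-kilometer altitude grid, each flight sample maps to a cell index and fuel burn or emissions accumulate per cell. The sketch below uses a plain dictionary and made-up sample values; it does not reproduce the actual SEED file layout.

```python
from math import floor

# Accumulate emissions onto a 1-deg x 1-deg x 1-km grid.
# Dictionary keyed by integer cell indices; illustrative only,
# not the actual SEED data format. Sample values are invented.

def cell_index(lat, lon, alt_km):
    """Map a sample position to its (lat, lon, altitude) grid cell."""
    return (floor(lat), floor(lon), floor(alt_km))

grid = {}

def accumulate(grid, lat, lon, alt_km, nox_kg):
    key = cell_index(lat, lon, alt_km)
    grid[key] = grid.get(key, 0.0) + nox_kg

accumulate(grid, 40.3, -73.8, 10.4, 2.5)   # cruise sample
accumulate(grid, 40.7, -73.2, 10.9, 1.5)   # same cell
accumulate(grid, 41.1, -73.5, 10.2, 4.0)   # adjacent latitude band
```

    Summing contributions per cell in this way is what lets atmospheric modelers ingest the emissions as a gridded source term.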

  19. ADASS Web Database XML Project

    NASA Astrophysics Data System (ADS)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
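    The relational-to-XML mapping the paper describes can be sketched with Python's standard library; the element and field names below are hypothetical stand-ins, not the actual ADASS schema or MySQL tables.

```python
import xml.etree.ElementTree as ET

# Map relational rows (as returned by a MySQL query) to XML.
# Element names are illustrative, not the real ADASS schema.
papers = [
    {"id": "O1-01", "author": "Smith, J.", "title": "Pipeline Tools"},
    {"id": "P2-14", "author": "Jones, A.", "title": "Archive Access"},
]

root = ET.Element("conference", year="2000")
for row in papers:
    paper = ET.SubElement(root, "paper", id=row["id"])
    ET.SubElement(paper, "author").text = row["author"]
    ET.SubElement(paper, "title").text = row["title"]

xml_text = ET.tostring(root, encoding="unicode")
```

    Emitting well-formed XML from the database, rather than scraping loosely structured HTML, is precisely what makes downstream automated processing reliable.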

  20. The Life Support Database system

    NASA Technical Reports Server (NTRS)

    Likens, William C.

    1991-01-01

    The design and implementation of the database system are described with specific reference to data available from the Build-1 version and techniques for its utilization. The review of the initial documents for the Life Support Database is described in terms of title format and sequencing, and the users are defined as participants in NASA-sponsored life-support research. The software and hardware selections are based respectively on referential integrity and compatibility, and the implementation of the user interface is achieved by means of an applications-programming tool. The current Beta-Test implementation of the system includes several thousand acronyms and bibliographic references as well as chemical properties and exposure limits, equipment, construction materials, and mission data. In spite of modifications in the database the system is found to be effective and a potentially significant resource for the aerospace community.

  1. DOE Global Energy Storage Database

    DOE Data Explorer

    The DOE International Energy Storage Database has more than 400 documented energy storage projects from 34 countries around the world. The database provides free, up-to-date information on grid-connected energy storage projects and relevant state and federal policies. More than 50 energy storage technologies are represented worldwide, including multiple battery technologies, compressed air energy storage, flywheels, gravel energy storage, hydrogen energy storage, pumped hydroelectric, superconducting magnetic energy storage, and thermal energy storage. The policy section of the database shows 18 federal and state policies addressing grid-connected energy storage, from rules and regulations to tariffs and other financial incentives. It is funded through DOE’s Sandia National Laboratories, and has been operating since January 2012.

  2. A veterinary digital anatomical database.

    PubMed

    Snell, J R; Green, R; Stott, G; Van Baerle, S

    1991-01-01

    This paper describes the Veterinary Digital Anatomical Database Project. The purpose of the project is to investigate the construction and use of digitally stored anatomical models. We will be discussing the overall project goals and the results to date. Digital anatomical models are 3 dimensional, solid model representations of normal anatomy. The digital representations are electronically stored and can be manipulated and displayed on a computer graphics workstation. A digital database of anatomical structures can be used in conjunction with gross dissection in teaching normal anatomy to first year students in the professional curriculum. The computer model gives students the opportunity to "discover" relationships between anatomical structures that may have been destroyed or may not be obvious in the gross dissection. By using a digital database, the student will have the ability to view and manipulate anatomical structures in ways that are not available through interactive video disk (IVD). IVD constrains the student to preselected views and sections stored on the disk.

  3. National Residential Efficiency Measures Database

    DOE Data Explorer

    The National Residential Efficiency Measures Database is a publicly available, centralized resource of residential building retrofit measures and costs for the U.S. building industry. With support from the U.S. Department of Energy, NREL developed this tool to help users determine the most cost-effective retrofit measures for improving energy efficiency of existing homes. Software developers who require residential retrofit performance and cost data for applications that evaluate residential efficiency measures are the primary audience for this database. In addition, home performance contractors and manufacturers of residential materials and equipment may find this information useful. The database offers the following types of retrofit measures: 1) Appliances, 2) Domestic Hot Water, 3) Enclosure, 4) Heating, Ventilating, and Air Conditioning (HVAC), 5) Lighting, 6) Miscellaneous.

  4. YMDB: the Yeast Metabolome Database

    PubMed Central

    Jewison, Timothy; Knox, Craig; Neveu, Vanessa; Djoumbou, Yannick; Guo, An Chi; Lee, Jacqueline; Liu, Philip; Mandal, Rupasri; Krishnamurthy, Ram; Sinelnikov, Igor; Wilson, Michael; Wishart, David S.

    2012-01-01

    The Yeast Metabolome Database (YMDB, http://www.ymdb.ca) is a richly annotated ‘metabolomic’ database containing detailed information about the metabolome of Saccharomyces cerevisiae. Modeled closely after the Human Metabolome Database, the YMDB contains >2000 metabolites with links to 995 different genes/proteins, including enzymes and transporters. The information in YMDB has been gathered from hundreds of books, journal articles and electronic databases. In addition to its comprehensive literature-derived data, the YMDB also contains an extensive collection of experimental intracellular and extracellular metabolite concentration data compiled from detailed Mass Spectrometry (MS) and Nuclear Magnetic Resonance (NMR) metabolomic analyses performed in our lab. This is further supplemented with thousands of NMR and MS spectra collected on pure, reference yeast metabolites. Each metabolite entry in the YMDB contains an average of 80 separate data fields including comprehensive compound description, names and synonyms, structural information, physico-chemical data, reference NMR and MS spectra, intracellular/extracellular concentrations, growth conditions and substrates, pathway information, enzyme data, gene/protein sequence data, as well as numerous hyperlinks to images, references and other public databases. Extensive searching, relational querying and data browsing tools are also provided that support text, chemical structure, spectral, molecular weight and gene/protein sequence queries. Because of S. cerevisiae's importance as a model organism for biologists and as a biofactory for industry, we believe this kind of database could have considerable appeal not only to metabolomics researchers, but also to yeast biologists, systems biologists, the industrial fermentation industry, as well as the beer, wine and spirit industry. PMID:22064855

  5. The new IAGOS Database Portal

    NASA Astrophysics Data System (ADS)

    Boulanger, Damien; Gautron, Benoit; Thouret, Valérie; Fontaine, Alain

    2016-04-01

    IAGOS (In-service Aircraft for a Global Observing System) is a European Research Infrastructure which aims at the provision of long-term, regular and spatially resolved in situ observations of the atmospheric composition. IAGOS observation systems are deployed on a fleet of commercial aircraft. The IAGOS database is an essential part of the global atmospheric monitoring network. It contains IAGOS-core data and IAGOS-CARIBIC (Civil Aircraft for the Regular Investigation of the Atmosphere Based on an Instrument Container) data. The IAGOS Database Portal (http://www.iagos.fr, damien.boulanger@obs-mip.fr) is part of the French atmospheric chemistry data center AERIS (http://www.aeris-data.fr). The new IAGOS Database Portal has been released in December 2015. The main improvement is the interoperability implementation with international portals or other databases in order to improve IAGOS data discovery. In the frame of the IGAS project (IAGOS for the Copernicus Atmospheric Service), a data network has been setup. It is composed of three data centers: the IAGOS database in Toulouse; the HALO research aircraft database at DLR (https://halo-db.pa.op.dlr.de); and the CAMS data center in Jülich (http://join.iek.fz-juelich.de). The CAMS (Copernicus Atmospheric Monitoring Service) project is a prominent user of the IGAS data network. The new portal provides improved and new services such as the download in NetCDF or NASA Ames formats, plotting tools (maps, time series, vertical profiles, etc.) and user management. Added value products are available on the portal: back trajectories, origin of air masses, co-location with satellite data, etc. The link with the CAMS data center, through JOIN (Jülich OWS Interface), allows to combine model outputs with IAGOS data for inter-comparison. Finally IAGOS metadata has been standardized (ISO 19115) and now provides complete information about data traceability and quality.

  6. The Danish Cardiac Rehabilitation Database

    PubMed Central

    Zwisler, Ann-Dorthe; Rossau, Henriette Knold; Nakano, Anne; Foghmar, Sussie; Eichhorst, Regina; Prescott, Eva; Cerqueira, Charlotte; Soja, Anne Merete Boas; Gislason, Gunnar H; Larsen, Mogens Lytken; Andersen, Ulla Overgaard; Gustafsson, Ida; Thomsen, Kristian K; Boye Hansen, Lene; Hammer, Signe; Viggers, Lone; Christensen, Bo; Kvist, Birgitte; Lindström Egholm, Cecilie; May, Ole

    2016-01-01

    Aim of database The Danish Cardiac Rehabilitation Database (DHRD) aims to improve the quality of cardiac rehabilitation (CR) to the benefit of patients with coronary heart disease (CHD). Study population Hospitalized patients with CHD with stenosis on coronary angiography treated with percutaneous coronary intervention, coronary artery bypass grafting, or medication alone. Reporting is mandatory for all hospitals in Denmark delivering CR. The database was initially implemented in 2013 and was fully running from August 14, 2015, thus comprising data at a patient level from the latter date onward. Main variables Patient-level data are registered by clinicians at the time of entry to CR directly into an online system with simultaneous linkage to other central patient registers. Follow-up data are entered after 6 months. The main variables collected are related to key outcome and performance indicators of CR: referral and adherence, lifestyle, patient-related outcome measures, risk factor control, and medication. Program-level online data are collected every third year. Descriptive data Based on administrative data, approximately 14,000 patients with CHD are hospitalized at 35 hospitals annually, with 75% receiving one or more outpatient rehabilitation services by 2015. The database has not yet been running for a full year, which explains the use of approximations. Conclusion The DHRD is an online, national quality improvement database on CR, aimed at patients with CHD. Mandatory registration of data at both patient level as well as program level is done on the database. DHRD aims to systematically monitor the quality of CR over time, in order to improve the quality of CR throughout Denmark to benefit patients. PMID:27822083

  7. Diaretinopathy database –A Gene database for diabetic retinopathy

    PubMed Central

    Vidhya, Gopalakrishnan; Anusha, Bhaskar

    2014-01-01

    Diabetic retinopathy is a microvascular complication of diabetes mellitus and a major cause of adult blindness. Despite advances in diagnosis and treatment, the pathogenesis of diabetic retinopathy is not well understood. Results from epidemiological studies of diabetic patients suggest that there are familial predispositions to diabetes and to diabetic retinopathy. Therefore the main purpose of this database is to help both scientists and doctors in studying the candidate genes responsible for causing diabetic retinopathy. For each candidate gene, the official symbol, chromosome map, number of exons, GT-AG introns, motif, polymorphic variation and 3D structure are given. In addition to the molecular class and function of these genes, this database also provides links to download the corresponding nucleotide and amino acid sequences in FASTA format, which may be further used for computational approaches. Therefore this database will increase the understanding of the genetics underlying the development or progression of diabetic retinopathy and will have an impact on future diagnostic, prevention and intervention strategies. Availability: The database is freely available at http://diaretinopathydatabase.com PMID:24966527

  8. SEISMIC-REFLECTOR DATABASE SOFTWARE.

    USGS Publications Warehouse

    Wright, Evelyn L.; Hosom, John-Paul; ,

    1986-01-01

    The seismic data analysis (SDA) software system facilitates generation of marine seismic reflector databases composed of reflector depths, travel times, root-mean-square and interval velocities, geographic coordinates, and identifying information. System processes include digitizing of seismic profiles and velocity semblance curves, merging of velocity and navigation data with profile travel-time data, calculation of reflector depths in meters, profile and map graphic displays, data editing and smoothing, and entry of finalized data into a comprehensive database. An overview of concepts, file structures, and programs is presented.
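    The depth calculation such a system performs can be sketched from interval velocities and two-way travel times: each layer contributes (interval velocity) × (two-way time in the layer) / 2 of depth, the factor of 2 converting two-way to one-way time. The layer values below are invented for illustration, not taken from any real profile.

```python
# Reflector depths in meters from interval velocities and the
# two-way travel time spent in each layer. Illustrative values only.

def reflector_depths(interval_v_mps, twt_s):
    """Cumulative depth (m) to the base of each layer.

    interval_v_mps: interval velocity of each layer (m/s)
    twt_s: two-way travel time spent in each layer (s)
    """
    depths, z = [], 0.0
    for v, dt in zip(interval_v_mps, twt_s):
        z += v * dt / 2.0   # one-way thickness of this layer
        depths.append(z)
    return depths

# Water column, then two sediment layers
depths = reflector_depths([1500.0, 2000.0, 3000.0], [2.0, 1.0, 0.5])
```

    In practice the interval velocities themselves would be derived from the digitized RMS-velocity semblance picks the abstract mentions (e.g. via the Dix equation) before this depth conversion is applied.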

  9. Data exploration systems for databases

    NASA Technical Reports Server (NTRS)

    Greene, Richard J.; Hield, Christopher

    1992-01-01

    Data exploration systems apply machine learning techniques, multivariate statistical methods, information theory, and database theory to databases to identify significant relationships among the data and summarize information. The result of applying data exploration systems should be a better understanding of the structure of the data and a perspective of the data enabling an analyst to form hypotheses for interpreting the data. This paper argues that data exploration systems need a minimum amount of domain knowledge to guide both the statistical strategy and the interpretation of the resulting patterns discovered by these systems.

  10. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Masuyama, Keiichi

    CD-ROM has rapidly evolved as a new information medium with large capacity. In the U.S. it is predicted to become a two-hundred-billion-yen market within three years, and CD-ROM is thus a strategic target of the database industry. Here in Japan, the movement toward its commercialization has been active since this year. Will the CD-ROM business ever conquer the information market as an on-disk database or electronic publication? Referring to some cases of its application in the U.S., the author assesses the marketability and future trend of this new optical disk medium.

  11. DDD: Dynamic Database for Diatomics

    NASA Technical Reports Server (NTRS)

    Schwenke, David

    2004-01-01

    We have developed a web-based database containing spectra of diatomic molecules. All data are computed from first principles, and if a user requests data for a molecule/ion that is not in the database, new calculations are automatically carried out on that species. Rotational, vibrational, and electronic transitions are included. Different levels of accuracy can be selected, from qualitatively correct to the best calculations that can be carried out. The user can view and modify spectroscopic constants, view potential energy curves, download detailed high-temperature linelists, or view synthetic spectra.

  12. Coal quality databases: Practical applications

    SciTech Connect

    Finkelman, R.B.; Gross, P.M.K.

    1999-07-01

    Domestic and worldwide coal use will be influenced by concerns about the effects of coal combustion on the local, regional and global environment. Reliable coal quality data can help decision-makers to better assess risks and determine impacts of coal constituents on technological behavior, economic byproduct recovery, and environmental and human health issues. The US Geological Survey (USGS) maintains an existing coal quality database (COALQUAL) that contains analyses of approximately 14,000 coal samples from every major coal-producing basin in the US. For each sample, the database contains results of proximate and ultimate analyses; sulfur form data; and major, minor, and trace element concentrations for approximately 70 elements.

  13. Quality control of EUVE databases

    NASA Technical Reports Server (NTRS)

    John, L. M.; Drake, J.

    1992-01-01

    The publicly accessible databases for the Extreme Ultraviolet Explorer include: the EUVE Archive mailserver; the CEA ftp site; the EUVE Guest Observer Mailserver; and the Astronomical Data System node. The EUVE Performance Assurance team is responsible for verifying that these public EUVE databases are working properly, and that the public availability of EUVE data contained therein does not infringe any data rights which may have been assigned. In this poster, we describe the Quality Assurance (QA) procedures we have developed from the approach of QA as a service organization, thus reflecting the overall EUVE philosophy of Quality Assurance integrated into normal operating procedures, rather than imposed as an external, post facto, control mechanism.

  14. The 24th annual Nucleic Acids Research database issue: a look back and upcoming changes

    PubMed Central

    Galperin, Michael Y.; Fernández-Suárez, Xosé M.; Rigden, Daniel J.

    2017-01-01

    This year's Database Issue of Nucleic Acids Research contains 152 papers that include descriptions of 54 new databases and update papers on 98 databases, of which 16 have not been previously featured in NAR. As always, these databases cover a broad range of molecular biology subjects, including genome structure, gene expression and its regulation, proteins, protein domains, and protein–protein interactions. Following the recent trend, an increasing number of new and established databases deal with the issues of human health, from cancer-causing mutations to drugs and drug targets. In accordance with this trend, three recently compiled databases that have been selected by NAR reviewers and editors as ‘breakthrough’ contributions, denovo-db, the Monarch Initiative, and Open Targets, cover human de novo gene variants, disease-related phenotypes in model organisms, and a bioinformatics platform for therapeutic target identification and validation, respectively. We expect these databases to attract the attention of numerous researchers working in various areas of genetics and genomics. Looking back at the past 12 years, we present here the ‘golden set’ of databases that have consistently served as authoritative, comprehensive, and convenient data resources widely used by the entire community and offer some lessons on what makes a successful database. The Database Issue is freely available online at the https://academic.oup.com/nar web site. An updated version of the NAR Molecular Biology Database Collection is available at http://www.oxfordjournals.org/nar/database/a/. PMID:28053160

  15. Federal Register Document Image Database, Volume 1

    National Institute of Standards and Technology Data Gateway

    NIST Federal Register Document Image Database, Volume 1 (PC database for purchase)   NIST has produced a new document image database for evaluating document analysis and recognition technologies and information retrieval systems. NIST Special Database 25 contains page images from the 1994 Federal Register and much more.

  16. Building Databases for Education. ERIC Digest.

    ERIC Educational Resources Information Center

    Klausmeier, Jane A.

    This digest provides a brief explanation of what a database is; explains how a database can be used; identifies important factors that should be considered when choosing database management system software; and provides citations to sources for finding reviews and evaluations of database management software. The digest is concerned primarily with…

  17. WMC Database Evaluation. Case Study Report

    SciTech Connect

    Palounek, Andrea P. T

    2015-10-29

    The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.

  18. Online Petroleum Industry Bibliographic Databases: A Review.

    ERIC Educational Resources Information Center

    Anderson, Margaret B.

    This paper discusses the present status of the bibliographic database industry, reviews the development of online databases of interest to the petroleum industry, and considers future developments in online searching and their effect on libraries and information centers. Three groups of databases are described: (1) databases developed by the…

  19. Statistical Profile of Currently Available CD-ROM Database Products.

    ERIC Educational Resources Information Center

    Nicholls, Paul Travis

    1988-01-01

    Survey of currently available CD-ROM products discusses: (1) subject orientation; (2) database type; (3) update frequency; (4) price structure; (5) hardware configuration; (6) retrieval software; and (7) publisher/marketer. Several graphs depict data in these areas. (five references) (MES)

  20. NLTE4 Plasma Population Kinetics Database

    National Institute of Standards and Technology Data Gateway

    SRD 159 NLTE4 Plasma Population Kinetics Database (Web database for purchase)   This database contains benchmark results for simulation of plasma population kinetics and emission spectra. The data were contributed by the participants of the 4th Non-LTE Code Comparison Workshop who have unrestricted access to the database. The only limitation for other users is in hidden labeling of the output results. Guest users can proceed to the database entry page without entering userid and password.

  1. Guide on Logical Database Design.

    ERIC Educational Resources Information Center

    Fong, Elizabeth N.; And Others

    This report discusses an iterative methodology for logical database design (LDD). The methodology includes four phases: local information-flow modeling, global information-flow modeling, conceptual schema design, and external schema modeling. These phases are intended to make maximum use of available information and user expertise, including the…

  2. Data-Based Teacher Development.

    ERIC Educational Resources Information Center

    Borg, Simon

    1998-01-01

    Describes how data from English language teaching (ELT) classroom research can be exploited in teacher development activities. The contribution data-based activities can make to teacher development is outlined, and examples that illustrate the principles underlying their design are presented. A case is made for using such activities to facilitate…

  3. Safeguarding Databases Basic Concepts Revisited.

    ERIC Educational Resources Information Center

    Cardinali, Richard

    1995-01-01

    Discusses issues of database security and integrity, including computer crime and vandalism, human error, computer viruses, employee and user access, and personnel policies. Suggests some precautions to minimize system vulnerability such as careful personnel screening, audit systems, passwords, and building and software security systems. (JKP)

  4. The New NRL Crystallographic Database

    NASA Astrophysics Data System (ADS)

    Mehl, Michael; Curtarolo, Stefano; Hicks, David; Toher, Cormac; Levy, Ohad; Hart, Gus

    For many years the Naval Research Laboratory maintained an online graphical database of crystal structures for a wide variety of materials. This database has now been redesigned, updated and integrated with the AFLOW framework for high throughput computational materials discovery (http://materials.duke.edu/aflow.html). For each structure we provide an image showing the atomic positions; the primitive vectors of the lattice and the basis vectors of every atom in the unit cell; the space group and Wyckoff positions; Pearson symbols; common names; and Strukturbericht designations, where available. References for each structure are provided, as well as a Crystallographic Information File (CIF). The database currently includes almost 300 entries and will be continuously updated and expanded. It enables easy search of the various structures based on their underlying symmetries, either by Bravais lattice, Pearson symbol, Strukturbericht designation or commonly used prototypes. The talk will describe the features of the database, and highlight its utility for high throughput computational materials design. Work at NRL is funded by a Contract with the Duke University Department of Mechanical Engineering.

  5. Databases and the Professional Evaluator.

    ERIC Educational Resources Information Center

    Schellenberg, Stephen J.

    The role of the professional evaluator within a school district is essentially to provide data for use in informed decision making. In School District 4J in Eugene, Oregon, this role involves performing tasks in three basic categories: (1) maintaining and interpreting ongoing databases, (2) finding and analyzing information to answer specific…

  6. Using Databases in History Teaching.

    ERIC Educational Resources Information Center

    Knight, P.; Timmins, G.

    1986-01-01

    Discusses advantages and limitations of database software in meeting the educational objectives of history instruction; reviews five currently available computer programs (FACTFILE, QUEST, QUARRY BANK 1851, Census Analysis, and Beta Base); highlights major considerations that arise in designing such programs; and describes their classroom use.…

  7. Technostress: Surviving a Database Crash.

    ERIC Educational Resources Information Center

    Dobb, Linda S.

    1990-01-01

    Discussion of technostress in libraries focuses on a database crash at California Polytechnic State University, San Luis Obispo. Steps taken to restore the data are explained, strategies for handling technological accidents are suggested, the impact on library staff is discussed, and a 10-item annotated bibliography on technostress is provided.…

  8. Database Transformations for Biological Applications

    SciTech Connect

    Overton, C.; Davidson, S. B.; Buneman, P.; Tannen, V.

    2001-04-11

    The goal of this project was to develop tools to facilitate data transformations between heterogeneous data sources found throughout biomedical applications. Such transformations are necessary when sharing data between different groups working on related problems as well as when querying data spread over different databases, files and software analysis packages.

  9. Online Databases. ASCII Full Texts.

    ERIC Educational Resources Information Center

    Tenopir, Carol

    1995-01-01

    Defines the American Standard Code for Information Interchange (ASCII) full text, and reviews its past, present, and future uses in libraries. Discusses advantages, disadvantages, and uses of searchable and nonsearchable full-text databases. Also comments on full-text CD-ROM products and on technological advancements made by library vendors. (JMV)

  10. Maize Genetics and Genomics Database

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The 2007 report for MaizeGDB lists the new hires who will focus on curation/outreach and the genome sequence, respectively. Currently all sequence in the database comes from a PlantGDB pipeline and is presented with deep links to external resources such as PlantGDB, Dana Farber, GenBank, the Arizona...

  11. Interactive bibliographical database on color

    NASA Astrophysics Data System (ADS)

    Caivano, Jose L.

    2002-06-01

    The paper describes the methodology and results of a project under development, aimed at the elaboration of an interactive bibliographical database on color in all fields of application: philosophy, psychology, semiotics, education, anthropology, physical and natural sciences, biology, medicine, technology, industry, architecture and design, arts, linguistics, geography, history. The project is initially based upon an already developed bibliography, published in different journals, updated in various opportunities, and now available at the Internet, with more than 2,000 entries. The interactive database will amplify that bibliography, incorporating hyperlinks and contents (indexes, abstracts, keywords, introductions, or eventually the complete document), and devising mechanisms for information retrieval. The sources to be included are: books, doctoral dissertations, multimedia publications, reference works. The main arrangement will be chronological, but the design of the database will allow rearrangements or selections by different fields: subject, Decimal Classification System, author, language, country, publisher, etc. A further project is to develop another database, including color-specialized journals or newsletters, and articles on color published in international journals, arranged in this case by journal name and date of publication, but allowing also rearrangements or selections by author, subject and keywords.

  12. Worldwide Ocean Optics Database (WOOD)

    DTIC Science & Technology

    2001-09-30

    The database shall provide ocean optical properties, including diffuse attenuation, beam attenuation, and scattering. Users can obtain values computed from empirical algorithms (e.g., beam attenuation estimated from diffuse attenuation and backscatter data); error estimates will also be provided for these derived properties. The database shall be easy to use, Internet accessible, and frequently updated.

  13. The NASA Fireball Network Database

    NASA Technical Reports Server (NTRS)

    Moser, Danielle E.

    2011-01-01

    The NASA Meteoroid Environment Office (MEO) has been operating an automated video fireball network since late-2008. Since that time, over 1,700 multi-station fireballs have been observed. A database containing orbital data and trajectory information on all these events has recently been compiled and is currently being mined for information. Preliminary results are presented here.

  14. Begin: Online Database Searching Now!

    ERIC Educational Resources Information Center

    Lodish, Erica K.

    1986-01-01

    Because of the increasing importance of online databases, school library media specialists are encouraged to introduce students to online searching. Four books that would help media specialists gain a basic background are reviewed and it is noted that although they are very technical, they can be adapted to individual needs. (EM)

  15. FLOPROS: an evolving global database of flood protection standards

    NASA Astrophysics Data System (ADS)

    Scussolini, Paolo; Aerts, Jeroen C. J. H.; Jongman, Brenden; Bouwer, Laurens M.; Winsemius, Hessel C.; de Moel, Hans; Ward, Philip J.

    2016-05-01

    With projected changes in climate, population and socioeconomic activity located in flood-prone areas, the global assessment of flood risk is essential to inform climate change policy and disaster risk management. Whilst global flood risk models exist for this purpose, the accuracy of their results is greatly limited by the lack of information on the current standard of protection to floods, with studies either neglecting this aspect or resorting to crude assumptions. Here we present a first global database of FLOod PROtection Standards, FLOPROS, which comprises information in the form of the flood return period associated with protection measures, at different spatial scales. FLOPROS comprises three layers of information, and combines them into one consistent database. The design layer contains empirical information about the actual standard of existing protection already in place; the policy layer contains information on protection standards from policy regulations; and the model layer uses a validated modelling approach to calculate protection standards. The policy layer and the model layer can be considered adequate proxies for actual protection standards included in the design layer, and serve to increase the spatial coverage of the database. Based on this first version of FLOPROS, we suggest a number of strategies to further extend and increase the resolution of the database. Moreover, as the database is intended to be continually updated, while flood protection standards are changing with new interventions, FLOPROS requires input from the flood risk community. We therefore invite researchers and practitioners to contribute information to this evolving database by corresponding to the authors.

  16. FLOPROS: an evolving global database of flood protection standards

    NASA Astrophysics Data System (ADS)

    Scussolini, P.; Aerts, J. C. J. H.; Jongman, B.; Bouwer, L. M.; Winsemius, H. C.; de Moel, H.; Ward, P. J.

    2015-12-01

    With the projected changes in climate, population and socioeconomic activity located in flood-prone areas, the global assessment of the flood risk is essential to inform climate change policy and disaster risk management. Whilst global flood risk models exist for this purpose, the accuracy of their results is greatly limited by the lack of information on the current standard of protection to floods, with studies either neglecting this aspect or resorting to crude assumptions. Here we present a first global database of FLOod PROtection Standards, FLOPROS, which comprises information in the form of the flood return period associated with protection measures, at different spatial scales. FLOPROS comprises three layers of information, and combines them into one consistent database. The Design layer contains empirical information about the actual standard of existing protection already in place, while the Policy layer and the Model layer are proxies for such protection standards, and serve to increase the spatial coverage of the database. The Policy layer contains information on protection standards from policy regulations; and the Model layer uses a validated modeling approach to calculate protection standards. Based on this first version of FLOPROS, we suggest a number of strategies to further extend and increase the resolution of the database. Moreover, as the database is intended to be continually updated, while flood protection standards are changing with new interventions, FLOPROS requires input from the flood risk community. We therefore invite researchers and practitioners to contribute information to this evolving database by corresponding to the authors.
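
The merging of FLOPROS's three layers into one consistent database, with the Policy and Model layers serving as proxies where empirical Design data are missing, can be sketched as a simple priority rule. The actual FLOPROS combination logic is more involved; the function below is only an illustrative assumption (design preferred over policy, policy over model), with invented names:

```python
from typing import Optional

def protection_standard(design: Optional[float],
                        policy: Optional[float],
                        model: Optional[float]) -> Optional[float]:
    """Return a flood-protection return period (years) for one region,
    preferring empirical design data, then policy regulations, then
    modelled values, and None if no layer has coverage."""
    for layer in (design, policy, model):
        if layer is not None:
            return layer
    return None

# A region with no design data but a 1-in-100-year policy standard:
print(protection_standard(None, 100.0, 30.0))  # 100.0
```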

  17. View discovery in OLAP databases through statistical combinatorial optimization

    SciTech Connect

    Hengartner, Nick W; Burke, John; Critchlow, Terence; Joslyn, Cliff; Hogan, Emilie

    2009-01-01

    OnLine Analytical Processing (OLAP) is a relational database technology providing users with rapid access to summary, aggregated views of a single large database, and is widely recognized for knowledge representation and discovery in high-dimensional relational databases. OLAP technologies provide intuitive and graphical access to the massively complex set of possible summary views available in large relational (SQL) structured data repositories. The capability of OLAP database software systems to handle data complexity comes at a high price for analysts, presenting them a combinatorially vast space of views of a relational database. We respond to the need to deploy technologies sufficient to allow users to guide themselves to areas of local structure by casting the space of 'views' of an OLAP database as a combinatorial object of all projections and subsets, and 'view discovery' as a search process over that lattice. We equip the view lattice with statistical, information-theoretic measures sufficient to support a combinatorial optimization process. We outline 'hop-chaining' as a particular view discovery algorithm over this object, wherein users are guided across a permutation of the dimensions by searching for successive two-dimensional views, pushing seen dimensions into an increasingly large background filter in a 'spiraling' search process. We illustrate this work in the context of data cubes recording summary statistics for radiation portal monitors at US ports.
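
As a rough illustration of the view-discovery idea, here is a greedy sketch that scores candidate two-dimensional views of a table by Shannon entropy and "hops" from one dimension pair to the next, retiring seen dimensions. The paper's actual measures and search strategy differ, and all names below are hypothetical:

```python
import math
from collections import Counter
from itertools import combinations

def view_entropy(rows, dims):
    """Shannon entropy (bits) of the joint distribution over a tuple of dimensions."""
    counts = Counter(tuple(r[d] for d in dims) for r in rows)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def hop_chain(rows, dimensions):
    """Greedy hop-chaining sketch: repeatedly pick the unseen 2-D view with
    the highest entropy, then push its dimensions into the background."""
    seen, chain = set(), []
    while len(dimensions) - len(seen) >= 2:
        candidates = [d for d in dimensions if d not in seen]
        best = max(combinations(candidates, 2),
                   key=lambda pair: view_entropy(rows, pair))
        chain.append(best)
        seen.update(best)
    return chain
```

Real OLAP view discovery would operate on aggregated data cubes rather than raw rows, but the lattice-search structure is the same.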

  18. Danish Colorectal Cancer Group Database

    PubMed Central

    Ingeholm, Peter; Gögenur, Ismail; Iversen, Lene H

    2016-01-01

    Aim of database The aim of the database, which has existed for registration of all patients with colorectal cancer in Denmark since 2001, is to improve the prognosis for this patient group. Study population All Danish patients with newly diagnosed colorectal cancer who are either diagnosed or treated in a surgical department of a public Danish hospital. Main variables The database comprises an array of surgical, radiological, oncological, and pathological variables. The surgeons record data such as diagnostics performed, including type and results of radiological examinations, lifestyle factors, comorbidity and performance, treatment including the surgical procedure, urgency of surgery, and intra- and postoperative complications within 30 days after surgery. The pathologists record data such as tumor type, number of lymph nodes and metastatic lymph nodes, surgical margin status, and other pathological risk factors. Descriptive data The database has had >95% completeness in including patients with colorectal adenocarcinoma, with >54,000 patients registered so far, approximately one-third rectal cancers and two-thirds colon cancers, and an overrepresentation of men among rectal cancer patients. The stage distribution has been more or less constant until 2014, with a tendency toward a lower rate of stage IV and a higher rate of stage I after introduction of the national screening program in 2014. The 30-day mortality rate after elective surgery has been reduced from >7% in 2001–2003 to <2% since 2013. Conclusion The database is a national population-based clinical database with high patient and data completeness for the perioperative period. The resolution of data is high for description of the patient at the time of diagnosis, including comorbidities, and for characterizing diagnosis, surgical interventions, and short-term outcomes. The database does not have high-resolution oncological data and does not register recurrences after primary surgery. The Danish

  19. Toward An Unstructured Mesh Database

    NASA Astrophysics Data System (ADS)

    Rezaei Mahdiraji, Alireza; Baumann, Peter Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a mesh contains several hundred million grid points and millions of hexahedral cells, and each vertex node stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes.
    We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
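
A toy sketch of the incidence-relationship bookkeeping such a mesh model implies: cells reference their vertices, vertices track the cells incident to them, and per-vertex data fields live alongside coordinates. This is not the ImG-Complexes model itself; class and method names are invented for illustration:

```python
from collections import defaultdict

class Mesh:
    """Minimal incidence-graph sketch of an unstructured mesh."""
    def __init__(self):
        self.coords = {}                   # vertex id -> (x, y, z)
        self.fields = defaultdict(dict)    # vertex id -> {field name: value}
        self.cells = {}                    # cell id -> tuple of vertex ids
        self.incident = defaultdict(set)   # vertex id -> ids of cells using it

    def add_vertex(self, vid, xyz, **fields):
        self.coords[vid] = xyz
        self.fields[vid].update(fields)

    def add_cell(self, cid, vertex_ids):
        self.cells[cid] = tuple(vertex_ids)
        for vid in vertex_ids:
            self.incident[vid].add(cid)

    def neighbours(self, cid):
        """Cells sharing at least one vertex with cell `cid`."""
        out = set()
        for vid in self.cells[cid]:
            out |= self.incident[vid]
        out.discard(cid)
        return out
```

A mesh database would add declarative querying and storage independence on top of exactly these kinds of traversals.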

  20. The CARLSBAD database: a confederated database of chemical bioactivities.

    PubMed

    Mathias, Stephen L; Hines-Kay, Jarrett; Yang, Jeremy J; Zahoransky-Kohalmi, Gergely; Bologa, Cristian G; Ursu, Oleg; Oprea, Tudor I

    2013-01-01

    Many bioactivity databases offer information regarding the biological activity of small molecules on protein targets. Information in these databases is often hard to resolve with certainty because of subsetting different data in a variety of formats; use of different bioactivity metrics; use of different identifiers for chemicals and proteins; and having to access different query interfaces, respectively. Given the multitude of data sources, interfaces and standards, it is challenging to gather relevant facts and make appropriate connections and decisions regarding chemical-protein associations. The CARLSBAD database has been developed as an integrated resource, focused on high-quality subsets from several bioactivity databases, which are aggregated and presented in a uniform manner, suitable for the study of the relationships between small molecules and targets. In contrast to data collection resources, CARLSBAD provides a single normalized activity value of a given type for each unique chemical-protein target pair. Two types of scaffold perception methods have been implemented and are available for datamining: HierS (hierarchical scaffolds) and MCES (maximum common edge subgraph). The 2012 release of CARLSBAD contains 439 985 unique chemical structures, mapped onto 1 420 889 unique bioactivities, and annotated with 277 140 HierS scaffolds and 54 135 MCES chemical patterns, respectively. Of the 890 323 unique structure-target pairs curated in CARLSBAD, 13.95% are aggregated from multiple structure-target values: 94 975 are aggregated from two bioactivities, 14 544 from three, 7 930 from four and 2214 have five bioactivities, respectively. CARLSBAD captures bioactivities and tags for 1435 unique chemical structures of active pharmaceutical ingredients (i.e. 'drugs'). CARLSBAD processing resulted in a net 17.3% data reduction for chemicals, 34.3% reduction for bioactivities, 23% reduction for HierS and 25% reduction for MCES, respectively. The CARLSBAD database

  1. The CARLSBAD Database: A Confederated Database of Chemical Bioactivities

    PubMed Central

    Mathias, Stephen L.; Hines-Kay, Jarrett; Yang, Jeremy J.; Zahoransky-Kohalmi, Gergely; Bologa, Cristian G.; Ursu, Oleg; Oprea, Tudor I.

    2013-01-01

    Many bioactivity databases offer information regarding the biological activity of small molecules on protein targets. Information in these databases is often hard to resolve with certainty because of subsetting different data in a variety of formats; use of different bioactivity metrics; use of different identifiers for chemicals and proteins; and having to access different query interfaces, respectively. Given the multitude of data sources, interfaces and standards, it is challenging to gather relevant facts and make appropriate connections and decisions regarding chemical–protein associations. The CARLSBAD database has been developed as an integrated resource, focused on high-quality subsets from several bioactivity databases, which are aggregated and presented in a uniform manner, suitable for the study of the relationships between small molecules and targets. In contrast to data collection resources, CARLSBAD provides a single normalized activity value of a given type for each unique chemical–protein target pair. Two types of scaffold perception methods have been implemented and are available for datamining: HierS (hierarchical scaffolds) and MCES (maximum common edge subgraph). The 2012 release of CARLSBAD contains 439 985 unique chemical structures, mapped onto 1 420 889 unique bioactivities, and annotated with 277 140 HierS scaffolds and 54 135 MCES chemical patterns, respectively. Of the 890 323 unique structure–target pairs curated in CARLSBAD, 13.95% are aggregated from multiple structure–target values: 94 975 are aggregated from two bioactivities, 14 544 from three, 7 930 from four and 2214 have five bioactivities, respectively. CARLSBAD captures bioactivities and tags for 1435 unique chemical structures of active pharmaceutical ingredients (i.e. ‘drugs’). CARLSBAD processing resulted in a net 17.3% data reduction for chemicals, 34.3% reduction for bioactivities, 23% reduction for HierS and 25% reduction for MCES, respectively. The CARLSBAD
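
The aggregation step described, one normalized activity value per unique chemical–protein pair, can be sketched as a group-and-reduce. CARLSBAD's actual normalization procedure is not specified in the abstract, so the median here is purely an illustrative choice, and the compound and target names are made up:

```python
from collections import defaultdict
from statistics import median

def aggregate_bioactivities(records):
    """Collapse multiple reported activity values per (chemical, target)
    pair into a single value per pair; here, the median."""
    by_pair = defaultdict(list)
    for chem, target, value in records:
        by_pair[(chem, target)].append(value)
    return {pair: median(vals) for pair, vals in by_pair.items()}

records = [
    ("aspirin", "COX1", 6.1),
    ("aspirin", "COX1", 6.5),
    ("aspirin", "COX2", 5.0),
]
print(aggregate_bioactivities(records))
```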

  2. Mars global digital dune database: MC-30

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2012-01-01

    The Mars Global Digital Dune Database (MGD3) provides data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey Open-File Reports. The first report (Hayward and others, 2007) included dune fields from lat 65° N. to 65° S. (http://pubs.usgs.gov/of/2007/1158/). The second report (Hayward and others, 2010) included dune fields from lat 60° N. to 90° N. (http://pubs.usgs.gov/of/2010/1170/). This report encompasses ~75,000 km2 of mapped dune fields from lat 60° to 90° S. The dune fields included in this global database were initially located using Mars Odyssey Thermal Emission Imaging System (THEMIS) Infrared (IR) images. In the previous two reports, some dune fields may have been unintentionally excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or (2) resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. In this report, mapping is more complete. The Arizona State University THEMIS daytime IR mosaic provided complete IR coverage, and it is unlikely that we missed any large dune fields in the South Pole (SP) region. In addition, the increased availability of higher resolution images resulted in the inclusion of more small (~1 km2) sand dune fields and sand patches. To maintain consistency with the previous releases, we have identified the sand features that would not have been included in earlier releases. While the moderate to large dune fields in MGD3 are likely to constitute the largest compilation of sediment on the planet, we acknowledge that our database excludes numerous small dune fields and some moderate to large dune fields as well. Please note that the absence of mapped dune fields does not mean that dune fields do not exist and is not intended to imply a lack of saltating sand in other areas

  3. Coal database for Cook Inlet and North Slope, Alaska

    USGS Publications Warehouse

    Stricker, Gary D.; Spear, Brianne D.; Sprowl, Jennifer M.; Dietrich, John D.; McCauley, Michael I.; Kinney, Scott A.

    2011-01-01

    This database is a compilation of published and nonconfidential unpublished coal data from Alaska. Although coal occurs in isolated areas throughout Alaska, this study includes data only from the Cook Inlet and North Slope areas. The data include entries from and interpretations of oil and gas well logs, coal-core geophysical logs (such as density, gamma, and resistivity), seismic shot hole lithology descriptions, measured coal sections, and isolated coal outcrops.

  4. The EXOSAT database and archive

    NASA Technical Reports Server (NTRS)

    Reynolds, A. P.; Parmar, A. N.

    1992-01-01

    The EXOSAT database provides on-line access to the results and data products (spectra, images, and light curves) from the EXOSAT mission, as well as access to data and logs from a number of other missions (such as EINSTEIN, COS-B, ROSAT, and IRAS). In addition, a number of familiar optical, infrared, and X-ray catalogs, including the Hubble Space Telescope (HST) guide star catalog, are available. The complete database is located at the EXOSAT observatory at ESTEC in the Netherlands and is accessible remotely via a captive account. The database management system was specifically developed to access the database efficiently and to allow the user to perform statistical studies on large samples of astronomical objects as well as to retrieve scientific and bibliographic information on single sources. The system was designed to be mission independent and includes timing, image processing, and spectral analysis packages, as well as software to allow the easy transfer of analysis results and products to the user's own institute. The archive at ESTEC comprises a subset of the EXOSAT observations, stored on magnetic tape. Observations of particular interest were copied in compressed format to an optical jukebox, allowing users to retrieve and analyze selected raw data entirely from their terminals. Such analysis may be necessary if the user's needs are not accommodated by the products contained in the database (in terms of time resolution, spectral range, and the finesse of the background subtraction, for instance). Long-term archiving of the full final observation data is taking place at ESRIN in Italy as part of the ESIS program, again using optical media, and ESRIN has now assumed responsibility for distributing the data to the community. Tests showed that raw observational data (typically several tens of megabytes for a single target) can be transferred via the existing networks in reasonable time.

  5. Automatic pattern localization across layout database and photolithography mask

    NASA Astrophysics Data System (ADS)

    Morey, Philippe; Brault, Frederic; Beisser, Eric; Ache, Oliver; Röth, Klaus-Dieter

    2016-03-01

    Advanced process photolithography masks require more and more controls for registration versus design and critical dimension uniformity (CDU). The measurement points should be distributed over the whole mask and may be denser in areas critical to wafer overlay requirements. This means that some, if not many, of these controls should be made inside the customer die and may use non-dedicated patterns. It is then mandatory to access the original layout database to select patterns for the metrology process. Finding hundreds of relevant patterns in a database containing billions of polygons may be possible, but in addition, it is mandatory to create the complete metrology job quickly and reliably. Combining, on one hand, software expertise in mask-database processing and, on the other hand, advanced skills in control and registration equipment, we have developed a Mask Dataprep Station able to select an appropriate number of measurement targets and their positions in a huge database and automatically create measurement jobs on the corresponding areas on the mask for the registration metrology system. In addition, the required design clips are generated from the database in order to perform the rendering procedure on the metrology system. This new methodology has been validated on a real production line for the most advanced processes. This paper presents the main challenges that we have faced, as well as some results on the global performance.

  6. Databases in geohazard science: An introduction

    NASA Astrophysics Data System (ADS)

    Klose, Martin; Damm, Bodo; Highland, Lynn M.

    2015-11-01

    The key to understanding hazards is to track, record, and analyse them. Geohazard databases play a critical role in each of these steps. As systematically compiled data archives of past and current hazard events, they generally fall into two categories (Tschoegl et al., 2006; UN-BCPR, 2013): (i) natural disaster databases that cover all types of hazards, most often at a continental or global scale (ADCR, 2015; CRED, 2015; Munich Re, 2015), and (ii) type-specific databases for a certain type of hazard, for example, earthquakes (Schulte and Mooney, 2005; Daniell et al., 2011), tsunami (NGDC/WDC, 2015), or volcanic eruptions (Witham, 2005; Geyer and Martí, 2008). With landslides being among the world's most frequent hazard types (Brabb, 1991; Nadim et al., 2006; Alcántara-Ayala, 2014), symbolizing the complexity of Earth system processes (Korup, 2012), the development of landslide inventories has occupied centre stage for many years, especially in applied geomorphology (Alexander, 1991; Oya, 2001). As regards the main types of landslide inventories, a distinction is made between event-based and historical inventories (Hervás and Bobrowsky, 2009; Hervás, 2013). Inventories providing data on landslides caused by a single triggering event, for instance, an earthquake, a rainstorm, or a rapid snowmelt, are essential for exploring root causes in terms of direct system responses or cascades of hazards (Malamud et al., 2004; Mondini et al., 2014). Alternatively, historical inventories, which are more common than their counterparts, constitute a pool of data on landslides that occurred in a specific area at local, regional, national, or even global scale over time (Dikau et al., 1996; Guzzetti et al., 2012; Wood et al., 2015).

  7. Mars Global Digital Dune Database; MC-1

    USGS Publications Warehouse

    Hayward, R.K.; Fenton, L.K.; Tanaka, K.L.; Titus, T.N.; Colaprete, A.; Christensen, P.R.

    2010-01-01

    The Mars Global Digital Dune Database presents data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey (USGS) Open-File Reports. The first release (Hayward and others, 2007) included dune fields from 65 degrees N to 65 degrees S (http://pubs.usgs.gov/of/2007/1158/). The current release encompasses ~845,000 km2 of mapped dune fields from 65 degrees N to 90 degrees N latitude. Dune fields between 65 degrees S and 90 degrees S will be released in a future USGS Open-File Report. Although we have attempted to include all dune fields, some have likely been excluded for two reasons: (1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or (2) the resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~1 km2 in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of dune sediment are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore, the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS), Mars Orbiter Camera narrow angle (MOC NA), or Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) images allowed, we classified dunes and included some dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. It was beyond the scope of this report to look at the detail needed to discern subtle dune modification. It was also

  8. Naval Ship Database: Database Design, Implementation, and Schema

    DTIC Science & Technology

    2013-09-01

    name (which should in theory be reserved for table names). In general, this type of nomenclature is confusing for database design and obfuscates the...an encounter with the below pseudo SQL Sales table when a customer purchases multiple products with a single order...CREATE TABLE Sales ( customer_name, product_id ); A.5 Fifth Normal Form Fifth normal form (5NF), also known as
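
    The excerpt's pseudo-SQL Sales table can be made concrete with Python's built-in sqlite3 module. The decomposition below into Orders/OrderItems is an illustrative sketch of the normalization issue the report discusses, not the report's actual schema; all table and column names beyond `Sales` are assumptions.

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # The excerpt's denormalized table: the customer name is repeated for
    # every product bought in a single order.
    cur.execute("CREATE TABLE Sales (customer_name TEXT, product_id INTEGER)")
    cur.executemany("INSERT INTO Sales VALUES (?, ?)",
                    [("alice", 1), ("alice", 2), ("bob", 3)])

    # One possible decomposition (hypothetical): record the customer once per
    # order and attach products to the order rather than to the customer.
    cur.execute("CREATE TABLE Orders (order_id INTEGER PRIMARY KEY, customer_name TEXT)")
    cur.execute("CREATE TABLE OrderItems (order_id INTEGER, product_id INTEGER)")
    cur.execute("INSERT INTO Orders VALUES (1, 'alice')")
    cur.executemany("INSERT INTO OrderItems VALUES (?, ?)", [(1, 1), (1, 2)])

    # All products on alice's order, without repeating her name per row.
    cur.execute("SELECT product_id FROM OrderItems WHERE order_id = 1 ORDER BY product_id")
    alice_products = [row[0] for row in cur.fetchall()]
    ```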

  9. The Hierarchical Database Decomposition Approach to Database Concurrency Control.

    DTIC Science & Technology

    1984-12-01

    access synchronization in a database system. The report develops the theory that supports the correctness of such analysis and algorithms. An...The thesis develops the theory that supports the correctness of such analysis and algorithms. An implementation scheme for the proposed algorithm, the...encouraged development of many new algorithms. <For example, Ellis77, Lamport78, Rosenkrantz78, Thomas79, Bernstein81.> A survey and comparison of theories

  10. Database for Assessment Unit-Scale Analogs (Exclusive of the United States)

    USGS Publications Warehouse

    Charpentier, Ronald R.; Klett, T.R.; Attanasi, E.D.

    2008-01-01

    This publication presents a database of geologic analogs useful for the assessment of undiscovered oil and gas resources. Particularly in frontier areas, where few oil and gas fields have been discovered, assessment methods such as discovery process models may not be usable. In such cases, comparison of the assessment area to geologically similar but more maturely explored areas may be more appropriate. This analog database consists of 246 assessment units, based on the U.S. Geological Survey 2000 World Petroleum Assessment. Besides geologic data to facilitate comparisons, the database includes data pertaining to numbers and sizes of oil and gas fields and the properties of their produced fluids.

  11. Intra- and Inter-database Study for Arabic, English, and German Databases: Do Conventional Speech Features Detect Voice Pathology?

    PubMed

    Ali, Zulfiqar; Alsulaiman, Mansour; Muhammad, Ghulam; Elamvazuthi, Irraivan; Al-Nasheri, Ahmed; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H

    2016-10-10

    A large population around the world suffers from voice complications. Various approaches for subjective and objective evaluation have been suggested in the literature. The subjective approach strongly depends on the experience and area of expertise of a clinician, and human error cannot be neglected. On the other hand, the objective or automatic approach is noninvasive. Automatic systems can provide complementary information that may be helpful for a clinician in the early screening of a voice disorder. At the same time, automatic systems can be deployed in remote areas where a general practitioner can use them and may refer the patient to a specialist to avoid complications that may be life threatening. Many automatic systems for disorder detection have been developed by applying different types of conventional speech features such as linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCCs was developed, and three different voice disorder databases were used in this study. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database. The detection rate for the intra-database experiments ranges from 72% to 95%, and that for the inter-database experiments from 47% to 82%. The results suggest that conventional speech features are not correlated with voice quality, and hence are not reliable for pathology detection.
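
    The intra- versus inter-database gap the abstract reports can be illustrated with a toy experiment. Everything here is synthetic and hypothetical: a nearest-centroid classifier stands in for the authors' MFCC-based detector, 2-D Gaussian features stand in for MFCC vectors, and the `shift` parameter mimics database-specific recording conditions.

    ```python
    import random

    def centroid(rows):
        n = len(rows)
        return [sum(col) / n for col in zip(*rows)]

    def nearest_centroid_accuracy(train, test):
        """Train per-class centroids on `train`, measure accuracy on `test`.

        Both arguments map a class label to a list of feature vectors.
        """
        cents = {lab: centroid(rows) for lab, rows in train.items()}
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        correct = total = 0
        for lab, rows in test.items():
            for r in rows:
                correct += min(cents, key=lambda c: dist(cents[c], r)) == lab
                total += 1
        return correct / total

    def make_db(shift):
        # Synthetic "MFCC-like" 2-D features; `shift` models a database-specific
        # offset (different microphones, protocols, etc.).
        return {
            "healthy":   [[random.gauss(0 + shift, 1), random.gauss(0, 1)] for _ in range(50)],
            "pathology": [[random.gauss(3 + shift, 1), random.gauss(3, 1)] for _ in range(50)],
        }

    random.seed(0)
    db_a, db_a_test, db_b = make_db(0.0), make_db(0.0), make_db(4.0)
    intra = nearest_centroid_accuracy(db_a, db_a_test)  # same database
    inter = nearest_centroid_accuracy(db_a, db_b)       # different database
    ```

    With matched conditions the classifier is accurate; under the offset, accuracy drops sharply, mirroring the 72-95% versus 47-82% ranges reported above.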

  12. An event database for rotational seismology

    NASA Astrophysics Data System (ADS)

    Salvermoser, Johannes; Hadziioannou, Celine; Hable, Sarah; Chow, Bryant; Krischer, Lion; Wassermann, Joachim; Igel, Heiner

    2016-04-01

    The ring laser sensor (G-ring) located at Wettzell, Germany, has routinely observed earthquake-induced rotational ground motions around a vertical axis since its installation in 2003. Here we present results from a recently installed event database, the first to provide ring laser event data in an open-access format. Based on the GCMT event catalogue and several search criteria, seismograms from the ring laser and the collocated broadband seismometer are extracted and processed. The ObsPy-based processing scheme generates plots showing waveform fits between rotation rate and transverse acceleration and extracts characteristic wavefield parameters such as peak ground motions, noise levels, Love wave phase velocities, and waveform coherence. For each event, these parameters are stored in a text file (a JSON dictionary) that is easily readable and accessible on the website. The database contains >10,000 events starting in 2007 (Mw>4.5). It is updated daily and therefore provides recent events with a time lag of at most 24 hours. The user interface allows filtering events by epoch, magnitude, and source area, whereupon the events are displayed on a zoomable world map. We investigate how well the rotational motions are compatible with the expectations from the surface wave magnitude scale. In addition, the website offers some Python source code examples for downloading and processing the openly accessible waveforms.
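
    As a sketch of how such per-event JSON dictionaries might be consumed, here is a minimal Python example. The field names in the record are hypothetical, not the database's actual schema.

    ```python
    import json

    # Hypothetical event record mirroring the kinds of parameters the database
    # stores per event (peak motions, noise levels, phase velocity, coherence).
    record = json.dumps({
        "event_id": "2015-09-16_mw8.3",
        "magnitude": 8.3,
        "peak_rotation_rate_nrad_s": 5200.0,
        "peak_transverse_acc_mm_s2": 1.9,
        "love_phase_velocity_km_s": 4.1,
        "waveform_coherence": 0.92,
    })

    def load_event(text, min_coherence=0.5):
        """Parse one event dictionary and flag low-coherence waveform fits."""
        ev = json.loads(text)
        ev["usable"] = ev["waveform_coherence"] >= min_coherence
        return ev

    ev = load_event(record)
    ```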

  13. Antarctic Tephra Database (AntT)

    NASA Astrophysics Data System (ADS)

    Kurbatov, A.; Dunbar, N. W.; Iverson, N. A.; Gerbi, C. C.; Yates, M. G.; Kalteyer, D.; McIntosh, W. C.

    2014-12-01

    Modern paleoclimate research is heavily dependent on establishing accurate timing related to rapid shifts in Earth's climate system. The ability to correlate these events at local, and ideally at intercontinental, scales allows assessment, for example, of phasing or changes in atmospheric circulation. Tephra-producing volcanic eruptions are geologically instantaneous events that are largely independent of climate. We have developed a tephrochronological framework for paleoclimate research in Antarctica in a user-friendly, freely accessible online Antarctic tephra (AntT) database (http://cci.um.maine.edu/AntT/). Information about volcanic events, including the physical and geochemical characteristics of volcanic products collected from multiple data sources, is integrated into the AntT database. The AntT project establishes a new centralized data repository for Antarctic tephrochronology, which is needed for precise correlation of records between Antarctic ice cores (e.g., WAIS Divide, RICE, Talos Dome, ITASE) and global paleoclimate archives. AntT will help climatologists, paleoclimatologists, atmospheric chemists, geochemists, and climate modelers synchronize paleoclimate archives using volcanic products, establishing the timing of climate events in different geographic areas, climate-forcing mechanisms, and natural threshold levels in the climate system. All these disciplines will benefit from accurate reconstructions of the temporal and spatial distribution of past rapid climate change events in continental, atmospheric, marine, and polar realms. Research is funded by NSF grants ANT-1142007 and 1142069.

  14. The Condensate Database for Big Data Analysis

    NASA Astrophysics Data System (ADS)

    Gallaher, D. W.; Lv, Q.; Grant, G.; Campbell, G. G.; Liu, Q.

    2014-12-01

    Although massive amounts of cryospheric data have been and are being generated at an unprecedented rate, a vast majority of the otherwise valuable data have been "sitting in the dark", with very limited quality assurance or runtime access for higher-level data analytics such as anomaly detection. This has significantly hindered data-driven scientific discovery and advances in the polar research and Earth sciences community. In an effort to solve this problem we have investigated and developed innovative techniques for the construction of a "condensate database", which is much smaller than the original data yet still captures the key characteristics (e.g., spatio-temporal norms and changes). In addition, we are taking advantage of parallel databases that make use of low-cost GPU processors. As a result, efficient anomaly detection and quality assurance can be achieved with in-memory data analysis or limited I/O requests. The challenges lie in the fact that cryospheric data are massive and diverse, with normal/abnormal patterns spanning a wide range of spatial and temporal scales. This project consists of investigations in three main areas: (1) adaptive neighborhood-based thresholding in both space and time; (2) compressive-domain pattern detection and change analysis; and (3) hybrid and adaptive condensation of multi-modal, multi-scale cryospheric data.
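
    Area (1), neighborhood-based thresholding, can be sketched in a few lines of Python. The deviation rule and the `factor` parameter below are illustrative assumptions, not the project's actual algorithm: a grid cell is flagged as anomalous when it deviates from its spatial neighborhood by more than a multiple of the neighborhood's mean absolute deviation.

    ```python
    def neighborhood_anomalies(grid, k=1, factor=2.0):
        """Flag cells deviating strongly from their (2k+1)x(2k+1) neighborhood."""
        rows, cols = len(grid), len(grid[0])
        flags = [[False] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                # Collect the surrounding cells, excluding the cell itself.
                nbrs = [grid[a][b]
                        for a in range(max(0, i - k), min(rows, i + k + 1))
                        for b in range(max(0, j - k), min(cols, j + k + 1))
                        if (a, b) != (i, j)]
                mean = sum(nbrs) / len(nbrs)
                mad = sum(abs(v - mean) for v in nbrs) / len(nbrs)
                # Adaptive threshold: scale with local variability (mad).
                flags[i][j] = abs(grid[i][j] - mean) > factor * (mad or 1e-9)
        return flags

    # A flat field with one hot pixel: only the hot pixel should be flagged.
    field = [[1, 1, 1, 1],
             [1, 9, 1, 1],
             [1, 1, 1, 1]]
    flags = neighborhood_anomalies(field)
    ```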

  15. Construction of an integrated database to support genomic sequence analysis

    SciTech Connect

    Gilbert, W.; Overbeek, R.

    1994-11-01

    The central goal of this project is to develop an integrated database to support comparative analysis of genomes including DNA sequence data, protein sequence data, gene expression data and metabolism data. In developing the logic-based system GenoBase, a broader integration of available data was achieved due to assistance from collaborators. Current goals are to easily include new forms of data as they become available and to easily navigate through the ensemble of objects described within the database. This report comments on progress made in these areas.

  16. The relational clinical database: a possible solution to the star wars in registry systems.

    PubMed

    Michels, D K; Zamieroski, M

    1990-12-01

    In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.

  17. Undergraduate Use of CD-ROM Databases: Observations of Human-Computer Interaction and Relevance Judgments.

    ERIC Educational Resources Information Center

    Shaw, Debora

    1996-01-01

    Describes a study that observed undergraduates as they searched bibliographic databases on a CD-ROM local area network. Topics include related research, information needs, evolution of search topics, database selection, search strategies, relevance judgments, CD-ROM interfaces, and library instruction. (Author/LRW)

  18. Data-Based Decisions Guidelines for Teachers of Students with Severe Intellectual and Developmental Disabilities

    ERIC Educational Resources Information Center

    Jimenez, Bree A.; Mims, Pamela J.; Browder, Diane M.

    2012-01-01

    Effective practices in student data collection and implementation of data-based instructional decisions are needed for all educators, but are especially important when students have severe intellectual and developmental disabilities. Although research in the area of data-based instructional decisions for students with severe disabilities shows…

  19. Content-addressable holographic databases

    NASA Astrophysics Data System (ADS)

    Grawert, Felix; Kobras, Sebastian; Burr, Geoffrey W.; Coufal, Hans J.; Hanssen, Holger; Riedel, Marc; Jefferson, C. Michael; Jurich, Mark C.

    2000-11-01

    Holographic data storage allows the simultaneous search of an entire database by performing multiple optical correlations between stored data pages and a search argument. We have recently developed fuzzy encoding techniques for this fast parallel search and demonstrated a holographic data storage system that searches digital data records with high fidelity. This content-addressable retrieval is based on the ability to take the two-dimensional inner product between the search page and each stored data page. We show that this ability is lost when the correlator is defocused to avoid material oversaturation, but can be regained by the combination of a random phase mask and beam confinement through total internal reflection. Finally, we propose an architecture in which spatially multiplexed holograms are distributed along the path of the search beam, allowing parallel search of large databases.
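
    The content-addressable idea reduces, mathematically, to ranking stored pages by their two-dimensional inner product with the search page. A minimal digital sketch follows; in the optical system all correlations happen in parallel, whereas here they are computed sequentially, and the +1/-1 pixel encoding is an assumption for illustration only.

    ```python
    def inner_product(page_a, page_b):
        """Two-dimensional inner product between two equally sized pages."""
        return sum(a * b
                   for row_a, row_b in zip(page_a, page_b)
                   for a, b in zip(row_a, row_b))

    def search(database, query):
        """Return the stored page name that correlates best with the query."""
        scores = {name: inner_product(page, query) for name, page in database.items()}
        return max(scores, key=scores.get)

    # Two tiny stored "data pages" with +1/-1 pixels (hypothetical encoding).
    db = {
        "rec1": [[1, -1], [-1, 1]],
        "rec2": [[1, 1], [1, -1]],
    }
    best = search(db, [[1, -1], [-1, 1]])
    ```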

  20. The Majorana Parts Tracking Database

    SciTech Connect

    Abgrall, N.

    2015-01-16

    The Majorana Demonstrator is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The Majorana Parts Tracking Database is used to record the history of components used in the construction of the Demonstrator. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. In summary, a web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
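
    The schema-free, CouchDB-style approach described above amounts to appending process and location entries to a per-part document. The record below is hypothetical (field names, sites, and dates are invented for illustration), but it shows how a location history can yield the cosmic-ray exposure estimate the abstract mentions.

    ```python
    from datetime import date

    # Hypothetical schema-free part record in the CouchDB style: processes and
    # location stays are simply appended to the document as they occur.
    part = {
        "_id": "copper-plate-0042",
        "processes": [
            {"step": "machining", "date": "2013-02-01"},
            {"step": "cleaning", "date": "2013-02-10"},
        ],
        "location_history": [
            {"site": "surface_lab", "from": "2013-01-01", "to": "2013-03-01"},
            {"site": "underground", "from": "2013-03-01", "to": "2013-09-01"},
        ],
    }

    def surface_exposure_days(record, surface_sites=("surface_lab",)):
        """Estimate cosmic-ray exposure as total days spent at surface sites."""
        total = 0
        for stay in record["location_history"]:
            if stay["site"] in surface_sites:
                total += (date.fromisoformat(stay["to"])
                          - date.fromisoformat(stay["from"])).days
        return total

    days = surface_exposure_days(part)
    ```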

  1. The Majorana Parts Tracking Database

    DOE PAGES

    Abgrall, N.

    2015-01-16

    The Majorana Demonstrator is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The Majorana Parts Tracking Database is used to record the history of components used in the construction of the Demonstrator. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. In summary, a web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.

  2. Database application for absolute spectrophotometry

    NASA Astrophysics Data System (ADS)

    Bochkov, Valery V.; Shumko, Sergiy

    2002-12-01

    A 32-bit database application with a multidocument interface for Windows has been developed to calculate absolute energy distributions of observed spectra. The original database contains wavelength-calibrated observed spectra that have already passed through apparatus reductions such as flat-fielding and subtraction of background and apparatus noise. Absolute energy distributions of observed spectra are defined on a unique scale by registering them simultaneously with an artificial intensity standard. Observations of a sequence of spectrophotometric standards are used to define the absolute energy of the artificial standard. Observations of spectrophotometric standards are also used to determine optical extinction at selected moments. An FFT algorithm implemented in the application allows convolution (or deconvolution) of spectra with a user-defined PSF. The object-oriented interface was created using C++ libraries. A client/server model with Windows Socket functionality based on the TCP/IP protocol is used to develop the application. It supports Dynamic Data Exchange conversation in server mode and uses Microsoft Exchange communication facilities.
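
    The convolution step can be illustrated independently of any FFT library. The sketch below uses a direct O(n^2) circular convolution so the example has no dependencies; the application presumably performs the equivalent operation via FFTs, and the sample spectrum and PSFs here are invented for illustration.

    ```python
    def convolve_circular(spectrum, psf):
        """Circularly convolve a sampled spectrum with a PSF of equal length.

        Equivalent to multiplying the two sequences' discrete Fourier
        transforms and inverting, which is how an FFT-based implementation
        would compute it.
        """
        n = len(spectrum)
        return [sum(spectrum[(i - j) % n] * psf[j] for j in range(n))
                for i in range(n)]

    # A delta-function PSF leaves the spectrum unchanged; a broader PSF
    # smears the line while conserving total flux (the PSF sums to 1).
    spec = [0.0, 1.0, 0.0, 0.0]
    delta = [1.0, 0.0, 0.0, 0.0]
    broad = [0.5, 0.25, 0.0, 0.25]

    identity = convolve_circular(spec, delta)
    smoothed = convolve_circular(spec, broad)
    ```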

  3. Aero/fluids database system

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Violett, Duane L., Jr.

    1991-01-01

    The AFAS Database System was developed to provide the basic structure of a comprehensive database system for the Marshall Space Flight Center (MSFC) Structures and Dynamics Laboratory Aerophysics Division. The system is intended to handle all of the Aerophysics Division Test Facilities as well as data from other sources. The system was written for the DEC VAX family of computers in FORTRAN-77 and utilizes the VMS indexed file system and screen management routines. Various aspects of the system are covered, including a description of the user interface, lists of all code structure elements, descriptions of the file structures, a description of the security system operation, a detailed description of the data retrieval tasks, a description of the session log, and a description of the archival system.

  4. The National Land Cover Database

    USGS Publications Warehouse

    Homer, Collin H.; Fry, Joyce A.; Barnes, Christopher A.

    2012-01-01

    The National Land Cover Database (NLCD) serves as the definitive Landsat-based, 30-meter resolution, land cover database for the Nation. NLCD provides spatial reference and descriptive data for characteristics of the land surface such as thematic class (for example, urban, agriculture, and forest), percent impervious surface, and percent tree canopy cover. NLCD supports a wide variety of Federal, State, local, and nongovernmental applications that seek to assess ecosystem status and health, understand the spatial patterns of biodiversity, predict effects of climate change, and develop land management policy. NLCD products are created by the Multi-Resolution Land Characteristics (MRLC) Consortium, a partnership of Federal agencies led by the U.S. Geological Survey. All NLCD data products are available for download at no charge to the public from the MRLC Web site: http://www.mrlc.gov.

  5. Geologic Map Database of Texas

    USGS Publications Warehouse

    Stoeser, Douglas B.; Shock, Nancy; Green, Gregory N.; Dumonceaux, Gayle M.; Heran, William D.

    2005-01-01

    The purpose of this report is to release a digital geologic map database for the State of Texas. This database was compiled for the U.S. Geological Survey (USGS) Minerals Program, National Surveys and Analysis Project, whose goal is a nationwide assemblage of geologic, geochemical, geophysical, and other data. This release makes the geologic data from the Geologic Map of Texas available in digital format. Original clear film positives provided by the Texas Bureau of Economic Geology were photographically enlarged onto Mylar film. These films were scanned, georeferenced, digitized, and attributed by Geologic Data Systems (GDS), Inc., Denver, Colorado. Project oversight and quality control were the responsibility of the U.S. Geological Survey. ESRI ArcInfo coverages, AMLs, and shapefiles are provided.

  6. The KInetic Database for Astrochemistry

    NASA Astrophysics Data System (ADS)

    Wakelam, V.

    2010-12-01

    KIDA (for KInetic Database for Astrochemistry) is a project initiated by different communities in order to (1) improve the interaction between astrochemists and physico-chemists and (2) simplify the work of modeling the chemistry of astrophysical environments. Here, astrophysical environments means the interstellar medium and planetary atmospheres. Both types of environments use similar chemical networks, and the physico-chemists who work on the determination of reaction rate coefficients for the two types of environment are the same.

  7. The Danish Bladder Cancer Database

    PubMed Central

    Hansen, Erik; Larsson, Heidi; Nørgaard, Mette; Thind, Peter; Jensen, Jørgen Bjerggaard

    2016-01-01

    Aim of database The aim of the Danish Bladder Cancer Database (DaBlaCa-data) is to monitor the treatment of all patients diagnosed with invasive bladder cancer (BC) in Denmark. Study population All patients diagnosed with BC in Denmark from 2012 onward were included in the study. Results presented in this paper are predominantly from the 2013 population. Main variables In 2013, 970 patients were diagnosed with BC in Denmark and were included in a preliminary report from the database. A total of 458 (47%) patients were diagnosed with non-muscle-invasive BC (non-MIBC) and 512 (53%) were diagnosed with muscle-invasive BC (MIBC). A total of 300 (31%) patients underwent cystectomy. Among the 135 patients diagnosed with MIBC who were 75 years of age or younger, 67 (50%) received neoadjuvant chemotherapy prior to cystectomy. In 2013, a total of 147 patients were treated with curative-intended radiation therapy. Descriptive data One-year mortality was 28% (95% confidence interval [CI]: 15–21). One-year cancer-specific mortality was 25% (95% CI: 22–27%). One-year mortality after cystectomy was 14% (95% CI: 10–18). Ninety-day mortality after cystectomy was 3% (95% CI: 1–5) in 2013. One-year mortality following curative-intended radiation therapy was 32% (95% CI: 24–39) and 1-year cancer-specific mortality was 23% (95% CI: 16–31) in 2013. Conclusion This preliminary DaBlaCa-data report showed that the treatment of MIBC in Denmark overall meets high international academic standards. The database is able to identify Danish BC patients and monitor treatment and mortality. In the future, DaBlaCa-data will be a valuable data source and extensive observational studies on BC will be available. PMID:27822081
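
    Confidence intervals for proportions like the mortality figures above are typically computed with a normal approximation. The sketch below is illustrative, not the registry's actual method, and the event count is hypothetical, chosen so that 9 deaths among the 300 reported cystectomies reproduce the 3% (95% CI: 1-5) ninety-day figure.

    ```python
    import math

    def proportion_ci(events, total, z=1.96):
        """Normal-approximation 95% CI for a proportion (z=1.96)."""
        p = events / total
        se = math.sqrt(p * (1 - p) / total)
        return p - z * se, p + z * se

    # Hypothetical count: 9 deaths within 90 days among 300 cystectomies.
    lo, hi = proportion_ci(9, 300)
    ```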

  8. The Danish Prostate Cancer Database

    PubMed Central

    Nguyen-Nielsen, Mary; Høyer, Søren; Friis, Søren; Hansen, Steinbjørn; Brasso, Klaus; Jakobsen, Erik Breth; Moe, Mette; Larsson, Heidi; Søgaard, Mette; Nakano, Anne; Borre, Michael

    2016-01-01

    Aim of database The Danish Prostate Cancer Database (DAPROCAdata) is a nationwide clinical cancer database that has prospectively collected data on patients with incident prostate cancer in Denmark since February 2010. The overall aim of the DAPROCAdata is to improve the quality of prostate cancer care in Denmark by systematically collecting key clinical variables for the purposes of health care monitoring, quality improvement, and research. Study population All Danish patients with histologically verified prostate cancer are included in the DAPROCAdata. Main variables The DAPROCAdata registers clinical data and selected characteristics for patients with prostate cancer at diagnosis. Data are collected from the linkage of nationwide health registries and supplemented with online registration of key clinical variables by treating physicians at urological and oncological departments. Main variables include Gleason scores, cancer staging, prostate-specific antigen values, and therapeutic measures (active surveillance, surgery, radiotherapy, endocrine therapy, and chemotherapy). Descriptive data In total, 22,332 patients with prostate cancer were registered in DAPROCAdata as of April 2015. A key feature of DAPROCAdata is the routine collection of patient-reported outcome measures (PROM), including data on quality-of-life (pain levels, physical activity, sexual function, depression, urine and fecal incontinence) and lifestyle factors (smoking, alcohol consumption, and body mass index). PROM data are derived from questionnaires distributed at diagnosis and at 1-year and 3-year follow-up. Hitherto, the PROM data have been limited by low completeness (26% among newly diagnosed patients in 2014). Conclusion DAPROCAdata is a comprehensive, yet still young clinical database. Efforts to improve data collection, data validity, and completeness are ongoing and of high priority. PMID:27843346

  9. FORMIDABEL: The Belgian Ants Database

    PubMed Central

    Brosens, Dimitri; Vankerkhoven, François; Ignace, David; Wegnez, Philippe; Noé, Nicolas; Heughebaert, André; Bortels, Jeannine; Dekoninck, Wouter

    2013-01-01

    Abstract FORMIDABEL is a database of Belgian Ants containing more than 27,000 occurrence records. These records originate from collections, field sampling and literature. The database gives information on 76 native and 9 introduced ant species found in Belgium. The collection records originated mainly from the ants collection in the Royal Belgian Institute of Natural Sciences (RBINS), the ‘Gaspar’ Ants collection in Gembloux and the zoological collection of the University of Liège (ULG). The oldest occurrences date back to May 1866; the most recent refer to August 2012. FORMIDABEL is a work in progress and the database is updated twice a year. The latest version of the dataset is publicly and freely accessible through this URL: http://ipt.biodiversity.be/resource.do?r=formidabel. The dataset is also retrievable via the GBIF data portal through this link: http://data.gbif.org/datasets/resource/14697. A dedicated geo-portal, developed by the Belgian Biodiversity Platform, is accessible at: http://www.formicidae-atlas.be Purpose: FORMIDABEL is a joint cooperation of the Flemish ants working group “Polyergus” (http://formicidae.be) and the Wallonian ants working group “FourmisWalBru” (http://fourmiswalbru.be). The original database was created in 2002 in the context of the preliminary red data book of Flemish Ants (Dekoninck et al. 2003). Later, in 2005, data from the southern part of Belgium (Wallonia and Brussels) were added. In 2012 this dataset was again updated for the creation of the first Belgian Ants Atlas (Figure 1) (Dekoninck et al. 2012). The main purpose of this atlas was to generate maps for all outdoor-living ant species in Belgium using an overlay of the standard Belgian ecoregions. By using this overlay for most species, we can discern a clear and often restricted distribution pattern in Belgium, mainly based on vegetation and soil types. PMID:23794918

  10. The RECONS 25 Parsec Database

    NASA Astrophysics Data System (ADS)

    Henry, Todd J.; Jao, Wei-Chun; Pewett, Tiffany; Riedel, Adric R.; Silverstein, Michele L.; Slatten, Kenneth J.; Winters, Jennifer G.; Recons Team

    2015-01-01

    The REsearch Consortium On Nearby Stars (RECONS, www.recons.org) Team has been mapping the solar neighborhood since 1994. Nearby stars provide the fundamental framework upon which all of stellar astronomy is based, both for individual stars and stellar populations. The nearest stars are also the primary targets for extrasolar planet searches, and will undoubtedly play key roles in understanding the prevalence and structure of solar systems, and ultimately, in our search for life elsewhere. We have built the RECONS 25 Parsec Database to encourage and enable exploration of the Sun's nearest neighbors. The Database, slated for public release in 2015, contains 3088 stars, brown dwarfs, and exoplanets in 2184 systems as of October 1, 2014. All of these systems have accurate trigonometric parallaxes in the refereed literature placing them closer than 25.0 parsecs, i.e., parallaxes greater than 40 mas with errors less than 10 mas. Carefully vetted astrometric, photometric, and spectroscopic data are incorporated into the Database from reliable sources, including significant original data collected by members of the RECONS Team. Current exploration of the solar neighborhood by RECONS, enabled by the Database, focuses on the ubiquitous red dwarfs, including: assessing the stellar companion population of ~1200 red dwarfs (Winters), investigating the astrophysical causes that spread red dwarfs of similar temperatures by a factor of 16 in luminosity (Pewett), and canvassing ~3000 red dwarfs for excess emission due to unseen companions and dust (Silverstein). In addition, a decade long astrometric survey of ~500 red dwarfs in the southern sky has begun, in an effort to understand the stellar, brown dwarf, and planetary companion populations for the stars that make up at least 75% of all stars in the Universe. This effort has been supported by the NSF through grants AST-0908402, AST-1109445, and AST-1412026, and via observations made possible by the SMARTS Consortium.
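    The 25.0-parsec selection criterion above follows directly from the parallax-to-distance relation d [pc] = 1000 / parallax [mas]: a 25 pc horizon corresponds to a parallax above 40 mas. A minimal sketch of that cut (the star names and values below are invented for illustration, not drawn from the Database):

```python
# Distance from trigonometric parallax: d [pc] = 1000 / parallax [mas].
# Illustrative filter for the 25-parsec criterion described above
# (parallax > 40 mas with error < 10 mas); the star data are made up.

def within_25pc(parallax_mas: float, error_mas: float) -> bool:
    """Apply the RECONS-style selection cut."""
    return parallax_mas > 40.0 and error_mas < 10.0

def distance_pc(parallax_mas: float) -> float:
    return 1000.0 / parallax_mas

stars = [
    ("Proxima-like", 768.0, 0.3),  # hypothetical parallax and error [mas]
    ("Distant star", 12.0, 1.0),
]
nearby = [(name, round(distance_pc(p), 2))
          for name, p, e in stars if within_25pc(p, e)]
print(nearby)  # → [('Proxima-like', 1.3)]
```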

  11. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Chuong

    1995-01-01

    A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC using Excel software under the Windows environment or a Macintosh using Excel software for Macintosh. All the steps necessary to use the software are easy and many times self-explanatory.

  12. A database for propagation models

    NASA Technical Reports Server (NTRS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Choung

    1994-01-01

    A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC using Excel software under the Windows environment or a Macintosh using Excel software for Macintosh. All the steps necessary to use the software are easy and many times self-explanatory; however, a sample run of the CCIR rain attenuation model is presented.
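    The CCIR rain-attenuation model mentioned above rests on the standard power-law relation for specific attenuation, gamma = k * R**alpha, where R is the rain rate in mm/h and the coefficients k and alpha depend on frequency and polarization. A minimal sketch, using placeholder coefficients that are not taken from the database:

```python
# Power-law specific attenuation used by CCIR/ITU-R rain models:
#   gamma [dB/km] = k * R**alpha,  where R is the rain rate [mm/h].
# The k and alpha values below are placeholders for illustration only;
# real coefficients depend on frequency and polarization.

def specific_attenuation(rain_rate_mm_h: float, k: float, alpha: float) -> float:
    return k * rain_rate_mm_h ** alpha

def path_attenuation(gamma_db_km: float, effective_path_km: float) -> float:
    # Total attenuation accumulated over an effective path length.
    return gamma_db_km * effective_path_km

gamma = specific_attenuation(25.0, k=0.01, alpha=1.2)  # hypothetical coefficients
print(round(path_attenuation(gamma, effective_path_km=5.0), 2))
```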

  13. A database for propagation models

    NASA Astrophysics Data System (ADS)

    Kantak, Anil V.; Suwitra, Krisjani; Le, Chuong

    1995-08-01

    A database of various propagation phenomena models that can be used by telecommunications systems engineers to obtain parameter values for systems design is presented. This is an easy-to-use tool and is currently available for either a PC using Excel software under Windows environment or a Macintosh using Excel software for Macintosh. All the steps necessary to use the software are easy and many times self explanatory.

  14. GOLD: The Genomes Online Database

    DOE Data Explorer

    Kyrpides, Nikos; Liolios, Dinos; Chen, Amy; Tavernarakis, Nektarios; Hugenholtz, Philip; Markowitz, Victor; Bernal, Alex

    Since its inception in 1997, GOLD has continuously monitored genome sequencing projects worldwide and has provided the community with a unique centralized resource that integrates diverse information related to Archaea, Bacteria, Eukaryotic and more recently Metagenomic sequencing projects. As of September 2007, GOLD recorded 639 completed genome projects. These projects have their complete sequence deposited into the public archival sequence databases such as GenBank, EMBL, and DDBJ. Of the 639 complete and published genome projects as of 9/2007, 527 were bacterial, 47 archaeal, and 65 eukaryotic. In addition to the complete projects, there were 2158 ongoing sequencing projects: 1328 bacterial, 59 archaeal, and 771 eukaryotic. Two types of metadata are provided by GOLD: (i) project metadata and (ii) organism/environment metadata. GOLD CARD pages for every project are available from the link of every GOLD_STAMP ID. The information in every one of these pages is organized into three tables: (a) Organism information, (b) Genome project information and (c) External links. [The Genomes On Line Database (GOLD) in 2007: Status of genomic and metagenomic projects and their associated metadata, Konstantinos Liolios, Konstantinos Mavromatis, Nektarios Tavernarakis and Nikos C. Kyrpides, Nucleic Acids Research Advance Access published online on November 2, 2007, Nucleic Acids Research, doi:10.1093/nar/gkm884]

    The basic tables in the GOLD database that can be browsed or searched include the following information:

    • Gold Stamp ID
    • Organism name
    • Domain
    • Links to information sources
    • Size and link to a map, when available
    • Chromosome number, Plasmid number, and GC content
    • A link for downloading the actual genome data
    • Institution that did the sequencing
    • Funding source
    • Database where information resides
    • Publication status and information
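    The browsable fields listed above map naturally onto a relational table. A minimal sketch of such a schema and a per-domain summary query, using assumed column names and made-up rows rather than GOLD's actual layout:

```python
import sqlite3

# Hypothetical relational sketch of the GOLD browse fields listed above;
# the column names and rows are assumptions for illustration, not GOLD's
# actual schema or contents.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE genome_project (
        gold_stamp_id TEXT PRIMARY KEY,
        organism_name TEXT,
        domain        TEXT,   -- Archaea / Bacteria / Eukaryota
        gc_content    REAL,
        status        TEXT    -- e.g. 'complete' or 'ongoing'
    )""")
con.executemany(
    "INSERT INTO genome_project VALUES (?, ?, ?, ?, ?)",
    [("Gc00001", "Escherichia coli K-12", "Bacteria", 50.8, "complete"),
     ("Gc00002", "Methanocaldococcus jannaschii", "Archaea", 31.4, "complete")])

# Count completed projects per domain, mirroring the summary in the text.
for row in con.execute(
        "SELECT domain, COUNT(*) FROM genome_project "
        "WHERE status = 'complete' GROUP BY domain ORDER BY domain"):
    print(row)
```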

    • ARL’S Acoustic Database

      DTIC Science & Technology

      1999-01-01

      1 Heng Tan, Visual Basic/SQL Server Primer, ETN Press, Montoursville, PA (1995), p. 46. Removable Hard Drives 500-CD ... the user and the server, the Acoustic Signal Processing Branch has developed an innovative user-friendly Visual Basic front-end program. Residing on ... Microsoft Visual Basic, Professional Edition, Version 6.0 was selected as the development language for the database user-interface program

    • Central Asia Active Fault Database

      NASA Astrophysics Data System (ADS)

      Mohadjer, Solmaz; Ehlers, Todd A.; Kakar, Najibullah

      2014-05-01

      The ongoing collision of the Indian subcontinent with Asia controls active tectonics and seismicity in Central Asia. This motion is accommodated by faults that have historically caused devastating earthquakes and continue to pose serious threats to the population at risk. Despite international and regional efforts to assess seismic hazards in Central Asia, little attention has been given to development of a comprehensive database for active faults in the region. To address this issue and to better understand the distribution and level of seismic hazard in Central Asia, we are developing a publicly available database for active faults of Central Asia (including but not limited to Afghanistan, Tajikistan, Kyrgyzstan, northern Pakistan and western China) using ArcGIS. The database is designed to allow users to store, map and query important fault parameters such as fault location, displacement history, rate of movement, and other data relevant to seismic hazard studies including fault trench locations, geochronology constraints, and seismic studies. Data sources integrated into the database include previously published maps and scientific investigations as well as strain rate measurements and historic and recent seismicity. In addition, high resolution Quickbird, Spot, and Aster imagery are used for selected features to locate and measure offset of landforms associated with Quaternary faulting. These features are individually digitized and linked to attribute tables that provide a description for each feature. Preliminary observations include inconsistent and sometimes inaccurate information for faults documented in different studies. For example, the Darvaz-Karakul fault, which roughly defines the western margin of the Pamir, has been mapped with differences in location of up to 12 kilometers. The sense of motion for this fault ranges from unknown to thrust and strike-slip in three different studies despite documented left-lateral displacements of Holocene and late

    • IDBD: infectious disease biomarker database.

      PubMed

      Yang, In Seok; Ryu, Chunsun; Cho, Ki Joon; Kim, Jin Kwang; Ong, Swee Hoe; Mitchell, Wayne P; Kim, Bong Su; Oh, Hee-Bok; Kim, Kyung Hyun

      2008-01-01

      Biomarkers enable early diagnosis, guide molecularly targeted therapy and monitor the activity and therapeutic responses across a variety of diseases. Despite intensified interest and research, however, the overall rate of development of novel biomarkers has been falling. Moreover, no solution is yet available that efficiently retrieves and processes biomarker information pertaining to infectious diseases. Infectious Disease Biomarker Database (IDBD) is one of the first efforts to build an easily accessible and comprehensive literature-derived database covering known infectious disease biomarkers. IDBD is a community annotation database, utilizing collaborative Web 2.0 features, providing a convenient user interface to input and revise data online. It allows users to link infectious diseases or pathogens to protein, gene or carbohydrate biomarkers through the use of search tools. It supports various types of data searches and application tools to analyze sequence and structure features of potential and validated biomarkers. Currently, IDBD integrates 611 biomarkers for 66 infectious diseases and 70 pathogens. It is publicly accessible at http://biomarker.cdc.go.kr and http://biomarker.korea.ac.kr.

    • Databases and tools in glycobiology.

      PubMed

      Artemenko, Natalia V; McDonald, Andrew G; Davey, Gavin P; Rudd, Pauline M

      2012-01-01

      Glycans are crucial to the functioning of multicellular organisms. They may also play a role as mediators between host and parasite or symbiont. As many proteins (>50%) are posttranslationally modified by glycosylation, this mechanism is considered to be the most widespread posttranslational modification in eukaryotes. These surface modifications alter and regulate structure and biological activities/functions of proteins/biomolecules as they are largely involved in the recognition process of the appropriate structure in order to bind to the target cells. Consequently, the recognition of glycans on cellular surfaces plays a crucial role in the promotion or inhibition of various diseases and, therefore, glycosylation itself is considered to be a critical protein quality control attribute for commercial therapeutics, which is one of the fastest growing segments in the pharmaceutical industry. With the development of glycobiology as a separate discipline, a number of databases and tools became available in a similar way to other well-established "omics." Alleviating the recognized shortcomings of the available tools for data storage and retrieval is one of the highest priorities of the international glycoinformatics community. In the last decade, major efforts have been made, by leading scientific groups, towards the integration of a number of major databases and tools into a single portal, which would act as a centralized data repository for glycomics, equipped with a number of comprehensive analytical tools for data systematization, analysis, and comparison. This chapter provides an overview of the most important carbohydrate-related databases and glycoinformatic tools.

    • Italian Rett database and biobank.

      PubMed

      Sampieri, Katia; Meloni, Ilaria; Scala, Elisa; Ariani, Francesca; Caselli, Rossella; Pescucci, Chiara; Longo, Ilaria; Artuso, Rosangela; Bruttini, Mirella; Mencarelli, Maria Antonietta; Speciale, Caterina; Causarano, Vincenza; Hayek, Giuseppe; Zappella, Michele; Renieri, Alessandra; Mari, Francesca

      2007-04-01

      Rett syndrome is the second most common cause of severe mental retardation in females, with an incidence of approximately 1 out of 10,000 live female births. In addition to the classic form, a number of Rett variants have been described. MECP2 gene mutations are responsible for about 90% of classic cases and for a lower percentage of variant cases. Recently, CDKL5 mutations have been identified in the early onset seizures variant and other atypical Rett patients. While the high percentage of MECP2 mutations in classic patients supports the hypothesis of a single disease gene, the low frequency of mutated variant cases suggests genetic heterogeneity. Since 1998, we have performed clinical evaluation and molecular analysis of a large number of Italian Rett patients. The Italian Rett Syndrome (RTT) database has been developed to share data and samples of our RTT collection with the scientific community (http://www.biobank.unisi.it). This is the first RTT database that has been connected with a biobank. It allows the user to immediately visualize the list of available RTT samples and, using the "Search by" tool, to rapidly select those with specific clinical and molecular features. By contacting bank curators, users can request the samples of interest for their studies. This database encourages collaboration projects with clinicians and researchers from around the world and provides important resources that will help to better define the pathogenic mechanisms underlying Rett syndrome.

    • A veterinary digital anatomical database.

      PubMed Central

      Snell, J. R.; Green, R.; Stott, G.; Van Baerle, S.

      1991-01-01

      This paper describes the Veterinary Digital Anatomical Database Project. The purpose of the project is to investigate the construction and use of digitally stored anatomical models. We will be discussing the overall project goals and the results to date. Digital anatomical models are 3-dimensional, solid-model representations of normal anatomy. The digital representations are electronically stored and can be manipulated and displayed on a computer graphics workstation. A digital database of anatomical structures can be used in conjunction with gross dissection in teaching normal anatomy to first year students in the professional curriculum. The computer model gives students the opportunity to "discover" relationships between anatomical structures that may have been destroyed or may not be obvious in the gross dissection. By using a digital database, the student will have the ability to view and manipulate anatomical structures in ways that are not available through interactive video disk (IVD). IVD constrains the student to preselected views and sections stored on the disk. PMID:1807707

  1. Guideline.gov: A Database of Clinical Specialty Guidelines.

    PubMed

    El-Khayat, Yamila M; Forbes, Carrie S; Coghill, Jeffrey G

    2017-01-01

    The National Guidelines Clearinghouse (NGC), also known as Guideline.gov, is a database of resources to assist health care providers with a central depository of guidelines for clinical specialty areas in medicine. The database is provided free of charge and is sponsored by the U.S. Department of Health and Human Services and the Agency for Healthcare Research and Quality. The guidelines for treatment are updated regularly, with new guidelines replacing older guidelines every five years. There are hundreds of current guidelines with more added each week. The purpose and goal of NGC is to provide physicians, nurses, and other health care providers, insurance companies, and others in the field of health care with a unified database of the most current, detailed, relevant, and objective clinical practice guidelines.

  2. Guidelines for good database selection and use in pharmacoepidemiology research.

    PubMed

    Hall, Gillian C; Sauer, Brian; Bourke, Alison; Brown, Jeffrey S; Reynolds, Matthew W; LoCasale, Robert; Casale, Robert Lo

    2012-01-01

    The use of healthcare databases in research provides advantages such as increased speed, lower costs and limitation of some biases. However, database research has its own challenges as studies must be performed within the limitations of resources, which often are the product of complex healthcare systems. The primary purpose of this document is to assist in the selection and use of data resources in pharmacoepidemiology, highlighting potential limitations and recommending tested procedures. This guidance is presented as a detailed text with a checklist for quick reference and covers six areas: selection of a database, use of multiple data resources, extraction and analysis of the study population, privacy and security, quality and validation procedures and documentation.

  3. LinkProt: a database collecting information about biological links

    PubMed Central

    Dabrowski-Tumanski, Pawel; Jarmolinska, Aleksandra I.; Niemyska, Wanda; Rawdon, Eric J.; Millett, Kenneth C.; Sulkowska, Joanna I.

    2017-01-01

    Protein chains are known to fold into topologically complex shapes, such as knots, slipknots or complex lassos. This complex topology of the chain can be considered as an additional feature of a protein, separate from secondary and tertiary structures. Moreover, the complex topology can be defined also as one additional structural level. The LinkProt database (http://linkprot.cent.uw.edu.pl) collects and displays information about protein links — topologically non-trivial structures made by up to four chains and complexes of chains (e.g. in capsids). The database presents deterministic links (with loops closed, e.g. by two disulfide bonds), links formed probabilistically and macromolecular links. The structures are classified according to their topology and presented using the minimal surface area method. The database is also equipped with basic tools which allow users to analyze the topology of arbitrary (bio)polymers. PMID:27794552

  4. Development and validation of a Database Forensic Metamodel (DBFM)

    PubMed Central

    Al-dhaqm, Arafat; Razak, Shukor; Othman, Siti Hajar; Ngadi, Asri; Ahmed, Mohammed Nazir; Ali Mohammed, Abdulalem

    2017-01-01

    Database Forensics (DBF) is a widespread area of knowledge. It has many complex features and is well known amongst database investigators and practitioners. Several models and frameworks have been created specifically to allow knowledge-sharing and effective DBF activities. However, these are often narrow in focus and address specified database incident types. We have analysed 60 such models to identify which DBF activities are common across them, even when the actions vary. We then generate a unified abstract view of DBF in the form of a metamodel. We identified, extracted, and proposed common concepts and reconciled concept definitions to propose a metamodel. We have applied a metamodelling process to guarantee that this metamodel is comprehensive and consistent. PMID:28146585

  5. Karst database development in Minnesota: Design and data assembly

    USGS Publications Warehouse

    Gao, Y.; Alexander, E.C.; Tipping, R.G.

    2005-01-01

    The Karst Feature Database (KFD) of Minnesota is a relational GIS-based Database Management System (DBMS). Previous karst feature datasets used inconsistent attributes to describe karst features in different areas of Minnesota. Existing metadata were modified and standardized to represent a comprehensive metadata for all the karst features in Minnesota. Microsoft Access 2000 and ArcView 3.2 were used to develop this working database. Existing county and sub-county karst feature datasets have been assembled into the KFD, which is capable of visualizing and analyzing the entire data set. By November 17, 2002, 11,682 karst features were stored in the KFD of Minnesota. Data tables are stored in a Microsoft Access 2000 DBMS and linked to corresponding ArcView applications. The current KFD of Minnesota has been moved from a Windows NT server to a Windows 2000 Citrix server accessible to researchers and planners through networked interfaces. © Springer-Verlag 2005.
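    A relational GIS-backed feature database of this kind is typically queried both by attribute and by spatial extent. A minimal sketch of a bounding-box query of the sort a linked GIS front end would issue, with an invented schema and invented coordinates rather than the actual KFD layout:

```python
import sqlite3

# Minimal sketch of a karst-feature table with point coordinates; the
# schema, feature types, and coordinates are invented for illustration
# and are not the actual KFD of Minnesota.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE karst_feature (
        feature_id   INTEGER PRIMARY KEY,
        feature_type TEXT,   -- e.g. sinkhole, spring, stream sink
        county       TEXT,
        utm_easting  REAL,
        utm_northing REAL
    )""")
con.executemany(
    "INSERT INTO karst_feature VALUES (?, ?, ?, ?, ?)",
    [(1, "sinkhole", "Fillmore", 567200.0, 4829100.0),
     (2, "spring",   "Fillmore", 568900.0, 4830500.0),
     (3, "sinkhole", "Winona",   601300.0, 4874800.0)])

# Select features inside a rectangular map extent (bounding box).
rows = con.execute(
    "SELECT feature_id, feature_type FROM karst_feature "
    "WHERE utm_easting BETWEEN 566000 AND 570000 "
    "AND utm_northing BETWEEN 4828000 AND 4831000").fetchall()
print(rows)
```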

  6. Development and validation of a Database Forensic Metamodel (DBFM).

    PubMed

    Al-Dhaqm, Arafat; Razak, Shukor; Othman, Siti Hajar; Ngadi, Asri; Ahmed, Mohammed Nazir; Ali Mohammed, Abdulalem

    2017-01-01

    Database Forensics (DBF) is a widespread area of knowledge. It has many complex features and is well known amongst database investigators and practitioners. Several models and frameworks have been created specifically to allow knowledge-sharing and effective DBF activities. However, these are often narrow in focus and address specified database incident types. We have analysed 60 such models to identify which DBF activities are common across them, even when the actions vary. We then generate a unified abstract view of DBF in the form of a metamodel. We identified, extracted, and proposed common concepts and reconciled concept definitions to propose a metamodel. We have applied a metamodelling process to guarantee that this metamodel is comprehensive and consistent.

  7. Reef Ecosystem Services and Decision Support Database

    EPA Science Inventory

    This scientific and management information database utilizes systems thinking to describe the linkages between decisions, human activities, and provisioning of reef ecosystem goods and services. This database provides: (1) Hierarchy of related topics - Click on topics to navigat...

  8. Diet History Questionnaire: Database Revision History

    Cancer.gov

    The following details all additions and revisions made to the DHQ nutrient and food database. This revision history is provided as a reference for investigators who may have performed analyses with a previous release of the database.

  9. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTIC ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  10. Integrated Primary Care Information Database (IPCI)

    Cancer.gov

    The Integrated Primary Care Information Database is a longitudinal observational database that was created specifically for pharmacoepidemiological and pharmacoeconomic studies, including data from computer-based patient records supplied voluntarily by general practitioners.

  11. High-Performance Secure Database Access Technologies for HEP Grids

    SciTech Connect

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc., for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  12. The Instrumentation of the Multibackend Database System

    DTIC Science & Technology

    1993-06-10

    SUBJECT TERMS: Parallel Database, Multilingual ... Most database system designs and implementations are limited to single language (monolingual) and single model (mono-model) ... solution to the processing cost and data sharing problems of heterogeneous database systems. One solution is a multimodel and multilingual database

  13. An Internet enabled impact limiter material database

    SciTech Connect

    Wix, S.; Kanipe, F.; McMurtry, W.

    1998-09-01

    This paper presents a detailed explanation of the construction of an internet-enabled database, also known as a database driven web site. The data contained in the internet-enabled database are impact limiter material and seal properties. The techniques used in constructing the internet-enabled database presented in this paper are applicable when information that is changing in content needs to be disseminated to a wide audience.
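    The database-driven web site pattern described here boils down to querying the database at request time and rendering the result as HTML, so the page always reflects the current data. A minimal sketch of that render step, with an invented table of material properties (the names, columns, and numbers are illustrative, not from the paper's database):

```python
import sqlite3

# Sketch of the database-driven-web-site pattern described above: a query
# result is rendered to an HTML table on demand. The table, columns, and
# values are invented for illustration.
def make_db() -> sqlite3.Connection:
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE impact_limiter_material "
                "(name TEXT, crush_strength_mpa REAL)")
    con.executemany("INSERT INTO impact_limiter_material VALUES (?, ?)",
                    [("aluminum honeycomb", 2.4), ("balsa wood", 6.9)])
    return con

def render_html(con: sqlite3.Connection) -> str:
    """Render the current table contents as an HTML fragment."""
    rows = con.execute("SELECT name, crush_strength_mpa "
                       "FROM impact_limiter_material ORDER BY name")
    cells = "".join(f"<tr><td>{n}</td><td>{s}</td></tr>" for n, s in rows)
    return f"<table>{cells}</table>"

html = render_html(make_db())
print(html)
```

    In a full deployment this fragment would be returned by a web server handler, so updating the database automatically updates the page.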

  14. Electron Effective-Attenuation-Length Database

    National Institute of Standards and Technology Data Gateway

    SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge)   This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-electron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).
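    EALs of this kind are commonly used in AES/XPS to infer an overlayer thickness from signal attenuation, via the exponential relation I = I0 * exp(-t / (L * cos(theta))). A minimal sketch of that inversion, with illustrative numbers rather than values from the database:

```python
import math

# Standard use of an effective attenuation length L in AES/XPS: the
# substrate signal through an overlayer of thickness t falls off as
#   I = I0 * exp(-t / (L * cos(theta))),
# so t can be recovered from a measured intensity ratio I/I0.
# The EAL value below is illustrative, not taken from SRD 82.

def overlayer_thickness(intensity_ratio: float, eal_nm: float,
                        emission_angle_deg: float = 0.0) -> float:
    """Thickness t [nm] from I/I0, EAL [nm], and electron emission angle."""
    return (-eal_nm * math.cos(math.radians(emission_angle_deg))
            * math.log(intensity_ratio))

# Signal attenuated to 1/e of its clean-surface value at normal emission:
print(overlayer_thickness(math.exp(-1), eal_nm=2.0))  # → 2.0
```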

  15. Speech Databases of Typical Children and Children with SLI

    PubMed Central

    Grill, Pavel; Tučková, Jana

    2016-01-01

    The extent of research on children’s speech in general and on disordered speech specifically is very limited. In this article, we describe the process of creating databases of children’s speech and the possibilities for using such databases, which have been created by the LANNA research group in the Faculty of Electrical Engineering at Czech Technical University in Prague. These databases have been principally compiled for medical research but also for use in other areas, such as linguistics. Two databases were recorded: one for healthy children’s speech (recorded in kindergarten and in the first level of elementary school) and the other for pathological speech of children with a Specific Language Impairment (recorded at a surgery of speech and language therapists and at the hospital). Both databases were sub-divided according to specific demands of medical research. Their utilization can be exoteric, specifically for linguistic research and pedagogical use as well as for studies of speech-signal processing. PMID:26963508

  16. Copyright Registration for Automated Databases. Circular 65.

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC. Copyright Office.

    This description of the copyright protection available for automated databases provides a definition of an automated database; discusses the extent of copyright protection, i.e., the compilation of facts; explains copyright registration and what constitutes publication of a database; and describes the procedures for registering both published and…

  17. Annual Review of Database Developments 1991.

    ERIC Educational Resources Information Center

    Basch, Reva

    1991-01-01

    Review of developments in databases highlights a new emphasis on accessibility. Topics discussed include the internationalization of databases; databases that deal with finance, drugs, and toxic waste; access to public records, both personal and corporate; media online; reducing large files of data to smaller, more manageable files; and…

  18. Conceptual Design of a Prototype LSST Database

    SciTech Connect

    Nikolaev, S; Huber, M E; Cook, K H; Abdulla, G; Brase, J

    2004-10-07

    This document describes a preliminary design for a Prototype LSST Database (LSST DB). The authors identify key components and data structures and provide an expandable conceptual schema for the database. They discuss the potential user applications and post-processing algorithms that interact with the database, and give a set of example queries.

  19. Full-Text Databases in Medicine.

    ERIC Educational Resources Information Center

    Sievert, MaryEllen C.; And Others

    1995-01-01

    Describes types of full-text databases in medicine; discusses features for searching full-text journal databases available through online vendors; reviews research on full-text databases in medicine; and describes the MEDLINE/Full-Text Research Project at the University of Missouri (Columbia) which investigated precision, recall, and relevancy.…

  20. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  1. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  2. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  3. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  4. 6 CFR 37.33 - DMV databases.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false DMV databases. 37.33 Section 37.33 Domestic... IDENTIFICATION CARDS Other Requirements § 37.33 DMV databases. (a) States must maintain a State motor vehicle database that contains, at a minimum— (1) All data fields printed on driver's licenses and...

  5. Emission Database for Global Atmospheric Research (EDGAR).

    ERIC Educational Resources Information Center

    Olivier, J. G. J.; And Others

    1994-01-01

    Presents the objective and methodology chosen for the construction of a global emissions source database called EDGAR and the structural design of the database system. The database estimates, on a regional and grid basis, 1990 annual emissions of greenhouse gases and of ozone-depleting compounds from all known sources. (LZ)

  6. Information Literacy Skills: Comparing and Evaluating Databases

    ERIC Educational Resources Information Center

    Grismore, Brian A.

    2012-01-01

    The purpose of this database comparison is to express the importance of teaching information literacy skills and to apply those skills to commonly used Internet-based research tools. This paper includes a comparison and evaluation of three databases (ProQuest, ERIC, and Google Scholar). It includes strengths and weaknesses of each database based…

  7. Administrators Say Funding Inhibits Use of Databases.

    ERIC Educational Resources Information Center

    Gerhard, Michael E.

    1990-01-01

    Surveys journalism and mass communication department heads to address questions related to the use of online databases in journalism higher education, database policy, resources used in providing online services, and satisfaction with database service. Reports that electronic information retrieval is just beginning to penetrate journalism at the…

  8. Database Systems. Course Three. Information Systems Curriculum.

    ERIC Educational Resources Information Center

    O'Neil, Sharon Lund; Everett, Donna R.

    This course is the third of seven in the Information Systems curriculum. The purpose of the course is to familiarize students with database management concepts and standard database management software. Databases and their roles, advantages, and limitations are explained. An overview of the course sets forth the condition and performance standard…

  9. Differences between Beilstein and the CAS Databases.

    ERIC Educational Resources Information Center

    Heller, Stephen R.

    1987-01-01

    Describes the different approaches taken by the Chemical Abstracts Service database, which abstracts and indexes chemical publications, and the Beilstein Handbook of Organic Chemistry database, which produces a collection of critical reviews. The resulting content of the databases and their ability to meet the needs of different users are…

  10. PACSY, a relational database management system for protein structure and chemical shift analysis.

    PubMed

    Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L

    2012-10-01

    PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
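As a sketch of how PACSY's key-linked tables can be queried in combination, the snippet below joins a coordinate table and a chemical-shift table on a shared key identification number. The table and column names are invented for illustration (the real PACSY schema differs), and SQLite stands in for the MySQL or PostgreSQL server the abstract mentions:

```python
import sqlite3

# Hypothetical miniature of PACSY's linked-table design: tables joined
# by a shared key identification number (real schema/column names differ).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE coord_db (key_id INTEGER, atom_name TEXT, x REAL, y REAL, z REAL);
CREATE TABLE csh_db   (key_id INTEGER, atom_name TEXT, shift_ppm REAL);
INSERT INTO coord_db VALUES (1, 'CA', 12.1, 3.4, -7.8);
INSERT INTO csh_db   VALUES (1, 'CA', 58.3);
""")

# Combine 3D coordinates and chemical shifts for the same atom in one query.
row = con.execute("""
    SELECT c.atom_name, c.x, c.y, c.z, s.shift_ppm
    FROM coord_db c
    JOIN csh_db s ON s.key_id = c.key_id AND s.atom_name = c.atom_name
""").fetchone()
print(row)  # ('CA', 12.1, 3.4, -7.8, 58.3)
```

This kind of join is what "search for combinations of information from different database sources" reduces to once the sources share key identification numbers.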

  11. Enhancing the DNA Patent Database

    SciTech Connect

    Walters, LeRoy B.

    2008-02-18

    Final Report on Award No. DE-FG0201ER63171 Principal Investigator: LeRoy B. Walters February 18, 2008 This project successfully completed its goal of surveying and reporting on the DNA patenting and licensing policies at 30 major U.S. academic institutions. The report of survey results was published in the January 2006 issue of Nature Biotechnology under the title “The Licensing of DNA Patents by US Academic Institutions: An Empirical Survey.” Lori Pressman was the lead author on this feature article. A PDF reprint of the article will be submitted to our Program Officer under separate cover. The project team has continued to update the DNA Patent Database on a weekly basis since the conclusion of the project. The database can be accessed at dnapatents.georgetown.edu. This database provides a valuable research tool for academic researchers, policymakers, and citizens. A report entitled Reaping the Benefits of Genomic and Proteomic Research: Intellectual Property Rights, Innovation, and Public Health was published in 2006 by the Committee on Intellectual Property Rights in Genomic and Protein Research and Innovation, Board on Science, Technology, and Economic Policy at the National Academies. The report was edited by Stephen A. Merrill and Anne-Marie Mazza. This report employed and then adapted the methodology developed by our research project and quoted our findings at several points. (The full report can be viewed online at the following URL: http://www.nap.edu/openbook.php?record_id=11487&page=R1). My colleagues and I are grateful for the research support of the ELSI program at the U.S. Department of Energy.

  12. Interconnecting heterogeneous database management systems

    NASA Technical Reports Server (NTRS)

    Gligor, V. D.; Luckenbaugh, G. L.

    1984-01-01

    It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMSs). Problems regarding the effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.

  13. The GDB Human Genome Database Anno 1997.

    PubMed Central

    Fasman, K H; Letovsky, S I; Li, P; Cottingham, R W; Kingsbury, D T

    1997-01-01

    The value of the Genome Database (GDB) for the human genome research community has been greatly increased since the release of version 6.0 last year. Thanks to the introduction of significant technical improvements, GDB has seen dramatic growth in the type and volume of information stored in the database. This article summarizes the types of data that are now available in the Genome Database, demonstrates how the database is interconnected with other biomedical resources on the World Wide Web, discusses how researchers can contribute new or updated information to the database, and describes our current efforts as well as planned improvements for the future. PMID:9016507

  14. Database specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB)

    SciTech Connect

    Faby, E.Z.; Fluker, J.; Hancock, B.R.; Grubb, J.W.; Russell, D.L.; Loftis, J.P.; Shipe, P.C.; Truett, L.F.

    1994-03-01

    This Database Specification for the Worldwide Port System (WPS) Regional Integrated Cargo Database (ICDB) describes the database organization and storage allocation, provides the detailed data model of the logical and physical designs, and provides information for the construction of parts of the database such as tables, data elements, and associated dictionaries and diagrams.

  15. Teaching Case: Adapting the Access Northwind Database to Support a Database Course

    ERIC Educational Resources Information Center

    Dyer, John N.; Rogers, Camille

    2015-01-01

    A common problem encountered when teaching database courses is that few large illustrative databases exist to support teaching and learning. Most database textbooks have small "toy" databases that are chapter objective specific, and thus do not support application over the complete domain of design, implementation and management concepts…

  16. Geologic map and map database of the Palo Alto 30' x 60' quadrangle, California

    USGS Publications Warehouse

    Brabb, E.E.; Jones, D.L.; Graymer, R.W.

    2000-01-01

    This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (pamf.ps, pamf.pdf, pamf.txt), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.

  17. Reinventing the National Topographic Database

    NASA Astrophysics Data System (ADS)

    Jakobsson, A.; Ilves, R.

    2016-06-01

    The National Land Survey (NLS) has had a digital topographic database (TDB) since 1992. Many of its features are based on the Basic Map created by M. Kajamaa in 1947, the mapping of which was first completed in 1977. The renewal of the TDB began with investigations of its value: a study made by Aalto University in 2014 and a study on the new TDB system for 2030 published by the Ministry of Agriculture in 2015. As a result of these studies, the NLS set up a programme for creating a new National Topographic Database (NTDB) at the beginning of 2015; the first new version should be available in 2019. The new NTDB has the following key features: 1) it is based on processes where data is naturally maintained, 2) it is quality managed, 3) it has persistent IDs, 4) it supports 3D and 4D, and 5) it is based on standards. The technical architecture is based on interoperable modules. A website for following the development of the NTDB can be accessed for more information: http://kmtk.maanmittauslaitos.fi/.

  18. Cooperative answers in database systems

    NASA Technical Reports Server (NTRS)

    Gaasterland, Terry; Godfrey, Parke; Minker, Jack; Novik, Lev

    1993-01-01

    A major concern of researchers who seek to improve human-computer communication involves how to move beyond literal interpretations of queries to a level of responsiveness that takes the user's misconceptions, expectations, desires, and interests into consideration. At Maryland, we are investigating how to better meet a user's needs within the framework of the cooperative answering system of Gal and Minker. We have been exploring how to use semantic information about the database to formulate coherent and informative answers. The work has two main thrusts: (1) the construction of a logic formula which embodies the content of a cooperative answer; and (2) the presentation of the logic formula to the user in a natural language form. The information that is available in a deductive database system for building cooperative answers includes integrity constraints, user constraints, the search tree for answers to the query, and false presuppositions that are present in the query. The basic cooperative answering theory of Gal and Minker forms the foundation of a cooperative answering system that integrates the new construction and presentation methods. This paper provides an overview of the cooperative answering strategies used in the CARMIN cooperative answering system, an ongoing research effort at Maryland. Section 2 gives some useful background definitions. Section 3 describes techniques for collecting cooperative logical formulae. Section 4 discusses which natural language generation techniques are useful for presenting the logic formula in natural language text. Section 5 presents a diagram of the system.

  19. The age-phenome database.

    PubMed

    Geifman, Nophar; Rubin, Eitan

    2012-01-01

    Data linking specific ages or age ranges with disease are abundant in biomedical literature. However, these data are organized such that searching for age-phenotype relationships is difficult. Recently, we described the Age-Phenome Knowledge-base (APK), a computational platform for storage and retrieval of information concerning age-related phenotypic patterns. Here, we report that data derived from over 1.5 million human-related PubMed abstracts have been added to APK. Using a text-mining pipeline, 35,683 entries which describe relationships between age and phenotype (such as disease) have been introduced into the database. Comparing the results to those obtained by a human reader reveals that the overall accuracy of these entries is estimated to exceed 80%. The usefulness of these data for obtaining new insight regarding age-disease relationships is demonstrated using clustering analysis, which is shown to capture obvious, as well as potentially interesting relationships between diseases. In addition, a new tool for browsing and searching the APK database is presented. We thus present a unique resource and a new framework for studying age-disease relationships and other phenotypic processes.

  20. MPW : the metabolic pathways database.

    SciTech Connect

    Selkov, E., Jr.; Grechkin, Y.; Mikhailova, N.; Selkov, E.; Mathematics and Computer Science; Russian Academy of Sciences

    1998-01-01

    The Metabolic Pathways Database (MPW) (www.biobase.com/emphome.html/homepage.html.pags/pathways.html), a derivative of EMP (www.biobase.com/EMP), plays a fundamental role in the technology of metabolic reconstructions from sequenced genomes under the PUMA (www.mcs.anl.gov/home/compbio/PUMA/Production/ReconstructedMetabolism/reconstruction.html), WIT (www.mcs.anl.gov/home/compbio/WIT/wit.html) and WIT2 (beauty.isdn.msc.anl.gov/WIT2.pub/CGI/user.cgi) systems. In October 1997, it included some 2800 pathway diagrams covering primary and secondary metabolism, membrane transport, signal transduction pathways, intracellular traffic, translation and transcription. In the current public release of MPW (beauty.isdn.mcs.anl.gov/MPW), the encoding is based on the logical structure of the pathways and is represented by the objects commonly used in electronic circuit design. This facilitates drawing and editing the diagrams and makes possible automation of the basic simulation operations such as deriving stoichiometric matrices, rate laws, and, ultimately, dynamic models of metabolic pathways. Individual pathway diagrams, automatically derived from the original ASCII records, are stored as SGML instances supplemented by relational indices. An auxiliary database of compound names and structures, encoded in the SMILES format, is maintained to unambiguously connect the pathways to the chemical structures of their intermediates.

  1. PDS: A Performance Database Server

    DOE PAGES

    Berry, Michael W.; Dongarra, Jack J.; Larose, Brian H.; ...

    1994-01-01

    The process of gathering, archiving, and distributing computer benchmark data is a cumbersome task usually performed by computer users and vendors with little coordination. Most important, there is no publicly available central depository of performance data for all ranges of machines from personal computers to supercomputers. We present an Internet-accessible performance database server (PDS) that can be used to extract current benchmark data and literature. As an extension to the X-Windows-based user interface (Xnetlib) to the Netlib archival system, PDS provides an on-line catalog of public domain computer benchmarks such as the LINPACK benchmark, Perfect benchmarks, and the NAS parallel benchmarks. PDS does not reformat or present the benchmark data in any way that conflicts with the original methodology of any particular benchmark; it is thereby devoid of any subjective interpretations of machine performance. We believe that all branches (research laboratories, academia, and industry) of the general computing community can use this facility to archive performance metrics and make them readily available to the public. PDS can provide a more manageable approach to the development and support of a large dynamic database of published performance metrics.

  2. The Gene Expression Omnibus Database.

    PubMed

    Clough, Emily; Barrett, Tanya

    2016-01-01

    The Gene Expression Omnibus (GEO) database is an international public repository that archives and freely distributes high-throughput gene expression and other functional genomics data sets. Created in 2000 as a worldwide resource for gene expression studies, GEO has evolved with rapidly changing technologies and now accepts high-throughput data for many other data applications, including those that examine genome methylation, chromatin structure, and genome-protein interactions. GEO supports community-derived reporting standards that specify provision of several critical study elements including raw data, processed data, and descriptive metadata. The database not only provides access to data for tens of thousands of studies, but also offers various Web-based tools and strategies that enable users to locate data relevant to their specific interests, as well as to visualize and analyze the data. This chapter includes detailed descriptions of methods to query and download GEO data and use the analysis and visualization tools. The GEO homepage is at http://www.ncbi.nlm.nih.gov/geo/.
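One common way to query GEO programmatically is through NCBI's E-utilities, which expose the GEO DataSets index under the database name `gds`. The sketch below only composes the search URL (no network request is made); the query term is an invented example:

```python
from urllib.parse import urlencode

# Compose an NCBI E-utilities "esearch" URL for the GEO DataSets ("gds")
# database. Fetching the URL would return XML listing matching entry IDs.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def geo_search_url(term, retmax=20):
    """Return an esearch URL for a GEO DataSets query string."""
    return BASE + "?" + urlencode({"db": "gds", "term": term, "retmax": retmax})

url = geo_search_url('breast cancer AND "expression profiling by array"')
print(url)
```

The same pattern (swap `esearch.fcgi` for `efetch.fcgi` or `esummary.fcgi`) covers retrieving the records themselves.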

  3. The MetaCyc Database.

    PubMed

    Karp, Peter D; Riley, Monica; Paley, Suzanne M; Pellegrini-Toole, Alida

    2002-01-01

    MetaCyc is a metabolic-pathway database that describes 445 pathways and 1115 enzymes occurring in 158 organisms. MetaCyc is a review-level database in that a given entry in MetaCyc often integrates information from multiple literature sources. The pathways in MetaCyc were determined experimentally, and are labeled with the species in which they are known to occur based on literature references examined to date. MetaCyc contains extensive commentary and literature citations. Applications of MetaCyc include pathway analysis of genomes, metabolic engineering and biochemistry education. MetaCyc is queried using the Pathway Tools graphical user interface, which provides a wide variety of query operations and visualization tools. MetaCyc is available via the World Wide Web at http://ecocyc.org/ecocyc/metacyc.html, and is available for local installation as a binary program for the PC and the Sun workstation, and as a set of flatfiles. Contact metacyc-info@ai.sri.com for information on obtaining a local copy of MetaCyc.
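The flatfile distribution mentioned above uses an attribute-value record format (lines of `ATTRIBUTE - VALUE`, records terminated by `//`). A minimal parser for one such record is sketched below; the sample record is invented for illustration, not real MetaCyc data:

```python
# Minimal parser for an attribute-value flat-file record of the kind
# distributed for MetaCyc. The record below is invented for illustration.
SAMPLE = """\
UNIQUE-ID - PWY-EXAMPLE
COMMON-NAME - example pathway
SPECIES - Escherichia coli
SPECIES - Saccharomyces cerevisiae
//
"""

def parse_record(text):
    """Collect attribute -> list-of-values for one flat-file record."""
    record = {}
    for line in text.splitlines():
        if line.startswith("//"):        # end-of-record marker
            break
        attr, _, value = line.partition(" - ")
        record.setdefault(attr, []).append(value)
    return record

rec = parse_record(SAMPLE)
print(rec["COMMON-NAME"])  # ['example pathway']
```

Repeated attributes (such as the two `SPECIES` lines) accumulate into a list, which matches how multi-valued slots appear in the flatfiles.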

  4. The National Geochemical Survey; database and documentation

    USGS Publications Warehouse

    ,

    2004-01-01

    The USGS, in collaboration with other federal and state government agencies, industry, and academia, is conducting the National Geochemical Survey (NGS) to produce a body of geochemical data for the United States based primarily on stream sediments, analyzed using a consistent set of methods. These data will compose a complete, national-scale geochemical coverage of the US, and will enable construction of geochemical maps, refine estimates of baseline concentrations of chemical elements in the sampled media, and provide context for a wide variety of studies in the geological and environmental sciences. The goal of the NGS is to analyze at least one stream-sediment sample in every 289 km2 area by a single set of analytical methods across the entire nation, with other solid sample media substituted where necessary. The NGS incorporates geochemical data from a variety of sources, including existing analyses in USGS databases, reanalyses of samples in USGS archives, and analyses of newly collected samples. At the present time, the NGS includes data covering ~71% of the land area of the US, including samples in all 50 states. This version of the online report provides complete access to NGS data, describes the history of the project, the methodology used, and presents preliminary geochemical maps for all analyzed elements. Future editions of this and other related reports will include the results of analysis of variance studies, as well as interpretive products related to the NGS data.

  5. Overview of selected molecular biological databases

    SciTech Connect

    Rayl, K.D.; Gaasterland, T.

    1994-11-01

    This paper presents an overview of the purpose, content, and design of a subset of the currently available biological databases, with an emphasis on protein databases. Databases included in this summary are 3D-ALI, Berlin RNA databank, Blocks, DSSP, EMBL Nucleotide Database, EMP, ENZYME, FSSP, GDB, GenBank, HSSP, LiMB, PDB, PIR, PKCDD, ProSite, and SWISS-PROT. The goal is to provide a starting point for researchers who wish to take advantage of the myriad available databases. Rather than providing a complete explanation of each database, we present its content and form by explaining the details of typical entries. Pointers to more complete "user guides" are included, along with general information on where to search for a new database.

  6. [DICOM data conversion technology research for database].

    PubMed

    Wang, Shiyu; Lin, Hao

    2010-12-01

    A comprehensive medical image platform built for networked access to medical images, measurements, and virtual surgery navigation needs the support of medical image databases. The medical image database we built contains two-dimensional images and three-dimensional models. Common databases that store DICOM files directly do not meet these requirements. We therefore use DICOM conversion technology to convert DICOM files into BMP images plus the indispensable data elements, and then use the BMP images and data elements to reconstruct the three-dimensional model. The reliability of the DICOM data conversion is verified, and on this basis a medical image database of the human hip joint is built. Experimental results show that this method of building the medical image database not only meets the requirements of the database application, but also greatly reduces the amount of storage the database requires.
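The storage side of the approach described above can be sketched as follows: after a DICOM file has been converted, the BMP bytes are kept as a BLOB alongside the extracted data elements. The field names are illustrative (the paper does not specify its schema), and a two-byte stand-in replaces real converted pixel data:

```python
import sqlite3

# Illustrative schema: converted BMP image stored as a BLOB next to the
# "indispensable data elements" pulled out of the DICOM header.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE images (
    patient_id    TEXT,
    modality      TEXT,
    slice_no      INTEGER,
    pixel_spacing REAL,
    bmp           BLOB)""")

fake_bmp = b"BM" + bytes(64)  # stand-in for real converted BMP pixel data
con.execute("INSERT INTO images VALUES (?, ?, ?, ?, ?)",
            ("P001", "CT", 42, 0.703, fake_bmp))

# Retrieve one converted image together with its data elements.
modality, blob = con.execute(
    "SELECT modality, bmp FROM images WHERE patient_id = 'P001'").fetchone()
print(modality, blob[:2])  # CT b'BM'
```

Keeping the geometry-relevant elements (slice number, pixel spacing) as queryable columns is what allows the three-dimensional model to be rebuilt from the stored BMPs.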

  7. Interactive Database of Pulsar Flux Density Measurements

    NASA Astrophysics Data System (ADS)

    Koralewska, O.; Krzeszowski, K.; Kijak, J.; Lewandowski, W.

    2012-12-01

    The number of astronomical observations is steadily growing, giving rise to the need to catalogue the results obtained. Many databases have been created to store different types of data and serve a variety of purposes, e.g., databases providing basic data for astronomical objects (the SIMBAD Astronomical Database), databases devoted to one type of astronomical object (the ATNF Pulsar Database), or to a set of values of a specific parameter (Lorimer 1995, a database of flux density measurements for 280 pulsars at frequencies up to 1606 MHz), etc. We found that creating an online database of pulsar flux measurements, provided with facilities for plotting diagrams and histograms, calculating mean values for a chosen set of data, filtering parameter values, and adding new measurements by registered users, could be useful in further studies of pulsar spectra.
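The "mean value for a chosen set of data" facility amounts to filtering measurements by pulsar and observing frequency and averaging the matches. A toy version is sketched below; the flux values are invented for illustration:

```python
from statistics import mean

# Toy flux-density table: one row per published measurement.
measurements = [
    {"psr": "B0329+54", "freq_mhz": 408,  "flux_mjy": 1500},
    {"psr": "B0329+54", "freq_mhz": 408,  "flux_mjy": 1400},
    {"psr": "B0329+54", "freq_mhz": 1400, "flux_mjy": 203},
    {"psr": "B1933+16", "freq_mhz": 408,  "flux_mjy": 242},
]

def mean_flux(rows, psr, freq_mhz):
    """Average all flux measurements for one pulsar at one frequency."""
    vals = [r["flux_mjy"] for r in rows
            if r["psr"] == psr and r["freq_mhz"] == freq_mhz]
    return mean(vals) if vals else None

print(mean_flux(measurements, "B0329+54", 408))
```

The same filtered lists feed the plotting features: a flux-versus-frequency plot for one pulsar is just this filter applied per frequency.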

  8. Database interfaces on NASA's heterogeneous distributed database system

    NASA Technical Reports Server (NTRS)

    Huang, Shou-Hsuan Stephen

    1987-01-01

    The purpose of the Distributed Access View Integrated Database (DAVID) interface module (Module 9: Resident Primitive Processing Package) is to provide data transfer between local DAVID systems and resident Database Management Systems (DBMSs). The results of current research are summarized, and a detailed description of the interface module is provided. Several Pascal templates were constructed. The Resident Processor program was also developed; even though it is designed for the Pascal templates, it can be modified without much difficulty for templates in other languages, such as C. The Resident Processor itself can be written in any programming language. Since the Module 5 routines are not ready yet, there is no way to test the interface module. However, simulation shows that the database access programs produced by the Resident Processor do work according to the specifications.

  9. Initiation of a Database of CEUS Ground Motions for NGA East

    NASA Astrophysics Data System (ADS)

    Cramer, C. H.

    2007-12-01

    The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground- motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
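The table layout named in the abstract (earthquake, station, component, record, and reference tables, with a flat file extracted for attenuation-relation work) can be sketched as a join. The columns below are illustrative, not the actual CEUS design, and only three of the five tables are shown:

```python
import sqlite3

# Illustrative subset of the table types named in the abstract; a "flat
# file" row combines source, site, and record fields via joins.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE earthquake (eq_id INTEGER PRIMARY KEY, mag REAL,
                         stress_drop_bar REAL);
CREATE TABLE station    (sta_id INTEGER PRIMARY KEY, name TEXT,
                         vs30_mps REAL);
CREATE TABLE record     (rec_id INTEGER PRIMARY KEY, eq_id INTEGER,
                         sta_id INTEGER, dist_km REAL);
INSERT INTO earthquake VALUES (1, 5.2, 150.0);
INSERT INTO station    VALUES (10, 'MPH01', 760.0);
INSERT INTO record     VALUES (100, 1, 10, 42.5);
""")

# Flat-file extraction: one row per ground-motion record, with the
# source (magnitude) and site (Vs30) parameters joined in.
flat = con.execute("""
    SELECT r.rec_id, e.mag, s.vs30_mps, r.dist_km
    FROM record r
    JOIN earthquake e USING (eq_id)
    JOIN station    s USING (sta_id)
""").fetchall()
print(flat)  # [(100, 5.2, 760.0, 42.5)]
```

This mirrors the abstract's plan of extracting "a CEUS ground-motion flat file of key information" from the normalized tables for use in attenuation-relation development.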

  10. Hydromorphological Datamanagement - From Fieldwork to Database

    NASA Astrophysics Data System (ADS)

    Stadler, Philipp; Steinwendner, Norbert; Prüller, Stefan; Millauer, Isabell; Pröll, Elmar

    2010-05-01

    Since 2008, a hydromorphological survey and mapping of semi-natural brooks has been carried out at the Kalkalpen National Park in Upper Austria. In addition to the water-documentation programme running at the National Park, there is a need to classify the hydromorphological situation (especially the level of anthropogenic intervention and the grade of renaturation) of small and mid-size semi-natural brooks. The mapping system developed during the pilot mapping in 2008 follows an instruction for hydromorphological mapping of streams compatible with the European Water Framework Directive (Lebensministerium 2006). As presented before, this allows a consistent and representative description of the hydromorphological situation of creeks and brooks (Stadler 2009). Picturing the channel's naturalness is the main parameter; particular value was also placed on typical riverbed structures and torrent control buildings. In order to allow efficient field work, a clearly arranged mapping schedule was developed; with this schedule, a consistent and representative mapping of the brook's characteristics is possible. Because of the steep and overgrown valleys of the National Park, interpretation of remote sensing material is not suitable, so fieldwork becomes the most important basis for data acquisition. Detailed hydromorphological parameters are recorded in a schedule for every 500-meter section of the stream. To manage the recorded field data, a database was designed which handles not only the parameters of every scheduled section but also gives an overview of all mapped brooks in the National Park area. The focus was on the ability to display point data (torrent control buildings) on the one hand and integrated hydromorphological parameters (grade of naturalness) on the other. The MS Access database was designed so that it can be used not only for the running hydromorphological survey, but also for other stream-linked surveys (e

  11. PEP725 Pan European Phenological Database

    NASA Astrophysics Data System (ADS)

    Koch, E.; Lipa, W.; Ungersböck, M.; Zach-Hermann, S.

    2012-04-01

    PEP725 is a 5-year project whose main objective is to promote and facilitate phenological research by delivering a pan-European phenological database with open, unrestricted data access for science, research and education. PEP725 is funded by EUMETNET (the network of European meteorological services), ZAMG and the Austrian ministry for science & research bm:w_f. So far, 16 European national meteorological services and 7 partners from different national phenological network operators have joined PEP725. Data access is very easy via the homepage www.pep725.eu. Having accepted the PEP725 data policy and registered, data can be downloaded by different criteria, for instance the selection of a specific plant or all data from one country. At present more than 300,000 new records are available in the PEP725 database, coming from 31 European countries and from 8150 stations. For 154 further stations, META data (location and data holder) are provided. Links to the network operators and data owners are also on the webpage in case you have more sophisticated questions about the data. Another objective of PEP725 is to bring together network operators and scientists by organizing workshops. In April 2012 the second of these workshops will take place on the premises of ZAMG. Invited speakers will give presentations spanning the whole study area of phenology, from observations to modelling. Quality checking is also a big issue; at the moment we are studying the literature to find appropriate methods.

  12. Development of a national, dynamic reservoir-sedimentation database

    USGS Publications Warehouse

    Gray, J.R.; Bernard, J.M.; Stewart, D.W.; McFaul, E.J.; Laurent, K.W.; Schwarz, G.E.; Stinson, J.T.; Jonas, M.M.; Randle, T.J.; Webb, J.W.

    2010-01-01

    The importance of dependable, long-term water supplies, coupled with the need to quantify rates of capacity loss of the Nation’s reservoirs due to sediment deposition, were the most compelling reasons for developing the REServoir-SEDimentation survey information (RESSED) database and website. Created under the auspices of the Advisory Committee on Water Information’s Subcommittee on Sedimentation by the U.S. Geological Survey and the Natural Resources Conservation Service, the RESSED database is the most comprehensive compilation of data from reservoir bathymetric and dry-basin surveys in the United States. As of March 2010, the database, which contains data compiled on the 1950s vintage Soil Conservation Service’s Form SCS-34 data sheets, contained results from 6,616 surveys on 1,823 reservoirs in the United States and two surveys on one reservoir in Puerto Rico. The data span the period 1755–1997, with 95 percent of the surveys performed from 1930–1990. The reservoir surface areas range from sub-hectare-scale farm ponds to 658 km2 Lake Powell. The data in the RESSED database can be useful for a number of purposes, including calculating changes in reservoir-storage characteristics, quantifying sediment budgets, and estimating erosion rates in a reservoir’s watershed. The March 2010 version of the RESSED database has a number of deficiencies, including a cryptic and out-of-date database architecture; some geospatial inaccuracies (although most have been corrected); other data errors; an inability to store all data in a readily retrievable manner; and an inability to store all data types that currently exist. Perhaps most importantly, the March 2010 version of RESSED database provides no publically available means to submit new data and corrections to existing data. To address these and other deficiencies, the Subcommittee on Sedimentation, through the U.S. Geological Survey and the U.S. Army Corps of Engineers, began a collaborative project in

  13. Development of the Permian Basin beam pump failure database

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammed Mahbubur

    The Artificial Lift Energy Optimization Consortium (ALEOC) was formed by eleven oil companies operating in the Permian Basin with the primary goal of improving oil field operations through sharing experiences. The beam pumping system received special attention because it is the most widely used artificial lift method in the Permian Basin as well as in the world. The combined effort to optimize beam pumping systems calls for the creation of a central database, which will hold beam-pump-related data from diverse sources and will offer ways to analyze the data to obtain valuable insight about the nature, magnitude and trend of beam pump failure. This database has been created as part of this work. The database combines beam pump failure data from about 25,000 wells owned by different companies into a single, uniform and consistent format. Moreover, two front-end computer applications have been developed to interact with the database, to run queries, and to make plots from the query results. One application is designed for the desktop, while the other is designed for the Internet. Both applications calculate failure frequencies of pump, rod, and tubing, and summarize the results in various ways. Thus the database and the front-end applications together provide a powerful means for analyzing beam pump failure data. Much useful information can be gathered from the database, such as the most vulnerable component in the system, the best and the worst performers, and the most troublesome operating area. Such information can be used for benchmarking performance, identifying best design/operational practices, design modification, and long term production planning. Results from data analysis show that the pump has the highest probability to fail in a beam pumping system, followed by the rod string and the tubing string. The overall failure rate in the Permian Basin shows a general decline with time.
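
The per-component failure-frequency calculation described above can be sketched in a few lines. This is a minimal illustration assuming a simplified record layout (well id, failed component, year); the field names and numbers are hypothetical, not the ALEOC schema or data.

```python
from collections import Counter

# Hypothetical pooled failure records: (well_id, failed_component, year).
# The layout and values are illustrative, not the actual ALEOC database.
records = [
    ("W-001", "pump", 1995), ("W-001", "rod", 1996),
    ("W-002", "pump", 1995), ("W-002", "tubing", 1997),
    ("W-003", "pump", 1996), ("W-003", "rod", 1997),
]

def failure_frequency(records, well_count, years):
    """Failures per well per year, broken down by component."""
    counts = Counter(component for _, component, _ in records)
    return {comp: n / (well_count * years) for comp, n in counts.items()}

freq = failure_frequency(records, well_count=3, years=3)
print(max(freq, key=freq.get))  # -> pump
```

Even in this toy sample the pump is the most failure-prone component, mirroring the ranking (pump, then rod string, then tubing string) reported for the Permian Basin data.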

  14. Design Concept For Database Building

    NASA Astrophysics Data System (ADS)

    Faust, Nickolas L.

    1988-08-01

    One of the most challenging issues in today's world of geographic analysis and scene simulation is not the technology for analyzing or displaying the geographic data, but instead the technology for deriving the databases that would support such functions in various regions of the world where detailed source material may not exist. The processing of spatial data has become commonplace due to the existence of low cost computer systems and the availability of spatial analysis software. Whereas true geographic analysis was once found only in research institutes and large architecture/engineering firms, commercially available Geographic Information Systems (GIS) are now used by cities and local government, small business planning firms, state and regional government, and throughout a myriad of federal government entities including the military. College coursework in a number of disciplines now involves the modeling or analysis of spatial data on small computer systems.

  15. CD-ROM-aided Databases

    NASA Astrophysics Data System (ADS)

    Nagatsuka, Takashi

    This paper introduces CD-ROM products and their utilization outside Japan, mainly in the U.S.A. CD-ROM has recently come into use in various fields. The author classifies the products into four groups: 1. CD-ROMs that substitute for printed matter such as encyclopedias and dictionaries (ex. Grolier's Electronic Encyclopedia), 2. CD-ROMs that substitute for online databases (ex. Disclosure, Medline), 3. CD-ROMs that offer functions beyond information retrieval, such as ordering books (ex. Books in Print Plus), 4. CD-ROMs that contain literature including pictures and figures (ex. ADONIS). Future trends in CD-ROM utilization are also suggested.

  16. Mining SNPs from EST databases.

    PubMed

    Picoult-Newberg, L; Ideker, T E; Pohl, M G; Taylor, S L; Donaldson, M A; Nickerson, D A; Boyce-Jacino, M

    1999-02-01

    There is considerable interest in the discovery and characterization of single nucleotide polymorphisms (SNPs) to enable the analysis of the potential relationships between human genotype and phenotype. Here we present a strategy that permits the rapid discovery of SNPs from publicly available expressed sequence tag (EST) databases. From a set of ESTs derived from 19 different cDNA libraries, we assembled 300,000 distinct sequences and identified 850 mismatches from contiguous EST data sets (candidate SNP sites), without de novo sequencing. Through a polymerase-mediated, single-base, primer extension technique, Genetic Bit Analysis (GBA), we confirmed the presence of a subset of these candidate SNP sites and have estimated the allele frequencies in three human populations with different ethnic origins. Altogether, our approach provides a basis for rapid and efficient regional and genome-wide SNP discovery using data assembled from sequences from different libraries of cDNAs.
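
The core idea of mining candidate SNPs from assembled ESTs, flagging alignment columns where distinct bases each appear with some redundancy, can be sketched as follows. The alignment, the support threshold, and the function name are illustrative assumptions, not the published pipeline.

```python
from collections import Counter

# Toy aligned EST reads covering the same region; real input would come
# from an assembly of library sequences, not hand-written strings.
aligned_ests = [
    "ACGTACGT",
    "ACGTACGT",
    "ACATACGT",
    "ACATACGT",
]

def candidate_snp_sites(alignment, min_support=2):
    """Flag columns with >= 2 distinct bases, each seen min_support times.

    The redundancy threshold is an arbitrary stand-in for the quality
    filtering a real SNP-mining pipeline would apply.
    """
    sites = []
    for pos in range(len(alignment[0])):
        column = Counter(read[pos] for read in alignment)
        well_supported = [b for b, n in column.items() if n >= min_support]
        if len(well_supported) >= 2:
            sites.append((pos, sorted(well_supported)))
    return sites

print(candidate_snp_sites(aligned_ests))  # -> [(2, ['A', 'G'])]
```

Sites found this way are only candidates; as the abstract notes, confirmation (there by Genetic Bit Analysis) is still required to rule out sequencing errors.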

  17. Publist - a bibliographic database utility

    SciTech Connect

    Peierls, R.F.

    1997-01-01

    A few years ago, the Department of Applied Science perceived a need to automate activities related to publications using a computer-based system. Among the objectives were that: (1) it should be easy for a secretary or someone without extensive computer skills to use the system; (2) it should run on PCs (at that time DOS based), Macintosh, and Unix systems, so that different groups or individual investigators could use it on their platform of choice; (3) it should be flexible enough to track evolving views of what information was needed; (4) it should be able to generate output in different formats for different purposes; (5) the information should be able to be selected from and sorted by a wide variety of keys; and (6) individual items should be able to be updated with new information or deleted. This document gives an overview of the PUBLIST database for handling bibliographic data.

  18. The Majorana Parts Tracking Database

    SciTech Connect

    Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O׳Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C. -H.; Yumatov, V.; Zhitnikov, I.

    2015-04-01

    The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radiopurity required for this rare decay search.

  19. QDB: Validated Plasma Chemistries Database

    NASA Astrophysics Data System (ADS)

    Rahimi, Sara; Hamilton, James; Hill, Christian; Tennyson, Jonathan; UCL Team

    2016-09-01

    One of the most challenging recurring problems when modelling plasmas is the lack of data. This lack of complete and validated datasets hinders research on plasma processes and curbs the development of industrial applications. We will describe the QDB project, which aims to fill this missing link by providing a platform for the exchange and validation of chemistry datasets. The database will collate published data on both electron scattering and heavy-particle reactions, and also facilitates and encourages peer-to-peer data sharing by its users. The platform is supported by methodical validation of the datasets and by an automated chemistry generator; this methodology identifies reactions that, although important, are currently unreported in the literature, and employs mathematical methods to analyze the importance of these chemistries. Gaps in the datasets are filled using in-house theoretical methods.

  20. Catalog of databases and reports

    SciTech Connect

    Burtis, M.D.

    1997-04-01

    This catalog provides information about the many reports and materials made available by the US Department of Energy's (DOE's) Global Change Research Program (GCRP) and the Carbon Dioxide Information Analysis Center (CDIAC). The catalog is divided into nine sections plus the author and title indexes: Section A--US Department of Energy Global Change Research Program Research Plans and Summaries; Section B--US Department of Energy Global Change Research Program Technical Reports; Section C--US Department of Energy Atmospheric Radiation Measurement (ARM) Program Reports; Section D--Other US Department of Energy Reports; Section E--CDIAC Reports; Section F--CDIAC Numeric Data and Computer Model Distribution; Section G--Other Databases Distributed by CDIAC; Section H--US Department of Agriculture Reports on Response of Vegetation to Carbon Dioxide; and Section I--Other Publications.

  1. Analyzing and mining image databases.

    PubMed

    Berlage, Thomas

    2005-06-01

    Image mining is the application of computer-based techniques that extract and exploit information from large image sets to support human users in generating knowledge from these sources. This review focuses on biomedical applications, in particular automated imaging at the cellular level. An image database is an interactive software application that combines data management, image analysis and visual data mining. The main characteristic of such a system is a layer that represents objects within an image, and that represents a large spectrum of quantitative and semantic object features. The image analysis needs to be adapted to each particular experiment, so 'end-user programming' will be desirable to make the technology more widely applicable.

  2. Sandia Wind Turbine Loads Database

    DOE Data Explorer

    The Sandia Wind Turbine Loads Database is divided into six files, each corresponding to approximately 16 years of simulation. The files are text files with data in columnar format. The 424MB zipped file containing six data files can be downloaded by the public. The files simulate 10-minute maximum loads for the NREL 5MW wind turbine. The details of the loads simulations can be found in the paper: “Decades of Wind Turbine Loads Simulations”, M. Barone, J. Paquette, B. Resor, and L. Manuel, AIAA2012-1288 (3.69MB PDF). Note that the site-average wind speed is 10 m/s (class I-B), not the 8.5 m/s reported in the paper.
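
The record describes the load data as plain text files in columnar format. A minimal reader for such a file might look like the following; the column names and values are made up for illustration and are not the actual Sandia file layout.

```python
import io

# Stand-in for one of the columnar text files: a whitespace-delimited
# header row followed by numeric rows. Columns here are hypothetical.
sample = io.StringIO(
    "wind_speed_mps  blade_root_moment_kNm\n"
    "8.2             10450.0\n"
    "10.1            13210.5\n"
    "12.7            15980.2\n"
)

# Parse the header, then each data row into a name -> float mapping.
header = sample.readline().split()
rows = [dict(zip(header, map(float, line.split()))) for line in sample]

print(max(r["blade_root_moment_kNm"] for r in rows))  # -> 15980.2
```

For a real file one would pass `open(path)` instead of the `StringIO` stand-in; the parsing logic is unchanged.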

  3. The Pfam protein families database.

    PubMed

    Finn, Robert D; Tate, John; Mistry, Jaina; Coggill, Penny C; Sammut, Stephen John; Hotz, Hans-Rudolf; Ceric, Goran; Forslund, Kristoffer; Eddy, Sean R; Sonnhammer, Erik L L; Bateman, Alex

    2008-01-01

    Pfam is a comprehensive collection of protein domains and families, represented as multiple sequence alignments and as profile hidden Markov models. The current release of Pfam (22.0) contains 9318 protein families. Pfam is now based not only on the UniProtKB sequence database, but also on NCBI GenPept and on sequences from selected metagenomics projects. Pfam is available on the web from the consortium members using a new, consistent and improved website design in the UK (http://pfam.sanger.ac.uk/), the USA (http://pfam.janelia.org/) and Sweden (http://pfam.sbc.su.se/), as well as from mirror sites in France (http://pfam.jouy.inra.fr/) and South Korea (http://pfam.ccbb.re.kr/).

  4. The NIST Quantitative Infrared Database

    PubMed Central

    Chu, P. M.; Guenther, F. R.; Rhoderick, G. C.; Lafferty, W. J.

    1999-01-01

    With the recent developments in Fourier transform infrared (FTIR) spectrometers it is becoming more feasible to place these instruments in field environments. As a result, there has been an enormous increase in the use of FTIR techniques for a variety of qualitative and quantitative chemical measurements. These methods offer the possibility of fully automated real-time quantitation of many analytes; therefore FTIR has great potential as an analytical tool. Recently, the U.S. Environmental Protection Agency (U.S. EPA) has developed protocol methods for emissions monitoring using both extractive and open-path FTIR measurements. Depending upon the analyte, the experimental conditions and the analyte matrix, approximately 100 of the hazardous air pollutants (HAPs) listed in the 1990 U.S. EPA Clean Air Act amendment (CAAA) can be measured. The National Institute of Standards and Technology (NIST) has initiated a program to provide quality-assured infrared absorption coefficient data based on NIST prepared primary gas standards. Currently, absorption coefficient data has been acquired for approximately 20 of the HAPs. For each compound, the absorption coefficient spectrum was calculated using nine transmittance spectra at 0.12 cm−1 resolution and the Beer’s law relationship. The uncertainties in the absorption coefficient data were estimated from the linear regressions of the transmittance data and considerations of other error sources such as the nonlinear detector response. For absorption coefficient values greater than 1 × 10−4 (μmol/mol)−1 m−1 the average relative expanded uncertainty is 2.2 %. This quantitative infrared database is currently an ongoing project at NIST. Additional spectra will be added to the database as they are acquired. Our current plans include continued data acquisition of the compounds listed in the CAAA, as well as the compounds that contribute to global warming and ozone depletion.
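
The Beer's law relationship used to derive absorption coefficients from transmittance spectra can be written as T = 10^(−α·c·l), so α = −log10(T)/(c·l). A small sketch, with purely illustrative numbers (not NIST data):

```python
import math

def absorption_coefficient(transmittance, concentration_umol_per_mol, path_m):
    """Beer's law: alpha = -log10(T) / (c * l), in (umol/mol)^-1 m^-1."""
    return -math.log10(transmittance) / (concentration_umol_per_mol * path_m)

# Illustrative only: a 50 umol/mol sample over a 10 m path transmitting
# 80 % at some wavenumber gives alpha of roughly 1.9e-4 (umol/mol)^-1 m^-1,
# i.e. above the 1e-4 threshold quoted for the 2.2 % expanded uncertainty.
alpha = absorption_coefficient(0.80, 50.0, 10.0)
print(f"{alpha:.2e}")
```

In the NIST procedure each coefficient spectrum is regressed from nine transmittance spectra rather than computed from a single point as here.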

  5. The Phys4Entry database

    NASA Astrophysics Data System (ADS)

    Laricchiuta, Annarita

    2012-10-01

    The Phys4Entry DB is a database of state-selected dynamical information for elementary processes relevant to the state-to-state kinetic modeling of planetary-atmosphere entry conditions. The DB addresses the challenging goal of complementing the information in the existing web-access databases, collecting and validating collisional-dynamics data for elementary processes involving ground and excited chemical species, with resolution on the electronic, vibrational and rotational degrees of freedom. Four relevant classes of elementary processes are considered, i.e. electron-molecule collisions, atom/molecule-molecule collisions, atom/molecule surface interaction and photon-induced processes, constructing a taxonomy for process classification. Data populating the DB largely originate from the coordinated research activity done in the frame of the Phys4Entry FP7 project, considering different theoretical approaches from quantum to semi-classical or quasi-classical molecular dynamics. In addition, results obtained in the Bari plasma chemistry labs over years of research devoted to the construction of reliable state-to-state kinetic models for hydrogen and air plasmas have also been transferred to the DB. Two DB interfaces have been created for two roles with different allowed actions: the contributor, who uploads new processes, and the inquirer, who submits queries and accesses the complete information about the records, either through a graphical tool displaying the energy or roto-vibrational dependence of dynamical data or through an export action to download ASCII datafiles. The DB is expected to have a significant impact on the modeling community, also in scientific fields other than aerothermodynamics (e.g. fusion, environment), making the state-to-state approach practicable.

  6. Database integration in a multimedia-modeling environment

    SciTech Connect

    Dorow, Kevin E.

    2002-09-02

    Integration of data from disparate remote sources has direct applicability to modeling, which can support Brownfield assessments. To accomplish this task, a data integration framework needs to be established. A key element in this framework is the metadata that creates the relationship between the pieces of information that are important in the multimedia modeling environment and the information that is stored in the remote data source. The design philosophy is to allow modelers and database owners to collaborate by defining this metadata in such a way that allows interaction between their components. The main parts of this framework include tools to facilitate metadata definition, database extraction plan creation, automated extraction plan execution / data retrieval, and a central clearing house for metadata and modeling / database resources. Cross-platform compatibility (using Java) and standard communications protocols (http / https) allow these parts to run in a wide variety of computing environments (Local Area Networks, Internet, etc.), and, therefore, this framework provides many benefits. Because of the specific data relationships described in the metadata, the amount of data that has to be transferred is kept to a minimum (only the data that fulfill a specific request are provided, as opposed to transferring the complete contents of a data source). This allows for real-time data extraction from the actual source. Also, the framework sets up collaborative responsibilities such that the different types of participants have control over the areas in which they have domain knowledge: the modelers are responsible for defining the data relevant to their models, while the database owners are responsible for mapping the contents of the database using the metadata definitions. Finally, the data extraction mechanism allows for the ability to control access to the data and what data are made available.
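
The metadata-driven extraction idea, a mapping from model variables to fields in a remote source so that a request pulls only what a model needs, can be sketched as follows. All names, the mapping, and the in-memory "remote source" are hypothetical stand-ins, not the framework's actual API.

```python
# Hypothetical metadata: model-variable name -> location in the remote source.
metadata = {
    "soil_ph":      {"table": "site_chem", "column": "ph"},
    "well_depth_m": {"table": "wells",     "column": "depth_m"},
}

# Stand-in for a remote database; a real deployment would query over http(s).
remote_source = {
    "site_chem": [{"ph": 6.8, "pb_ppm": 40.0}],
    "wells":     [{"depth_m": 12.5, "owner": "n/a"}],
}

def extract(variables):
    """Return only the requested variables, resolved through the metadata.

    Unrequested columns (pb_ppm, owner) never leave the source, which is
    the minimal-transfer property the framework aims for.
    """
    out = {}
    for var in variables:
        m = metadata[var]
        out[var] = [row[m["column"]] for row in remote_source[m["table"]]]
    return out

print(extract(["soil_ph"]))  # -> {'soil_ph': [6.8]}
```

The division of labor mirrors the abstract: modelers own the variable list, database owners own the metadata mapping.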

  7. A comprehensive database of Martian landslides

    NASA Astrophysics Data System (ADS)

    Battista Crosta, Giovanni; Vittorio De Blasio, Fabio; Frattini, Paolo; Valbuzzi, Elena

    2016-04-01

    During a long-term project, we have identified and classified a large number (> 3000) of Martian landslides, especially but not exclusively from Valles Marineris. This database provides a more complete basis for a statistical study of landslides on Mars and their relationship with geographical and environmental conditions. Landslides have been mapped according to standard geomorphological criteria, delineating both the landslide scar and accumulation limits, associating each scarp with a deposit, and using the program ArcGis for generation of a complete digital dataset. Multiple accumulations from the same source area or from different sources have been differentiated, where possible, to obtain a more complete dataset and to allow more refined analyses. Each landslide has been classified according to a set of criteria including: type, degree of confinement, possible trigger, elevation with respect to datum, geomorphological features, degree of multiplicity, and so on. The runout, fall height, and volume have been measured for each deposit. In fact, the database is revealing a series of trends that may assist in understanding landform processes on Mars and its past climatic conditions. One of the most interesting aspects of our dataset is the presence of a population of landslides whose particularly long mobility deviates from average behavior. While some landslides have travelled unimpeded on a usually flat area, others have travelled against obstacles or mounds. Therefore, landslides are also studied in relation to i) morphologies created by the landslide itself, ii) presence of mounds, barriers or elevations that have affected the movement of the landslide mass. In some extreme cases, the landslide was capable of travelling for several tens of km along the whole valley and, upon reaching the opposite side, travelled upslope for several hundreds of meters, which is an indication of high travelling speed. In other cases, the high speed is revealed by dynamic deformations
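
With runout and fall height measured for each deposit, a standard way to flag the unusually mobile population is the apparent friction coefficient H/L (fall height over runout length): lower H/L means longer reach for a given drop. This is a generic sketch of that measure; the records below are illustrative, not entries from the actual database.

```python
# Hypothetical deposits with the two measured quantities the abstract
# mentions; the names and numbers are made up.
landslides = [
    {"name": "A", "fall_height_m": 4000, "runout_m": 20000},
    {"name": "B", "fall_height_m": 3000, "runout_m": 60000},  # long-runout outlier
    {"name": "C", "fall_height_m": 2500, "runout_m": 15000},
]

def mobility(ls):
    """Apparent friction coefficient H/L; smaller values = higher mobility."""
    return ls["fall_height_m"] / ls["runout_m"]

most_mobile = min(landslides, key=mobility)
print(most_mobile["name"], round(mobility(most_mobile), 3))  # -> B 0.05
```

Ranking deposits this way makes it easy to separate the long-mobility population that deviates from average behavior.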

  8. Heterogeneous distributed databases: A case study

    NASA Technical Reports Server (NTRS)

    Stewart, Tracy R.; Mukkamala, Ravi

    1991-01-01

    Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.

  9. IPD--the Immuno Polymorphism Database.

    PubMed

    Robinson, James; Mistry, Kavita; McWilliam, Hamish; Lopez, Rodrigo; Marsh, Steven G E

    2010-01-01

    The Immuno Polymorphism Database (IPD) (http://www.ebi.ac.uk/ipd/) is a set of specialist databases related to the study of polymorphic genes in the immune system. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. IPD currently consists of four databases: IPD-KIR, contains the allelic sequences of Killer-cell Immunoglobulin-like Receptors, IPD-MHC, is a database of sequences of the Major Histocompatibility Complex of different species; IPD-human platelet antigens, alloantigens expressed only on platelets and IPD-ESTDAB, which provides access to the European Searchable Tumour cell-line database, a cell bank of immunologically characterised melanoma cell lines. The data is currently available online from the website and ftp directory.

  10. IPD--the Immuno Polymorphism Database.

    PubMed

    Robinson, James; Halliwell, Jason A; McWilliam, Hamish; Lopez, Rodrigo; Marsh, Steven G E

    2013-01-01

    The Immuno Polymorphism Database (IPD), http://www.ebi.ac.uk/ipd/ is a set of specialist databases related to the study of polymorphic genes in the immune system. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. IPD currently consists of four databases: IPD-KIR, contains the allelic sequences of killer-cell immunoglobulin-like receptors, IPD-MHC, a database of sequences of the major histocompatibility complex of different species; IPD-HPA, alloantigens expressed only on platelets; and IPD-ESTDAB, which provides access to the European Searchable Tumour Cell-Line Database, a cell bank of immunologically characterized melanoma cell lines. The data is currently available online from the website and FTP directory. This article describes the latest updates and additional tools added to the IPD project.

  11. BIOSPIDA: A Relational Database Translator for NCBI.

    PubMed

    Hagen, Matthew S; Lee, Eva K

    2010-11-13

    As the volume and availability of biological databases continue to grow, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. To retrieve all necessary information requires an integrated system that can query multiple databases with minimized overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, Pubmed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools allow research scientists to integrate NCBI databases locally without significant workload or development time.

  12. Hydrologic database user's manual

    SciTech Connect

    Champman, J.B.; Gray, K.J.; Thompson, C.B.

    1993-09-01

    The Hydrologic Database is an electronic filing cabinet containing water-related data for the Nevada Test Site (NTS). The purpose of the database is to enhance research on hydrologic issues at the NTS by providing efficient access to information gathered by a variety of scientists. Data are often generated for specific projects and are reported to DOE in the context of specific project goals. The originators of the database recognized that much of this information has a general value that transcends project-specific requirements. Allowing researchers access to information generated by a wide variety of projects can prevent needless duplication of data-gathering efforts and can augment new data collection and interpretation. In addition, collecting this information in the database ensures that the results are not lost at the end of discrete projects as long as the database is actively maintained. This document is a guide to using the database.

  13. Idaho Chemical Processing Plant failure rate database

    SciTech Connect

    Alber, T.G.; Hunt, C.R.; Fogarty, S.P.; Wilson, J.R.

    1995-08-01

    This report represents the first major upgrade to the Idaho Chemical Processing Plant (ICPP) Failure Rate Database. This upgrade incorporates additional site-specific and generic data while improving on the previous data reduction techniques. In addition, due to a change in mission at the ICPP, the status of certain equipment items has changed from operating to standby or off-line. A discussion of how this mission change influenced the relevance of failure data also has been included. This report contains two data sources: the ICPP Failure Rate Database and a generic failure rate database. A discussion is presented on the approaches and assumptions used to develop the data in the ICPP Failure Rate Database. The generic database is included along with a short discussion of its application. A brief discussion of future projects recommended to strengthen and lend credibility to the ICPP Failure Rate Database also is included.

  14. IPD—the Immuno Polymorphism Database

    PubMed Central

    Robinson, James; Mistry, Kavita; McWilliam, Hamish; Lopez, Rodrigo; Marsh, Steven G. E.

    2010-01-01

    The Immuno Polymorphism Database (IPD) (http://www.ebi.ac.uk/ipd/) is a set of specialist databases related to the study of polymorphic genes in the immune system. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. IPD currently consists of four databases: IPD-KIR, contains the allelic sequences of Killer-cell Immunoglobulin-like Receptors, IPD-MHC, is a database of sequences of the Major Histocompatibility Complex of different species; IPD-human platelet antigens, alloantigens expressed only on platelets and IPD-ESTDAB, which provides access to the European Searchable Tumour cell-line database, a cell bank of immunologically characterised melanoma cell lines. The data is currently available online from the website and ftp directory. PMID:19875415

  15. Nuclear Concrete Materials Database Phase I Development

    SciTech Connect

    Ren, Weiju; Naus, Dan J

    2012-05-01

    The FY 2011 accomplishments in Phase I development of the Nuclear Concrete Materials Database to support the Light Water Reactor Sustainability Program are summarized. The database has been developed using the ORNL materials database infrastructure established for the Gen IV Materials Handbook to achieve cost reduction and development efficiency. In this Phase I development, the database has been successfully designed and constructed to manage documents in the Portable Document Format generated from the Structural Materials Handbook that contains nuclear concrete materials data and related information. The completion of the Phase I database has established a solid foundation for Phase II development, in which a digital database will be designed and constructed to manage nuclear concrete materials data in various digitized formats to facilitate electronic and mathematical processing for analysis, modeling, and design applications.

  16. Mars Global Digital Dune Database: MC2-MC29

    USGS Publications Warehouse

    Hayward, Rosalyn K.; Mullins, Kevin F.; Fenton, L.K.; Hare, T.M.; Titus, T.N.; Bourke, M.C.; Colaprete, Anthony; Christensen, P.R.

    2007-01-01

    Introduction The Mars Global Digital Dune Database presents data and describes the methodology used in creating the database. The database provides a comprehensive and quantitative view of the geographic distribution of moderate- to large-size dune fields from 65° N to 65° S latitude and encompasses ~550 dune fields. The database will be expanded to cover the entire planet in later versions. Although we have attempted to include all dune fields between 65° N and 65° S, some have likely been excluded for two reasons: 1) incomplete THEMIS IR (daytime) coverage may have caused us to exclude some moderate- to large-size dune fields or 2) resolution of THEMIS IR coverage (100 m/pixel) certainly caused us to exclude smaller dune fields. The smallest dune fields in the database are ~1 km² in area. While the moderate to large dune fields are likely to constitute the largest compilation of sediment on the planet, smaller stores of sediment in dunes are likely to be found elsewhere via higher resolution data. Thus, it should be noted that our database excludes all small dune fields and some moderate to large dune fields as well. Therefore the absence of mapped dune fields does not mean that such dune fields do not exist and is not intended to imply a lack of saltating sand in other areas. Where availability and quality of THEMIS visible (VIS) or Mars Orbiter Camera narrow angle (MOC NA) images allowed, we classified dunes and included dune slipface measurements, which were derived from gross dune morphology and represent the prevailing wind direction at the last time of significant dune modification. For dunes located within craters, the azimuth from crater centroid to dune field centroid was calculated. Output from a general circulation model (GCM) is also included. In addition to polygons locating dune fields, the database includes over 1800 selected Thermal Emission Imaging System (THEMIS) infrared (IR), THEMIS visible (VIS) and Mars Orbiter Camera Narrow Angle (MOC NA) images.

  17. [EDAS, databases of alternatively spliced human genes].

    PubMed

    Nurtdinov, R N; Neverov, A D; Mal'ko, D B; Kosmodem'ianskiĭ, I A; Ermakova, E O; Ramenskiĭ, V E; Mironov, A A; Gel'fand, M S

    2006-01-01

    EDAS, a database of alternatively spliced human genes, contains data on the alignment of proteins, mRNAs, and ESTs. It contains information on all observed exons and introns, as well as the elementary alternatives formed from them. The database makes it possible to filter the output data by adjusting the significance-level cut-off threshold. The database is accessible at http://www.gene-bee.msu.ru/edas/.

  18. The NCBI BioSystems database.

    PubMed

    Geer, Lewis Y; Marchler-Bauer, Aron; Geer, Renata C; Han, Lianyi; He, Jane; He, Siqian; Liu, Chunlei; Shi, Wenyao; Bryant, Stephen H

    2010-01-01

    The NCBI BioSystems database, found at http://www.ncbi.nlm.nih.gov/biosystems/, centralizes and cross-links existing biological systems databases, increasing their utility and target audience by integrating their pathways and systems into NCBI resources. This integration allows users of NCBI's Entrez databases to quickly categorize proteins, genes and small molecules by metabolic pathway, disease state or other BioSystem type, without requiring time-consuming inference of biological relationships from the literature or multiple experimental datasets.

  19. An automated system for terrain database construction

    NASA Technical Reports Server (NTRS)

    Johnson, L. F.; Fretz, R. K.; Logan, T. L.; Bryant, N. A.

    1987-01-01

    An automated Terrain Database Preparation System (TDPS) for the construction and editing of terrain databases used in computerized wargaming simulation exercises has been developed. The TDPS system operates under the TAE executive, and it integrates VICAR/IBIS image processing and Geographic Information System software with CAD/CAM data capture and editing capabilities. The terrain database includes such features as roads, rivers, vegetation, and terrain roughness.

  20. Spectroscopic data for an astronomy database

    NASA Technical Reports Server (NTRS)

    Parkinson, W. H.; Smith, Peter L.

    1995-01-01

    Very few of the atomic and molecular data used in analyses of astronomical spectra are currently available in World Wide Web (WWW) databases that are searchable with hypertext browsers. We have begun to rectify this situation by making extensive atomic data files available with simple search procedures. We have also established links to other on-line atomic and molecular databases. All can be accessed from our database homepage with URL: http://cfa-www.harvard.edu/amp/data/amdata.html.

  1. Integrating Variances into an Analytical Database

    NASA Technical Reports Server (NTRS)

    Sanchez, Carlos

    2010-01-01

    For this project, I enrolled in numerous SATERN courses that taught the basics of database programming. These include: Basic Access 2007 Forms, Introduction to Database Systems, Overview of Database Design, and others. My main job was to create an analytical database that can handle many stored forms and make it easy to interpret and organize. Additionally, I helped improve an existing database and populate it with information. These databases were designed to be used with data from Safety Variances and DCR forms. The research consisted of analyzing the database and comparing the data to find out which entries were repeated the most. If an entry happened to be repeated several times in the database, that would mean that the rule or requirement targeted by that variance has been bypassed many times already, and so the requirement may not really be needed but rather should be changed to allow the variance's conditions permanently. This project was not restricted to the design and development of the database system; it also involved exporting the data from the database to a different format (e.g. Excel or Word) so it could be analyzed in a simpler fashion. Thanks to the change in format, the data was organized in a spreadsheet that made it possible to sort the data by categories or types and helped speed up searches. Once my work with the database was done, the records of variances could be arranged so that they were displayed in numerical order, or one could search for a specific document targeted by the variances and restrict the search to only include variances that modified a specific requirement. A great part of what contributed to my learning was SATERN, NASA's resource for education. Thanks to the SATERN online courses I took over the summer, I was able to learn many new things about computers and databases and also go more in depth into topics I already knew about.

  2. A database of Caenorhabditis elegans behavioral phenotypes.

    PubMed

    Yemini, Eviatar; Jucikas, Tadas; Grundy, Laura J; Brown, André E X; Schafer, William R

    2013-09-01

    Using low-cost automated tracking microscopes, we have generated a behavioral database for 305 Caenorhabditis elegans strains, including 76 mutants with no previously described phenotype. The growing database currently consists of 9,203 short videos segmented to extract behavior and morphology features, and these videos and feature data are available online for further analysis. The database also includes summary statistics for 702 measures with statistical comparisons to wild-type controls so that phenotypes can be identified and understood by users.

  3. Optics Toolbox: An Intelligent Relational Database System For Optical Designers

    NASA Astrophysics Data System (ADS)

    Weller, Scott W.; Hopkins, Robert E.

    1986-12-01

    Optical designers were among the first to use the computer as an engineering tool. Powerful programs have been written to do ray-trace analysis, third-order layout, and optimization. However, newer computing techniques such as database management and expert systems have not been adopted by the optical design community. For the purpose of this discussion we will define a relational database system as a database which allows the user to specify his requirements using logical relations. For example, to search for all lenses in a lens database with an F/number less than two, and a half field of view near 28 degrees, you might enter the following: FNO < 2.0 and FOV of 28 degrees ± 5%. Again for the purpose of this discussion, we will define an expert system as a program which contains expert knowledge, can ask intelligent questions, and can form conclusions based on the answers given and the knowledge which it contains. Most expert systems store this knowledge in the form of rules-of-thumb, which are written in an English-like language, and which are easily modified by the user. An example rule is: IF require microscope objective in air AND require NA > 0.9 THEN suggest the use of an oil immersion objective. The heart of the expert system is the rule interpreter, sometimes called an inference engine, which reads the rules and forms conclusions based on them. The use of a relational database system containing lens prototypes seems to be a viable prospect. However, it is not clear that expert systems have a place in optical design. In domains such as medical diagnosis and petrology, expert systems are flourishing. These domains are quite different from optical design, however, because optical design is a creative process, and the rules are difficult to write down. We do think that an expert system is feasible in the area of first order layout, which is sufficiently diagnostic in nature to permit useful rules to be written. This first-order expert would emulate an expert
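    The tolerance-style query quoted in this abstract is easy to make concrete. The sketch below filters a small lens table by an exact F/number bound and a ±5% field-of-view match; the lens records and field names are hypothetical, not the actual Optics Toolbox schema.

```python
# Minimal sketch of a tolerance-based relational query over a lens
# prototype table. The records and field names (FNO = F/number,
# FOV = half field of view in degrees) are invented for illustration.

def query(lenses, max_fno, fov_target, fov_tol=0.05):
    """Return lenses with FNO < max_fno and FOV within fov_tol (fractional)."""
    lo, hi = fov_target * (1 - fov_tol), fov_target * (1 + fov_tol)
    return [l for l in lenses if l["FNO"] < max_fno and lo <= l["FOV"] <= hi]

lenses = [
    {"name": "A", "FNO": 1.8, "FOV": 28.5},
    {"name": "B", "FNO": 2.4, "FOV": 28.0},
    {"name": "C", "FNO": 1.4, "FOV": 35.0},
]
# "FNO < 2.0 and FOV of 28 degrees +/- 5%" -> only lens "A" qualifies
print([l["name"] for l in query(lenses, 2.0, 28.0)])  # ['A']
```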

  4. Database of Polish arable mineral soils: a review

    NASA Astrophysics Data System (ADS)

    Bieganowski, A.; Witkowska-Walczak, B.; Gliñski, J.; Sokołowska, Z.; Sławiński, C.; Brzezińska, M.; Włodarczyk, T.

    2013-09-01

    The database of Polish arable mineral soils is presented. The database includes extensive information about the basic properties of soils and their dynamic characteristics. It was elaborated for about 1,000 representative profiles of soils in Poland. The database concerns: particle size distribution, organic carbon content, acidity (pH), specific surface area, hydrophobicity (solid-liquid contact angle), static and dynamic hydrophysical properties, oxidation-reduction properties and selected biological (microbiological) properties of soils. Knowledge about soil characteristics is indispensable for the description, interpretation and prediction of the course of physical, chemical and biological processes, and modelling these processes requires representative data. The utility of simulation and prediction models describing phenomena which take place in the soil-plant-atmosphere system greatly depends on the precision of data concerning the characteristics of soil. On the basis of this database, maps of chosen soil properties are constructed. The aim of the maps is to provide specialists in agriculture, ecology, and environment protection with an opportunity to gain knowledge of soil properties and their spatial and seasonal variability.

  5. LocSigDB: a database of protein localization signals.

    PubMed

    Negi, Simarjeet; Pandey, Sanjit; Srinivasan, Satish M; Mohammed, Akram; Guda, Chittibabu

    2015-01-01

    LocSigDB (http://genome.unmc.edu/LocSigDB/) is a manually curated database of experimental protein localization signals for eight distinct subcellular locations; primarily in a eukaryotic cell with brief coverage of bacterial proteins. Proteins must be localized at their appropriate subcellular compartment to perform their desired function. Mislocalization of proteins to unintended locations is a causative factor for many human diseases; therefore, collection of known sorting signals will help support many important areas of biomedical research. By performing an extensive literature study, we compiled a collection of 533 experimentally determined localization signals, along with the proteins that harbor such signals. Each signal in the LocSigDB is annotated with its localization, source, PubMed references and is linked to the proteins in UniProt database along with the organism information that contain the same amino acid pattern as the given signal. From LocSigDB webserver, users can download the whole database or browse/search for data using an intuitive query interface. To date, LocSigDB is the most comprehensive compendium of protein localization signals for eight distinct subcellular locations. Database URL: http://genome.unmc.edu/LocSigDB/
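    As an illustration of how signal patterns like those collected in LocSigDB can be matched against protein sequences, the sketch below scans for the classic C-terminal KDEL ER-retention motif. The pattern table and function names are invented for the example; they are not LocSigDB's actual interface, and real LocSigDB signals may be expressed differently.

```python
import re

# Sketch: scan a protein sequence for localization signals expressed as
# regular-expression patterns, in the spirit of matching curated signals
# against UniProt sequences. The KDEL C-terminal ER-retention motif is a
# textbook example; the SIGNALS table here is a hypothetical stand-in.

SIGNALS = {"endoplasmic reticulum": re.compile(r"KDEL$")}

def find_localizations(seq):
    """Return the locations whose signal pattern matches the sequence."""
    return [loc for loc, pat in SIGNALS.items() if pat.search(seq)]

print(find_localizations("MKTAYIAKQRQISFVKDEL"))  # ['endoplasmic reticulum']
print(find_localizations("MKTAYIAKQRQISFV"))      # []
```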

  6. Windshear database for forward-looking systems certification

    NASA Technical Reports Server (NTRS)

    Switzer, G. F.; Proctor, F. H.; Hinton, D. A.; Aanstoos, J. V.

    1993-01-01

    This document contains a description of a comprehensive database that is to be used for certification testing of airborne forward-look windshear detection systems. The database was developed by NASA Langley Research Center, at the request of the Federal Aviation Administration (FAA), to support the industry initiative to certify and produce forward-look windshear detection equipment. The database contains high resolution, three dimensional fields for meteorological variables that may be sensed by forward-looking systems. The database is made up of seven case studies which have been generated by the Terminal Area Simulation System, a state-of-the-art numerical system for the realistic modeling of windshear phenomena. The selected cases represent a wide spectrum of windshear events. General descriptions and figures from each of the case studies are included, as well as equations for F-factor, radar-reflectivity factor, and rainfall rate. The document also describes scenarios and paths through the data sets, jointly developed by NASA and the FAA, to meet FAA certification testing objectives. Instructions for reading and verifying the data from tape are included.

  7. An annotated database of Arabidopsis mutants of acyl lipid metabolism

    SciTech Connect

    McGlew, Kathleen; Shaw, Vincent; Zhang, Meng; Kim, Ryeo Jin; Yang, Weili; Shorrosh, Basil; Suh, Mi Chung; Ohlrogge, John

    2014-12-10

    Mutants have played a fundamental role in gene discovery and in understanding the function of genes involved in plant acyl lipid metabolism. The first mutant in Arabidopsis lipid metabolism (fad4) was described in 1985. Since that time, characterization of mutants in more than 280 genes associated with acyl lipid metabolism has been reported. This review provides a brief background and history on identification of mutants in acyl lipid metabolism, an analysis of the distribution of mutants in different areas of acyl lipid metabolism and presents an annotated database (ARALIPmutantDB) of these mutants. The database provides information on the phenotypes of mutants, pathways and enzymes/proteins associated with the mutants, and allows rapid access via hyperlinks to summaries of information about each mutant and to literature that provides information on the lipid composition of the mutants. Mutants for at least 30 % of the genes in the database have multiple names, which have been compiled here to reduce ambiguities in searches for information. Furthermore, the database should also provide a tool for exploring the relationships between mutants in acyl lipid-related genes and their lipid phenotypes and point to opportunities for further research.

  8. An annotated database of Arabidopsis mutants of acyl lipid metabolism

    DOE PAGES

    McGlew, Kathleen; Shaw, Vincent; Zhang, Meng; ...

    2014-12-10

    Mutants have played a fundamental role in gene discovery and in understanding the function of genes involved in plant acyl lipid metabolism. The first mutant in Arabidopsis lipid metabolism (fad4) was described in 1985. Since that time, characterization of mutants in more than 280 genes associated with acyl lipid metabolism has been reported. This review provides a brief background and history on identification of mutants in acyl lipid metabolism, an analysis of the distribution of mutants in different areas of acyl lipid metabolism and presents an annotated database (ARALIPmutantDB) of these mutants. The database provides information on the phenotypes of mutants, pathways and enzymes/proteins associated with the mutants, and allows rapid access via hyperlinks to summaries of information about each mutant and to literature that provides information on the lipid composition of the mutants. Mutants for at least 30 % of the genes in the database have multiple names, which have been compiled here to reduce ambiguities in searches for information. Furthermore, the database should also provide a tool for exploring the relationships between mutants in acyl lipid-related genes and their lipid phenotypes and point to opportunities for further research.

  9. Handling of network and database instabilities in CORAL

    NASA Astrophysics Data System (ADS)

    Trentadue, R.; Valassi, A.; Kalkhof, A.

    2012-12-01

    The CORAL software is widely used by the LHC experiments for storing and accessing data using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several back-ends and deployment models, direct client access to Oracle servers being one of the most important use cases. Since 2010, several problems have been reported by the LHC experiments in their use of Oracle through CORAL, involving application errors, hangs or crashes after the network or the database servers became temporarily unavailable. CORAL already provided some level of handling of these instabilities, which are due to external causes and cannot be avoided, but this proved insufficient in some cases and was itself the cause of other problems, such as the hangs and crashes mentioned above, in others. As a consequence, a major redesign of the CORAL plugins has been implemented, with the aim of making the software more robust against these database and network glitches. The new implementation ensures that CORAL automatically reconnects to Oracle databases in a transparent way whenever possible and gently terminates the application when this is not possible. Internally, this is done by resetting all relevant parameters of the underlying back-end technology (OCI, the Oracle Call Interface). This presentation reports on the status of this work at the time of the CHEP2012 conference, covering the design and implementation of these new features and the outlook for future developments in this area.
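    The reconnect-and-retry behaviour this abstract describes can be sketched generically. In the sketch below, the `Session`-like object, the `ConnectionLost` exception, and the retry parameters are hypothetical stand-ins, not the real CORAL or OCI interfaces: on a connection error the session is reset and the operation retried, and the failure is propagated only after a bounded number of attempts.

```python
import time

# Generic transparent-reconnect sketch in the spirit of the CORAL redesign
# described above. ConnectionLost and the session object are invented for
# illustration; real CORAL resets OCI parameters internally.

class ConnectionLost(Exception):
    pass

def with_reconnect(session, operation, retries=3, backoff=0.0):
    """Run operation(session), reconnecting and retrying on glitches."""
    for attempt in range(retries + 1):
        try:
            return operation(session)
        except ConnectionLost:
            if attempt == retries:
                raise                      # give up gracefully: propagate
            session.reconnect()            # reset the underlying handle
            time.sleep(backoff * attempt)  # optional pause between tries

# Demo: a fake session that drops the connection twice, then succeeds.
class FlakySession:
    def __init__(self, failures):
        self.failures, self.reconnects = failures, 0
    def reconnect(self):
        self.reconnects += 1
    def query(self):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionLost
        return "rows"

s = FlakySession(failures=2)
print(with_reconnect(s, lambda sess: sess.query()))  # prints "rows"
```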

  10. Database Relation Watermarking Resilient against Secondary Watermarking Attacks

    NASA Astrophysics Data System (ADS)

    Gupta, Gaurav; Pieprzyk, Josef

    There has been tremendous interest in watermarking multimedia content during the past two decades, mainly for proving ownership and detecting tamper. Digital fingerprinting, that deals with identifying malicious user(s), has also received significant attention. While extensive work has been carried out in watermarking of images, other multimedia objects still have enormous research potential. Watermarking database relations is one of the several areas which demand research focus owing to the commercial implications of database theft. Recently, there has been little progress in database watermarking, with most of the watermarking schemes modeled after the irreversible database watermarking scheme proposed by Agrawal and Kiernan. Reversibility is the ability to re-generate the original (unmarked) relation from the watermarked relation using a secret key. As explained in our paper, reversible watermarking schemes provide greater security against secondary watermarking attacks, where an attacker watermarks an already marked relation in an attempt to erase the original watermark. This paper proposes an improvement over the reversible and blind watermarking scheme presented in [5], identifying and eliminating a critical problem with the previous model. Experiments show that the average watermark detection rate is around 91% even with the attacker distorting half of the attributes. The current scheme provides security against secondary watermarking attacks.
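    The core ideas here, key-based mark placement and reversibility, can be illustrated with a toy scheme. The sketch below is NOT the scheme of [5] or of Agrawal and Kiernan; it simply embeds a key-derived bit into the least significant bit of a numeric attribute, keyed by the tuple's primary key, and keeps the original bits so the key holder can restore the unmarked relation.

```python
import hmac, hashlib

# Toy reversible, key-based LSB watermark over a numeric attribute,
# invented for illustration. Reversibility comes from saving the original
# LSBs, which the key holder keeps secret alongside the key.

def _mark_bit(key, pk):
    # Key-reproducible pseudo-random bit per primary key.
    return hmac.new(key, str(pk).encode(), hashlib.sha256).digest()[0] & 1

def embed(relation, key):
    saved = {}  # pk -> original LSB, needed to reverse the mark
    for pk, value in relation.items():
        saved[pk] = value & 1
        relation[pk] = (value & ~1) | _mark_bit(key, pk)
    return saved

def restore(relation, saved):
    for pk, lsb in saved.items():
        relation[pk] = (relation[pk] & ~1) | lsb

def detect(relation, key):
    # Fraction of tuples whose LSB matches the key-derived bit.
    hits = sum((v & 1) == _mark_bit(key, pk) for pk, v in relation.items())
    return hits / len(relation)

r = {1: 100, 2: 57, 3: 42, 4: 999}
saved = embed(r, b"secret-key")
assert detect(r, b"secret-key") == 1.0   # full match for the key holder
restore(r, saved)
assert r == {1: 100, 2: 57, 3: 42, 4: 999}  # original relation recovered
```

    A real scheme additionally binds the mark to attribute selection, tolerates distortion, and avoids storing the saved bits in the clear; this sketch only shows why reversibility defeats a secondary mark, since the original relation can always be regenerated and re-marked.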

  11. The LBNL Water Heater Retail Price Database

    SciTech Connect

    Lekov, Alex; Glover, Julie; Lutz, Jim

    2000-10-01

    Lawrence Berkeley National Laboratory developed the LBNL Water Heater Price Database to compile and organize information used in the revision of U.S. energy efficiency standards for water heaters. The Database contains all major components that contribute to the consumer cost of water heaters, including basic retail prices, sales taxes, installation costs, and any associated fees. In addition, the Database provides manufacturing data on the features and design characteristics of more than 1100 different water heater models. Data contained in the Database was collected over a two-year period from 1997 to 1999.

  12. DEPOT database: Reference manual and user's guide

    SciTech Connect

    Clancey, P.; Logg, C.

    1991-03-01

    DEPOT has been developed to provide tracking for the Stanford Linear Collider (SLC) control system equipment. For each piece of equipment entered into the database, complete location, service, maintenance, modification, certification, and radiation exposure histories can be maintained. To facilitate data entry accuracy, efficiency, and consistency, barcoding technology has been used extensively. DEPOT has been an important tool in improving the reliability of the microsystems controlling SLC. This document describes the components of the DEPOT database, the elements in the database records, and the use of the supporting programs for entering data, searching the database, and producing reports from the information.

  13. TWRS technical baseline database manager definition document

    SciTech Connect

    Acree, C.D.

    1997-08-13

    This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager.

  14. DBGC: A Database of Human Gastric Cancer

    PubMed Central

    Wang, Chao; Zhang, Jun; Cai, Mingdeng; Zhu, Zhenggang; Gu, Wenjie; Yu, Yingyan; Zhang, Xiaoyan

    2015-01-01

    The Database of Human Gastric Cancer (DBGC) is a comprehensive database that integrates various human gastric cancer-related data resources. Human gastric cancer-related transcriptomics projects, proteomics projects, mutations, biomarkers and drug-sensitive genes from different sources were collected and unified in this database. Moreover, epidemiological statistics of gastric cancer patients in China and clinicopathological information annotated with gastric cancer cases were also integrated into the DBGC. We believe that this database will greatly facilitate research regarding human gastric cancer in many fields. DBGC is freely available at http://bminfor.tongji.edu.cn/dbgc/index.do PMID:26566288

  15. Database Search Engines: Paradigms, Challenges and Solutions.

    PubMed

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.
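    The core loop of such a search engine can be sketched in a few lines: digest each protein sequence into candidate peptides, then compare candidate masses against a measured precursor mass within a tolerance. This is a heavily simplified sketch, not SearchGUI or any real engine; real engines also score fragment-ion spectra, handle modifications, and use a much larger residue table.

```python
import re

# Minimal database-search sketch: tryptic digestion plus precursor-mass
# matching. RESIDUE holds monoisotopic residue masses for a small subset
# of amino acids; fragment-ion scoring is omitted entirely.

RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
           "V": 99.06841, "T": 101.04768, "L": 113.08406, "K": 128.09496,
           "R": 156.10111}
WATER = 18.01056  # mass of H2O added to the residue sum

def tryptic_peptides(seq):
    # Cleave after K or R (ignoring the proline rule for brevity).
    return [p for p in re.split(r"(?<=[KR])", seq) if p]

def peptide_mass(pep):
    return sum(RESIDUE[a] for a in pep) + WATER

def search(precursor_mass, proteins, tol=0.5):
    """Return candidate peptides whose mass matches within tol (Da)."""
    return [pep
            for seq in proteins
            for pep in tryptic_peptides(seq)
            if abs(peptide_mass(pep) - precursor_mass) <= tol]

proteins = ["GASPVK", "TLKGAR"]
print(search(peptide_mass("GASPVK"), proteins))  # ['GASPVK']
```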

  16. DBGC: A Database of Human Gastric Cancer.

    PubMed

    Wang, Chao; Zhang, Jun; Cai, Mingdeng; Zhu, Zhenggang; Gu, Wenjie; Yu, Yingyan; Zhang, Xiaoyan

    2015-01-01

    The Database of Human Gastric Cancer (DBGC) is a comprehensive database that integrates various human gastric cancer-related data resources. Human gastric cancer-related transcriptomics projects, proteomics projects, mutations, biomarkers and drug-sensitive genes from different sources were collected and unified in this database. Moreover, epidemiological statistics of gastric cancer patients in China and clinicopathological information annotated with gastric cancer cases were also integrated into the DBGC. We believe that this database will greatly facilitate research regarding human gastric cancer in many fields. DBGC is freely available at http://bminfor.tongji.edu.cn/dbgc/index.do.

  17. Design considerations for a space database

    NASA Technical Reports Server (NTRS)

    Moss, Lance M.

    1989-01-01

    Part of the information used in a real-time simulator is stored in the visual database. This information is processed by an image generator and displayed as a real-time visual image. The database must be constructed in a specific format, and it should efficiently utilize the capacities of the image generator that it was created for. A visual simulation is crucially dependent upon the success with which the database provides visual cues and recognizable scenes. For this reason, more and more attention is being paid to the art and science of creating effective real-time visual databases. Investigated here are the database design considerations required for a space-oriented real-time simulator. Space applications often require unique designs that correspond closely to the particular image-generator hardware and visual-database-management software. Specific examples from the databases constructed for NASA and its Evans and Sutherland CT6 image generator illustrate the various design strategies used in a space-simulation environment. These database design considerations are essential for all who would create a space database.

  18. Using Large Diabetes Databases for Research.

    PubMed

    Wild, Sarah; Fischbacher, Colin; McKnight, John

    2016-09-01

    There are an increasing number of clinical, administrative and trial databases that can be used for research. These are particularly valuable if there are opportunities for linkage to other databases. This paper describes examples of the use of large diabetes databases for research. It reviews the advantages and disadvantages of using large diabetes databases for research and suggests solutions for some challenges. Large, high-quality databases offer potential sources of information for research at relatively low cost. Fundamental issues for using databases for research are the completeness of capture of cases within the population and time period of interest and accuracy of the diagnosis of diabetes and outcomes of interest. The extent to which people included in the database are representative should be considered if the database is not population based and there is the intention to extrapolate findings to the wider diabetes population. Information on key variables such as date of diagnosis or duration of diabetes may not be available at all, may be inaccurate or may contain a large amount of missing data. Information on key confounding factors is rarely available for the nondiabetic or general population, limiting comparisons with the population of people with diabetes. However, comparisons that allow for differences in distribution of important demographic factors may be feasible using data for the whole population or a matched cohort study design. In summary, diabetes databases can be used to address important research questions. Understanding the strengths and limitations of this approach is crucial to interpret the findings appropriately.

  19. Database tools in genetic diseases research.

    PubMed

    Bianco, Anna Monica; Marcuzzi, Annalisa; Zanin, Valentina; Girardelli, Martina; Vuch, Josef; Crovella, Sergio

    2013-02-01

    Knowledge of the human genome is continuously advancing: a large number of databases have been developed to make meaningful connections among worldwide scientific discoveries. This paper reviews bioinformatics resources and database tools specialized in disseminating information regarding genetic disorders. The databases described are useful for managing sample sequences, gene expression and post-transcriptional regulation. In relation to data sets available from genome-wide association studies, we describe databases that could be the starting point for developing studies in the field of complex diseases, particularly those in which the causal genes are difficult to identify.

  20. East-China Geochemistry Database (ECGD):A New Networking Database for North China Craton

    NASA Astrophysics Data System (ADS)

    Wang, X.; Ma, W.

    2010-12-01

    North China Craton is one of the best natural laboratories for research on Earth dynamics questions[1]. Scientists have made much progress in research on this area and have gathered vast geochemistry data, which are essential for answering many fundamental questions about the age, composition, structure, and evolution of the East China area. But the geochemical data have long been accessible only through the scientific literature and theses where they have been widely dispersed, making it difficult for the broad Geosciences community to find, access and efficiently use the full range of available data[2]. How can the existing geochemical data in the North China Craton area be effectively stored, managed, shared and reused? The East-China Geochemistry Database (ECGD) is a networked geochemical scientific database system that has been designed based on WebGIS and a relational database for the structured storage and retrieval of geochemical data and geological map information. It integrates the functions of data retrieval, spatial visualization and online analysis. ECGD focuses on three areas: 1. Storage and retrieval of geochemical data and geological map information. Based on research into the character of geochemical data, including its composition and internal connections, we designed a relational database, built on a geochemical relational data model, to store a variety of geological sample information such as sampling locality, age, sample characteristics, reference, major elements, rare earth elements, trace elements and isotope systems. A web-based user-friendly interface is provided for constructing queries. 2. Data view. ECGD is committed to online data visualization in different ways, especially dynamic viewing of data on a digital map. Because ECGD integrates WebGIS technology, query results can be mapped on a digital map that supports zooming, panning and point selection.
Besides viewing and exporting query results in html, txt or xls formats, researchers can also